mirror of https://github.com/PostHog/posthog.git synced 2024-12-01 04:12:23 +01:00
Commit Graph

8 Commits

Author SHA1 Message Date
Michael Matloka
b7fe004d6b
chore(plugin-server): Validate fetch hostnames (#17183)
* chore(plugin-server): Validate fetch hostnames

* Only apply Python host check on Cloud

* Update tests to use valid hook URLs

* Only apply plugin server host check in prod

* Update URLs in a couple more tests

* Only check hostnames on Cloud and remove port check

* Fix fetch mocking

* Roll out hostname guard per project

* Fix fetch call assertions

* Make `fetchHostnameGuardTeams` optional
2023-09-18 14:38:02 +02:00
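A minimal sketch of the hostname guard this commit describes, with hypothetical names (`validateFetchHostname`, `hostnameGuardTeams`); the real implementation lives in the plugin-server's fetch wrapper and may differ:

```typescript
// A hypothetical guard: resolve the target hostname and refuse fetches that
// would hit loopback, link-local, or RFC 1918 private ranges.
import dns from 'dns/promises'
import net from 'net'

function isPrivateIPv4(ip: string): boolean {
    const [a, b] = ip.split('.').map(Number)
    return (
        a === 10 || // 10.0.0.0/8
        a === 127 || // loopback
        (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
        (a === 192 && b === 168) || // 192.168.0.0/16
        (a === 169 && b === 254) // link-local
    )
}

// `hostnameGuardTeams` mirrors the per-project rollout from the commit: when
// provided, only the listed teams get the check (IPv6 omitted for brevity).
export async function validateFetchHostname(
    rawUrl: string,
    teamId: number,
    hostnameGuardTeams?: Set<number>
): Promise<void> {
    if (hostnameGuardTeams && !hostnameGuardTeams.has(teamId)) {
        return // guard not rolled out to this project
    }
    const { hostname } = new URL(rawUrl)
    const addresses = net.isIP(hostname)
        ? [hostname]
        : (await dns.lookup(hostname, { all: true })).map((entry) => entry.address)
    for (const address of addresses) {
        if (net.isIP(address) === 4 && isPrivateIPv4(address)) {
            throw new Error(`Fetch to ${hostname} blocked: resolves to a private address`)
        }
    }
}
```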
Harry Waye
7ba6fa7148
chore(plugin-server): remove piscina workers (#15327)
* chore(plugin-server): remove piscina workers

Using Piscina workers introduces complexity that we would rather avoid. It does offer the ability to scale work across multiple CPUs, but we can achieve this by starting multiple processes instead. It may also provide some protection against deadlocking the worker process, which I believe Piscina handles by killing and respawning workers, but our K8s liveness checks will handle this too.

This should simplify (1) Prometheus metrics exporting and (2) using node-rdkafka.

* remove piscina from package.json

* use createWorker

* wip

* wip

* wip

* wip

* fix export test

* wip

* wip

* fix server stop tests

* wip

* mock process.exit everywhere

* fix health server tests

* Remove collectMetrics

* wip
2023-05-03 14:42:16 +00:00
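One way to get the multi-process scaling the commit message describes is Node's cluster module; a sketch under that assumption (not the PostHog code), with `startPluginServer` standing in for the real server bootstrap:

```typescript
// Scale across CPUs with plain processes instead of a Piscina thread pool.
import cluster from 'cluster'
import os from 'os'

const WORKER_COUNT = Number(process.env.WORKER_COUNT ?? os.cpus().length)

async function startPluginServer(): Promise<void> {
    // ...connect to Kafka and run the event pipeline in this process...
}

if (cluster.isPrimary) {
    for (let i = 0; i < WORKER_COUNT; i++) {
        cluster.fork()
    }
    // Respawn on exit: together with K8s liveness checks, this covers the
    // deadlock/crash recovery that Piscina's worker recycling used to provide.
    cluster.on('exit', () => cluster.fork())
} else {
    void startPluginServer()
}
```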
Harry Waye
cd82caab01
feat(ingestion): remove Graphile worker as initial ingest dependency (#12075)
* feat(ingestion): remove Graphile worker as initial ingest dependency

At the moment, if the Graphile enqueueing of an anonymous event fails, e.g. because the database it uses to store scheduling information is down, we end up pushing the event to the Dead Letter Queue and doing nothing further with it.

Here, instead of sending the event directly to the DB, we first push it to Kafka, onto an `anonymous_events_buffer` topic, which is then committed to the Graphile database. This means that if the Graphile DB goes down but later comes back up, we end up with the same results as if it had always been up*

(*) not entirely true, as what is ingested also depends on the timing of other events being ingested

* narrow typing for anonymous event consumer

* fix types import

* chore: add comment re todos for consumer

* wip

* wip

* wip

* wip

* wip

* wip

* fix typing

* Include error message in warning log

* Update plugin-server/jest.setup.fetch-mock.js

Co-authored-by: Guido Iaquinti <4038041+guidoiaquinti@users.noreply.github.com>

* Update plugin-server/src/main/ingestion-queues/anonymous-event-buffer-consumer.ts

Co-authored-by: Guido Iaquinti <4038041+guidoiaquinti@users.noreply.github.com>

* include warning icon

* fix crash message

* Update plugin-server/src/main/ingestion-queues/anonymous-event-buffer-consumer.ts

* Update plugin-server/src/main/ingestion-queues/anonymous-event-buffer-consumer.ts

Co-authored-by: Yakko Majuri <38760734+yakkomajuri@users.noreply.github.com>

* setup event handlers as KafkaQueue

* chore: instrument buffer consumer

* missing import

* avoid passing hub to buffer consumer

* fix statsd reference.

* pass graphile explicitly

* explicitly cast

* add todo for buffer healthcheck

* set NODE_ENV=production

* Update comment re. failed batches

* fix: call flush on emitting to buffer.

* chore: flush to producer

* accept that we may drop some anonymous events

* Add metrics for enqueue error/enqueued

* fix comment

* chore: add CONVERSION_BUFFER_TOPIC_ENABLED_TEAMS to switch on buffer
topic

Co-authored-by: Guido Iaquinti <4038041+guidoiaquinti@users.noreply.github.com>
Co-authored-by: Yakko Majuri <38760734+yakkomajuri@users.noreply.github.com>
2022-10-10 15:40:43 +01:00
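A sketch of the buffer-topic path, assuming kafkajs and hypothetical names (the actual client and helpers may differ): anonymous events go to a Kafka topic first, and a separate consumer then enqueues them into Graphile, so a Graphile DB outage no longer sends events to the Dead Letter Queue:

```typescript
import { Kafka } from 'kafkajs'

const BUFFER_TOPIC = 'anonymous_events_buffer'
// Mirrors CONVERSION_BUFFER_TOPIC_ENABLED_TEAMS from the commit: the buffer
// topic is switched on per team during rollout.
const enabledTeams = new Set(
    (process.env.CONVERSION_BUFFER_TOPIC_ENABLED_TEAMS ?? '')
        .split(',')
        .filter(Boolean)
        .map(Number)
)

const kafka = new Kafka({ brokers: (process.env.KAFKA_HOSTS ?? 'localhost:9092').split(',') })
export const producer = kafka.producer()

// Call producer.connect() once at startup before using this.
export async function enqueueAnonymousEvent(teamId: number, event: object): Promise<boolean> {
    if (!enabledTeams.has(teamId)) {
        return false // caller falls back to enqueueing into Graphile directly
    }
    // kafkajs awaits the broker ack inside send(); with node-rdkafka you would
    // flush explicitly, accepting that a crash before the flush may drop some
    // anonymous events (as the commit notes).
    await producer.send({
        topic: BUFFER_TOPIC,
        messages: [{ key: String(teamId), value: JSON.stringify(event) }],
    })
    return true
}
```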
Karl-Aksel Puulmann
9c6f20b697
chore(plugin-server): Improve tracing (#11042)
* Include kafka topic for setup

* Sample runEventPipeline/runBufferEventPipeline comparatively less frequently

This is done by duration: we still want the long transactions, but not the short ones

* Trace enqueue plugin jobs

* Trace node-fetch

* Trace worker creation

* Various fixes

* Line up query tags properly

* Make fetch mocking work

* Resolve typing-related issues
2022-08-03 16:12:56 +03:00
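A sketch of duration-based transaction sampling using @sentry/node's beforeSendTransaction hook; that hook postdates this commit, so treat this as an illustration of the idea (keep long runEventPipeline transactions, drop short ones), not the code that shipped, and the threshold is hypothetical:

```typescript
import * as Sentry from '@sentry/node'

const HIGH_VOLUME_TRANSACTIONS = new Set(['runEventPipeline', 'runBufferEventPipeline'])
const MIN_DURATION_SECONDS = 0.5 // hypothetical cutoff

Sentry.init({
    dsn: process.env.SENTRY_DSN,
    tracesSampleRate: 1.0,
    beforeSendTransaction(event) {
        const { transaction, start_timestamp: start, timestamp: end } = event
        if (
            transaction !== undefined &&
            HIGH_VOLUME_TRANSACTIONS.has(transaction) &&
            start !== undefined &&
            end !== undefined &&
            end - start < MIN_DURATION_SECONDS
        ) {
            return null // drop the short, high-volume transaction
        }
        return event
    },
})
```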
Marius Andra
2da4962378
feat(plugin-server): use cdn to download mmdb database (#9279) 2022-04-01 16:04:09 +02:00
James Greenhill
434e379f9a Add 'plugin-server/' from commit '01a99a4e26b0b11f068a7073d6b94e53a7214d33'
git-subtree-dir: plugin-server
git-subtree-mainline: 776b056b6d
git-subtree-split: 01a99a4e26
2021-10-28 14:59:19 -07:00
James Greenhill
145937a435
Revert "Monorepo with updated history" 2021-10-28 14:55:17 -07:00
James Greenhill
65512ae16f
Pack up plugin-server for moving 2021-10-12 15:45:42 -07:00