mirror of https://github.com/PostHog/posthog.git synced 2024-11-28 18:26:15 +01:00
posthog/.run/Plugin Server.run.xml
Paul D'Ambra 067d73cb4f
feat: write recording summary events (#15245)
Problem
see #15200 (comment)

When we store session recording events, we materialize a lot of information from the snapshot data column.

We'll soon stop storing the snapshot data, so we won't be able to materialize that information from it; we need to capture it earlier in the pipeline instead. Since this is only used for searching for and summarizing recordings, we don't need to store every event.

Changes
We'll push a summary event to a new Kafka topic during ingestion. ClickHouse can ingest from that topic into an AggregatingMergeTree table, so that we store (in theory, although not in practice) only one row per session.
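The one-row-per-session idea can be sketched as a merge over many summary events for the same session, the way an aggregating merge tree collapses rows on merge. The event shape and function name below are illustrative, not PostHog's actual schema:

```typescript
// Hypothetical shape of a recording summary event; field names are
// illustrative assumptions, not the real PostHog schema.
interface SummaryEvent {
    session_id: string
    first_timestamp: number
    last_timestamp: number
    click_count: number
}

// Collapse several summary events for one session into a single row,
// mirroring min/max/sum aggregate states in an AggregatingMergeTree.
function mergeSummaries(events: SummaryEvent[]): SummaryEvent {
    return events.reduce((acc, e) => ({
        session_id: acc.session_id,
        first_timestamp: Math.min(acc.first_timestamp, e.first_timestamp),
        last_timestamp: Math.max(acc.last_timestamp, e.last_timestamp),
        click_count: acc.click_count + e.click_count,
    }))
}
```

In practice ClickHouse performs this merge lazily in the background, which is why the table can transiently hold more than one row per session.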

- add config to the plugin server to turn this on and off per team
- behind that config, write session recording summary events to a new Kafka topic
- add ClickHouse tables to ingest and aggregate those summary events
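The per-team config in the run configuration below accepts `all` or a team list (see `SESSION_RECORDING_SUMMARY_INGESTION_ENABLED_TEAMS="all"`). A minimal sketch of how such a setting could gate the new writes; the helper name is an assumption, not the plugin server's actual API:

```typescript
// Sketch: decide whether summary ingestion is enabled for a team,
// given a setting of "all" or a comma-separated list of team IDs.
// The function name is hypothetical.
function summaryIngestionEnabledFor(teamId: number, setting: string): boolean {
    if (setting === 'all') return true
    return setting
        .split(',')
        .map((part) => Number(part.trim()))
        .includes(teamId)
}
```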
2023-05-09 14:41:16 +00:00


<component name="ProjectRunConfigurationManager">
  <configuration default="false" name="Plugin Server" type="js.build_tools.npm">
    <package-json value="$PROJECT_DIR$/plugin-server/package.json" />
    <command value="run" />
    <scripts>
      <script value="start:dev" />
    </scripts>
    <node-interpreter value="project" />
    <envs>
      <env name="CLICKHOUSE_SECURE" value="False" />
      <env name="DATABASE_URL" value="postgres://posthog:posthog@localhost:5432/posthog" />
      <env name="KAFKA_HOSTS" value="localhost:9092" />
      <env name="OBJECT_STORAGE_ENABLED" value="True" />
      <env name="WORKER_CONCURRENCY" value="2" />
      <env name="SESSION_RECORDING_BLOB_PROCESSING_TEAMS" value="all" />
      <env name="SESSION_RECORDING_SUMMARY_INGESTION_ENABLED_TEAMS" value="all" />
    </envs>
    <method v="2" />
  </configuration>
</component>