* 🔥 initial commit
* update readme
* Update README.md
* Update README.md
* deploy scripts
* very basic consumer setup
* add some configs and docker-compose
* formatting for testing
* add tailscale
* flip from dev to prod flag
* set default to be not prod
* default for group_id
* tailscale up
* update gitignore
* basic geolocation
* remove unused localServer
* document mmdb
* just make configs an example
* drop raw print
* add a start script (downloads the mmdb)
* add readme and update configs.example
* ts working
* if in start
* update start script
* fix start
* fix start
* fix more
* add sql endpoints for tokenId and Person lookups
* work towards filter
* sub channel
* fix subChan
* hardcode team2 token
* add cors
* only allow get and head
* add atomicbool
* add channel to kafka
* add logs
* verbose logs
* make array
* drop sub ptrs
* more logs
* helps to loop
* drop some logging
* move sub branch
* logging
* drop log
* hog
* Deal with numeric distinct ids later
* logs
* api_key
* send 1/1000
* remove log
* remove more logs
* change response payload
* set timestamp if needed
* fill in person_id if team_id is set
* require team_id, convert to token
* clean up subs on disconnect
* log
* check for token in another place
* clean up subs on disconnect
* drop modulo and log
* fix no assign
* don't reuse db conn for now
* drop a log
* add back commented out log
* Don't block on send to client channel (see the Go sketch below this list)
* add geo bool
* only geo events
* use wrapper ip
* don't require team in geo mode
* add an endpoint and stats keeper for teams
* remove stats keeper
* start stats keeper
* wire it up
* change the shape of the response
* omit empty error
* omit empty on the stats as well
* enable logging on back pressure
* add jwt endpoint for testing
* support multiple event types
* Get Auth Setup
* jwt team id is a float, so turn it into an int (see the Go sketch below this list)
* logs
* add auth for stats endpoint
* remove tailscale and use autoTLS on public endpoints
* default to :443 for auto tls
* remove un-needed endpoints and handlers
* Use compression because... a lot of data (#9)
* add dockerfile and CI/CD (#10)
* add dockerfile and CI/CD
* Use ubuntu not alpine
couldn't build in alpine :'(
* Add MMDB download to Dockerfile (#11)
* Use clearer name for MMDB
* Don't connect to Kafka over SSL in dev
* Fix JWT token in example config
* Add postgres.url to example config
* Add expected scope
* Fix const syntax
* Put scope validation where claims are known
* Fix audience validation
* moves
* ignore livestream for ci
* main -> master
* move GA to root
* docker lint fix
* fix typo
* fixes for docker builds
* test docker build
* livestream build docker
* dang
* Update .github/workflows/livestream-docker-image.yml
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
* Update .github/workflows/livestream-docker-image.yml
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
* don't build posthog container when PR is pushed for rust or livestream
* Update .github/workflows/livestream-docker-image.yml
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
* add a lot of paths-ignore
* Update .github/workflows/livestream-docker-image.yml
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
* Dorny filters are handling most of what I was trying to do
* remove tailscale to speed up builds
* maybe?
* push container to github.com/posthog/posthog
* don't build container on PR
* remove more filters because dorny
---------
Co-authored-by: Brett Hoerner <brett@bretthoerner.com>
Co-authored-by: Zach Waterfield <zlwaterfield@gmail.com>
Co-authored-by: Frank Hamand <frankhamand@gmail.com>
Co-authored-by: Michael Matloka <michal@matloka.com>
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
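The "Don't block on send to client channel" item above refers to the standard Go non-blocking send idiom; a minimal sketch, assuming a buffered per-subscriber channel (the `event` type and names are illustrative, not the service's actual types):

```go
package main

import "fmt"

// event stands in for whatever the service fans out to subscribers;
// the field names here are illustrative.
type event struct {
	Token string
	Body  string
}

// broadcast sends ev to every subscriber without blocking: if a
// subscriber's buffered channel is full (a slow client), the event is
// dropped for that subscriber instead of stalling the producer loop.
func broadcast(ev event, subs []chan event) {
	for _, sub := range subs {
		select {
		case sub <- ev:
			// delivered
		default:
			// back pressure: drop rather than block the Kafka consumer
			fmt.Println("dropping event for slow subscriber")
		}
	}
}

func main() {
	fast := make(chan event, 1)
	slow := make(chan event) // unbuffered and never read: always "full"

	broadcast(event{Token: "team-token", Body: "{}"}, []chan event{fast, slow})
	fmt.Println("delivered to fast subscriber:", <-fast)
}
```

Dropping on a full channel trades completeness for keeping the producer responsive, which lines up with the later "enable logging on back pressure" item.

The "jwt team id is a float" item reflects how Go's encoding/json decodes claims: JSON numbers land in a map[string]interface{} (or jwt.MapClaims) as float64, so the team id needs an explicit conversion. A minimal sketch with an illustrative claim name:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A decoded JWT payload is just JSON; numeric claims come back as float64.
	payload := []byte(`{"team_id": 2, "aud": "livestream"}`)

	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		panic(err)
	}

	raw, ok := claims["team_id"].(float64) // NOT int: JSON numbers are float64
	if !ok {
		panic("team_id claim missing or not numeric")
	}
	teamID := int(raw) // convert explicitly before using it as a team id
	fmt.Println("team id:", teamID)
}
```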
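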
use new state deployment for posthog cloud
Once https://github.com/PostHog/charts/pull/1343 is merged, everything
will be in place to support this (uploading assets). This should make it
significantly easier to do fast rollbacks.
use new deployment trigger for temporal worker deployments
These trigger a new workflow in posthog/charts which creates a statefile
commit instead of deploying with manually set values from env vars.
The statefile commit then triggers a deploy - this means 100% of our
deployment state is codified, simplifying rollbacks and deploys.
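A rough sketch of the kind of cross-repo trigger described above, assuming a repository_dispatch-style GitHub API call against posthog/charts; the event type, payload fields, and env var names are illustrative, not the actual workflow contract:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Illustrative payload: the real event type and fields expected by the
	// charts workflow are not documented in this log.
	body, _ := json.Marshal(map[string]interface{}{
		"event_type": "temporal-worker-deploy",
		"client_payload": map[string]string{
			"image_tag": os.Getenv("IMAGE_TAG"),
		},
	})

	req, err := http.NewRequest(http.MethodPost,
		"https://api.github.com/repos/PostHog/charts/dispatches",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GITHUB_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("dispatch status:", resp.Status) // 204 No Content on success
}
```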
* ci: Run core backend tests with both HogQL and legacy insights
* Double the number of tests
* Fix env var setting
* Add some HOGQL_INSIGHTS_OVERRIDE overrides in tests
* Mark `Insight.last_refresh` as deprecated
* Fix bad merge
* Update query snapshots
* Update query snapshots
* Update test_team.py
* Actually remove legacy backend from matrix
* Update test_fetch_from_cache.py
* Use `update_cached_state` in `calculate_for_query_based_insight()`
* Clean up CI changes and a comment
* Fix `update_cached_state` typing
* Update test_fetch_from_cache.py
* Update test_insight_cache.py
* Update test_insight_cache.py
* Clarify `generate_insight_cache_key` as legacy function
---------
Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
* Revert "revert: "chore(clickhouse): Capture final SQL in Sentry errors" (#21479)"
This reverts commit 15627818f6.
* Only upgrade `clickhouse-driver` to 0.2.6
* Add "Run migrations for this PR"