* Allow overriding kafka host for clickhouse via KAFKA_URL_CLICKHOUSE env var
This is needed when using an external ClickHouse which doesn't have the
same access to Kafka as in-cluster traffic does.
Note that long-term we may also need to provide better auth mechanisms here.
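For illustration only (not the actual PostHog settings code, and the variable
was renamed in the next commit), the override amounts to something like this:
the ClickHouse-facing Kafka host falls back to the regular broker URL when the
env var is unset.
```python
import os

# Hypothetical settings sketch: the Kafka broker address used by ClickHouse's
# Kafka engine tables can be overridden for external ClickHouse deployments,
# falling back to the in-cluster broker otherwise. Names are illustrative.
KAFKA_URL = os.getenv("KAFKA_URL", "kafka://kafka:9092")
KAFKA_URL_FOR_CLICKHOUSE = os.getenv("KAFKA_URL_CLICKHOUSE", KAFKA_URL)
```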
* Rename env variable
* Add a comment to keep topics in sync
* Clean up code relating to table engines
* Add snapshots for table creation queries
* Remove optional import
* Add snapshot tests for CLICKHOUSE_REPLICATION schemas
Note that these are out of sync with cloud in most cases
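As a rough illustration of what such a schema snapshot test looks like (the
test name, fixture, and query list are assumptions, not the actual test file),
the idea is simply to diff the generated CREATE TABLE SQL against a stored
snapshot so unintended schema drift shows up in review:
```python
# Hypothetical snapshot test using a pytest snapshot fixture (e.g. syrupy's
# `snapshot`). CREATE_TABLE_QUERIES stands in for the real list of
# table-creation SQL strings.
CREATE_TABLE_QUERIES = [
    "CREATE TABLE example (id UInt64) ENGINE = MergeTree ORDER BY id",
]


def test_create_table_queries_are_stable(snapshot):
    # Any change to the generated DDL shows up as a snapshot diff in review.
    for create_sql in CREATE_TABLE_QUERIES:
        assert create_sql == snapshot
```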
* Add another warning comment
* Improve naming
I was having issues running the clickhouse/ee tests: they would just hang.
ClickHouse appeared to be up and I could perform queries with
`clickhouse-client`, yet the tests hung on querying. On closer inspection,
each of the setup queries was hanging for 6 seconds, failing to find
ZooKeeper, and then setup carried on regardless.
Continuing is pretty useless at that point, so it seems more sensible to
raise instead (see the sketch below).
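A rough sketch of that "raise instead of limping on" behaviour (the helper
name and client interface are assumptions, not the actual test setup code):
```python
# Hypothetical helper: run the ClickHouse setup queries and fail fast if one
# of them errors (e.g. replicated DDL that cannot reach ZooKeeper), instead of
# waiting out the timeout on every query and continuing with a broken setup.
def run_setup_queries(client, queries):
    for query in queries:
        try:
            client.execute(query)
        except Exception as err:
            raise RuntimeError(f"ClickHouse setup query failed, aborting setup: {query[:80]}") from err
```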
* Revert "Revert "Add is_deleted column to person_distinct_id (#5151)" (#5193)"
This reverts commit 401268bdba.
* A tweak for docker-compose builds
Co-authored-by: James Greenhill <fuziontech@gmail.com>
* Update PERSONS_ACTIVE_USER_SQL query
* Remove dead import
* Update lifecycle queries
* Update BREAKDOWN_ACTIVE_USER_INNER_SQL to use new persons query
* Update STICKINESS_SQL
* Update STICKINESS_PEOPLE_SQL
* Update STICKINESS_ACTIONS_SQL
* Update paths query
* Update events query
* Update CALCULATE_COHORT_PEOPLE_SQL
* Update retention queries
* Update TOP_PERSON_PROPS_ARRAY_OF_KEY_SQL
* Update EVENT_JOIN_PERSON_SQL
* Update GET_PERSON_ID_BY_ENTITY_COUNT_SQL
* Remove remaining references to old get latest person query
* Update GET_DISTINCT_IDS_BY_PROPERTY_SQL
* Fix code style issue
* Update table engine for person_distinct_id table
* don't select team_id
* Make person deletion work
* Use ReplacingMergeTree over CollapsingMergeTree with is_deleted
Replacing an existing engine is hard, let's not do it
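For context, a hedged sketch of what a ReplacingMergeTree-backed
`person_distinct_id` table with an `is_deleted` flag can look like; the column
list and version column here are illustrative rather than the exact PostHog
schema:
```python
# Hypothetical DDL sketch: a deletion is written as a newer row with
# is_deleted = 1, and ReplacingMergeTree keeps the latest version per key.
PERSON_DISTINCT_ID_TABLE_SQL = """
CREATE TABLE person_distinct_id
(
    distinct_id VARCHAR,
    person_id UUID,
    team_id Int64,
    is_deleted Int8 DEFAULT 0,
    _timestamp DateTime
)
ENGINE = ReplacingMergeTree(_timestamp)
ORDER BY (team_id, distinct_id, person_id)
"""
```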
* Update query in test
* add migration
* set database on materialized views
* Update plugin server to 1.1.6
Co-authored-by: James Greenhill <fuziontech@gmail.com>
Co-authored-by: posthog-bot <posthog-bot@users.noreply.github.com>
* Make DDLs more friendly towards running on a cluster
* Use primary CLICKHOUSE host for migrations and DDL
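Roughly, "cluster-friendly" here means issuing DDL with ON CLUSTER so it runs
on every node, while migrations themselves connect to a single primary
ClickHouse host. A hedged example (the cluster name matches the surrounding
commits, but the table, columns, and broker settings are illustrative):
```python
# Hypothetical f-string DDL: create the Kafka-engine table on every node of
# the cluster rather than only on the node the migration happens to reach.
CLICKHOUSE_CLUSTER = "posthog"

KAFKA_PLUGIN_LOG_ENTRIES_TABLE_SQL = f"""
CREATE TABLE IF NOT EXISTS kafka_plugin_log_entries ON CLUSTER {CLICKHOUSE_CLUSTER}
(
    id UUID,
    team_id Int64,
    message VARCHAR,
    timestamp DateTime64(6, 'UTC')
)
ENGINE = Kafka('kafka:9092', 'plugin_log_entries', 'clickhouse-plugin-logs', 'JSONEachRow')
"""
```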
* loose ends on person kafka create
* posthog -> cluster typo
* add cluster to KAFKA create for plugin logs
* Feed the type monster
* clusterfy local clickhouse
* test docker-compose backed github action
* run just clickhouse and postgres from docker-compose
* move option to between `up` and `<services>`
* posthog all the things
* suggest tests run on cluster
* posthog cluster for ci
* use deploy path for docker-compose
* fix for a clickhouse bug 🐛
* complete CH bug fixes
* Use port 5439 in the github actions pg configs
* remove CLICKHOUSE_DATABASE (handled automatically)
* update DATABASE_URL for code quality checks
* Missed a few DDLs on Person
* 5439 -> 5432 to please the people
* cleanup persons and use f strings <3 f strings
* remove auto parens
* Update requirements to use our fork of infi.clickhouse_orm
* fix person.py formatting
* Include boilerplate macros for a cluster
* Save Organization.available_features as a DB column
* `update_available_features()` before an organization is created too
* Run the task half past the hour
* Adjust tests and fix available_features sync on start
* Remove redundant null=False
* Fix `Organization.update_available_features()`
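The gist of the `available_features` commits, as a hedged Django sketch
(field, task, and schedule names are assumptions, not the actual PostHog
code): cache the licensed feature list on the organization row, recompute it
when an organization is created or saved, and refresh it periodically at half
past each hour.
```python
from celery.schedules import crontab
from django.contrib.postgres.fields import ArrayField
from django.db import models


class Organization(models.Model):
    # Cached list of licensed features, persisted as a DB column so reads
    # don't need to consult the license on every request.
    available_features = ArrayField(models.CharField(max_length=64), blank=True, default=list)

    def update_available_features(self):
        # Recompute from the current license/plan and store on the instance;
        # the real logic lives elsewhere, this is just a placeholder.
        self.available_features = []
        return self.available_features


# Periodic refresh half past every hour (illustrative beat entry; the task
# path is hypothetical).
CELERY_BEAT_SCHEDULE = {
    "sync-available-features": {
        "task": "posthog.tasks.sync_all_organization_available_features",
        "schedule": crontab(minute=30),
    },
}
```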
* Run Cloud tests in respect to `posthog-cloud` `4426-fix` branch
* Revert "Run Cloud tests in respect to `posthog-cloud` `4426-fix` branch"
This reverts commit 421e8541b3.