* api
* add performed_rollback
* add celery task and tests
* rollback test
* remove first and last
* add sentry stuff
* basic auto rollback UI
* fix errors
* testable
* add errors rollback ui
* clean up sentry keys
* clean up some ui stuff
* add some sentry context
* update ui
* fix celery
* Update posthog/api/feature_flag.py
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
* add sentry instructions when not enabled
* add sentry context
* merge migration
* remove unnecessary field right now and update UI to 7 day trailing average
* fix migration
* fix frontend type
* activity
* reset migration
* remove default
* update test
* add feature flag
* add view for conditions and make sure insight loads
* Update snapshots
Co-authored-by: Neil Kakkar <neilkakkar@gmail.com>
Co-authored-by: Li Yi Yu <li@posthog.com>
Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
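The "7 day trailing average" the UI commit above switches to could be computed along these lines. This is a minimal sketch only; that it runs over daily error counts is an assumption, and the function name is not PostHog's.

```python
from collections import deque


def trailing_average(values, window=7):
    """Return the trailing average over the last `window` values
    for each point in `values` (the window is shorter at the start,
    before `window` points have been seen)."""
    buf = deque(maxlen=window)  # drops the oldest value automatically
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out
```

With daily error counts as input, each output point is the average of up to the last 7 days, which smooths single-day spikes before any rollback decision.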
Problem
Stacked on top of #12067 so the commit history is weird :/
Changes
adds DB wiring to add text or insights to dashboard tiles
How did you test this code?
developer tests and running the site locally
✅ move insight to dashboard
✅ duplicate insight from insight view
✅ duplicate insight from card list view
✅ duplicate insight from list view
✅ duplicate insight from dashboard
✅ remove insight from dashboard
✅ add insight to dashboard
✅ delete dashboard
✅ duplicate dashboard
✅ set card color
🤔 set card layout - updating layout starts refresh loop for dashboards
I think these changes make it more obvious but this is the case in master too. -> It's fixed (or at least worked-around) in #12132
✅ update insight data updates dashboard view for that data
✅ rename insight from dashboard
* launch celery with debug logging
* autoimport a single task which decides what type of export to run
* still need to manually inject root folder so tests can clear up
* fix mock
* sketch the interaction
* correct field type
* explode a filter to steps of day length
* write to object storage maybe
* very shonky storing of gzipped files
* doesn't need an export type
* mark export type choices as deprecated
* order methods
* stage to temporary file
* can manually export the uncompressed content
* shonkily export as a csv
* wip
* with test for requesting the export
* with polling test for the API
* put existing broken CSV download back before implementing UI change
* open api change
* even more waffle
* less passive waffle
* sometimes less specific is more correct
* refactor looping
* okay snapshots
* remove unused exception variable
* fix mocks
* Update snapshots
* Update snapshots
* lift storage location to the exported asset model
* split the export tasks
* improve the temp file usage in csv exporter
* delete the test files we're creating
* add a commit to try and trigger github actions
Co-authored-by: pauldambra <pauldambra@users.noreply.github.com>
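The "explode a filter to steps of day length" commit above splits an export's date range into one window per day so each day can be fetched and written as its own chunk. A sketch of the idea (function and signature are assumptions, not PostHog's API):

```python
from datetime import date, timedelta


def explode_to_days(date_from: date, date_to: date):
    """Split an inclusive [date_from, date_to] range into
    half-open single-day windows, one per day, so each day's
    events can be exported as a separate chunk."""
    day = date_from
    while day <= date_to:
        next_day = day + timedelta(days=1)
        yield day, next_day  # half-open window [day, next_day)
        day = next_day
```

Exporting day-by-day keeps each query and each staged file bounded, however long the overall range is.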
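The "stage to temporary file" and "improve the temp file usage in csv exporter" commits describe writing the CSV to disk before handing it to storage. A sketch under those assumptions; `upload` here is a stand-in callable, not the real object-storage client:

```python
import csv
import tempfile


def export_rows_as_csv(rows, fieldnames, upload):
    """Write rows to a temporary file on disk, then pass the staged
    content to `upload` (a stand-in for the object-storage write).
    Staging to a temp file keeps large exports out of memory, and
    the file is deleted when the context manager exits."""
    with tempfile.NamedTemporaryFile("w+", newline="", suffix=".csv") as staged:
        writer = csv.DictWriter(staged, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)
        staged.flush()
        staged.seek(0)  # rewind so the staged content can be read back
        return upload(staged.read())
```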
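The "polling test for the API" commit implies the export endpoint is asynchronous: the client requests an export, then polls until the asset has content. A client-side sketch of that loop; `fetch_status` and the `has_content` key are illustrative stand-ins, not the documented API:

```python
import time


def poll_export(fetch_status, timeout=10.0, interval=0.5):
    """Poll an export's status (fetch_status is a stand-in for the
    API call) until it reports content is ready, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("has_content"):
            return status
        time.sleep(interval)
    raise TimeoutError("export did not finish in time")
```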
* refactor(ingestion): establish setup for json consumption from kafka into clickhouse [nuke protobuf pt. 1]
* address review
* fix kafka table name across the board
* Update posthog/async_migrations/test/test_0004_replicated_schema.py
* run checks
* feat(persons-on-events): add required person and group columns to events table
* rename
* update snapshots
* address review
* Revert "update snapshots"
This reverts commit 63d7126e08.
* address review
* update snapshots
* update more snapshots
* use runpython
* update schemas
* update more queries
* some improvements :D
* fix naming
* fix breakdown prop name
* update snapshot
* fix naming
* fix ambiguous test
* fix queries
* last bits
* fix typo to retrigger tests
* also handle kafka and mv tables in migration
* update snapshots
* drop tables if exists
Co-authored-by: eric <eeoneric@gmail.com>
* Check async migrations instead of CLICKHOUSE_REPLICATION for mat columns
* Update a comment
* Default for CLICKHOUSE_REPLICATION
* add replication file
* Assert is replicated in tests
* Remove DROP TABLE query from cohortpeople migration
* Update snapshots
* Ignore migration in typechecker
* Truncate right table
* Add KAFKA_COLUMNS to distributed tables
* Make CLICKHOUSE_REPLICATION default to True
* Update some insert statements
* Create distributed tables during tests
* Delete from sharded_events
* Update test_migrations_not_required.py
* Improve 0002_events_sample_by is_required
1. SHOW CREATE TABLE output is truncated if the table has tens of materialized
columns, which can cause failures
2. We need to handle CLICKHOUSE_REPLICATED setups
* Update test_schema to handle CLICKHOUSE_REPLICATED, better test naming
* Fix issue with materialized columns
Note: Should make sure that these tests have coverage both ways
* Update test for recordings TTL
* Reorder table creation
* Correct schema for materialized columns on distributed tables
* Do correct setup in test_columns
* Lazily decide table to delete data from
* Make test_columns resilient to CLICKHOUSE_REPLICATION
* Make inserts resilient to CLICKHOUSE_REPLICATION
* Reset CLICKHOUSE_REPLICATION
* Create distributed tables conditionally
* Update snapshots, tests
* Fixup conftest
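The "Improve 0002_events_sample_by is_required" commit above notes that parsing SHOW CREATE TABLE is fragile (its output truncates on tables with many materialized columns) and that both replicated and non-replicated engines must be handled. One way to express that check is against `engine_full` from ClickHouse's `system.tables`, normalizing away the `Replicated` prefix. This is a sketch of the idea only; the clause tested for is a placeholder, not PostHog's actual migration condition:

```python
def is_required(engine_full: str) -> bool:
    """Decide whether the sample-by migration still needs to run,
    based on system.tables.engine_full rather than SHOW CREATE TABLE
    (whose output can be truncated on tables with many materialized
    columns)."""
    # Strip the Replicated prefix so CLICKHOUSE_REPLICATION setups
    # are checked the same way as non-replicated ones.
    engine = engine_full.removeprefix("Replicated")
    if not engine.startswith("ReplacingMergeTree"):
        return False  # not the events table engine we expect
    # Placeholder target clause: migration is required while the
    # desired SAMPLE BY is absent.
    return "SAMPLE BY" not in engine
```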