Problem
Stacked on top of #12067 so the commit history is weird :/
Changes
adds the DB wiring needed to attach text or insights to dashboard tiles
How did you test this code?
developer tests and running the site locally:
✅ move insight to dashboard
✅ duplicate insight from insight view
✅ duplicate insight from card list view
✅ duplicate insight from list view
✅ duplicate insight from dashboard
✅ remove insight from dashboard
✅ add insight to dashboard
✅ delete dashboard
✅ duplicate dashboard
✅ set card color
🤔 set card layout - updating the layout starts a refresh loop for dashboards. I think these changes make it more obvious, but this is the case on master too. -> It's fixed (or at least worked around) in #12132
✅ updating insight data updates the dashboard view for that data
✅ rename insight from dashboard
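The "DB wiring to attach text or insights to tiles" can be sketched roughly like this, assuming a hypothetical `DashboardTile` that holds either free text or an insight reference (the names and fields here are illustrative, not the actual model):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DashboardTile:
    """Hypothetical tile: holds either free text or an insight id, never both."""

    text: Optional[str] = None
    insight_id: Optional[int] = None

    def __post_init__(self) -> None:
        # The wiring described above: a tile must point at exactly one of the two.
        if (self.text is None) == (self.insight_id is None):
            raise ValueError("a tile must have either text or an insight, not both")


# e.g. one text tile and one insight tile on the same dashboard
tiles = [DashboardTile(text="Welcome!"), DashboardTile(insight_id=42)]
```

Keeping the either/or constraint in one place makes operations like "move insight to dashboard" and "remove insight from dashboard" a matter of creating or deleting tiles.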
* launch celery with debug logging
* autoimport a single task which decides what type of export to run
* still need to manually inject root folder so tests can clear up
* fix mock
* sketch the interaction
* correct field type
* explode a filter to steps of day length
* write to object storage maybe
* very shonky storing of gzipped files
* doesn't need an export type
* mark export type choices as deprecated
* order methods
* stage to temporary file
* can manually export the uncompressed content
* shonkily export as a csv
* wip
* with test for requesting the export
* with polling test for the API
* put existing broken CSV download back before implementing UI change
* open api change
* even more waffle
* less passive waffle
* sometimes less specific is more correct
* refactor looping
* okay snapshots
* remove unused exception variable
* fix mocks
* Update snapshots
* Update snapshots
* lift storage location to the exported asset model
* split the export tasks
* improve the temp file usage in csv exporter
* delete the test files we're creating
* add a commit to try and trigger github actions
Co-authored-by: pauldambra <pauldambra@users.noreply.github.com>
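The export commits above ("explode a filter to steps of day length", "stage to temporary file", "very shonky storing of gzipped files") roughly describe a pipeline: split the requested date range into day-length windows, render rows to CSV, gzip the result, and hand the bytes to object storage. A sketch under those assumptions; the helper names are made up:

```python
import csv
import gzip
import io
from datetime import date, timedelta
from typing import Iterable, List, Tuple


def explode_to_days(start: date, end: date) -> List[Tuple[date, date]]:
    """Split [start, end) into day-length (day, next_day) windows."""
    windows = []
    cursor = start
    while cursor < end:
        windows.append((cursor, cursor + timedelta(days=1)))
        cursor += timedelta(days=1)
    return windows


def export_rows_as_gzipped_csv(rows: Iterable[dict], fieldnames: List[str]) -> bytes:
    """Render rows to CSV and gzip the result; the bytes would go to object storage."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return gzip.compress(buffer.getvalue().encode("utf-8"))


windows = explode_to_days(date(2022, 1, 1), date(2022, 1, 4))
payload = export_rows_as_gzipped_csv(
    [{"event": "pageview", "count": 3}], fieldnames=["event", "count"]
)
```

Day-length windows keep each query small, and polling the API (as tested above) lets the client wait for the staged file to finish.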
* refactor(ingestion): establish setup for json consumption from kafka into clickhouse [nuke protobuf pt. 1]
* address review
* fix kafka table name across the board
* Update posthog/async_migrations/test/test_0004_replicated_schema.py
* run checks
* feat(persons-on-events): add required person and group columns to events table
* rename
* update snapshots
* address review
* Revert "update snapshots"
This reverts commit 63d7126e08.
* address review
* update snapshots
* update more snapshots
* use runpython
* update schemas
* update more queries
* some improvements :D
* fix naming
* fix breakdown prop name
* update snapshot
* fix naming
* fix ambiguous test
* fix queries
* last bits
* fix typo to retrigger tests
* also handle kafka and mv tables in migration
* update snapshots
* drop tables if exists
Co-authored-by: eric <eeoneric@gmail.com>
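"also handle kafka and mv tables in migration" and "drop tables if exists" suggest idempotent DDL along these lines; a sketch with made-up table names, not the actual migration:

```python
from typing import List


def drop_table_statements(base_table: str) -> List[str]:
    """Build idempotent DROP statements for a table plus its Kafka and
    materialized-view companions, as the migration commits above describe.
    The naming convention here is an assumption for illustration."""
    return [
        f"DROP TABLE IF EXISTS kafka_{base_table}",
        f"DROP TABLE IF EXISTS {base_table}_mv",
        f"DROP TABLE IF EXISTS {base_table}",
    ]


statements = drop_table_statements("events")
```

`IF EXISTS` makes the migration safe to re-run on clusters where a previous attempt already dropped some of the tables.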
* Check async migrations instead of CLICKHOUSE_REPLICATION for mat columns
* Update a comment
* Default for CLICKHOUSE_REPLICATION
* add replication file
* Assert is replicated in tests
* Remove DROP TABLE query from cohortpeople migration
* Update snapshots
* Ignore migration in typechecker
* Truncate right table
* Add KAFKA_COLUMNS to distributed tables
* Make CLICKHOUSE_REPLICATION default to True
* Update some insert statements
* Create distributed tables during tests
* Delete from sharded_events
* Update test_migrations_not_required.py
* Improve 0002_events_sample_by is_required
1. SHOW CREATE TABLE output is truncated if the table has tens of materialized columns, reasonably causing failures
2. We need to handle CLICKHOUSE_REPLICATED setups
* Update test_schema to handle CLICKHOUSE_REPLICATED, better test naming
* Fix issue with materialized columns
Note: Should make sure that these tests have coverage both ways
* Update test for recordings TTL
* Reorder table creation
* Correct schema for materialized columns on distributed tables
* Do correct setup in test_columns
* Lazily decide table to delete data from
* Make test_columns resilient to CLICKHOUSE_REPLICATION
* Make inserts resilient to CLICKHOUSE_REPLICATION
* Reset CLICKHOUSE_REPLICATION
* Create distributed tables conditionally
* Update snapshots, tests
* Fixup conftest
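"Lazily decide table to delete data from" and "Delete from sharded_events" hint at logic like the following; a sketch, assuming a boolean `CLICKHOUSE_REPLICATION` setting and these table names:

```python
def table_to_delete_from(clickhouse_replication: bool) -> str:
    """On replicated setups the data lives in the sharded table behind the
    distributed `events` table, so deletes must target it directly."""
    return "sharded_events" if clickhouse_replication else "events"


# the delete statement then interpolates the right table for the deployment
delete_sql = f"ALTER TABLE {table_to_delete_from(True)} DELETE WHERE team_id = %(team_id)s"
```

Deciding the table lazily (at query time, from the current setting) rather than at import time keeps tests that flip `CLICKHOUSE_REPLICATION` working.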
* Remove event admin
* Move posthog/tasks/test/test_org_usage_report.py clickhouse version inline
* Remove postgres-specific code from org usage report
* Kill dead on_perform method
* Remove dead EventSerializer
* Remove a dead import
* Remove a dead command
* Clean up test, don't create a model
* Remove dead code
* Clean up test_element
* Clean up test event code
* Remove a dead function
* Clean up dead imports
* Remove dead code
* Code style cleanup
* Fix foss test
* Simplify fn
* Org usage fixup #3
* version insights
* version and lock update
* make sure all tests work
* restore exception
* fix test
* fix test
* add specific id
* update plugin server test utils
* cleanup
* match filtering
* use timestamp comparison
* make tests work
* one more test field
* fix more tests
* more cleanup
* lock frontend when updating and restore refresh
* pass undefined
* add timestamp to background update
* use incrementer
* add field
* snapshot
* types
* more cleanup
* update tests
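The versioning commits above ("use incrementer", "lock frontend when updating", "add timestamp to background update") suggest optimistic concurrency for background refreshes: each user save bumps a version, and a background update is dropped if the insight changed underneath it. A minimal sketch under that assumption:

```python
from dataclasses import dataclass


@dataclass
class Insight:
    name: str
    version: int = 0

    def save_user_edit(self, name: str) -> None:
        # Every user-initiated save bumps the version ("use incrementer").
        self.name = name
        self.version += 1


def apply_background_update(insight: Insight, new_name: str, seen_version: int) -> bool:
    """Apply a background update only if nobody edited the insight since the
    background task read it; otherwise drop it to avoid clobbering user edits."""
    if insight.version != seen_version:
        return False
    insight.name = new_name
    return True


insight = Insight(name="Daily users")
seen = insight.version           # background task reads version 0
insight.save_user_edit("DAU")    # user edits in the meantime -> version 1
stale = apply_background_update(insight, "Daily active users", seen)
```

The same compare-before-apply idea is one way to break the dashboard refresh loop noted earlier: a refresh result that arrives for an outdated version is simply discarded.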
* remove crumbs
* use expressions
* make nullable
* batch delete
* fill null for static cohorts
* batch_delete
* typing
* remove queryset function
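"batch delete" / "batch_delete" above presumably means removing a large set of rows in fixed-size chunks rather than in one oversized query; a generic sketch (the chunk size and the delete callback are illustrative):

```python
from typing import Callable, List, Sequence


def batch_delete(
    ids: Sequence[int],
    delete_chunk: Callable[[List[int]], None],
    batch_size: int = 10_000,
) -> int:
    """Delete ids in fixed-size batches to keep each delete query small."""
    deleted = 0
    for start in range(0, len(ids), batch_size):
        chunk = list(ids[start : start + batch_size])
        delete_chunk(chunk)
        deleted += len(chunk)
    return deleted


# toy usage: a set standing in for the table being deleted from
store = {1, 2, 3, 4, 5}
count = batch_delete([1, 2, 3, 4, 5], lambda chunk: store.difference_update(chunk), batch_size=2)
```

Chunking bounds the size of each delete statement, which matters for large static cohorts.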