# posthog/.github/workflows/e2e.yml
name: E2E
on:
    - pull_request
jobs:
    cypress:
        name: Cypress E2E tests
        runs-on: ubuntu-18.04
        if: ${{ github.actor != 'posthog-contributions-bot[bot]' }}
        strategy:
            # when one test fails, DO NOT cancel the other
            # containers, because this will kill Cypress processes
            # leaving the Dashboard hanging ...
            # https://github.com/cypress-io/github-action/issues/48
            fail-fast: false
            matrix:
                # run 7 copies of the current job in parallel
                containers: [1, 2, 3, 4, 5, 6, 7]
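                # the Cypress Dashboard (record/parallel in the Cypress run
                # step below) load-balances spec files across these containers,
                # so the suite finishes in roughly 1/7 of the serial time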
        services:
            postgres:
                image: postgres:12
                env:
                    POSTGRES_USER: postgres
                    POSTGRES_PASSWORD: postgres
                    POSTGRES_DB: postgres
                ports:
                    # Map port 5432 on the service container to the host;
                    # needed because the job runs directly on the runner, where
                    # the `postgres` service hostname is not resolvable
                    - 5432:5432
                options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
            redis:
                image: redis
                ports:
                    # Map port 6379 on the service container to the host;
                    # needed for the same reason as the `postgres` port above
                    - 6379:6379
                options: >-
                    --health-cmd "redis-cli ping"
                    --health-interval 10s
                    --health-timeout 5s
                    --health-retries 5
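        # the runner waits for both service containers to report healthy
        # before starting the steps, so Postgres and Redis are ready by the
        # time PostHog boots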
        steps:
            - name: Checkout
              uses: actions/checkout@v1
            - name: Set up Python 3.8
              uses: actions/setup-python@v2
              with:
                  python-version: 3.8
            - uses: syphar/restore-virtualenv@v1
              id: cache-virtualenv
              with:
                  requirement_files: requirements.txt # this is optional
            - uses: syphar/restore-pip-download-cache@v1
              if: steps.cache-virtualenv.outputs.cache-hit != 'true'
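            # two-level Python caching: on a virtualenv cache hit the install
            # step below is skipped entirely; on a miss, pip's download cache
            # is restored so packages are at least not re-downloaded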
            - name: Install python dependencies
              if: steps.cache-virtualenv.outputs.cache-hit != 'true'
              run: |
                  python -m pip install --upgrade pip
                  python -m pip install $(grep -ivE "psycopg2" requirements.txt | cut -d'#' -f1) --no-cache-dir --compile
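                  # psycopg2 is filtered out of requirements.txt above because
                  # building it from source needs libpq headers; the prebuilt
                  # psycopg2-binary wheel below provides the same module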
                  python -m pip install psycopg2-binary --no-cache-dir --compile
            - uses: actions/setup-node@v1
              with:
                  node-version: 14
            - uses: actions/cache@v2
              id: node-modules-cache
              with:
                  path: |
                      **/node_modules
                      ~/.cache/Cypress
                  key: ${{ runner.os }}-modules-cypress-${{ hashFiles('**/yarn.lock') }}
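                  # an exact key hit restores node_modules and the Cypress
                  # binary as-is; otherwise the prefix below restores the newest
                  # previous cache and `yarn install` only fetches what changed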
                  restore-keys: |
                      ${{ runner.os }}-modules-cypress-
            - name: Yarn install deps
              if: steps.node-modules-cache.outputs.cache-hit != 'true'
              run: |
                  yarn install --frozen-lockfile
                  yarn add cypress@6.7.0 cypress-terminal-report@2.1.0 @cypress/react@4.16.4 @cypress/webpack-preprocessor@5.7.0
            - name: test
              run: |
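                  # sanity check that node_modules was restored from cache or installed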
                  ls node_modules/
            - uses: actions/cache@v1
              name: Setup Yarn build cache
              id: yarn-build-cache
              with:
                  path: frontend/dist
                  key: ${{ runner.os }}-yarn-build-${{ hashFiles('frontend/src/') }}
                  restore-keys: |
                      ${{ runner.os }}-yarn-build-
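            # when frontend/dist was restored above (frontend/src unchanged),
            # the build step below is skipped via its cache-hit condition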
            - name: Yarn build
              run: |
                  yarn build
              if: steps.yarn-build-cache.outputs.cache-hit != 'true'
            - name: Boot PostHog
              env:
                  SECRET_KEY: '6b01eee4f945ca25045b5aab440b953461faf08693a9abbf1166dc7c6b9772da' # unsafe - for testing only
                  REDIS_URL: 'redis://localhost'
                  DATABASE_URL: 'postgres://postgres:postgres@localhost:${{ job.services.postgres.ports[5432] }}/postgres'
                  DISABLE_SECURE_SSL_REDIRECT: 1
                  SECURE_COOKIES: 0
                  OPT_OUT_CAPTURE: 1
                  SELF_CAPTURE: 0
                  E2E_TESTING: 1
              run: |
                  python manage.py collectstatic --noinput &
                  ./bin/docker-migrate
                  python manage.py setup_dev
                  mkdir -p cypress/screenshots
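                  # the worker and server are backgrounded so the job can move
                  # on to the Cypress step while PostHog finishes booting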
                  ./bin/docker-worker &
                  ./bin/docker-server &
            - name: Cypress run
              uses: cypress-io/github-action@v2
              with:
                  config-file: cypress.e2e.json
                  record: true
                  parallel: true
                  group: 'PostHog Frontend'
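                  # record + parallel hand spec assignment over to the Cypress
                  # Dashboard; `group` collects all matrix containers under a
                  # single Dashboard run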
              env:
                  # pass the Dashboard record key as an environment variable
                  CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
                  # Recommended: passing the GitHub token lets this action
                  # correctly determine the unique run id needed to re-run checks
                  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
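            # Cypress writes a screenshot for every failed test; upload them
            # as a build artifact only when the run fails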
            - name: Archive test screenshots
              uses: actions/upload-artifact@v1
              with:
                  name: screenshots
                  path: cypress/screenshots
              if: ${{ failure() }}