0
0
mirror of https://github.com/PostHog/posthog.git synced 2024-11-24 18:07:17 +01:00
posthog/.github/workflows/e2e.yml

124 lines
5.1 KiB
YAML
Raw Normal View History

name: E2E
on:
- pull_request
jobs:
cypress:
name: Cypress tests
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
runs-on: ubuntu-18.04
strategy:
# when one test fails, DO NOT cancel the other
# containers, because this will kill Cypress processes
# leaving the Dashboard hanging ...
# https://github.com/cypress-io/github-action/issues/48
fail-fast: false
matrix:
# run 3 copies of the current job in parallel
containers: [1, 2, 3, 4]
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
services:
postgres:
image: postgres:12
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: postgres
ports:
# Maps port 5432 on service container to the host
# Needed because `postgres` host is not discoverable for some reason
- 5432:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
redis:
image: redis
ports:
# Maps port 6379 on service container to the host
# Needed because `redis` host is not discoverable for some reason
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: 3.7
- uses: actions/cache@v1
name: Cache pip dependencies
id: pip-cache
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install python dependencies
run: |
python -m pip install --upgrade pip
python -m pip install $(grep -ivE "psycopg2" requirements.txt) --no-cache-dir --compile
python -m pip install psycopg2-binary --no-cache-dir --compile
- uses: actions/setup-node@v1
with:
node-version: 12
- name: Get yarn cache directory path
id: yarn-dep-cache-dir-path
run: echo "::set-output name=dir::$(yarn cache dir)"
- uses: actions/cache@v1
name: Setup Yarn dep cache
id: yarn-dep-cache
with:
path: ${{ steps.yarn-dep-cache-dir-path.outputs.dir }}
key: ${{ runner.os }}-yarn-dep-${{ hashFiles('**/yarn.lock') }}
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
restore-keys: |
${{ runner.os }}-yarn-dep-
- name: Yarn install deps
run: |
yarn install --frozen-lockfile
- uses: actions/cache@v1
name: Setup Yarn build cache
id: yarn-build-cache
with:
path: frontend/dist
key: ${{ runner.os }}-yarn-build-${{ hashFiles('frontend/src/') }}
restore-keys: |
${{ runner.os }}-yarn-build-
- name: Yarn build
run: |
yarn build
if: steps.yarn-build-cache.outputs.cache-hit != 'true'
- name: Boot PostHog
env:
SECRET_KEY: '6b01eee4f945ca25045b5aab440b953461faf08693a9abbf1166dc7c6b9772da' # unsafe - for testing only
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
REDIS_URL: 'redis://localhost'
DATABASE_URL: 'postgres://postgres:postgres@localhost:${{ job.services.postgres.ports[5432] }}/postgres'
DISABLE_SECURE_SSL_REDIRECT: 1
SECURE_COOKIES: 0
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
OPT_OUT_CAPTURE: 1
run: |
python manage.py collectstatic --noinput
mkdir -p cypress/screenshots
./bin/docker-migrate
./bin/docker-worker &
./bin/docker-server &
- name: Cypress run
uses: cypress-io/github-action@v1
with:
config-file: cypress.json
record: true
parallel: true
group: 'PostHog Frontend'
env:
# pass the Dashboard record key as an environment variable
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
# Recommended: pass the GitHub token lets this action correctly
# determine the unique run id necessary to re-run the checks
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Cache yarn builds to speed up end to end testing (#927) * Cache yarn builds to speed up end to end testing * refine configs a bit * map port to localhost for redis and postgres * background posthog so we can proceed to next step * test debug=1 * debug back to false * block on posthog boot for debug * back to single boot posthog step * check response from login url * see if this is ssl redirect * more debugging around wait * print out redirect to see where it is going * print redirect location from header * this is so tedius * hit setup_admin url * ok, so we know it's 500-ing let's see what response is * reflect production docker file more closely for dep building * posthog is up, let's see what it is returning that is causing failures * Save screenshots as artifacts * rename artifact and use zip * demo is missing? * only upload artifacts if cypress fails * use the path for screenshots for artifacts * clean up wait script and call it done for this PR * correctly hash requirements for pip cache * cache build and dep separately for yarn * change to test the cache * use cypress suggested runner for actions * use parallel execution for cypress * skip python caching for now * not going to use parallel now because premium feature of cypress * do not attempt to archive video artifacts * re-enable pip cache 🤞 * bust the python cache and see if we can't get manage working * test python cache * it's just caching the pip cache... * test turning DEBUG false * reenable debug mode for now * collectstatic after yarn build * run collectstatic with noinput
2020-06-06 19:13:09 +02:00
- name: Archive test screenshots
uses: actions/upload-artifact@v1
with:
name: screenshots
path: cypress/screenshots
if: ${{ failure() }}