From https://medium.com/@taylorhughes/three-quick-tips-from-two-years-with-celery-c05ff9d7f9eb
> By default, preforking Celery workers distribute tasks to their worker processes as soon as they are received, regardless of whether the process is currently busy with other tasks.
> If you have a set of tasks that take varying amounts of time to complete — either deliberately or due to unpredictable network conditions, etc. — this will cause unexpected delays in total execution time for tasks in the queue.
This is exactly our situation: our tasks take varying amounts of time, so the change should "load balance" tasks more evenly across worker processes.
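The fix the linked article recommends is Celery's `-O fair` scheduling option, which stops the prefork pool from pre-assigning tasks to busy child processes. A sketch of the worker invocation, where the `posthog` app module name and concurrency value are assumptions:

```shell
# -O fair: a child process only receives a task when it is actually free,
# instead of having tasks queued behind a long-running task in one process.
celery -A posthog worker -O fair --concurrency=4
```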
* separate plugin worker for heroku
* plugin server for dev with less concurrency
* add back plugins script
* move beat log to beat
* move starting the beat into the celery worker
* add optional process types for plugins and celery
* premium redis for heroku review apps
* fix broken script
* proc/dyno names are alphanumeric
* singularize
* premium-0 redis for all heroku apps, not just review apps
* premium-0 redis also for review apps
* remove heroku redis modifications
* remove out of scope code
* run beat in bg
* ignore copying the frontend/dist folder - otherwise the Docker build output gets overridden by any local build artifacts in frontend/dist
* support configuring redis with POSTHOG_REDIS_HOST and other vars in addition to REDIS_URL
* remove "the next version" in worker requirement modal
* split beat and celery scripts
* remove chart folder
* celery heartbeat every 10sec, reduce distributed beat lock hold time
* remove dockerfile local link
* add localhost redis url for tests
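For the `POSTHOG_REDIS_HOST` commit above, a minimal sketch of the fallback logic, assuming the variable names `POSTHOG_REDIS_HOST` and `POSTHOG_REDIS_PORT` from the commit message (the helper name and defaults are hypothetical):

```shell
# Hypothetical helper: prefer a full REDIS_URL if set, otherwise build a
# URL from POSTHOG_REDIS_HOST/POSTHOG_REDIS_PORT, otherwise fall back to
# localhost (matching the "add localhost redis url for tests" commit).
get_redis_url() {
  if [ -n "$REDIS_URL" ]; then
    echo "$REDIS_URL"
  elif [ -n "$POSTHOG_REDIS_HOST" ]; then
    echo "redis://$POSTHOG_REDIS_HOST:${POSTHOG_REDIS_PORT:-6379}"
  else
    echo "redis://localhost:6379"
  fi
}
```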