Get up and running quickly with edX services.
If you are seeking info on the Vagrant-based devstack, please see https://openedx.atlassian.net/wiki/display/OpenOPS/Running+Devstack. This project is meant to replace the traditional Vagrant-based devstack with a multi-container approach driven by Docker Compose. It is still in the beta testing phase.
Tickets or issues should be filed in Jira under the platform project: https://openedx.atlassian.net/projects/PLAT/issues
You should run any make targets described below on your local machine, not from within a VM.
This project requires Docker 17.06+ CE. We recommend Docker Stable, but Docker Edge should work as well.
NOTE: Switching between Docker Stable and Docker Edge will remove all images and settings. Don't forget to restore your memory setting and be prepared to provision.
For macOS users, please use Docker for Mac. Previous Mac-based tools (e.g. boot2docker) are not supported.
Docker for Windows may work but has not been tested and is not supported.
Linux users should not use the overlay storage driver. The overlay2 storage driver is tested and supported, but requires kernel version 4.0+. Check which storage driver your docker daemon is configured to use:
docker info | grep -i 'storage driver'
You will also need the following installed:
- make
- Python pip (optional on macOS)
All of the services can be run by following the steps below.
NOTE: Since a Docker-based devstack runs many containers, you should configure Docker with a sufficient amount of resources. Our testing found that configuring Docker for Mac with a minimum of 2 CPUs and 4GB of memory works well.
Install the requirements (optional on macOS). This is not required for Docker for Mac, since it comes with docker-compose out of the box.
make requirements
The Docker Compose file mounts a host volume for each service's executing code. The host directory defaults to a sibling of this directory. For example, if this repo is cloned to ~/workspace/devstack, host volumes will be expected in ~/workspace/course-discovery, ~/workspace/ecommerce, etc. These repos can be cloned with the command below.
make dev.clone
You may customize where the local repositories are found by setting the DEVSTACK_WORKSPACE environment variable.
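For example, to keep the repositories under ~/edx-repos instead (an illustrative path), you could run:
export DEVSTACK_WORKSPACE=~/edx-repos
make dev.clone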
Run the provision command, if you haven't already, to configure the various services with superusers (for development without the auth service) and tenants (for multi-tenancy).
NOTE: When running the provision command, databases for ecommerce and edxapp will be dropped and recreated.
The username and password for the superusers are both edx. You can access the services directly via Django admin at the /admin/ path, or log in via single sign-on at /login/.
Default:
make dev.provision
Provision using docker-sync:
make dev.sync.provision
Start the services. This command will mount the repositories under the DEVSTACK_WORKSPACE directory.
NOTE: It may take up to 60 seconds for the LMS to start, even after the make dev.up command outputs done.
Default:
make dev.up
Start using docker-sync:
make dev.sync.up
After the services have started, if you need shell access to one of the services, run make <service>-shell. For example, to access the Catalog/Course Discovery Service, you can run:
make discovery-shell
To see logs from containers running in detached mode, you can either use Kitematic (available from the Docker for Mac menu) or run the following:
make logs
To view the logs of a specific service container, run make <service>-logs.
For example, to access the logs for Ecommerce, you can run:
make ecommerce-logs
To reset your environment and start provisioning from scratch, you can run:
make destroy
For information on all the available make
commands, you can run:
make help
The provisioning script creates a Django superuser for every service.
Email: [email protected] Username: edx Password: edx
The LMS also includes demo accounts. The password for each of these accounts is edx.
Username | Email |
---|---|
audit | [email protected] |
honor | [email protected] |
staff | [email protected] |
verified | [email protected] |
Each service is accessible at localhost
on a specific port. The table below
provides links to the homepage of each service. Since some services are not
meant to be user-facing, the "homepage" may be the API root.
Service | URL |
---|---|
Credentials | http://localhost:18150/api/v2/ |
Catalog/Discovery | http://localhost:18381/api-docs/ |
E-Commerce/Otto | http://localhost:18130/dashboard/ |
LMS | http://localhost:18000/ |
Studio/CMS | http://localhost:18010/ |
Sometimes you may need to restart a particular application server. To do so,
simply use the docker-compose restart
command:
docker-compose restart <service>
<service>
should be replaced with one of the following:
- credentials
- discovery
- ecommerce
- lms
- studio
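For example, to restart the LMS application server:
docker-compose restart lms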
If you'd like to add some convenience make targets, you can add them to a local.mk
file, ignored by git.
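For example, a local.mk could hold a personal shortcut such as the following (the target name is purely illustrative; recipe lines must start with a tab):
# restart only the LMS (personal shortcut)
lms-restart:
	docker-compose restart lms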
The ecommerce image comes pre-configured for payments via CyberSource and PayPal. Additionally, the provisioning scripts add the demo course (course-v1:edX+DemoX+Demo_Course) to the ecommerce catalog. You can initiate a checkout by visiting http://localhost:18130/basket/add/?sku=8CF08E5 or clicking one of the various upgrade links in the LMS. The following details can be used for checkout. While the name and address fields are required for credit card payments, their values are not checked in development, so put whatever you want in those fields.
- Card Type: Visa
- Card Number: 4111111111111111
- CVN: 123 (or any three digits)
- Expiry Date: 06/2025 (or any date in the future)
PayPal (same for username and password): [email protected]
Docker Compose files useful for integrating with the edx.org marketing site are available. This will NOT be useful to those outside of edX. For details on getting things up and running, see https://openedx.atlassian.net/wiki/display/OpenDev/Marketing+Site.
There are Docker CI Jenkins jobs on tools-edx-jenkins that build and push new Docker images to DockerHub on code changes to either the configuration repository or the IDA's codebase. These images are tagged latest, so only the discovery and edxapp jobs are relevant at this time (see NOTES below). Images that require tags other than latest are built and pushed by hand. If you want to build the images on your own, the Dockerfiles are available in the edx/configuration repo.
NOTES:
- discovery and edxapp use the latest tag since their configuration changes have been merged to the master branch of edx/configuration.
- We are experimenting with hosting a Dockerfile in the edx/credentials repository, hence the devstack-slim tag. See that repo for more information on building its image.
- All other services use the devstack tag and are built from the clintonb/docker-devstack-idas branch of edx/configuration.
BUILD COMMANDS:
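# Run these from a local checkout of the edx/configuration repository; the edxapp image is built from its master branch: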
git checkout master
git pull
docker build -f docker/build/edxapp/Dockerfile . -t edxops/edxapp:latest
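# Images that use the devstack tag (e.g. ecommerce) are built from the clintonb/docker-devstack-idas branch: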
git checkout clintonb/docker-devstack-idas
git pull
docker build -f docker/build/ecommerce/Dockerfile . -t edxops/ecommerce:devstack
The build commands above will use your local configuration, but will pull application code from the master branch of the application's repository. If you would like to use code from another branch/tag/hash, modify the *_VERSION variable that lives in the ansible_overrides.yml file beside the Dockerfile. Note that edx-platform is an exception; the variable to modify is edx_platform_version and not EDXAPP_VERSION.
For example, if you wanted to build tag release-2017-03-03 for the E-Commerce Service, you would modify ECOMMERCE_VERSION in docker/build/ecommerce/ansible_overrides.yml.
We use database dumps to speed up provisioning and generally spend less time running migrations. These dumps should be updated occasionally - when database migrations take a prolonged amount of time or we want to incorporate changes that require manual intervention.
To update the database dumps:
- Destroy and/or back up the data for your existing devstack so that you start with a clean slate.
- Disable the loading of the existing database dumps during provisioning by commenting out any calls to load-db.sh in the provisioning scripts. This ensures a start with a completely fresh database and incorporates any changes that may have required some form of manual intervention for existing installations (e.g. drop/move tables).
- Provision devstack with make provision.
- Dump the databases and open a pull request with your updates:
./dump-db.sh ecommerce
./dump-db.sh edxapp
./dump-db.sh edxapp_csmh
You can run Django migrations as normal to apply any changes recently made
to the database schema for a particular service. For example, to run
migrations for LMS, enter a shell via make lms-shell
and then run:
paver update_db
Alternatively, you can discard and rebuild the entire database for all devstack services by re-running make dev.provision or make dev.sync.provision as appropriate for your configuration. Note that if your branch has fallen significantly behind master, it may not include all of the migrations included in the database dump used by provisioning. In these cases, it's usually best to first rebase the branch onto master to get the missing migrations.
Log into the LMS shell, source the edxapp virtualenv, and run the makemigrations command with the devstack_docker settings:
make lms-shell
source /edx/app/edxapp/edxapp_env
cd /edx/app/edxapp/edx-platform
./manage.py <lms/cms> makemigrations <appname> --settings=devstack_docker
Also, make sure you are aware of the Django Migration Don'ts, as edx-platform is deployed using the red-black method.
JavaScript packages for Node.js are installed into the node_modules directory of the local git repository checkout, which is synced into the corresponding Docker container. Hence these can be upgraded via any of the usual methods for that service (npm install, paver install_node_prereqs, etc.), and the changes will persist between container restarts.
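For example, to refresh the LMS's Node.js dependencies from inside its container using one of the methods above:
make lms-shell
paver install_node_prereqs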
Unlike the node_modules directory, the virtualenv used to run Python code in a Docker container only exists inside that container. Changes made to a container's filesystem are not saved when the container exits, so if you manually install or upgrade Python packages in a container (via pip install, paver install_python_prereqs, etc.), they will no longer be present if you restart the container. (Devstack Docker containers lose changes made to the filesystem when you reboot your computer, run make down, restart or upgrade Docker itself, etc.) If you want to ensure that your new or upgraded packages are present in the container every time it starts, you have a few options:
- Merge your updated requirements files and wait for a new edxops Docker image for that service to be built and uploaded to Docker Hub. You can then download and use the updated image (for example, via make pull). The discovery and edxapp images are built automatically via a Jenkins job. All other images are currently built as needed by edX employees, but will soon be built automatically on a regular basis. See How do I build images? for more information.
- You can update your requirements files as appropriate and then build your own updated image for the service as described above, tagging it such that docker-compose will use it instead of the last image you downloaded. (Alternatively, you can temporarily edit docker-compose.yml to replace the image entry for that service with the ID of your new image.) You should be sure to modify the variable override for the version of the application code used for building the image. See How do I build images? for more information.
- You can temporarily modify the main service command in docker-compose.yml to first install your new package(s) each time the container is started. For example, the part of the studio command which reads ...&& while true; do... could be changed to ...&& pip install my-new-package && while true; do....
- In order to work on locally pip-installed repos like edx-ora2, first clone them into ../src (relative to this directory). Then, inside your lms shell, you can pip install -e /edx/src/edx-ora2. If you want to keep this code installed across stop/starts, modify docker-compose.yml as mentioned above.
Optimized static assets are built for all the edX services during provisioning, but you may want to rebuild them for a particular service after changing some files without re-provisioning the entire devstack. To do this, run the make target for the appropriate service. For example:
make credentials-static
To rebuild static assets for all service containers:
make static
You can usually switch branches on a service's repository without adverse effects on a running container for it. The service in each container is using runserver and should automatically reload when any changes are made to the code on disk. However, note the points made above regarding database migrations and package updates.
When switching to a branch which differs greatly from the one you've been working on (especially if the new branch is more recent), you may wish to halt the existing containers via make down, pull the latest Docker images via make pull, and then re-run make dev.provision or make dev.sync.provision in order to recreate up-to-date databases, static assets, etc.
If making a patch to a named release, you should pull and use Docker images which were tagged for that release.
The LMS and CMS read many configuration settings from the container filesystem in the following locations:
/edx/app/edxapp/lms.env.json
/edx/app/edxapp/lms.auth.json
/edx/app/edxapp/cms.env.json
/edx/app/edxapp/cms.auth.json
Changes to these files will not persist over a container restart, as they are part of the layered container filesystem and not a mounted volume. However, you may need to change these settings and then have the LMS or CMS pick up the changes.
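For example, you could edit a setting from inside the running LMS container (ENABLE_SOME_FEATURE is a hypothetical setting name; the change disappears when the container is recreated):
make lms-shell
sed -i 's/"ENABLE_SOME_FEATURE": false/"ENABLE_SOME_FEATURE": true/' /edx/app/edxapp/lms.env.json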
To restart the LMS/CMS process without restarting the container, kill the LMS or CMS process and the watcher process will restart the process within the container. You can kill the needed processes from a shell within the LMS/CMS container with a single line of bash script:
LMS:
kill -9 $(ps aux | grep 'manage.py lms' | egrep -v 'while|grep' | awk '{print $2}')
CMS:
kill -9 $(ps aux | grep 'manage.py cms' | egrep -v 'while|grep' | awk '{print $2}')
See the Pycharm Integration documentation.
It's possible to debug any of the containers' Python services using PDB. To do so, start up the containers as usual with:
make dev.up
This command starts each relevant container with the equivalent of the -it option, allowing a developer to attach to the process once the process is up and running.
To attach to the LMS/Studio containers and their process, use either:
make lms-attach
make studio-attach
Set a PDB breakpoint anywhere in the code using:
import pdb;pdb.set_trace()
and your attached session will offer an interactive PDB prompt when the breakpoint is hit.
To detach from the container, you'll need to stop the container with:
make stop
or a manual Docker command to bring down the container:
docker kill $(docker ps -a -q --filter="name=edx.devstack.<container name>")
After entering a shell for the appropriate service via make lms-shell or make studio-shell, you can run any of the usual paver commands from the edx-platform testing documentation. Examples:
paver run_quality
paver test_a11y
paver test_bokchoy
paver test_js
paver test_lib
paver test_python
If you want to instead run tests via manage.py, you need to pass the test_docker settings flag instead of the test settings used in Vagrant. Example:
DISABLE_MIGRATIONS=1 ./manage.py lms test --settings=test_docker openedx.core.djangoapps.user_api
You can also add the test_docker
settings flag to the other examples detailed
in the testing documentation.
If you want to see the browser being automated for JavaScript or bok-choy tests, you can connect to the container running it via VNC.
Browser | VNC connection |
---|---|
Firefox (Default) | vnc://0.0.0.0:25900 |
Chrome (via Selenium) | vnc://0.0.0.0:15900 |
On macOS, enter the VNC connection string in Safari to connect via VNC. The VNC passwords for both browsers are randomly generated and logged at container startup, and can be found by running make vnc-passwords.
Most tests are run in Firefox by default. To use Chrome for tests that normally use Firefox instead, prefix the test command with SELENIUM_BROWSER=chrome SELENIUM_HOST=edx.devstack.chrome.
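For example, to run the bok-choy tests in Chrome:
SELENIUM_BROWSER=chrome SELENIUM_HOST=edx.devstack.chrome paver test_bokchoy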
To run the base set of end-to-end tests for edx-platform, run the following make target:
make e2e-tests
If you want to use some of the other testing options described in the edx-e2e-tests README, you can instead start a shell for the e2e container and run the tests manually via paver:
make e2e-shell
paver e2e_test --exclude="whitelabel\|enterprise"
The browser running the tests can be seen and interacted with via VNC as described above (Chrome is used by default).
If you are having trouble with your containers, this section contains some troubleshooting tips.
If a container stops unexpectedly, you can look at its logs for clues:
docker-compose logs lms
Make sure you have the latest code and Docker images.
Pull the latest Docker images by running the following command from the devstack directory:
make pull
Pull the latest Docker Compose configuration and provisioning scripts by running the following command from the devstack directory:
git pull
Lastly, the images are built from the master branches of the application repositories (e.g. edx-platform, ecommerce, etc.). Make sure you are using the latest code from the master branches, or have rebased your branches on master.
Sometimes containers end up in strange states and need to be rebuilt. Run
make down
to remove all containers and networks. This will NOT remove your
data volumes.
Sometimes you just aren't sure what's wrong. If you would like to hit the reset button, run make dev.reset.
Running this command will perform the following steps:
- Bring down all containers
- Reset all git repositories to the HEAD of master
- Pull new images for all services
- Compile static assets for all services
- Run migrations for all services
It's good to run this before asking for help.
If you want to completely start over, run make destroy
. This will remove
all containers, networks, AND data volumes.
Follow these steps if you botched a migration or just want to start with a clean database.
- Open up the MySQL shell and drop the database for the desired service:
make mysql-shell
mysql
DROP DATABASE (insert database here);
- From your devstack directory, run the provision script for the service. The provision script should handle populating data such as OAuth clients and edX users, and running migrations:
./provision-(service_name)
If you notice that the ownership of some (maybe all) files has changed and you need to enter your root password when editing a file, you might have pulled changes from the remote repository while inside a container. While running git pull, git changes the owner of the files that you pull to the user that runs that command. Within a container, that is the root user, so git operations should be run outside of the container.
To fix this situation, change the owner back to yourself outside of the container by running:
$ sudo chown <user>:<group> -R .
Most of the paver commands require a settings flag. If omitted, the flag defaults to devstack, which is the settings flag for Vagrant-based devstack instances. So if you run into issues running paver commands in a Docker container, you should append the devstack_docker flag. For example:
$ paver update_assets --settings=devstack_docker
While running make static within the ecommerce container, you could get an error saying:
Error: Error: EBUSY: resource busy or locked, rmdir '/edx/app/ecommerce/ecommerce/ecommerce/static/build/'
To fix this, remove the directory manually outside of the container and run the command again.
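For example, assuming the repo is cloned to ~/workspace/ecommerce as described above, something like:
rm -rf ~/workspace/ecommerce/ecommerce/static/build/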
If you see the error no space left on device
on a Mac, Docker has run
out of space in its Docker.qcow2 file.
Here is an example error while running make pull
:
...
32d52c166025: Extracting [==================================================>] 1.598 GB/1.598 GB
ERROR: failed to register layer: Error processing tar file(exit status 1): write /edx/app/edxapp/edx-platform/.git/objects/pack/pack-4ff9873be2ca8ab77d4b0b302249676a37b3cd4b.pack: no space left on device
make: *** [pull] Error 1
You can clean up data by running docker system prune
, but you will first want
to run make dev.up
so it doesn't delete stopped containers.
Or, you can run the following commands to clean up dangling images and volumes:
docker image prune -f
docker volume prune -f # (Be careful, this will remove your persistent data!)
While provisioning, some have seen the following error:
...
cwd = os.getcwdu()
OSError: [Errno 2] No such file or directory
make: *** [dev.provision.run] Error 1
This issue can be worked around, but there's no guaranteed method to do so. Rebooting and restarting Docker does not seem to correct the issue. It may be an issue that is exacerbated by our use of sync (which typically speeds up the provisioning process on Mac), so you can try the following:
# repeat the following until you get past the error.
make stop
make dev.provision
Once you get past the issue, you should be able to continue to use sync versions of the make targets.
While provisioning, some have seen the following error:
...
Build failed running pavelib.assets.update_assets: Subprocess return code: 137
This error is an indication that your Docker process died during execution. Most likely, this error is due to running out of memory. If your Docker configuration is set to 2GB (the Docker for Mac default), increase it to 4GB (the current recommendation). If your Docker configuration is set to 4GB, then try 6GB.
Docker for Mac has known filesystem issues that significantly decrease performance, particularly for starting edx-platform (e.g. when you want to run a test). To improve performance, Docker Sync can be used to synchronize file data from the host machine to the containers.
You can swap between using Docker Sync and native volumes at any time, by using the make targets with or without 'sync'.
If you are using macOS, please follow the Docker Sync installation instructions before provisioning.
Check your version and make sure you are running 0.4.6 or above:
docker-sync --version
If not, upgrade to the latest version:
gem update docker-sync
If you are having issues with docker sync, try the following:
make stop
docker-sync stop
docker-sync clean
The performance improvements provided by cached consistency mode for volume mounts introduced in Docker CE Edge 17.04 are still not good enough. It's possible that the "delegated" consistency mode will be enough to no longer need docker-sync, but this feature hasn't been fully implemented yet (as of Docker 17.06.0-ce, "delegated" behaves the same as "cached"). There is a GitHub issue which explains the current status of implementing delegated consistency mode.