# Setting up your first CI pipeline
Delta CI reads its configuration from a `.delta.yml` file in your repository.
For Python projects, we recommend using a Docker image as the execution environment. In the example below, we use the official Python image.
```yaml
docker: python:3.7
steps:
  - run: pip install pipenv
    when: first_run
  - pipenv install
  - pipenv run python manage.py test
```
# Running your config for the first time
Once you have committed your `.delta.yml` file to your Git repository and pushed it, Delta CI will start executing it immediately. During the first run, it needs to download the Docker image as well as your Python dependencies. After the first run has finished, all subsequent runs will be much faster.
As you may have noticed, there is no caching setup in the config. Unlike other CIs, we don't require you to configure caching yourself. Instead, we keep the disk persistent between build runs, just like your development environment.
# Environment Variables
There are two ways to set environment variables in Delta CI. Normally, you set them under your job in the `.delta.yml` config. If you want an environment variable to be a secret, head over to the Workflows page in the dashboard and set it there; more on that further down. Note that environment variables are only available to the steps in the job you define them in, not globally.
# Example with environment variables
```yaml
jobs:
  "Test and Build":
    docker: python:3.7
    env:
      SENTRY_ORG: "delta-ci"
      SENTRY_PROJECT: "delta-api"
    steps:
      - pipenv install
      - pipenv run python manage.py test
      - pipenv build
```
If you want to add a secret environment variable for your jobs, navigate to the Workflows page in the dashboard and set it there. This is important for third-party auth tokens, deploy keys, or any other sensitive data you want to make available to your jobs.
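For example, a step can read a secret at runtime through its environment. The sketch below assumes a secret named `SENTRY_AUTH_TOKEN` has been added on the Workflows page; the secret name is a hypothetical example, not part of the config above.

```yaml
jobs:
  "Test and Build":
    docker: python:3.7
    steps:
      - pipenv install
      # SENTRY_AUTH_TOKEN is a hypothetical secret set on the Workflows page.
      # It is injected into the job's environment at runtime and is never
      # committed to .delta.yml. sentry-cli picks it up automatically.
      - pipenv run sentry-cli releases list
```

Because the secret lives in the dashboard rather than in the config file, rotating it does not require a new commit.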
# Allocating more hardware resources for a job
By default, we allocate 2 vCPUs and 4 GB of RAM for each job. Depending on your use case, you might want to adjust this. You have full control over the allocation as long as you don't exceed your maximum allowance.
```yaml
docker: python:3.7
machine:
  vcpu: 8
  ram: 16GB
steps:
  ...
```
# Running multiple jobs at once
In our previous examples, we defined only one job. If you want to run multiple jobs from a single push, you can define a workflow with multiple jobs and declare dependencies between them. To do so, structure your `.delta.yml` like this:
```yaml
jobs:
  build:
    docker: ...
  test:
    ...
  deploy:
    ...
```
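As a fuller sketch of a multi-job workflow, the example below fills in the three jobs. The `depends_on` key used to declare the ordering is an assumption for illustration only, not a confirmed Delta CI option:

```yaml
jobs:
  build:
    docker: python:3.7
    steps:
      - pipenv install
  test:
    # depends_on is a hypothetical key: run only after "build" succeeds
    depends_on: build
    docker: python:3.7
    steps:
      - pipenv run python manage.py test
  deploy:
    # hypothetical key: run only after "test" succeeds
    depends_on: test
    docker: python:3.7
    steps:
      # deploy.py is a placeholder for your own deployment script
      - pipenv run python deploy.py
```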