obj | website | repo | rev |
---|---|---|---|
application | https://woodpecker-ci.org | https://github.com/woodpecker-ci/woodpecker | 2024-12-03 |
Woodpecker CI
Woodpecker is a simple, yet powerful CI/CD engine with great extensibility.
Workflow Syntax
The Workflow section defines a list of steps to build, test and deploy your code. The steps are executed serially in the order in which they are defined. If a step returns a non-zero exit code, the workflow and therefore the entire pipeline terminates immediately and returns an error status. The workflow files are stored in the .woodpecker folder inside your repository.
Example steps:
steps:
- name: backend
image: golang
commands:
- go build
- go test
- name: frontend
image: node
commands:
- npm install
- npm run test
- npm run build
In the above example we define two steps, backend and frontend. The names of these steps are completely arbitrary.
The name is optional; if not set, the steps are numbered instead.
Another way to name a step is by using dictionaries:
steps:
backend:
image: golang
commands:
- go build
- go test
frontend:
image: node
commands:
- npm install
- npm run test
- npm run build
Skip Commits
Woodpecker gives the ability to skip individual commits by adding [SKIP CI] or [CI SKIP] to the commit message. Note this is case-insensitive.
git commit -m "updated README [CI SKIP]"
Steps
Every step of your workflow executes commands inside a specified container.
The defined steps are executed in sequence by default. If they should run in parallel, you can use depends_on.
The associated commit is checked out with git to a workspace which is mounted to every step of the workflow as the working directory.
steps:
- name: backend
image: golang
commands:
- go build
- go test
File changes are incremental
- Woodpecker clones the source code in the beginning of the workflow
- Changes to files are persisted through steps as the same volume is mounted to all steps
steps:
- name: build
image: debian
commands:
- echo "test content" > myfile
- name: a-test-step
image: debian
commands:
- cat myfile
image
Woodpecker pulls the defined image and uses it as environment to execute the workflow step commands, for plugins and for service containers.
When using the local backend, the image entry is used to specify the shell, such as Bash or Fish, that is used to run the commands.
steps:
- name: build
image: golang:1.6
commands:
- go build
- go test
- name: publish
image: plugins/docker
repo: foo/bar
services:
- name: database
image: mysql
Woodpecker supports any valid Docker image from any Docker registry.
Woodpecker does not automatically upgrade container images. Example configuration to always pull the latest image when updates are available:
steps:
- name: build
image: golang:latest
pull: true
commands
Commands of every step are executed serially, as if you were typing them into your local shell.
steps:
- name: backend
image: golang
commands:
- go build
- go test
There is no magic here. The above commands are converted to a simple shell script.
Only build steps can define commands. You cannot use commands with plugins or services.
entrypoint
Allows you to specify the entrypoint for containers. Note that this must be a list of the command and its arguments (e.g. ["/bin/sh", "-c"]).
If you define commands, the default entrypoint will be ["/bin/sh", "-c", "echo $CI_SCRIPT | base64 -d | /bin/sh -e"]. You can also use a custom shell with CI_SCRIPT (Base64-encoded) if you set commands.
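The default mechanism can be sketched in plain shell. This is an illustration of the CI_SCRIPT encoding described above, not Woodpecker's exact generated script; the echo commands stand in for a step's commands list:

```shell
# Emulate the default entrypoint: the step's commands are base64-encoded
# into CI_SCRIPT, then decoded and piped into `sh -e` (abort on first failure).
CI_SCRIPT=$(printf 'echo step-one\necho step-two\n' | base64)
OUT=$(echo "$CI_SCRIPT" | base64 -d | /bin/sh -e)
echo "$OUT"
```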
environment
Woodpecker provides the ability to pass environment variables to individual steps.
For more details, check the environment docs.
secrets
Woodpecker provides the ability to store named parameters external to the YAML configuration file, in a central secret store. These secrets can be passed to individual steps of the workflow at runtime.
For more details, check the secrets docs.
failure
Some steps may be allowed to fail without causing the whole workflow, and therefore the pipeline, to report a failure (e.g., a step executing a linting check). To enable this, add failure: ignore to your step. If Woodpecker encounters an error while executing the step, it will report it as failed but still execute the next steps of the workflow, if any, without affecting the status of the workflow.
steps:
- name: backend
image: golang
commands:
- go build
- go test
failure: ignore
when
- Conditional Execution
Woodpecker supports defining a list of conditions for a step by using a when block. If at least one of the conditions in the when block evaluates to true, the step is executed; otherwise it is skipped. A condition evaluates to true if all of its subconditions are true. A condition can be a check like:
steps:
- name: slack
image: plugins/slack
settings:
channel: dev
when:
- event: pull_request
repo: test/test
- event: push
branch: main
The slack step is executed if one of these conditions is met:
- The pipeline is executed from a pull request in the repo test/test
- The pipeline is executed from a push to main
repo
Example conditional execution by repository:
steps:
- name: slack
image: plugins/slack
settings:
channel: dev
when:
- repo: test/test
branch
Branch conditions are not applied to tags.
Example conditional execution by branch:
steps:
- name: slack
image: plugins/slack
settings:
channel: dev
when:
- branch: main
The step now triggers on the main branch, but also if the target branch of a pull request is main. Add an event condition to limit it further to pushes on main only.
Execute a step if the branch is main or develop:
when:
- branch: [main, develop]
Execute a step if the branch starts with prefix/*:
when:
- branch: prefix/*
The branch matching is done using doublestar. Note that a pattern starting with * should be put between quotes and a literal / needs to be escaped.
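A few hedged examples of these quoting rules (the branch names are made up for illustration):

```yaml
when:
  # a pattern starting with * must be quoted, otherwise YAML
  # would parse the leading * as an alias indicator
  - branch: '*-stable'
  # escape a literal / so doublestar treats it as a plain character
  - branch: release\/1.x
```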
Execute a step using custom include and exclude logic:
when:
- branch:
include: [main, release/*]
exclude: [release/1.0.0, release/1.1.*]
event
Available events: push, pull_request, pull_request_closed, tag, release, deployment, cron, manual
Execute a step if the build event is a tag:
when:
- event: tag
Execute a step if the pipeline event is a push to a specified branch:
when:
- event: push
branch: main
Execute a step for multiple events:
when:
- event: [push, tag, deployment]
cron
This filter only applies to cron events and filters based on the name of a cron job.
Make sure to also have an event: cron condition in the when filters.
when:
- event: cron
cron: sync_* # name of your cron job
ref
The ref filter compares the git reference against which the workflow is executed. This allows you to filter, for example, tags that must start with v:
when:
- event: tag
ref: refs/tags/v*
status
There are use cases for executing steps on failure, such as sending notifications for a failed workflow or pipeline. Use the status constraint to execute steps even when the workflow fails:
steps:
- name: slack
image: plugins/slack
settings:
channel: dev
when:
- status: [ success, failure ]
platform
This condition should be used in conjunction with a matrix workflow, as a regular workflow will only be executed by a single agent, which only has one arch.
Execute a step for a specific platform:
when:
- platform: linux/amd64
Execute a step for a specific platform using wildcards:
when:
- platform: [linux/*, windows/amd64]
matrix
Execute a step for a single matrix permutation:
when:
- matrix:
GO_VERSION: 1.5
REDIS_VERSION: 2.8
instance
Execute a step only on a certain Woodpecker instance matching the specified hostname:
when:
- instance: stage.woodpecker.company.com
path
Path conditions are applied only to push and pull_request events. They are currently only available for GitHub, GitLab and Gitea (version 1.18.0 and newer).
Execute a step only on a pipeline with certain files being changed:
when:
- path: 'src/*'
You can use glob patterns to match the changed files and specify whether the step should run if a file matching that pattern has been changed (include) or only if the matching files have not been changed (exclude).
For pipelines without file changes (empty commits or on events without file changes like tag), you can use on_empty to set whether this condition should be true (default) or false in these cases.
when:
- path:
include: ['.woodpecker/*.yaml', '*.ini']
exclude: ['*.md', 'docs/**']
ignore_message: '[ALL]'
on_empty: true
evaluate
Execute a step only if the provided evaluate expression is equal to true. Both built-in CI_* variables and custom variables can be used inside the expression.
The expression syntax can be found in the docs of the underlying library.
Run on pushes to the default branch for the repository owner/repo:
when:
- evaluate: 'CI_PIPELINE_EVENT == "push" && CI_REPO == "owner/repo" && CI_COMMIT_BRANCH == CI_REPO_DEFAULT_BRANCH'
Run on commits created by user woodpecker-ci:
when:
- evaluate: 'CI_COMMIT_AUTHOR == "woodpecker-ci"'
Skip all commits containing please ignore me in the commit message:
when:
- evaluate: 'not (CI_COMMIT_MESSAGE contains "please ignore me")'
Run on pull requests with the label deploy:
when:
- evaluate: 'CI_COMMIT_PULL_REQUEST_LABELS contains "deploy"'
depends_on
Normally, steps of a workflow are executed serially in the order in which they are defined. As soon as you set depends_on for a step, a directed acyclic graph is used instead: all steps of the workflow are executed in parallel, except for those that declare a dependency on another step using depends_on:
steps:
- name: build # build will be executed immediately
image: golang
commands:
- go build
- name: deploy
image: plugins/docker
settings:
repo: foo/bar
depends_on: [build, test] # deploy will be executed after build and test finished
- name: test # test will be executed immediately as no dependencies are set
image: golang
commands:
- go test
Note:
You can define a step to start immediately without dependencies by adding an empty depends_on: []. By setting depends_on on a single step, all other steps will be executed immediately as well if no further dependencies are specified.
steps:
- name: check code format
image: mstruebing/editorconfig-checker
depends_on: [] # enable parallel steps
...
volumes
Woodpecker gives the ability to define Docker volumes in the YAML. You can use this parameter to mount files or folders on the host machine into your containers.
For more details check the volumes docs.
detach
Woodpecker gives the ability to detach steps to run them in the background until the workflow finishes.
For more details check the service docs.
directory
Using directory, you can set a subdirectory of your repository or an absolute path inside the Docker container in which your commands will run.
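For example, a step could run its commands only in a frontend subdirectory of the repository (the directory name and commands here are illustrative):

```yaml
steps:
  - name: build-frontend
    image: node
    directory: frontend # commands below run in <workspace>/frontend
    commands:
      - npm install
      - npm run build
```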
services
Woodpecker can provide service containers. They can for example be used to run databases or cache containers during the execution of the workflow.
For more details check the services docs.
workspace
The workspace defines the shared volume and working directory shared by all workflow steps. The default workspace base is /woodpecker and the path is extended with the repository URL, so an example would be /woodpecker/src/github.com/octocat/hello-world.
The workspace can be customized using the workspace block in the YAML file:
workspace:
base: /go
path: src/github.com/octocat/hello-world
steps:
- name: build
image: golang:latest
commands:
- go get
- go test
Note:
Plugins will always have the workspace base at /woodpecker
The base attribute defines a shared base volume available to all steps. This ensures your source code, dependencies and compiled binaries are persisted and shared between steps.
workspace:
base: /go
path: src/github.com/octocat/hello-world
steps:
- name: deps
image: golang:latest
commands:
- go get
- go test
- name: build
image: golang:latest
commands:
- go build
The path attribute defines the working directory of your build. This is where your code is cloned and will be the default working directory of every step in your build process. The path must be relative and is combined with your base path.
workspace:
base: /go
path: src/github.com/octocat/hello-world
matrix
Woodpecker has integrated support for matrix builds. Woodpecker executes a separate build task for each combination in the matrix, allowing you to build and test a single commit against multiple configurations.
For more details check the matrix build docs.
labels
You can set labels for your workflow to select an agent to execute the workflow on. An agent will pick up and run a workflow when every label assigned to the workflow matches the agent's labels.
To set additional agent labels, check the agent configuration options. Agents have at least four default labels: platform=agent-os/agent-arch, hostname=my-agent, backend=docker (the type of the agent backend) and repo=*. Agents can use a * as a wildcard for a label. For example repo=* will match every repo.
Workflow labels with an empty value are ignored. By default, each workflow has at least the repo=your-user/your-repo-name label. If you have set the platform attribute for your workflow, it will have a label like platform=your-os/your-arch as well.
You can add additional labels as a key value map:
labels:
location: europe # only agents with `location=europe` or `location=*` will be used
weather: sun
hostname: "" # this label will be ignored as it is empty
steps:
- name: build
image: golang
commands:
- go build
- go test
Filter by platform:
To configure your workflow to only be executed on an agent with a specific platform, you can use the platform key. Have a look at the official Go docs for the available platforms. The syntax of the platform is GOOS/GOARCH, like linux/arm64 or linux/amd64.
Example:
Assuming we have two agents, one linux/arm64 and one linux/amd64. Previously this workflow would have executed on either agent, as Woodpecker is not fussy about where it runs the workflows. By setting the following option it will only be executed on an agent with the platform linux/arm64.
labels:
platform: linux/arm64
steps:
[...]
clone
Woodpecker automatically configures a default clone step if not explicitly defined. When using the local backend, the plugin-git binary must be on your $PATH for the default clone step to work. If it is not, you can still write a manual clone step.
You can manually configure the clone step in your workflow for customization:
clone:
git:
image: woodpeckerci/plugin-git
steps:
- name: build
image: golang
commands:
- go build
- go test
Example configuration to override depth:
clone:
- name: git
image: woodpeckerci/plugin-git
settings:
partial: false
depth: 50
Example configuration to use a custom clone plugin:
clone:
- name: git
image: octocat/custom-git-plugin
Example configuration to clone Mercurial repository:
clone:
- name: hg
image: plugins/hg
settings:
path: bitbucket.org/foo/bar
skip_clone
By default, Woodpecker automatically adds a clone step. This clone step can be configured with the clone property. If you do not need a clone step at all, you can skip it using:
skip_clone: true
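A workflow that skips the default clone and checks out the code itself might look like this. This is a sketch: the image and commands are assumptions, while the clone URL and commit SHA come from the built-in environment variables documented later in this page. Private repositories would additionally need credentials:

```yaml
skip_clone: true

steps:
  - name: manual-clone
    image: alpine/git
    commands:
      - git clone "$CI_REPO_CLONE_URL" .
      - git checkout "$CI_COMMIT_SHA"
```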
when
- Global workflow conditions
Woodpecker gives the ability to skip whole workflows (not just steps) based on certain conditions using a when block. If all conditions in the when block evaluate to true, the workflow is executed; otherwise it is skipped, but treated as successful, and other workflows depending on it will still continue.
For more information about the specific filters, take a look at the step-specific when filters.
Example conditional execution by branch:
when:
branch: main

steps:
- name: slack
image: plugins/slack
settings:
channel: dev
The workflow now triggers on main, but also if the target branch of a pull request is main.
depends_on
Woodpecker supports defining multiple workflows for a repository. Those workflows run independently from each other. To make them depend on each other, you can use the depends_on keyword.
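For example, a deploy workflow could wait for the repository's other workflows to finish first. The file name and the workflow names lint and build are hypothetical:

```yaml
# .woodpecker/deploy.yaml
depends_on:
  - lint
  - build

steps:
  - name: deploy
    image: debian:stable-slim
    commands:
      - echo deploying
```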
runs_on
Workflows that should run even on failure should set the runs_on tag.
steps:
- name: notify
image: debian:stable-slim
commands:
- echo notifying
depends_on:
- deploy
runs_on: [ success, failure ]
Privileged mode
Woodpecker gives the ability to configure privileged mode in the YAML. You can use this parameter to launch containers with escalated capabilities.
Info:
Privileged mode is only available to trusted repositories and for security reasons should only be used in private environments. See project settings to enable trusted mode.
steps:
- name: build
image: docker
environment:
- DOCKER_HOST=tcp://docker:2375
commands:
- docker --tls=false ps
services:
- name: docker
image: docker:dind
commands: dockerd-entrypoint.sh --storage-driver=vfs --tls=false
privileged: true
Matrix Workflows
Woodpecker has integrated support for matrix workflows. Woodpecker executes a separate workflow for each combination in the matrix, allowing you to build and test against multiple configurations.
Example matrix definition:
matrix:
GO_VERSION:
- 1.4
- 1.3
REDIS_VERSION:
- 2.6
- 2.8
- 3.0
Example matrix definition containing only specific combinations:
matrix:
include:
- GO_VERSION: 1.4
REDIS_VERSION: 2.8
- GO_VERSION: 1.5
REDIS_VERSION: 2.8
- GO_VERSION: 1.6
REDIS_VERSION: 3.0
Matrix variables are interpolated in the YAML using the ${VARIABLE}
syntax, before the YAML is parsed. This is an example YAML file before interpolating matrix parameters:
matrix:
GO_VERSION:
- 1.4
- 1.3
DATABASE:
- mysql:8
- mysql:5
- mariadb:10.1
steps:
- name: build
image: golang:${GO_VERSION}
commands:
- go get
- go build
- go test
services:
- name: database
image: ${DATABASE}
Secrets
Woodpecker provides the ability to store named parameters external to the YAML configuration file, in a central secret store. These secrets can be passed to individual steps of the pipeline at runtime.
Woodpecker provides three different levels to add secrets to your pipeline. If a secret is defined at multiple levels, the following priority applies: Repository secrets > Organization secrets > Global secrets.
- Repository secrets: They are available to all pipelines of a repository.
- Organization secrets: They are available to all pipelines of an organization.
- Global secrets: Can be configured by an instance admin. They are available to all pipelines of the whole Woodpecker instance and should therefore only be used for secrets that are allowed to be read by all users.
Secrets are exposed to your pipeline steps and plugins as uppercase environment variables and can therefore be referenced in the commands section of your pipeline, once their usage is declared in the secrets section:
steps:
- name: docker
image: docker
commands:
- echo $docker_username
- echo $DOCKER_PASSWORD
secrets: [ docker_username, DOCKER_PASSWORD ]
The case of the environment variables is not changed, but secret matching is done case-insensitively. In the example above, DOCKER_PASSWORD would also match if the secret is called docker_password.
You can set a setting or environment value from secrets using the from_secret syntax.
In this example, the secret named secret_token would be passed to the setting named token, which will be available in the plugin as the environment variable PLUGIN_TOKEN (see plugins for details), and to the environment variable TOKEN_ENV.
steps:
- name: docker
image: my-plugin
environment:
TOKEN_ENV:
from_secret: secret_token
settings:
token:
from_secret: secret_token
Please note that parameter expressions are subject to pre-processing. When using secrets in parameter expressions, they should be escaped.
steps:
- name: docker
image: docker
commands:
- - echo ${docker_username}
- - echo ${DOCKER_PASSWORD}
+ - echo $${docker_username}
+ - echo $${DOCKER_PASSWORD}
secrets: [ docker_username, DOCKER_PASSWORD ]
Secrets are not exposed to pull requests by default. You can override this behavior by creating the secret and enabling the pull_request event type, either in the UI or via the CLI.
Registries
Woodpecker provides the ability to add container registries in the settings of your repository. Adding a registry allows you to authenticate and pull private images from a container registry when using these images as a step inside your pipeline. Using registry credentials can also help you avoid rate limiting when pulling images from public registries.
You must provide registry credentials in the UI in order to pull private container images defined in your YAML configuration file.
These credentials are never exposed to your steps, which means they cannot be used to push, and are safe to use with pull requests, for example. Pushing to a registry still requires setting credentials for the appropriate plugin.
Cron
To create a new cron job, adjust your pipeline config(s) and add the event filter to all steps you would like the cron job to run:
steps:
- name: sync_locales
image: weblate_sync
settings:
url: example.com
token:
from_secret: weblate_token
when:
event: cron
cron: "name of the cron job" # if you only want to execute this step by a specific cron job
Then create a new cron job in the repository settings.
The supported schedule syntax can be found here.
Examples: @every 5m, @daily, 0 30 * * * * ...
Info
Woodpecker's cron syntax starts with seconds, instead of minutes as used by most Linux cron schedulers.
Example: "At minute 30 of every hour" would be 0 30 * * * * instead of 30 * * * *
Environment Variables
Woodpecker provides the ability to pass environment variables to individual pipeline steps. Note that these can't overwrite any existing, built-in variables. Example pipeline step with custom environment variables:
steps:
- name: build
image: golang
environment:
CGO: 0
GOOS: linux
GOARCH: amd64
commands:
- go build
- go test
Please note that the environment section is not able to expand environment variables. If you need to expand variables they should be exported in the commands section.
steps:
- name: build
image: golang
- environment:
- - PATH=$PATH:/go
commands:
+ - export PATH=$PATH:/go
- go build
- go test
${variable} expressions are subject to pre-processing. If you do not want the pre-processor to evaluate your expression, it must be escaped:
steps:
- name: build
image: golang
commands:
- - export PATH=${PATH}:/go
+ - export PATH=$${PATH}:/go
- go build
- go test
Built-in environment variables
This is the reference list of all environment variables available to your pipeline containers. These are injected into your pipeline step and plugin containers at runtime.
NAME | Description |
---|---|
CI |
CI environment name (value: woodpecker ) |
Repository | |
CI_REPO |
repository full name <owner>/<name> |
CI_REPO_OWNER |
repository owner |
CI_REPO_NAME |
repository name |
CI_REPO_REMOTE_ID |
repository remote ID, is the UID it has in the forge |
CI_REPO_SCM |
repository SCM (git) |
CI_REPO_URL |
repository web URL |
CI_REPO_CLONE_URL |
repository clone URL |
CI_REPO_CLONE_SSH_URL |
repository SSH clone URL |
CI_REPO_DEFAULT_BRANCH |
repository default branch (main) |
CI_REPO_PRIVATE |
repository is private |
CI_REPO_TRUSTED |
repository is trusted |
Current Commit | |
CI_COMMIT_SHA |
commit SHA |
CI_COMMIT_REF |
commit ref |
CI_COMMIT_REFSPEC |
commit ref spec |
CI_COMMIT_BRANCH |
commit branch (equals target branch for pull requests) |
CI_COMMIT_SOURCE_BRANCH |
commit source branch (empty if event is not pull_request or pull_request_closed ) |
CI_COMMIT_TARGET_BRANCH |
commit target branch (empty if event is not pull_request or pull_request_closed ) |
CI_COMMIT_TAG |
commit tag name (empty if event is not tag ) |
CI_COMMIT_PULL_REQUEST |
commit pull request number (empty if event is not pull_request or pull_request_closed ) |
CI_COMMIT_PULL_REQUEST_LABELS |
labels assigned to pull request (empty if event is not pull_request or pull_request_closed ) |
CI_COMMIT_MESSAGE |
commit message |
CI_COMMIT_AUTHOR |
commit author username |
CI_COMMIT_AUTHOR_EMAIL |
commit author email address |
CI_COMMIT_AUTHOR_AVATAR |
commit author avatar |
CI_COMMIT_PRERELEASE |
release is a pre-release (empty if event is not release ) |
Current pipeline | |
CI_PIPELINE_NUMBER |
pipeline number |
CI_PIPELINE_PARENT |
number of parent pipeline |
CI_PIPELINE_EVENT |
pipeline event (see pipeline events) |
CI_PIPELINE_URL |
link to the web UI for the pipeline |
CI_PIPELINE_FORGE_URL |
link to the forge's web UI for the commit(s) or tag that triggered the pipeline |
CI_PIPELINE_DEPLOY_TARGET |
pipeline deploy target for deployment events (i.e. production) |
CI_PIPELINE_DEPLOY_TASK |
pipeline deploy task for deployment events (i.e. migration) |
CI_PIPELINE_STATUS |
pipeline status (success, failure) |
CI_PIPELINE_CREATED |
pipeline created UNIX timestamp |
CI_PIPELINE_STARTED |
pipeline started UNIX timestamp |
CI_PIPELINE_FINISHED |
pipeline finished UNIX timestamp |
CI_PIPELINE_FILES |
changed files (empty if event is not push or pull_request ), it is undefined if more than 500 files are touched |
Current workflow | |
CI_WORKFLOW_NAME |
workflow name |
Current step | |
CI_STEP_NAME |
step name |
CI_STEP_NUMBER |
step number |
CI_STEP_STATUS |
step status (success, failure) |
CI_STEP_STARTED |
step started UNIX timestamp |
CI_STEP_FINISHED |
step finished UNIX timestamp |
CI_STEP_URL |
URL to step in UI |
Previous commit | |
CI_PREV_COMMIT_SHA |
previous commit SHA |
CI_PREV_COMMIT_REF |
previous commit ref |
CI_PREV_COMMIT_REFSPEC |
previous commit ref spec |
CI_PREV_COMMIT_BRANCH |
previous commit branch |
CI_PREV_COMMIT_SOURCE_BRANCH |
previous commit source branch |
CI_PREV_COMMIT_TARGET_BRANCH |
previous commit target branch |
CI_PREV_COMMIT_URL |
previous commit link in forge |
CI_PREV_COMMIT_MESSAGE |
previous commit message |
CI_PREV_COMMIT_AUTHOR |
previous commit author username |
CI_PREV_COMMIT_AUTHOR_EMAIL |
previous commit author email address |
CI_PREV_COMMIT_AUTHOR_AVATAR |
previous commit author avatar |
Previous pipeline | |
CI_PREV_PIPELINE_NUMBER |
previous pipeline number |
CI_PREV_PIPELINE_PARENT |
previous pipeline number of parent pipeline |
CI_PREV_PIPELINE_EVENT |
previous pipeline event (see pipeline events) |
CI_PREV_PIPELINE_URL |
previous pipeline link in CI |
CI_PREV_PIPELINE_FORGE_URL |
previous pipeline link to event in forge |
CI_PREV_PIPELINE_DEPLOY_TARGET |
previous pipeline deploy target for deployment events (ie production) |
CI_PREV_PIPELINE_DEPLOY_TASK |
previous pipeline deploy task for deployment events (ie migration) |
CI_PREV_PIPELINE_STATUS |
previous pipeline status (success, failure) |
CI_PREV_PIPELINE_CREATED |
previous pipeline created UNIX timestamp |
CI_PREV_PIPELINE_STARTED |
previous pipeline started UNIX timestamp |
CI_PREV_PIPELINE_FINISHED |
previous pipeline finished UNIX timestamp |
CI_WORKSPACE |
Path of the workspace where source code gets cloned to |
System | |
CI_SYSTEM_NAME |
name of the CI system: woodpecker |
CI_SYSTEM_URL |
link to CI system |
CI_SYSTEM_HOST |
hostname of CI server |
CI_SYSTEM_VERSION |
version of the server |
Forge | |
CI_FORGE_TYPE |
name of forge (gitea, github, ...) |
CI_FORGE_URL |
root URL of configured forge |
Internal - Please don't use! | |
CI_SCRIPT |
Internal script path. Used to call pipeline step commands. |
CI_NETRC_USERNAME |
Credentials for private repos to be able to clone data. (Only available for specific images) |
CI_NETRC_PASSWORD |
Credentials for private repos to be able to clone data. (Only available for specific images) |
CI_NETRC_MACHINE |
Credentials for private repos to be able to clone data. (Only available for specific images) |
Global environment variables
If you want specific environment variables to be available in all of your pipelines, use the WOODPECKER_ENVIRONMENT setting on the Woodpecker server. Note that these can't overwrite any existing, built-in variables.
WOODPECKER_ENVIRONMENT=first_var:value1,second_var:value2
These can be used, for example, to manage the image tag used by multiple projects.
WOODPECKER_ENVIRONMENT=GOLANG_VERSION:1.18
String Substitution
Woodpecker provides the ability to substitute environment variables at runtime. This gives us the ability to use dynamic settings, commands and filters in our pipeline configuration.
Example commit substitution:
steps:
- name: docker
image: plugins/docker
settings:
tags: ${CI_COMMIT_SHA}
String Operations
Woodpecker also emulates bash string operations. This gives us the ability to manipulate the strings prior to substitution. Example use cases might include substring and stripping prefix or suffix values.
OPERATION | DESCRIPTION |
---|---|
${param} |
parameter substitution |
${param,} |
parameter substitution with lowercase first char |
${param,,} |
parameter substitution with lowercase |
${param^} |
parameter substitution with uppercase first char |
${param^^} |
parameter substitution with uppercase |
${param:pos} |
parameter substitution with substring |
${param:pos:len} |
parameter substitution with substring and length |
${param=default} |
parameter substitution with default |
${param##prefix} |
parameter substitution with prefix removal |
${param%%suffix} |
parameter substitution with suffix removal |
${param/old/new} |
parameter substitution with find and replace |
Example variable substitution with substring:
steps:
- name: docker
image: plugins/docker
settings:
tags: ${CI_COMMIT_SHA:0:8}
Example variable substitution strips the v prefix from v1.0.0:
steps:
- name: docker
image: plugins/docker
settings:
tags: ${CI_COMMIT_TAG##v}
Plugins
Plugins are pipeline steps that perform pre-defined tasks and are configured as steps in your pipeline. Plugins can be used to deploy code, publish artifacts, send notifications, and more.
plugin-git
This plugin is automatically introduced into your pipeline as the first step. Its purpose is to clone your Git repository.
Overriding Settings
clone:
git:
image: woodpeckerci/plugin-git
settings:
depth: 50
lfs: false
Settings
Settings Name | Default | Description |
---|---|---|
depth |
none | If specified, uses git's --depth option to create a shallow clone with a limited number of commits, overwritten by partial . Setting it to 0 disables shallow cloning |
lfs |
true |
Set this to false to disable retrieval of LFS files |
recursive |
false |
Clones submodules recursively |
skip-verify |
false |
Skips the SSL verification |
tags |
false (except on tag event) |
Fetches tags when set to true, default is false if event is not tag else true |
submodule-overrides |
none | Override submodule urls |
submodule-update-remote |
false |
Pass the --remote flag to git submodule update |
submodule-partial |
true |
Update submodules via partial clone (depth=1) |
custom-ssl-path |
none | Set path to custom cert |
custom-ssl-url |
none | Set url to custom cert |
backoff |
5sec |
Change backoff duration |
attempts |
5 |
Change backoff attempts |
branch |
$CI_COMMIT_BRANCH |
Change branch name to checkout to |
partial |
true (except if tags are fetched) |
Only fetch the one commit and its blob objects to resolve all files; overwrites depth with 1 |
home |
Change HOME var for commands executed, fail if it does not exist | |
remote |
$CI_REPO_CLONE_URL |
Set the git remote url |
remote-ssh |
$CI_REPO_CLONE_SSH_URL |
Set the git SSH remote url |
object-format |
detected from commit SHA | Set the object format for Git initialization. Supported values: sha1 , sha256 . |
sha |
$CI_COMMIT_SHA |
git commit hash to retrieve |
ref |
none | Set the git reference to retrieve |
path |
$CI_WORKSPACE |
Set destination path to clone to |
use-ssh |
false |
Clone using SSH |
ssh-key |
none | SSH key for SSH clone |
Ansible
Woodpecker CI plugin to execute Ansible playbooks. This plugin is a fork of drone-plugins/drone-ansible with substantial modifications of the source code.
Installing required python module dependencies
Many ansible modules require additional python dependencies to work. Because ansible is run inside an alpine-based container, these dependencies must be installed dynamically during playbook execution.
It is important to use delegate_to: localhost
, as otherwise the pip module will install the dependency on the remote host, where it has no effect.
- name: Install required pip dependencies
delegate_to: localhost
ansible.builtin.pip:
name: <name>
state: present
extra_args: --break-system-packages
Without --break-system-packages
, Alpine will refuse to install pip packages system-wide. Alternatively, one can use the apk module if the required pip module is available as a python3-<name>
package
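The apk-based alternative could be sketched like this; the package name python3-netaddr is only an illustrative assumption, so check Alpine's repositories for the actual package name:

```yaml
- name: Install python dependency via apk
  delegate_to: localhost
  community.general.apk:
    name: python3-netaddr
    state: present
```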
Efficient handling of Ansible dependencies
By default, each step using the plugin will install the required dependencies using ansible-galaxy install -r requirements.yml
. Often, one wants to run multiple playbooks in different steps, ideally in parallel. In this case, a step which installs the requirements for all subsequent steps is useful.
steps:
"Install galaxy requirements":
image: pad92/ansible-alpine
commands:
- ansible-galaxy install -r requirements.yml
In addition, Ansible dependencies can be cached. This avoids having to re-download them for each build, saving bandwidth and time. If root access to the Woodpecker instance is given, one can mount a volume to the container and store the dependencies there.
steps:
"Install galaxy requirements":
image: pad92/ansible-alpine
volumes:
- /root/woodpecker-cache/collections:/tmp/collections
commands:
- cp -r /tmp/collections $${CI_WORKSPACE}/
- ansible-galaxy install -r requirements.yml
- cp -r $${CI_WORKSPACE}/collections /tmp/
In the above example, the first command copies the cached dependencies to the workspace directory. After the installation, the dependencies are copied back to the cache directory. Note that this requires the creation of the cache directory on the host upfront (i.e. /root/woodpecker-cache
). The location of the cache directory can be adjusted to the user's needs.
Mounting the cache directory directly to $${CI_WORKSPACE}/collections
is not feasible due to the following reasons:
- The volume mount conflicts with the volume mount providing the workspace directory to each container
- The mount would need to be added to each step as otherwise the dependencies are missing in these
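With a shared install step in place, subsequent playbook steps can run in parallel via depends_on; the playbook paths and step names below are hypothetical:

```yaml
steps:
  "Install galaxy requirements":
    image: pad92/ansible-alpine
    commands:
      - ansible-galaxy install -r requirements.yml
  "Apply webservers":
    image: woodpeckerci/plugin-ansible
    settings:
      playbook: playbooks/webservers.yml
      inventory: environments/prod/inventory.ini
    depends_on:
      - "Install galaxy requirements"
  "Apply databases":
    image: woodpeckerci/plugin-ansible
    settings:
      playbook: playbooks/databases.yml
      inventory: environments/prod/inventory.ini
    depends_on:
      - "Install galaxy requirements"
```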
Settings
Settings Name | Default | Description |
---|---|---|
become-method | none | privilege escalation method to use |
become-user | none | run operations as this user |
become | false | run operations with become |
check | false | run in "check mode"/dry-run, do not apply changes |
connection | none | connection type to use |
diff | false | show the differences (may print secrets!) |
extra-vars | none | set additional variables via key=value list or map, or load them from yaml/json files via @ prefix |
flush-cache | false | clear the fact cache for every host in inventory |
force-handlers | none | run handlers even if a task fails |
forks | 5 | number of parallel processes to use |
galaxy-force | true | force overwriting an existing role or collection |
galaxy | none | path to galaxy requirements file |
inventory | none | specify inventory host path |
limit | none | limit selected hosts to an additional pattern |
list-hosts | false | outputs a list of matching hosts |
list-tags | false | list all available tags |
list-tasks | false | list all tasks that would be executed |
module-path | none | prepend paths to module library |
playbook | none | list of playbooks to apply |
private-key | none | SSH private key to connect to host |
requirements | none | path to python requirements file |
scp-extra-args | none | specify extra arguments to pass to scp only |
sftp-extra-args | none | specify extra arguments to pass to sftp only |
skip-tags | none | skip tasks and playbooks with a matching tag |
ssh-common-args | none | specify common arguments to pass to sftp/scp/ssh |
ssh-extra-args | none | specify extra arguments to pass to ssh only |
start-at-task | none | start the playbook at the task matching this name |
syntax-check | false | perform a syntax check on the playbook |
tags | none | only run plays and tasks tagged with these values |
timeout | none | override the connection timeout in seconds |
user | none | connect as this user |
vault-id | none | the vault identity to use |
vault-password | none | vault password |
verbose | 0 | level of verbosity, 0 up to 4 |
Examples
steps:
'[CI Agent] ansible (apply)':
image: woodpeckerci/plugin-ansible
settings:
playbook: playbooks/ci/agent.yml
diff: true
inventory: environments/prod/inventory.ini
syntax_check: false
limit: ci_agent_prod
become: true
user: root
private_key:
from_secret: id_ed25519_ci
extra_vars:
woodpecker_agent_secret:
from_secret: woodpecker_agent_secret
woodpecker_agent_secret_baarkerlounger:
from_secret: woodpecker_agent_secret_baarkerlounger
plugin-release
Woodpecker CI plugin to create a release and upload assets in the forge.
If a release matching the tag already exists, it is reused without overwriting. Files are still uploaded according to the file-exists
setting.
Supports Gitea, Forgejo and GitHub.
Settings
Settings Name | Default | Description |
---|---|---|
api-key | none | API access token |
files | none | List of files to upload (accepts globs) |
file-exists | overwrite | What to do if files already exist; one of overwrite, fail, or skip |
checksum | none | Generate checksums for specific files |
checksum-file | CHECKSUMsum.txt | Name used for the checksum file; CHECKSUM is replaced with the chosen method |
checksum-flatten | false | Include only the basename of the file in the checksum file |
target | CI_REPO_DEFAULT_BRANCH | Branch where further development happens (usually main) |
draft | false | Create a draft release |
skip-verify | false | Visit base-url and skip verifying certificate |
prerelease | false | Create a pre-release |
base-url | CI_FORGE_URL | Base URL |
upload-url | https://uploads.github.com/ | Upload url for GitHub |
note | none | File or string with notes for the release (e.g. a changelog) |
title | none | File or string with the title for the release |
env-file | none | Path to a .env file to load |
overwrite | false | Force overwrite of existing release information (title, note, publish if the release was a draft before and draft=true, discussion category if none) |
discussion-category | none | Create a discussion in the given category (GitHub) |
generate-release-notes | false | Automatically generate GitHub release notes |
Example
publish:
image: woodpeckerci/plugin-release
settings:
files:
# Could also be "hello-world*" to match both
- 'hello-world'
- 'hello-world.exe'
api_key:
from_secret: ACCESS_TOKEN
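A more complete sketch combining checksums, a title and release notes; the file glob, title and tag filter below are illustrative:

```yaml
publish:
  image: woodpeckerci/plugin-release
  settings:
    files:
      - 'dist/*'
    checksum: sha256
    title: ${CI_COMMIT_TAG}
    note: CHANGELOG.md
    api_key:
      from_secret: ACCESS_TOKEN
  when:
    event: tag
```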
Git Push
Use this plugin to commit and push a git repository. You will need to supply Drone / Woodpecker with a private SSH key, or reuse the credentials of the cloned repo, to be able to push changes.
- name: push commit
image: appleboy/drone-git-push
settings:
branch: master
remote: ssh://git@git.heroku.com/falling-wind-1624.git
force: false
commit: true
An example of pushing a branch back to the current repository:
- name: push commit
image: appleboy/drone-git-push
settings:
remote_name: origin
branch: gh-pages
local_ref: gh-pages
An example of specifying the path to a repo:
- name: push commit
image: appleboy/drone-git-push
settings:
remote_name: origin
branch: gh-pages
local_ref: gh-pages
path: path/to/repo
Parameter Reference
setting | description |
---|---|
ssh_key | private SSH key for the remote machine (make sure it ends with a newline) |
remote | target remote repository (if blank, assume exists) |
remote_name | name of the remote to use locally (default "deploy") |
branch | target remote branch, defaults to master |
local_branch | local branch or ref to push (default "HEAD") |
path | path to git repo (if blank, assume current directory) |
force | force push using the --force flag, defaults to false |
skip_verify | skip verification of HTTPS certs, defaults to false |
commit | add and commit the contents of the repo before pushing, defaults to false |
commit_message | add a custom message for the commit; if omitted, it defaults to [skip ci] Commit dirty state |
empty_commit | if you only want to generate an empty commit, you can do it using this option |
tag | if you want to add a tag to the commit, you can do it using this option. You must also set followtags to true if you want the tag to be pushed to the remote |
author_name | the name to use for the author of the commit (if blank, assume push committer name) |
author_email | the email address to use for the author of the commit (if blank, assume push committer email) |
followtags | push with --follow-tags option |
rebase | pull --rebase before pushing |
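Combining several of the parameters above, a step that commits generated files and pushes a tag might look like this; the branch, tag and secret names are illustrative:

```yaml
- name: push release commit
  image: appleboy/drone-git-push
  settings:
    ssh_key:
      from_secret: git_push_ssh_key
    remote_name: origin
    branch: main
    commit: true
    commit_message: 'regenerate docs [CI SKIP]'
    tag: v1.2.3
    followtags: true
```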
S3 Plugin
The S3 plugin uploads files and build artifacts to your S3 bucket, or to an S3-compatible bucket such as Minio. The below pipeline configuration demonstrates simple usage:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
access_key: a50d28f4dd477bc184fbd10b376de753
secret_key: ****************************************
source: public/**/*
target: /target/location
Source the aws credentials from secrets:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
access_key:
from_secret: aws_access_key_id
secret_key:
from_secret: aws_secret_access_key
source: public/**/*
target: /target/location
Use the build number in the S3 target prefix:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
source: public/**/*
target: /target/location/${CI_BUILD_NUMBER}
Configure the plugin to strip path prefixes when uploading:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
source: public/**/*
target: /target/location
strip_prefix: public/
Configure the plugin to exclude files from upload and compress:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
source: public/**/*
target: /target/location
exclude:
- '**/*.xml'
compress: true
Configure the plugin to connect to a Minio server:
steps:
upload:
image: woodpeckerci/plugin-s3
settings:
bucket: my-bucket-name
source: public/**/*
target: /target/location
path_style: true
endpoint: https://play.minio.io:9000
Settings
setting | description |
---|---|
endpoint | custom endpoint URL (optional, to use a S3 compatible non-Amazon service) |
access_key | amazon key (optional) |
secret_key | amazon secret key (optional) |
bucket | bucket name |
region | bucket region (us-east-1, eu-west-1, etc) |
acl | access to files that are uploaded (private, public-read, etc) |
source | source location of the files, using a glob matching pattern. Location must be within the woodpecker workspace. |
target | target location of files in the bucket |
encryption | if provided, use server-side encryption |
strip_prefix | strip the prefix from source path |
exclude | glob exclusion patterns |
path_style | whether path style URLs should be used (true for minio) |
env_file | load env vars from file |
compress | prior to upload, compress files and use gzip content-encoding (false by default) |
overwrite | overwrite existing files (true by default) |
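Several of the settings above can be combined; for example, a sketch that sets region, ACL and server-side encryption (the values shown are illustrative):

```yaml
steps:
  upload:
    image: woodpeckerci/plugin-s3
    settings:
      bucket: my-bucket-name
      region: eu-west-1
      acl: public-read
      encryption: AES256
      source: public/**/*
      target: /target/location
      access_key:
        from_secret: aws_access_key_id
      secret_key:
        from_secret: aws_secret_access_key
```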
Docker Buildx
Woodpecker CI plugin to build multiarch Docker images with buildx. This plugin is a fork of thegeeklab/drone-docker-buildx which itself is a fork of drone-plugins/drone-docker.
Settings
Settings Name | Default | Description |
---|---|---|
dry-run | false | disables docker push |
repo | none | sets repository name for the image (can be a list) |
username | none | sets username to authenticate with |
password | none | sets password / token to authenticate with |
aws_access_key_id | none | sets AWS_ACCESS_KEY_ID for AWS ECR auth |
aws_secret_access_key | none | sets AWS_SECRET_ACCESS_KEY for AWS ECR auth |
aws_region | us-east-1 | sets AWS_DEFAULT_REGION for AWS ECR auth |
email | none | sets email address to authenticate with |
registry | https://index.docker.io/v1/ | sets docker registry to authenticate with |
dockerfile | Dockerfile | sets dockerfile to use for the image build |
tag / tags | none | sets repository tags to use for the image |
platforms | none | sets target platform for build |
provenance | none | sets provenance for build |
remote-builders | none | sets remote builders for build |
ssh-key | none | sets an ssh key to connect to remote builders |
Examples
publish-next-agent:
image: woodpeckerci/plugin-docker-buildx
settings:
repo: woodpeckerci/woodpecker-agent
dockerfile: docker/Dockerfile.agent.multiarch
platforms: windows/amd64,darwin/amd64,darwin/arm64,freebsd/amd64,linux/amd64,linux/arm64/v8
tag: next
username:
from_secret: docker_username
password:
from_secret: docker_password
when:
branch: ${CI_REPO_DEFAULT_BRANCH}
event: push
publish:
image: woodpeckerci/plugin-docker-buildx
settings:
platforms: linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/riscv64,linux/s390x
repo: codeberg.org/${CI_REPO_OWNER}/hello
registry: codeberg.org
tags: latest
username: ${CI_REPO_OWNER}
password:
from_secret: cb_token
docker-build:
image: woodpeckerci/plugin-docker-buildx
settings:
repo: codeberg.org/${CI_REPO_OWNER}/hello
registry: codeberg.org
dry-run: true
output: type=oci,dest=${CI_REPO_OWNER}-hello.tar
Advanced Settings
Settings Name | Default | Description |
---|---|---|
mirror | none | sets a registry mirror to pull images |
storage_driver | none | sets the docker daemon storage driver |
storage_path | /var/lib/docker | sets the docker daemon storage path |
bip | none | sets the docker daemon bridge IP address |
mtu | none | sets docker daemon custom mtu setting |
custom_dns | none | sets custom docker daemon dns server |
custom_dns_search | none | sets custom docker daemon dns search domain |
insecure | false | allows the docker daemon to use insecure registries |
ipv6 | false | enables docker daemon IPv6 support |
experimental | false | enables docker daemon experimental mode |
debug | false | enables verbose debug mode for the docker daemon |
daemon_off | false | disables the startup of the docker daemon |
buildkit_debug | false | enables debug output of buildkit |
buildkit_config | none | can only be changed for the insecure image; sets content of the docker buildkit TOML config |
buildkit_driveropt | none | can only be changed for the insecure image; adds one or multiple --driver-opt buildx arguments for the default buildkit builder instance |
tags_file | none | overrides the tags option with values in a file named .tags; multiple tags can be specified separated by a newline |
context | . | sets the path of the build context to use |
auto_tag | false | generates tag names automatically based on git branch and git tag; tags supplied via tags are additionally added to the auto_tags without suffix |
default_suffix / auto_tag_suffix | none | generates tag names with the given suffix |
default_tag | latest | overrides the default tag name used when generating with auto_tag enabled |
label / labels | none | sets labels to use for the image in format <name>=<value> |
default_labels / auto_labels | true | sets docker image labels based on git information |
build_args | none | sets custom build arguments for the build |
build_args_from_env | none | forwards environment variables as custom arguments to the build |
secrets | none | sets the build secrets for the build |
quiet | false | enables suppression of the build output |
target | none | sets the build target to use |
cache_from | none | sets configuration for cache source |
cache_to | none | sets configuration for cache export |
cache_images | none | a list of images to use as cache |
pull_image | true | enforces pulling the base image at build time |
compress | false | enables compression of the build context using gzip |
config | none | sets content of the docker daemon json config |
purge | true | enables cleanup of the docker environment at the end of a build |
no_cache | false | disables the usage of cached intermediate containers |
add_host | none | sets additional host:ip mapping |
output | none | sets build output in format type=<type>[,<key>=<value>] |
logins | none | option to log into multiple registries |
env_file | none | load env vars from specified file |
ecr_create_repository | false | creates the ECR repository if it does not exist |
ecr_lifecycle_policy | none | AWS ECR lifecycle policy |
ecr_repository_policy | none | AWS ECR repository policy |
ecr_scan_on_push | none | AWS: whether to enable image scanning on push |
http_proxy | none | set an http proxy if needed; also forwarded as the build arg HTTP_PROXY |
https_proxy | none | set an https proxy if needed; also forwarded as the build arg HTTPS_PROXY |
no_proxy | none | set (sub-)domains to be ignored by proxy settings; also forwarded as the build arg NO_PROXY |
Multi registry push example
settings:
repo: a6543/tmp,codeberg.org/6543/tmp
tag: demo
logins:
- registry: https://index.docker.io/v1/
username: a6543
password:
from_secret: docker_token
mirrors:
- "my-docker-mirror-host.local"
- registry: https://codeberg.org
username: "6543"
password:
from_secret: cb_token
- registry: https://<account-id>.dkr.ecr.<region>.amazonaws.com
aws_region: <region>
aws_access_key_id:
from_secret: aws_access_key_id
aws_secret_access_key:
from_secret: aws_secret_access_key
Using remote builders
When building for multiple platforms, you might want to offload some builds to a remote server to avoid emulation. To support this, provide a list of build servers to remote-builders
. These servers need key-based authentication, so you will also need to provide a (private) SSH key.
build:
image: woodpeckerci/plugin-docker-buildx
settings:
platforms: linux/amd64,linux/arm64
repo: codeberg.org/${CI_REPO_OWNER}/hello
registry: codeberg.org
dry-run: true
ssh-key:
from_secret: ssh_key
remote-builders: root@my-amd64-build-server,root@my-arm64-build-server
If you want to mix local and remote builders, the list can include "local":
build:
image: woodpeckerci/plugin-docker-buildx
settings:
platforms: linux/amd64,linux/arm64
repo: codeberg.org/${CI_REPO_OWNER}/hello
registry: codeberg.org
dry-run: true
ssh-key:
from_secret: ssh_key
remote-builders: local,root@my-arm64-build-server
Services
Woodpecker provides a services section in the YAML file used for defining service containers. The below configuration composes database and cache containers.
Services are accessed using custom hostnames. In the example below, the MySQL service is assigned the hostname database and is available at database:3306
.
steps:
- name: build
image: golang
commands:
- go build
- go test
services:
- name: database
image: mysql
- name: cache
image: redis
You can define a port and a protocol explicitly:
services:
- name: database
image: mysql
ports:
- 3306
- name: wireguard
image: wg
ports:
- 51820/udp
Service containers generally expose environment variables to customize service startup such as default usernames, passwords and ports. Please see the official image documentation to learn more.
services:
- name: database
image: mysql
environment:
- MYSQL_DATABASE=test
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- name: cache
image: redis
Service and long-running containers can also be included in the pipeline section of the configuration using the detach parameter, without blocking other steps. This should be used when explicit control over startup order is required.
steps:
- name: build
image: golang
commands:
- go build
- go test
- name: database
image: redis
detach: true
- name: test
image: golang
commands:
- go test
Containers from detached steps will terminate when the pipeline ends.
Service containers require time to initialize and begin to accept connections. If you are unable to connect to a service you may need to wait a few seconds or implement a backoff.
steps:
- name: test
image: golang
commands:
- sleep 15
- go get
- go test
services:
- name: database
image: mysql
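Instead of a fixed sleep, a small poll loop can wait until the service actually accepts connections. This sketch assumes a netcat binary is available in the step image:

```yaml
steps:
  - name: test
    image: golang
    commands:
      # poll until the MySQL service accepts TCP connections
      - while ! nc -z database 3306; do sleep 1; done
      - go get
      - go test
services:
  - name: database
    image: mysql
```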
Volumes
Woodpecker gives the ability to define Docker volumes in the YAML. You can use this parameter to mount files or folders on the host machine into your containers.
Note: Volumes are only available to trusted repositories and, for security reasons, should only be used in private environments. See project settings to enable trusted mode.
steps:
- name: build
image: docker
commands:
- docker build --rm -t octocat/hello-world .
- docker run --rm octocat/hello-world --test
- docker push octocat/hello-world
- docker rmi octocat/hello-world
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Please note that Woodpecker mounts volumes on the host machine. This means you must use absolute paths when you configure volumes. Attempting to use relative paths will result in an error.
Status Badges
Woodpecker has integrated support for repository status badges. These badges can be added to your website or project readme file to display the status of your code.
<scheme>://<hostname>/api/badges/<repo-id>/status.svg
The status badge displays the status for the latest build to your default branch (e.g. main). You can customize the branch by adding the branch query parameter.
<scheme>://<hostname>/api/badges/<repo-id>/status.svg?branch=<branch>
Please note status badges do not include pull request results, since the status of a pull request does not provide an accurate representation of your repository state.
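In a markdown README, the badge can be embedded and linked back to the repository; the hostname and repo-id below are placeholders:

```markdown
[![status-badge](https://ci.example.com/api/badges/42/status.svg)](https://ci.example.com/repos/42)
```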
Prometheus
Woodpecker is compatible with Prometheus and exposes a /metrics
endpoint if the environment variable WOODPECKER_PROMETHEUS_AUTH_TOKEN
is set. Please note that access to the metrics endpoint is restricted and requires the authorization token from the environment variable mentioned above.
An administrator will need to generate a user API token and configure it in the Prometheus configuration file as a bearer token. Please see the following example:
global:
scrape_interval: 60s
scrape_configs:
- job_name: 'woodpecker'
bearer_token: dummyToken...
static_configs:
- targets: ['woodpecker.domain.com']
Docker-Compose
version: '3'
services:
woodpecker-server:
image: woodpeckerci/woodpecker-server:latest
ports:
- 8000:8000
volumes:
- woodpecker-server-data:/var/lib/woodpecker/
environment:
- WOODPECKER_OPEN=true
- WOODPECKER_HOST=${WOODPECKER_HOST}
- WOODPECKER_GITHUB=true
- WOODPECKER_GITHUB_CLIENT=${WOODPECKER_GITHUB_CLIENT}
- WOODPECKER_GITHUB_SECRET=${WOODPECKER_GITHUB_SECRET}
- WOODPECKER_AGENT_SECRET=${WOODPECKER_AGENT_SECRET}
woodpecker-agent:
image: woodpeckerci/woodpecker-agent:latest
command: agent
restart: always
depends_on:
- woodpecker-server
volumes:
- woodpecker-agent-config:/etc/woodpecker
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WOODPECKER_SERVER=woodpecker-server:9000
- WOODPECKER_AGENT_SECRET=${WOODPECKER_AGENT_SECRET}
volumes:
woodpecker-server-data:
woodpecker-agent-config: