Pipeline YAML
This page describes the formal pipeline YAML specification for Semaphore.
Overview
Semaphore uses YAML to define pipelines. Every Semaphore project requires at least one pipeline to work. If you don't want to write pipelines by hand, you can use the visual workflow editor.
Execution order
You cannot assume that jobs in the same task run in any particular order. They run in parallel on a resource availability basis.
To force execution order, you must use block dependencies. Semaphore only starts a block when all of its dependencies have completed.
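For example, in the minimal sketch below (the block and job names are placeholders), the `Test` block only starts after the `Build` block has completed:
version: v1.0
name: Forcing execution order with dependencies
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Build
    dependencies: []
    task:
      jobs:
        - name: Build job
          commands:
            - echo "building"
  - name: Test
    # Test waits for Build to finish before it starts
    dependencies: ["Build"]
    task:
      jobs:
        - name: Test job
          commands:
            - echo "testing"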
Comments
Lines beginning with # are considered comments and are ignored by the YAML parser.
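For example:
# This line is a comment and is ignored by the parser
version: v1.0
name: Example pipeline  # a comment can also follow a value on the same line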
version
The version of the pipeline YAML specification to be used. The only supported value is `v1.0`.
version: v1.0
name
A Unicode string for the pipeline name. It is strongly recommended that you give descriptive names to your Semaphore pipelines.
name: The name of the Semaphore pipeline
agent
Defines the global agent's `machine` type and `os_image` to run jobs. See agents to learn more.
The `agent` can contain the following properties:
- `machine`: VM machine type to run the jobs
- `containers`: optional Docker containers to run the jobs
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
The default agent can be overridden inside tasks.
machine
Part of the `agent` definition. It defines the global VM machine type to run the jobs.
It has two properties: `type` and `os_image`.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
type
Part of the `agent` definition. It selects the hardware or self-hosted agent type that runs the jobs.
By default, Semaphore uses the built-in `s1-kubernetes` agent, which runs your jobs in a pod on the same cluster where the Semaphore server is running.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
os_image
Part of the `agent` definition. This is an optional property that specifies the Operating System image to mount on the `machine`. The value is not used when running Docker-based environments or Kubernetes self-hosted agents.
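For example, a Linux VM agent combining a machine type with an OS image (the same values used in other examples on this page):
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004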
containers
An optional part of `agent`. Defines an array of Docker containers to run jobs. The `containers` property is required when using Docker-based environments or Kubernetes self-hosted agents.
The first container in the list runs the jobs. You may optionally add more items that run as separate containers. All containers can reference each other via their names, which are mapped to hostnames using DNS records.
Each container entry can have:
- `name`: the name of the container
- `image`: the image for the container
- `env_vars`: optional list of key-value pairs to define environment variables
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
- name: db
image: 'registry.semaphoreci.com/postgres:9.6'
name
Defines the unique `name` of the container. The name is mapped to the container hostname and can be used to communicate with other containers.
agent:
machine:
type: e1-standard-2
containers:
- name: main
image: 'registry.semaphoreci.com/ruby:2.6'
image
Defines the Docker image to run inside the container.
agent:
machine:
type: e1-standard-2
containers:
- name: main
image: 'registry.semaphoreci.com/ruby:2.6'
env_vars
An optional array of key-value pairs. The keys are exported as environment variables when the container starts.
You can define special variables to modify the container initialization:
- `user`: the active user inside the container
- `command`: overrides the Docker image's CMD command
- `entrypoint`: overrides the Docker image's ENTRYPOINT entry
You may also supply environment variables with `env_vars` and `secrets`.
agent:
machine:
type: e1-standard-2
containers:
- name: main
image: 'registry.semaphoreci.com/ruby:2.6'
- name: db
image: 'registry.semaphoreci.com/postgres:9.6'
user: postgres
secrets:
- name: mysecret
env_vars:
- name: POSTGRES_PASSWORD
value: keyboard-cat
For `secrets`, only environment variables defined in the secret are imported. Any files in the secret are ignored.
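As a minimal sketch of the `user` and `command` overrides described above (the values shown here are purely illustrative):
agent:
  machine:
    type: s1-kubernetes
    os_image: ''
  containers:
    - name: main
      image: 'registry.semaphoreci.com/ubuntu:22.04'
      # run as root instead of the image's default user
      user: root
      # overrides the Docker image's CMD
      command: 'sleep infinity'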
execution_time_limit
Defines an optional time limit for executing the pipeline. Jobs are forcibly terminated once the time limit is reached. The default value is 1 hour.
The `execution_time_limit` property accepts one of two options:
- `hours`: the time limit expressed in hours. The maximum value is 24
- `minutes`: the time limit expressed in minutes. The maximum value is 1440
You can use either `hours` or `minutes`, not both.
This property is also available on blocks and jobs.
version: v1.0
name: Using execution_time_limit
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
execution_time_limit:
hours: 3
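The limit can also be set per block; a sketch using `minutes` at the block level (block and job names are placeholders) might look like this:
version: v1.0
name: Using a block-level execution_time_limit
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Integration tests
    # this limit applies only to the jobs in this block
    execution_time_limit:
      minutes: 90
    task:
      jobs:
        - name: Run integration tests
          commands:
            - echo "running tests"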
fail_fast
This optional property defines what happens when a job fails. It accepts the `stop` and `cancel` properties, described below.
If both are set, `stop` is evaluated first. If `fail_fast` is not defined, jobs continue running following the declared dependencies when a job fails.
stop
The `stop` property causes all running jobs to stop as soon as one job fails. It requires a `when` property that defines a condition according to the Conditions DSL.
In the following configuration, blocks A and B run in parallel. Block C runs after block B is finished. If Block A fails and the workflow was initiated from a non-master branch, all running jobs stop immediately.
version: v1.0
name: Setting fail fast stop policy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
fail_fast:
stop:
when: "branch != 'master'"
blocks:
- name: Block A
dependencies: []
task:
jobs:
- name: Job A
commands:
- sleep 10
- failing command
- name: Block B
dependencies: []
task:
jobs:
- name: Job B
commands:
- sleep 60
- name: Block C
dependencies: ["Block B"]
task:
jobs:
- name: Job C
commands:
- sleep 60
cancel
The `cancel` property causes all non-started jobs to be canceled as soon as one job fails. Already-running jobs are allowed to finish. This property requires a `when` property that defines a condition according to the Conditions DSL.
In the following configuration, blocks A and B run in parallel. Block C runs after block B is finished. If Block A fails in a workflow that was initiated from a non-master branch:
- Block B is allowed to finish
- Block C is canceled, i.e. it never starts
version: v1.0
name: Setting fail fast cancel policy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
fail_fast:
cancel:
when: "branch != 'master'"
blocks:
- name: Block A
dependencies: []
task:
jobs:
- name: Job A
commands:
- sleep 10
- failing command
- name: Block B
dependencies: []
task:
jobs:
- name: Job B
commands:
- sleep 60
- name: Block C
dependencies: ["Block B"]
task:
jobs:
- name: Job C
commands:
- sleep 60
queue
The optional `queue` property enables you to assign pipelines to custom execution queues or to configure the way pipelines are processed when queuing happens.
There are two queueing strategies:
- Direct assignment: assigns all pipelines from the current pipeline file to a shared queue
- Conditional assignment: defines assignment rules based on conditions
See Pipeline Queues for more information.
Direct assignment
This option allows you to use the `name`, `scope`, and `processing` properties as direct sub-properties of the `queue` property.
The following rules apply:
- the `name` or `processing` properties are required
- `scope` can only be set if `name` is defined
- `name` should hold a string that uniquely identifies the desired queue within the configured scope
- you can omit `name` if you only want to set the `processing` property; in that case, the name is autogenerated from the Git commit details
- `scope` can have one of two values: `project` or `organization`. The default is `project`
When `scope: project`, queues with the same `name` value in different projects are not queued together.
When `scope: organization`, the pipelines from the queue are queued together with pipelines from other projects on the server that have a queue configuration with the same `name` and `scope` values.
The `processing` property can have two values:
- `serialized`: the pipelines in the queue will be queued and executed one by one in ascending order, according to creation time. This is the default
- `parallel`: all pipelines in the queue will be executed as soon as they are created, and there will be no queuing
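For example, a direct assignment sketch (the queue name is illustrative) that serializes all pipelines from this file in a single server-wide queue:
version: v1.0
name: Deployment pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
queue:
  name: deployment-queue
  scope: organization
  processing: serialized
blocks:
  - name: Deploy
    task:
      jobs:
        - name: Deploy job
          commands:
            - echo "deploying"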
Conditional assignment
In this option, you define an array of items with queue configurations as a sub-property of the `queue` property. Each array item can have the same properties, i.e. `name`, `scope`, and `processing`, as in direct assignment.
In addition, you need to supply a `when` property using the Conditions DSL. When the `queue` configuration is evaluated in this approach, the `when` conditions from the items in the array are evaluated one by one, starting with the first item in the array.
The evaluation stops as soon as one of the `when` conditions is evaluated as true, and the rest of the properties from the same array item are used to configure the queue for the given pipeline.
This means that the order of the items in the array is important: items should be ordered so that those with the most specific conditions are defined first, followed by those with more generalized conditions (e.g. items with conditions such as `branch = 'develop'` should be ordered before those with `branch != 'master'`).
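A conditional assignment sketch following the ordering advice above (queue names and conditions are illustrative) might look like this:
version: v1.0
name: Conditional queue assignment
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
queue:
  # most specific condition first
  - when: "branch = 'master'"
    name: production-queue
    scope: organization
    processing: serialized
  # catch-all: everything else runs without queuing
  - when: true
    processing: parallel
blocks:
  - name: Tests
    task:
      jobs:
        - name: Test job
          commands:
            - echo "testing"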
auto_cancel
Sets a strategy for auto-canceling pipelines in a queue when a new pipeline appears. Two values are supported: `running` and `queued`.
At least one of them is required. If both are set, `running` is evaluated first.
running
When this property is set, queued and running pipelines are canceled as soon as a new workflow is triggered. This property requires a `when` property that defines a condition according to the Conditions DSL.
In the following configuration, all pipelines initiated from a non-master branch will run immediately after auto-stopping everything else in front of them in the queue.
version: v1.0
name: Setting auto-cancel running strategy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
auto_cancel:
running:
when: "branch != 'master'"
blocks:
- name: Unit tests
task:
jobs:
- name: Unit tests job
commands:
- echo Running unit test
queued
When this property is set, only queued pipelines are canceled as soon as a new workflow is triggered. Already-running pipelines are allowed to finish. This property requires a `when` property that defines a condition according to the Conditions DSL.
In the following configuration, all pipelines initiated from a non-master branch will cancel any queued pipelines and wait for the one that is running to finish before starting.
version: v1.0
name: Setting auto-cancel queued strategy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
auto_cancel:
queued:
when: "branch != 'master'"
blocks:
- name: Unit tests
task:
jobs:
- name: Unit tests job
commands:
- echo Running unit test
global_job_config
Sets global properties to be applied to all jobs and blocks in the pipeline. It can contain any of these properties:
- `prologue`
- `epilogue`
- `secrets`
- `env_vars`
- `priority`

The defined configuration values have the same syntax as the ones defined at the task or job level and are applied to all tasks and jobs in a pipeline.
In the case of `prologue` and `env_vars`, the global values, i.e. values from `global_job_config`, are exported first, and those defined on a task level thereafter. This allows for the overriding of global values for a specific task if the need arises.
In the case of `epilogue`, the order of exporting is reversed, so, for example, one can first perform specific cleanup commands before global ones.
`secrets` are simply merged, since order plays no role here.
In the case of `priority`, the global values are added at the end of the list of priorities defined at the job level. This allows job-specific priorities to be evaluated first; only if none of them match will the global values be evaluated and used.
version: "v1.0"
name: An example of using global_job_config
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
global_job_config:
prologue:
commands:
- checkout
env_vars:
- name: TEST_ENV_VAR
value: test_value
blocks:
- name: Linter
task:
jobs:
- name: Linter
commands:
- echo $TEST_ENV_VAR
- name: Unit tests
task:
jobs:
- name: Unit testing
commands:
- echo $TEST_ENV_VAR
- name: Integration Tests
task:
jobs:
- name: Integration testing
commands:
- echo $TEST_ENV_VAR
blocks
Defines an array of items that hold the elements of a pipeline. Each element of that array is called a block and can have these properties:
- `name`
- `dependencies`
- `task` (mandatory)
- `skip`
- `run`
name
An optional name for the block.
dependencies
Defines the flow of execution between blocks. When no dependencies are set, blocks run in parallel.
The following example runs `Block A` and `Block B` in parallel at the beginning of a pipeline. `Block C` runs only after `Block A` and `Block B` have finished.
version: "v1.0"
name: Pipeline with dependencies
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: "Block A"
dependencies: []
task:
jobs:
- name: "Job A"
commands:
- echo "output"
- name: "Block B"
dependencies: []
task:
jobs:
- name: "Job B"
commands:
- echo "output"
- name: "Block C"
dependencies: ["Block A", "Block B"]
task:
jobs:
- name: "Job C"
commands:
- echo "output"
If you use the `dependencies` property in one block, you have to specify dependencies for all other blocks as well. The following pipeline is invalid because dependencies are missing for `Block A` and `Block B`.
version: "v1.0"
name: Invalid pipeline
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: "Block A"
task:
jobs:
- name: "Job A"
commands:
- echo "output"
- name: "Block B"
task:
jobs:
- name: "Job B"
commands:
- echo "output"
- name: "Block C"
dependencies: ["Block A", "Block B"]
task:
jobs:
- name: "Job C"
commands:
- echo "output"
skip
The `skip` property is optional and allows you to define conditions, written in the Conditions DSL, that are based on the branch name or tag name of the current push that initiated the entire pipeline. If a condition defined in this way evaluates to true, the block is skipped.
When a block is skipped, it immediately finishes with a `passed` result without actually running any of its jobs.
Its result_reason is set to `skipped`, and other blocks that depend on its passing are started and executed as if this block had executed regularly and all of its jobs had passed.
Example of a block that has been skipped on all branches except master:
version: v1.0
name: The name of the Semaphore project
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Inspect Linux environment
skip:
when: "branch != 'master'"
task:
jobs:
- name: Print Environment variables
commands:
- echo $SEMAPHORE_PIPELINE_ID
- echo $HOME
It is not possible to have both the `skip` and `run` properties defined for the same block.
run
The `run` property is optional and allows you to define a condition, written in the Conditions DSL, that is based on properties of the push that initiated the entire workflow.
If the run condition evaluates to true, the block and all of its jobs will run; otherwise, the block is skipped.
When a block is skipped, it immediately finishes with a `passed` result and a `skipped` result_reason, without actually running any of its jobs.
Example of a block that is run only on the master branch:
version: v1.0
name: The name of the Semaphore project
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Inspect Linux environment
run:
when: "branch = 'master'"
task:
jobs:
- name: Print Environment variables
commands:
- echo $SEMAPHORE_PIPELINE_ID
- echo $HOME
It is not possible to have both the `skip` and `run` properties defined for the same block.
task
The `task` property defines the jobs in the block along with all of its optional properties:
version: v1.0
name: The name of the Semaphore project
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Inspect Linux environment
task:
jobs:
- name: Print Environment variables
commands:
- echo $SEMAPHORE_PIPELINE_ID
- echo $HOME
agent
The `agent` section under a `task` section is optional and can coexist with the global `agent` definition at the beginning of a pipeline YAML file. The properties and the possible values of the `agent` section can be found in the agent reference.
An `agent` block under a `task` block overrides the global `agent` definition.
version: v1.0
name: YAML file example with task and agent.
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Run in Linux environment
task:
jobs:
- name: Learn about SEMAPHORE_GIT_DIR
commands:
- echo $SEMAPHORE_GIT_DIR
- name: Run in macOS environment
task:
agent:
machine:
type: a1-standard-4
os_image: macos-xcode15
jobs:
- name: Using agent job
commands:
- echo $PATH
env_vars
The elements of an `env_vars` array are name and value pairs that hold the name of the environment variable and its value.
version: v1.0
name: A Semaphore project
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- task:
jobs:
- name: Check environment variables
commands:
- echo $HOME
- echo $PI
- echo $VAR1
env_vars:
- name: PI
value: "3.14159"
- name: VAR1
value: This is Var 1
The indentation level of the `prologue`, `epilogue`, `env_vars`, and `jobs` properties should be the same.
prologue
A `prologue` block in a `task` block is used when you want to execute certain commands prior to the commands of each job of a given `task`. This is usually the case with initialization commands that install software, start or stop services, etc.
version: v1.0
name: YAML file illustrating the prologue property
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Display a file
task:
jobs:
- name: Display hw.go
commands:
- ls -al
- cat hw.go
prologue:
commands:
- checkout
epilogue
An `epilogue` block should be used when you want to execute commands after a job has finished, either successfully or unsuccessfully.
Please note that a pipeline will not fail if one or more commands in the `epilogue` fail to execute for some reason. Also, epilogue commands will not run if the job was stopped, canceled, or timed out.
There are three types of epilogue commands:
- Always executed: defined with `always` in the epilogue section.
- Executed when the job passes: defined with `on_pass` in the epilogue section.
- Executed when the job fails: defined with `on_fail` in the epilogue section.
The order of command execution is as follows:
- First, the `always` commands are executed.
- Then, the `on_pass` or `on_fail` commands are executed.
version: v1.0
name: YAML file illustrating the epilogue property
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Linux version
task:
jobs:
- name: Execute uname
commands:
- uname -a
epilogue:
always:
commands:
- echo "this command is executed for both passed and failed jobs"
on_pass:
commands:
- echo "This command runs if job has passed"
on_fail:
commands:
- echo "This command runs if job has failed"
Commands can be defined as a list directly in the YAML file, as in the above example, or via the `commands_file` property:
version: v1.0
name: YAML file illustrating the epilogue property
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- name: Linux version
task:
jobs:
- name: Execute uname
commands:
- uname -a
epilogue:
always:
commands_file: file_with_epilogue_always_commands.sh
on_pass:
commands_file: file_with_epilogue_on_pass_commands.sh
on_fail:
commands_file: file_with_epilogue_on_fail_commands.sh
Where the content of the files is a list of commands, as in the following example:
echo "hello from command file"
echo "hello from $SEMAPHORE_GIT_BRANCH/$SEMAPHORE_GIT_SHA"
The location of the file is relative to the pipeline file. For example, if your pipeline file is located in `.semaphore/semaphore.yml`, the `file_with_epilogue_always_commands.sh` file in the above example is assumed to live in `.semaphore/file_with_epilogue_always_commands.sh`.
secrets
A secret is a place for keeping sensitive information in the form of environment variables and small files. Sharing sensitive data in a secret is both safer and more flexible than storing it in plain text files or environment variables that anyone can access.
The `secrets` property is used for importing all the environment variables and files from an existing secret on the Semaphore server into the jobs.
If one or more names of the environment variables from two or more imported secrets are the same, then the shared environment variables will have the value that was found in the secret that was imported last. The same rule applies to the files in secrets.
Additionally, if you try to use a `name` value that does not exist, the pipeline fails to execute.
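For example, in the sketch below both hypothetical secrets are assumed to define a `DB_PASSWORD` variable; the job sees the value from `staging-secrets` because it is imported last:
blocks:
  - name: Using multiple secrets
    task:
      jobs:
        - name: Show which value wins
          commands:
            # DB_PASSWORD holds the value from staging-secrets
            - echo $DB_PASSWORD
      secrets:
        - name: mysql-secrets
        - name: staging-secrets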
name
The `name` property is compulsory in a `secrets` block because it specifies the secret that you want to import. The secret or secrets must be found within the active server.
All files in secrets are restored in the home directory of the user of the agent, usually mapped to `/home/semaphore`.
version: v1.0
name: Pipeline configuration with secrets
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
blocks:
- task:
jobs:
- name: Using secrets
commands:
- echo $USERNAME
- echo $PASSWORD
secrets:
- name: mysql-secrets
Environment variables imported from a `secrets` property are used like regular environment variables defined in an `env_vars` block.