Jobs

Jobs get stuff done. This page explains how to create and configure jobs.

Job lifecycle

Jobs run arbitrary shell commands inside a dedicated environment called an agent. Agents can take many forms, including ephemeral Docker containers, Kubernetes pods, or x86/ARM Virtual Machines.

When a job is scheduled, the following happens:

  1. Allocate agent: pick a suitable agent from the pool of warm agents
  2. Initialize: execute setup steps such as importing environment variables, loading SSH keys, mounting secrets, and installing the Semaphore toolbox
  3. Run commands: execute your commands
  4. End job and save logs: the job activity log is saved for future inspection
  5. Destroy agent: the used agent is discarded along with all its contents

Job Lifecycle

note

You can get non-ephemeral agents with self-hosted agents.

Jobs, blocks and pipelines

Semaphore uses jobs, blocks and pipelines to structure the workflow.

  • Job: the minimum unit of work, a sequence of commands. Every job exists inside a block
  • Block: contains one or more jobs. Jobs in the same block run concurrently and share properties
  • Pipeline: a group of blocks connected by dependencies. A workflow may span multiple pipelines

Pipeline, block and job
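
For orientation, here is a minimal sketch of how this hierarchy maps onto a pipeline YAML file (the block and job names are only examples):

version: v1.0
name: Example pipeline             # the pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Build                    # a block
    task:
      jobs:
        - name: Compile            # a job: a sequence of commands
          commands:
            - checkout
            - make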

How to create a job

You can create a job with the visual editor or by creating a YAML file.

Open your project on Semaphore and press Edit Workflow.

Screenshot of the project opened in Semaphore. The Edit Workflow button is highlighted

  1. Select the first block
  2. Type your shell commands
  3. Press Run the workflow, then press Looks good, Start →

New job being edited

Semaphore automatically starts the job when the file is saved. Click the running job to follow the progress and view its log.

Job log

tip

Do not use exit in the job commands. Doing so terminates the terminal session and marks the job as failed. If you want to force a non-zero exit status, use return <int> instead.

Run jobs in parallel

Jobs in the same block always run in parallel.

To run two jobs in parallel:

  1. Select the block
  2. Press + Add job
  3. Type the job name and commands

Adding a second job

Here you can also:

  • Delete a job by pressing the X sign next to it.
  • Delete the whole block along with the jobs by scrolling down and clicking on Delete block...

note

You can't share files between jobs living in the same block.

Run jobs in sequence

If you want to run jobs in sequence, i.e. not in parallel, you must define them in separate blocks.

  1. Click on +Add Block
  2. Type the name of the block
  3. Adjust dependencies to define execution order
  4. Type the name and commands for the job

Adding a second job and using dependencies to define execution order
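
In the pipeline YAML, this sequencing is expressed with the dependencies key on blocks. A minimal sketch (block names are only examples):

blocks:
  - name: Build
    dependencies: []
    task:
      jobs:
        - name: Compile
          commands:
            - checkout
            - make
  - name: Test
    dependencies: ["Build"]        # Test runs only after Build finishes
    task:
      jobs:
        - name: Unit tests
          commands:
            - checkout
            - make test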

tip

All files are lost when the job ends. This happens because each job is allocated to a different agent. Use cache or artifact to preserve files and directories between jobs.

Using dependencies

You can use block dependencies to control the execution flow of the workflow. See block dependencies to learn more.

Semaphore toolbox

The Semaphore toolbox is a set of built-in command line tools that carry out essential tasks in your jobs, such as cloning the repository or moving data between jobs.

The most-used tools in the Semaphore toolbox are:

  • checkout clones the remote Git repository
  • cache speeds up jobs by caching downloaded files
  • artifact saves and moves files between jobs
  • sem-version changes the active version for a language or runtime
  • sem-service starts databases and other services for testing

checkout

The checkout command clones the remote Git repository and changes into the repository directory (cd) so you're ready to work.

The following example shows the first commands for working with a Node.js project. We run checkout to get a local copy of the code. Next, we can run npm install because we can assume that package.json and package-lock.json exist in the current directory.

checkout
npm install

Here is how the same code looks in a Semaphore job.

Running checkout with the visual editor

How does checkout work?

Semaphore defines four environment variables that control how checkout works.

cache

note

Using cache in self-hosted agents requires additional setup steps.

The main function of the cache is to speed up job execution by caching downloaded files.

The cache store and cache restore commands can detect well-known dependency managers and persist files automatically. Let's say we want to speed up npm install; here is how to do it:

checkout
cache restore
npm install
cache store

The cache commands in the example work as follows:

  • cache store: saves node_modules to non-ephemeral storage. It knows it's a Node project because it found package.json in the working folder.
  • cache restore: retrieves the cached copy of node_modules to the working directory.

Cache is not limited to Node.js. It works with several languages and frameworks. Alternatively, you can use cache with any kind of file or folder, but in that case you need to supply additional arguments.
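
For instance, a sketch of caching an arbitrary folder under an explicit key (the key name and path are only examples):

# store the folder under a key of your choosing
cache store vendor-bundle-$SEMAPHORE_GIT_BRANCH vendor/bundle
# restore by key; several comma-separated keys act as fallbacks
cache restore vendor-bundle-$SEMAPHORE_GIT_BRANCH,vendor-bundle-master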

artifact

note

Using artifact in self-hosted agents requires additional setup steps.

The artifact command can be used:

  • as a way to move files between jobs and runs
  • as persistent storage for artifacts like compiled binaries or bundles

The following example shows how to persist files between jobs. In the first job we have:

checkout
npm run build
artifact push workflow dist

In the following jobs, we can access the content of the dist folder with:

artifact pull workflow dist

Let's do another example: this time we want to save the compiled binary hello.exe:

checkout
go build
artifact push project hello.exe

Artifact namespaces

Semaphore uses three separate namespaces of artifacts: job, workflow, and project. The syntax is:

artifact <pull|push> <job|workflow|project> </path/to/file/or/folder>

The namespace used controls at what level the artifact is accessible:

  • job artifacts are only accessible to the job that created them. Useful for collecting debugging data.
  • workflow artifacts are accessible to all jobs in all running pipelines. The main use case is to pass data between jobs.
  • project artifacts are always accessible. They are ideal for storing final deliverables.
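
For example, since hello.exe above was pushed to the project namespace, any later job in the same project can retrieve it with:

artifact pull project hello.exe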

For more information, see the Semaphore toolbox documentation.

sem-version

sem-version is a Linux utility for changing the active language or runtime version.

The syntax is:

sem-version <target> <version>

For example, to use Node.js v20.9.0:

sem-version node 20.9.0
node --version
checkout
npm install
npm test

See the toolbox to view all languages supported by this tool.

See languages for language-specific guidance.

tip

If the language you need is not available in the pre-built images, you can still use any language version with Docker environments.

sem-service

The sem-service utility is used to start and stop databases and other popular services.

The syntax is:

sem-service <command> <service-name> <version>

For example, to start PostgreSQL v16:

sem-service start postgres 16
checkout
npm install
npm test

You don't need to manually stop services at the end of the job. They are terminated automatically. See the toolbox to view all services supported by this tool.

Debugging jobs

Video Tutorial: Debugging tools

This section shows tips to detect and debug failing jobs.

Why did my job fail?

Semaphore ends the job as soon as a command exits with a non-zero status. Once a job has failed, no new jobs are started and the workflow is marked as failed.

Open the job log to see why it failed. The problematic command is shown in red. You can click on the commands to expand their output.

Job log with the error shown

tip

If you want to ignore the exit status of a command, append || true at the end. For example:

echo "the next command might fail, that's OK, I don't care"
command_that_might_fail || true
echo "continuing job..."

Interactive debug with SSH

You can debug a job interactively by SSHing into the agent. This is a very powerful feature for troubleshooting.

An interactive SSH session

note

If this is the first time using an interactive session, you need to install and connect the Semaphore command line tool.

To open an interactive session, open the job log and:

  1. Click on SSH Debug
  2. Copy the command shown
  3. Run the command in a terminal

How to connect with SSH for the first time

You'll be presented with a welcome message like this:

* Creating debug session for job 'd5972748-12d9-216f-a010-242683a04b27'
* Setting duration to 60 minutes
* Waiting for the debug session to boot up ...
* Waiting for ssh daemon to become ready.

Semaphore CI Debug Session.

- Checkout your code with `checkout`
- Run your CI commands with `source ~/commands.sh`
- Leave the session with `exit`

Documentation: https://docs.semaphoreci.com/essentials/debugging-with-ssh-access/.

semaphore@semaphore-vm:~$

To run the actual job commands in the SSH session:

source ~/commands.sh

You can run anything in the agent, including commands that were not part of the original job. Exit the session to end the job.

By default, the duration of the SSH session is limited to one hour. To run longer debug sessions, pass the duration flag to the previous command as shown below:

sem debug job <job-id> --duration 3h
note

Interactive sessions may be unavailable when access policies for secrets are enabled.

Inspecting running jobs

You can attach a terminal console to a running job. The steps are the same as debugging a job. The only difference is that Semaphore presents the following command (only shown while the job is running):

sem attach <job-id>

You can explore running processes, inspect the environment variables, and take a peek at the log files to help identify problems with your jobs.

note

Inspecting running jobs may be unavailable when access policies for secrets are enabled.

Port forwarding

When SSH is not enough to troubleshoot an issue, you can use port forwarding to connect to services listening on ports in the agent.

A typical use case for this feature is troubleshooting end-to-end tests. Let's say a test is failing and you can't find any obvious cause from the logs alone. Port forwarding the HTTP port in the agent to your local machine can reveal how the application "looks".

To start a port-forwarding session:

sem port-forward <job-id> <local-port> <remote-port>

For example, to forward an application listening on port 3000 in the agent to your machine on port 6000:

sem port-forward <job-id> 6000 3000

You can now connect to http://localhost:6000 to view the application running remotely in the agent.

note

Port-forwarding only works for Virtual Machine-based agents. It's not available in Docker environments.

Block settings

The settings you configure on the block are applied to all the contained jobs.

Prologue

Commands in the prologue run before each job in the block. Use this to run common setup commands like downloading dependencies, setting the runtime version, or starting test services.

  1. Select the block
  2. Open the prologue section and add your shell commands.

In the example below we use checkout to clone the repository at the start of every job in the block.

Adding commands to the prologue
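
For reference, here is how a block prologue might look in the pipeline YAML (a minimal sketch):

blocks:
  - name: Tests
    task:
      prologue:
        commands:
          - checkout               # runs before every job in this block
      jobs:
        - name: Unit tests
          commands:
            - make test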

Epilogue

Commands in the epilogue are executed after each job in the block ends. There are three epilogue types:

  • Execute always: always runs after the job ends, even if the job failed
  • If job has passed: commands to run when the job passes (all commands exited with zero status)
  • If job has failed: commands to run when the job failed (one command exited with non-zero status)

  1. Select the block
  2. Open the epilogue section (you may need to scroll down) and add your commands

In the example below we use artifact to save build artifacts and log files.

Editing the block's epilogue
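
In the pipeline YAML, the three epilogue types map to the always, on_pass, and on_fail keys. A minimal sketch (the pushed path is only an example):

blocks:
  - name: Tests
    task:
      epilogue:
        always:
          commands:
            - artifact push job logs/   # assumes the job writes logs to logs/
        on_pass:
          commands:
            - echo "job passed"
        on_fail:
          commands:
            - echo "job failed"
      jobs:
        - name: Unit tests
          commands:
            - make test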

Environment variables

Video Tutorial: How to use environment variables

Environment variables are exported into the shell environment of every job in the block. You must supply the variable name and value.

To add an environment variable:

  1. Select the block
  2. Open the Environment Variables section (you may need to scroll down)
  3. Set your variable name and value
  4. Press +Add env vars if you need more variables

Environment variables
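
In the pipeline YAML, block-level environment variables live under task.env_vars. A minimal sketch (the variable is only an example):

blocks:
  - name: Tests
    task:
      env_vars:
        - name: NODE_ENV
          value: production
      jobs:
        - name: Unit tests
          commands:
            - npm test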

Environment variables or shell exports?

You can define environment variables in two ways:

  • by putting them in the environment variables section
  • by using export commands in the job window: export NODE_ENV="production"

Secrets

Secrets are enabled at the block level and available to all the jobs in the block. You must create the secret before you can add it to a block.

To enable existing secrets in a block:

  1. Select the block
  2. Open the Secrets section (you may need to scroll down)
  3. Enable the checkbox next to the secret

The secret values are now available for all jobs in the block.

Importing secrets
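
In the pipeline YAML, a block references secrets by name under task.secrets. A minimal sketch (the secret name is only an example):

blocks:
  - name: Deploy
    task:
      secrets:
        - name: my-deploy-credentials   # must already exist in Semaphore
      jobs:
        - name: Deploy
          commands:
            - ./deploy.sh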

Skip/run conditions

You can choose to skip or run the block only under certain conditions. Skipping a block means that none of its jobs are executed.

Use cases for this feature include skipping a block on certain branches, or working with monorepo projects.

  1. Select the block
  2. Open the Skip/Run conditions section (you may need to scroll down)
  3. Select Run this block when... or Skip this block when...
  4. Type the conditions to run or skip the block

Editing skip/run conditions
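
In the pipeline YAML, these options map to the block-level run and skip keys, each taking a condition expression. A minimal sketch using run (the condition is only an example):

blocks:
  - name: Tests
    run:
      when: "branch = 'master'"        # run this block only on master
    task:
      jobs:
        - name: Unit tests
          commands:
            - make test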

Agent

Here you can override the pipeline-level agent for a specific block. You can select VMs running Linux, macOS, or Windows (self-hosted only) on both x86 and ARM architectures. This setting also allows you to run the jobs in self-hosted agents or in Docker environments.

  1. Select the block
  2. Open the Agent section (you may need to scroll down)
  3. Select the Environment Type
  4. Select the OS Image
  5. Select the Machine Type

Overriding the global agent
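
In the pipeline YAML, the override is an agent key inside the block's task. A minimal sketch (machine type and image are only examples):

blocks:
  - name: Heavy tests
    task:
      agent:
        machine:
          type: e1-standard-4          # overrides the pipeline-level machine
          os_image: ubuntu2004
      jobs:
        - name: Integration tests
          commands:
            - make integration-test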

Job parallelism

Job parallelism expands a job into multiple parallel jobs. You can use this feature to run a test suite faster by spreading the load among multiple agents.

To take full advantage of job parallelism you need to partition your test suite. Semaphore does not partition tests automatically, but it enables third-party test runners like Knapsack or Semaphore Test Booster (Ruby) to even out the load with a partition strategy that you can configure.

When coupled with your own partitioning strategy, job parallelism allows you to speed up large test suites by horizontally scaling the tests.

When job parallelism is enabled two new environment variables are available in the job environment:

  • SEMAPHORE_JOB_COUNT: the total number of jobs running on the parallelism set
  • SEMAPHORE_JOB_INDEX: a value between 1 and $SEMAPHORE_JOB_COUNT representing the current job instance of the parallelism set

To use job parallelism, follow these steps:

  1. Open the workflow editor
  2. Select or create a job
  3. Open the Configure parallelism or a job matrix section under the job
  4. Select Multiple instances
  5. Drag the slider to select the number of jobs to run in the parallelism set
  6. Type the commands. You can use the counter variables above as environment variables
  7. Press Run the workflow, then Start

Setting up job parallelism
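
In the pipeline YAML, the same setup corresponds to a parallelism key on the job. A sketch of splitting work with the counter variables (the test runner invocation is hypothetical):

blocks:
  - name: Tests
    task:
      jobs:
        - name: Unit tests
          parallelism: 4               # expands into 4 parallel jobs
          commands:
            - checkout
            # hypothetical runner that accepts a partition index and total count
            - ./run-tests.sh --partition $SEMAPHORE_JOB_INDEX --total $SEMAPHORE_JOB_COUNT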

note

It's not possible to use job parallelism at the same time as job matrices.

Job matrix

A job matrix is a more advanced form of job parallelism where you can define multiple variables with different values and run all the possible permutations.

For example, let's say we want to test our application across three Node.js versions using npm and yarn:

  • Node.js versions: v22.5.1, v21.7.3, and v20.15.1
  • Package managers: npm, and yarn

We have a total of 6 possible test jobs when we take into account all permutations. Usually, we would need to create these jobs manually, but with a job matrix we can specify the variables and values and let Semaphore expand one job into all the possible permutations.
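
Expressed in pipeline YAML, such a matrix might look as follows (a sketch; the variable names are ours):

blocks:
  - name: Tests
    task:
      jobs:
        - name: Test suite
          matrix:
            - env_var: NODE_VERSION
              values: ["22.5.1", "21.7.3", "20.15.1"]
            - env_var: PKG_MANAGER
              values: ["npm", "yarn"]
          commands:
            - sem-version node $NODE_VERSION
            - checkout
            - $PKG_MANAGER install
            - $PKG_MANAGER test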

To create a job matrix, follow these steps:

  1. Open the workflow editor
  2. Select or create the job
  3. Open the Configure parallelism or a job matrix section under the job
  4. Select Multiple instances based on a matrix
  5. Type the variable names
  6. Type the possible values of the variable separated by commas
  7. Add more variables as needed
  8. Type the commands. The variables are available as environment variables
  9. Press Run the workflow, then Start

Semaphore automatically expands all possible permutations and adds the variable values to the job name.

Configuring job matrix

info

Using job matrices causes Semaphore to run an initialization job before your jobs are executed.

Job priority

Every job in Semaphore has an internal priority value from 0 to 100. Job prioritization determines which jobs will get a machine assigned first when all agents are in use.

The priority of a job matters when there are more jobs than available agents. Because paid plans do not enforce limits on the number of available agents, the job priority value is only useful in two situations:

  • For projects in organizations on free and open source plans. These plans enforce concurrency limits
  • For projects running on a limited number of self-hosted agents

Default priorities

The priorities are assigned automatically according to the table below, but they can be configured on a per-job or per-pipeline basis.

Job type           | Branch        | Default priority
-------------------|---------------|-----------------
Promotion pipeline | master        | 65
Promotion pipeline | non-master    | 55
Promotion pipeline | tags          | 55
Promotion pipeline | pull requests | 55
Initial pipeline   | master        | 60
Initial pipeline   | non-master    | 50
Initial pipeline   | tags          | 50
Initial pipeline   | pull requests | 50
After pipeline     | any           | 45
Tasks              | any           | 40

Assigning priorities

To assign a different priority to a specific job, follow these steps:

  1. Open the pipeline YAML
  2. Locate the jobs
  3. Create a priority key
  4. Define a value and a condition
  5. Save the file and push it to the repository

The following example shows how to assign a higher priority to specific jobs when the branch is master:

Assigning priorities to specific jobs
version: v1.0
name: Job priorities
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004

blocks:
  - name: Tests
    task:
      jobs:
        - name: Unit tests
          priority:
            - value: 70
              when: "branch = 'master'"
            - value: 45
              when: true
          commands:
            - make unit-test
        - name: Integration tests
          priority:
            - value: 58
              when: "branch = 'master'"
            - value: 42
              when: true
          commands:
            - make integration-test

To change the priority of all jobs in a pipeline, follow these steps:

  1. Open the pipeline YAML
  2. Locate the jobs
  3. Add a global_job_config key at the root of the YAML
  4. Create a priority key
  5. Define a value and a condition
  6. Save the file and push it to the repository

The following example does the same as the one above, but using a global config:

version: "v1.0"
name: An example of using global_job_config
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004

global_job_config:
priority:
- value: 70
when: "branch = 'master'"
- value: 45
when: true

blocks:
- name: Tests
task:
jobs:
- name: Unit tests
commands:
- make unit-test
- name: Integration tests
commands:
- make integration-test

See the pipeline YAML reference for more details.

Job and block limits

Semaphore enforces a few limits to prevent misconfigured jobs and runaway processes from consuming too many resources.

This section describes the limits that Semaphore applies to jobs and blocks. See pipeline limits for the limits that apply to pipelines.

Job duration

Jobs have a 1 hour limit. Jobs exceeding this limit are terminated.

You can change the limit up to a maximum value of 24 hours.

To change the maximum duration for a single job:

  1. Open the pipeline YAML
  2. Locate the job
  3. Add an execution_time_limit element
  4. Add hours or minutes and set the new value
  5. Save the file and push it to the repository

Changing max duration for a single job
version: v1.0
name: Pipeline using execution_time_limit
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Job limited to 3 hours
    task:
      jobs:
        - name: Job limited to 3 hours
          execution_time_limit:
            hours: 3
          commands:
            - checkout
            - npm install
            - npm test

note

See pipeline global time limit to change the maximum duration for all jobs in a pipeline.

Max blocks per pipeline

A pipeline can have up to 100 blocks. This limit is not configurable.

If you have a use case in which this limit is too constraining, please contact us at support@semaphoreci.com and we will try to work out a solution.

Max jobs per block

A block can have up to 50 jobs. This limit is not configurable.

If you have a use case in which this limit is too constraining, please contact us at support@semaphoreci.com and we will try to work out a solution.

Max job log size

Job logs have a limit of 16 megabytes, which is roughly 100,000 lines. This limit is not configurable.

The following log message indicates that the job has exceeded the limit:

Content of the log is bigger than 16MB. Log is trimmed.

You can work around this limitation by setting the following environment variable, which makes Semaphore upload the log file as an artifact when the limit is exceeded.

SEMAPHORE_AGENT_UPLOAD_JOB_LOGS=when-trimmed

See also