This page walks through the steps required to generate a Seccomp profile for a Docker image. This guide was prepared for use in the Software Security Summer School (SSSS'20).
NOTE: It is assumed that the Installation Guide has been followed. Reading the User Guide prior to performing these steps is also advised.
After you connect to the provided AWS instance, open a terminal and run the following commands.
We will first check that we are running the correct kernel version required for completing the hands-on exercise.
uname --kernel-release
This should print the following kernel version:
4.15.0-1054-aws
It is critical that you see the correct Linux kernel version.
Please use the raise hand feature of Webex to notify one of
the panelists if the version does not match.
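If you prefer, the optional snippet below (a small helper, not part of the exercise itself) compares the running kernel against the expected release and prints a warning on mismatch.

# Optional sanity check: warn if the running kernel is not the expected release.
expected="4.15.0-1054-aws"
actual="$(uname --kernel-release)"
[ "$actual" = "$expected" ] && echo "Kernel OK" || echo "WARNING: expected $expected, got $actual"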
1. Change your current working directory to the root of the repository (/home/ubuntu/confine).
cd /home/ubuntu/confine
2. This time, choose one of the following Docker images: Apache Httpd or MySQL. Try to find the correct image-url by searching for the image on Docker Hub. Note that MySQL requires extra arguments (options); choose it if you would like a challenge.
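If you are unsure of the exact image name, you can also search Docker Hub from the command line (assuming the Docker CLI is available on the instance), for example:

# Search Docker Hub for candidate images from the terminal.
docker search httpd
docker search mysql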
3. Open a new file and name it as you like; we will use myimages.json in the following examples. If you choose another name, adjust the rest of the commands accordingly. You can use your favorite text editor (vim, nano, emacs).
vim myimages.json
4. Use the example from Hands-on Exercise 1 to complete the following JSON file.
{ "??": { "enable": "true", "image-name": "??", "image-url": "??", "options": "??", "dependencies": {} } }
5. Now we are ready to run Confine with the following command and generate the Seccomp profile for the image you selected.
Note: You must run the following command as root.
sudo python3.7 confine.py -l libc-callgraphs/glibc.callgraph -m libc-callgraphs/musllibc.callgraph -i myimages.json -o output/ -p default.seccomp.json -r results/ -g go.syscalls/
The script will now start analyzing the selected Docker image. We will go through
each step the script performs and explain the output. (The sample output below is from the Nginx image used in Hands-on Exercise 1; your output will show the image you chose.)
a) The script prints the following line, showing it has started its analysis.
------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////
----->Starting analysis for image: [IMAGENAME]<-----
////////////////////////////////////////////////////////////////////////
b) Then it starts the monitoring phase, which uses Sysdig to identify the binaries executed in the container. This phase lasts for 60 seconds; an illustrative Sysdig command is shown after this step's output. (For more details on why we do this, please refer to the About page.)
If this is the first time we are hardening the Docker image and we have not
previously extracted its list of binaries and libraries, it will first print:
Cache doesn't exist, must extract binaries and libraries
Then it will monitor the executed binaries by running sysdig, generating the following output:
--->Starting MONITOR phase:
Running sysdig multiple times. Run count: 1 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve system calls
len(psList) from sysdig: 39
Container: nginx extracted psList with 52 elements
Running sysdig multiple times. Run count: 2 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve system calls
len(psList) from sysdig: 48
Container: nginx extracted psList with 62 elements
Running sysdig multiple times. Run count: 3 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve system calls
len(psList) from sysdig: 45
Container: nginx extracted psList with 63 elements
Container: nginx PS List: {'env', '/usr/sbin/sh', '/usr/bin/basename', 'find', time...)
Finished copying identified binaries and libraries
<---Finished MONITOR phase
If we have previously run the dynamic analysis phase and extracted all the binaries and libraries, this phase runs only once. We still need it to generate the logs for the Docker image, which serve as our baseline for validating the correctness of the generated Seccomp profile.
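To get a feel for what Sysdig records during this phase, you can run a roughly equivalent command yourself. This is an illustration, not Confine's exact invocation; the container name "nginx" is just an example.

# Illustrative only: print the executable of every execve event observed in a container named "nginx".
sudo sysdig -p "%evt.time %proc.exe" "evt.type=execve and container.name=nginx"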
c) The execution of the script can differ in this step, depending on whether the binaries have already been extracted. If the dynamic analysis has successfully extracted the set of binaries and libraries from the container, it does not need to copy them again and skips this step. Otherwise, it first generates the list of binaries used in the container and then starts copying them.
Starting to copy identified binaries and libraries (This can take some time...)
Finished copying identified binaries and libraries
<---Finished MONITOR phase
d) After the executables have been extracted, the script then starts extracting any direct system calls in them using objdump. It will go over all the files copied from the container to the temporary output folder and identify direct system calls.
--->Starting Direct Syscall Extraction
Extracting direct system call invocations
<---Finished Direct Syscall Extraction
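The idea behind this step can be approximated by hand; the simplified command below (a sketch, not Confine's exact pipeline) looks for direct syscall instructions in the disassembly of one of the copied binaries. The path placeholder is hypothetical; substitute any file copied under ./output/[IMAGENAME].

# Approximation: disassemble a copied binary and look for direct "syscall" instructions.
objdump -d ./output/[IMAGENAME]/<some-binary> | grep -w syscall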
e) Then, it extracts the list of functions imported by each binary and library.
--->Starting ANALYZE phase Extracting imported functions and storing in libs.out <---Finished ANALYZE phase
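Again, you can approximate this step manually: listing a binary's undefined dynamic symbols shows the functions it imports from libc and other libraries. This is an illustration of the idea, not Confine's exact method, and the path placeholder is hypothetical.

# Approximation: list imported (undefined) dynamic symbols of a copied binary.
objdump -T ./output/[IMAGENAME]/<some-binary> | grep UND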
f) After extracting all the direct system calls and combining the imported libc functions with the set of system calls those libc functions require, the script generates the set of prohibited system calls and prints the following output:
--->Starting INTEGRATE phase, extracting the list required system calls
Traversing libc call graph to identify required system calls
Generating final system call filter list
************************************************************************************
Container Name: [IMAGENAME] Num of filtered syscalls (original): n
************************************************************************************
<---Finished INTEGRATE phase
g) Now that the unnecessary system calls have been identified, we generate the corresponding Seccomp profile and validate that it works correctly by launching the container with our generated Seccomp profile (a manual equivalent of this validation is sketched after the note below).
--->Validating generated Seccomp profile: results//[IMAGENAME].seccomp.json
************************************************************************************
Finished validation. Container for image: nginx was hardened SUCCESSFULLY!
************************************************************************************
If you see the message Container for image: [IMAGENAME] was hardened SUCCESSFULLY!, the Seccomp profile has passed our validation steps.
IMPORTANT: If you did not see the message above
please ask for help from one of the panelists.
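You can reproduce this validation yourself by launching the container with the generated profile through Docker's seccomp option. A minimal sketch is shown below; [IMAGE-URL] is a placeholder for the image you analyzed, and any extra options your image needs must be added as well.

# Sketch: run the chosen image with the generated Seccomp profile applied.
sudo docker run --rm -d --security-opt seccomp=results/[IMAGENAME].seccomp.json [IMAGE-URL]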
h) Finally, once the analysis of the Docker image has finished, Confine prints the following:
///////////////////////////////////////////////////////////////////////////////////////
----->Finished extracting system calls for [IMAGENAME], sleeping for 5 seconds<-----
///////////////////////////////////////////////////////////////////////////////////////
--------------------------------------------------------------------------------------
6. Now that the analysis is complete, we can view the binaries and libraries that were identified as required for the proper execution of the container (stored in ./output/[IMAGENAME]):
ls -lh ./output/[IMAGENAME]
And the generated Seccomp profile (stored in ./results/[IMAGENAME].seccomp.json):
cat ./results/[IMAGENAME].seccomp.json
0. Change your current working directory to the root of the repository again (/home/ubuntu/confine).
cd /home/ubuntu/confine
1. Which system calls can be filtered? How many are there?
cat results/[IMAGENAME].seccomp.json | grep name
cat results/[IMAGENAME].seccomp.json | grep name | wc -l
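If jq is installed (an assumption about the instance), it can extract the syscall names more precisely than grep. The query below assumes the profile follows Docker's seccomp layout with a syscalls array whose entries hold a "names" list.

# Assumption: entries store a "names" array; use .syscalls[].name instead if each entry stores a single "name" string.
jq -r '.syscalls[].names[]?' results/[IMAGENAME].seccomp.json | wc -l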
2. Determine which kernel CVEs have been mitigated by disabling the above system calls. You can use the filterProfileToCve.py script to map the generated Seccomp profile to the mitigated CVEs.
python3.7 filterProfileToCve.py -c cve.files/cveToStartNodes.csv.validated -f results/profile.report.details.csv -o results -v cve.files/cveToFile.json.type.csv --manualcvefile cve.files/cve.to.syscall.manual --manualtypefile cve.files/cve.to.vulntype.manual
-c: Path to the file containing a map between each CVE and all the starting nodes
    which can reach the vulnerable point in the kernel call graph.
-f: This file is generated after you run Confine for a set of containers.
    It can be found in the results path in the root of the repository.
-o: Prefix of the file in which the results will be stored.
-v: A CSV file containing the mapping of CVEs to their vulnerability type.
--manualcvefile: Some CVEs have been gathered manually; they can be specified
    using this option.
--manualtypefile: A file containing the mapping of manually identified CVEs to
    their respective vulnerability type.
-d: Enable/disable debug mode, which prints many more log messages.
Note: The scripts required to generate the mapping between the kernel functions and their CVEs are in a separate repository. You do not need to recreate those results.
After you run the script above, a single file named results.container.csv will be created. Each line corresponds to a CVE mitigated in at least one of the Docker images listed in the profile.report.details.csv file. Keep in mind that the generated file will contain the CVEs for all containers hardened in any previous run. The last column of the CSV lists the names of the Docker images in which each CVE has been mitigated.
Line format:
cveid;system call names(can be more than one);cve-type(can be more than one);was-it-mitigated-by-the-default-seccomp-policies;number-of-docker-images-affected;names of docker images
CVE-2015-8539;add_key, keyctl;Denial Of Service, Gain privileges;True;1;nginx
3. View all the mitigated CVEs reported in results.container.csv. Is CVE-2017-5123 mitigated by applying the Seccomp profile? You can also view the subset of CVEs that were not mitigated by the default Docker Seccomp filter, but are now mitigated by Confine.
cat results.container.csv
cat results.container.csv | grep False
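To answer the CVE-2017-5123 question directly, you can also search for that CVE by ID; it will only appear in the file if it was mitigated for at least one of the analyzed images.

# Check whether CVE-2017-5123 appears among the mitigated CVEs.
grep CVE-2017-5123 results.container.csv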