NETRESEC Network Security Blog
People sometimes ask me when I will teach my network forensics class in the United States. The US is undoubtedly the country with the most advanced and mature DFIR community, so it would be awesome to be able to give my class there. However, not being a U.S. person and not working for a U.S. company makes it rather difficult for me to teach in the United States (remember what happened to Halvar Flake?).
So if you’re from the Americas and would like to take my network forensics class, then please don’t wait for me to teach my class at a venue close to you – because I probably won’t. My recommendation is that you instead attend my upcoming training at 44CON in London this September.
The network forensics training in London will cover topics such as:
- Analyzing a web defacement
- Investigating traffic from a remote access trojan (njRAT)
- Analyzing a Man-on-the-Side attack (much like QUANTUM INSERT)
- Finding a backdoored application
- Identifying botnet traffic through whitelisting
- Rinse-Repeat Threat Hunting
The first day of training will focus on analysis using only open source tools. The second day will primarily cover training on commercial software from Netresec, i.e. NetworkMiner Professional and CapLoader. All students enrolling in the class will get a full 6 month license for both these commercial tools.
Hope to see you at the 44CON training in London!
Posted by Erik Hjelmvik on Tuesday, 25 April 2017 14:33:00 (UTC/GMT)
In November last year Alexa admitted in a tweet that they had stopped releasing their CSV file with the one million most popular domains.
Members of the Internet measurement and infosec research communities were outraged, surprised and disappointed, since this domain list had become the de-facto tool for evaluating the popularity of a domain. As a result, Cisco Umbrella (previously OpenDNS) released a free top 1 million list of its own in December that same year. However, by then Alexa had already announced that their “top-1m.csv” file was back up again.
The Alexa list was unavailable for just about a week but this was enough for many researchers, developers and security professionals to make the move to alternative lists, such as the one from Umbrella. This move was perhaps fueled by Alexa saying that the “file is back for now”, which hints that they might decide to remove it again later on.
We’ve been leveraging the Alexa list for quite some time in NetworkMiner and CapLoader in order to do DNS whitelisting, for example when doing threat hunting with Rinse-Repeat. But we haven’t made the move from Alexa to Umbrella, at least not yet.
Malware Domains in the Top 1 Million Lists
Threat hunting expert Veronica Valeros recently pointed out that there are a great many malicious domains in the Alexa top one million list.
I often recommend analysts to use the Alexa list as a whitelist to remove “normal” web surfing from their PCAP dataset when doing threat hunting or network forensics. And, as previously mentioned, both NetworkMiner and CapLoader make use of the Alexa list in order to simplify domain whitelisting. I therefore decided to evaluate just how many malicious domains there are in the Alexa and Umbrella lists.
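To give an idea of how this kind of whitelisting can be implemented, here is a minimal Python sketch. It is an illustration only, not how NetworkMiner or CapLoader do it internally, and the helper names are made up for this example:

```python
import csv

def load_top_domains(csv_lines):
    """Build a whitelist set from top-1m.csv style rows ("rank,domain").
    Accepts any iterable of lines, e.g. an open file object."""
    return {row[1].strip().lower()
            for row in csv.reader(csv_lines) if len(row) >= 2}

def main_domain(fqdn):
    """Naive main-domain extraction: keep the last two labels.
    A real implementation should use the Public Suffix List
    to handle TLDs such as .co.uk."""
    return ".".join(fqdn.lower().rstrip(".").split(".")[-2:])

def filter_whitelisted(queried_domains, whitelist):
    """Drop DNS names whose main domain is on the whitelist; what
    remains is the reduced dataset to hunt through."""
    return [d for d in queried_domains if main_domain(d) not in whitelist]
```

The naive two-label `main_domain()` helper is good enough for names like “www.google.com”, but it will mangle domains under multi-label suffixes, which is exactly why the Public Suffix List exists.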
hpHosts EMD (by Malwarebytes)
- URL: https://hosts-file.net/emd.txt
- Total malicious domains: 154144
| | Alexa | Umbrella |
| --- | --- | --- |
| Whitelisted malicious domains | 1365 | 1458 |
| Percent of malicious domains whitelisted | 0.89% | 0.95% |
Malware Domain Blocklist
- URL: http://www.malwaredomains.com/?page_id=66
- Total malicious domains: 18270
| | Alexa | Umbrella |
| --- | --- | --- |
| Whitelisted malicious domains | 84 | 63 |
| Percent of malicious domains whitelisted | 0.46% | 0.34% |
CyberCrime Tracker
- URL: http://cybercrime-tracker.net/
- Total malicious domains: 7861

| | Alexa | Umbrella |
| --- | --- | --- |
| Whitelisted malicious domains | 15 | 10 |
| Percent of malicious domains whitelisted | 0.19% | 0.13% |
The results presented above indicate that the Alexa and Umbrella lists contain roughly the same number of malicious domains. The percentages also reveal that using Alexa or Umbrella as a whitelist, i.e. ignoring all traffic to the top one million domains, might result in ignoring up to 1% of the traffic going to malicious domains. I consider this an acceptable number of false negatives, since techniques like Rinse-Repeat Intrusion Detection aren’t intended to replace traditional intrusion detection systems; they are meant to be used as a complement in order to hunt down the intrusions that your IDS failed to detect. Working on a reduced dataset that still contains 99% of the malicious traffic is an acceptable price to pay for having removed all the “normal” traffic going to the one million most popular domains.
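The percentages follow directly from the counts. As a quick sanity check of the hpHosts numbers (the function name is just for this example):

```python
def whitelisted_percent(whitelisted, total):
    """Share of the malicious domains that also appear in a top-1m list."""
    return round(100.0 * whitelisted / total, 2)

# hpHosts EMD contains 154144 malicious domains in total
alexa_pct = whitelisted_percent(1365, 154144)     # Alexa: 0.89
umbrella_pct = whitelisted_percent(1458, 154144)  # Umbrella: 0.95
```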
One significant difference between the two lists is that the Umbrella list contains subdomains (such as www.google.com, safebrowsing.google.com and accounts.google.com) while the Alexa list only contains main domains (like “google.com”). In fact, the Umbrella list contains over 1800 subdomains for google.com alone! This means that the Umbrella list in practice contains fewer main domains than the one million main domains in the Alexa list. We estimate that roughly half of the domains in the Umbrella list are redundant if you are only interested in main domains. However, having subdomains can be an asset if you need to match the full domain name rather than just the main domain name.
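A rough way to estimate that redundancy is to collapse an Umbrella-style list down to unique main domains. The sketch below uses a naive two-label heuristic (a proper implementation should use the Public Suffix List; this code is ours, not something shipped with our tools):

```python
def collapse_to_main_domains(domains):
    """Collapse a list that may contain subdomains to unique main domains,
    preserving first-seen (i.e. highest-rank) order.
    Naive two-label heuristic; real code should use the Public Suffix List."""
    seen, result = set(), []
    for d in domains:
        main = ".".join(d.lower().rstrip(".").split(".")[-2:])
        if main not in seen:
            seen.add(main)
            result.append(main)
    return result
```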
Data Sources used to Compile the Lists
Image: The Alexa Extension for Firefox
The two lists are compiled in different ways, which can be important to be aware of depending on what type of traffic you are analyzing. Alexa primarily receives web browsing data from users who have installed one of Alexa’s many browser extensions (such as the Alexa browser toolbar shown above). They also gather additional data from users visiting web sites that include Alexa’s tracker script.
Cisco Umbrella, on the other hand, compiles its data from “the actual world-wide usage of domains by Umbrella global network users”. We’re guessing this means building statistics from DNS queries sent through the OpenDNS service, which was recently acquired by Cisco.
This means that the Alexa list might be better suited if you are only analyzing HTTP traffic from web browsers, while the Umbrella list probably is the best choice if you are analyzing non-HTTP traffic or HTTP traffic that isn’t generated by browsers (for example HTTP API communication).
As noted by Greg Ferro, the Umbrella list contains test domains like “www.example.com”. These domains are not present in the Alexa list.
We have also noticed that the Umbrella list contains several domains with non-authorized gTLDs, such as “.home”, “.mail” and “.corp”. The Alexa list, on the other hand, only seems to contain real domain names.
Resources and Raw Data
Both the Alexa and Cisco Umbrella top one million lists are CSV files named “top-1m.csv”. The CSV files can be downloaded from these URLs:
- Alexa: http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
- Umbrella: http://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip
The analysis results presented in this blog post are based on top-1m.csv files downloaded from Alexa and Umbrella on March 31, 2017. The malware domain lists were also downloaded from the three respective sources on that same day.
We have decided to share the “false negatives” (malware domains that were present in the Alexa and Umbrella lists) for transparency. You can download the lists with all false negatives from here:
Hands-on Practice and Training
If you wanna learn more about how a list of common domains can be used to hunt down intrusions in your network, then please register for one of our network forensic trainings. The next training will be a pre-conference training at 44CON in London.
Posted by Erik Hjelmvik on Monday, 03 April 2017 14:47:00 (UTC/GMT)
We are today happy to announce the release of CapLoader 1.5.
This new version of CapLoader parses pcap and pcap-ng files even faster than before, and comes with new features such as support for ICMP flows and a TCP stream reassembly engine.
Support for ICMP Flows
CapLoader is designed to group packets together that belong to the same bi-directional flow, i.e. all UDP, TCP and SCTP packets with the same 5-tuple (regardless of direction) are considered part of the same flow.
- /fʌɪv ˈtjuːp(ə)l/
- A combination of source IP, destination IP, source port, destination port and transport protocol (TCP/UDP/SCTP) used to uniquely identify a flow or layer 4 session in computer networking.
The flow concept in CapLoader 1.5 has been extended to also include ICMP. Since there are no port numbers in the ICMP protocol CapLoader sets the source and destination port of ICMP flows to 0. The addition of ICMP in CapLoader also allows input filters and display filters like “icmp” to be leveraged.
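Conceptually, a direction-agnostic flow lookup can be done by sorting the two (IP, port) endpoints before building the key, so that both directions of a session map to the same flow. This is just an illustration of the flow concept, not CapLoader’s actual implementation:

```python
def flow_key(proto, src_ip, src_port, dst_ip, dst_port):
    """Direction-agnostic 5-tuple: packets in both directions of a flow
    produce the same key. For ICMP (which has no ports) pass 0 for both
    port numbers, mirroring how CapLoader represents ICMP flows."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)
```

Packets can then be grouped into flows with a plain dictionary keyed on `flow_key(...)`.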
Image: CapLoader 1.5 showing only ICMP flows due to display filter 'icmp'.
TCP Stream Reassembly
One of the foundations for making CapLoader a super fast tool for reading and filtering PCAP files is that it doesn’t attempt to reassemble TCP streams. This means that CapLoader’s Transcript view will show out-of-order segments in the order they were received and retransmitted segments will be displayed twice.
The steps required to reassemble a TCP stream to disk with Wireshark are:
- Right-click a TCP packet in the TCP session of interest.
- Select “Follow > TCP Stream”.
- Choose direction in the first drop-down-list (client-to-server or server-to-client).
- Change format from “ASCII” to “Raw” in the next drop-down-menu.
- Press the “Save as...” button to save the reassembled TCP stream to disk.
Unfortunately, Wireshark fails to properly reassemble some TCP streams. As an example, the current stable release of Wireshark (version 2.2.5) shows duplicate data in “Follow TCP Stream” when there are retransmissions with partially overlapping segments. We have also noticed some additional bugs related to TCP stream reassembly in other recent releases of Wireshark. However, we’d like to stress that Wireshark does perform a correct reassembly of most TCP streams; it is only in some specific situations that Wireshark produces a broken reassembly. Unfortunately, even a minor bug like this can have serious consequences, for example when the TCP stream is analyzed as part of a digital forensics investigation or when the extracted data is used as input for further processing. We have therefore decided to include a TCP stream reassembly engine in CapLoader 1.5. The steps required to reassemble a TCP stream in CapLoader are:
- Double click a TCP flow of interest in the “Flows” tab to open a flow transcript.
- Click the “Save Client Byte Stream” or “Save Server Byte Stream” button to save the data stream for the desired direction to disk.
Extracting TCP streams from PCAP files this way not only ensures that the data stream is correctly reassembled; it is also both faster and simpler than pivoting through Wireshark’s Follow TCP Stream feature.
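To illustrate why partially overlapping retransmissions are tricky, here is a toy reassembly function for one direction of a TCP stream. It writes each byte at its absolute stream offset and lets the first arrival win, so overlapping segments are never duplicated the way a naive concatenation would duplicate them. This is a simplified sketch using relative sequence numbers, not CapLoader’s engine:

```python
def reassemble(segments):
    """Reassemble one direction of a TCP stream from (relative_seq, data)
    pairs. Bytes from retransmitted or overlapping segments are written
    only once (first arrival wins)."""
    stream = {}
    for seq, data in segments:
        for i, byte in enumerate(data):
            stream.setdefault(seq + i, byte)
    # Emit contiguous bytes from offset 0; stop at the first gap.
    out = bytearray()
    off = 0
    while off in stream:
        out.append(stream[off])
        off += 1
    return bytes(out)
```

A production engine would additionally have to pick a policy for *conflicting* overlaps (different operating systems honor different copies, which is itself an evasion vector).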
PCAP Icon Context Menu
The PCAP icon in CapLoader is designed to allow easy drag-and-drop operations in order to open a set of selected flows in an external packet analysis tool, such as Wireshark or NetworkMiner. Right-clicking this PCAP icon will bring up a context menu, which can be used to open a PCAP with the selected flows in an external tool or copy the PCAP to the clipboard. This context menu has been extended in CapLoader 1.5 to also include a “Save As” option. Previous versions of CapLoader required the user to drag-and-drop from the PCAP icon to a folder in order to save filtered PCAP data to disk.
Faster Parsing with Protocol Identification
CapLoader can identify over 100 different application layer protocols, including HTTP, SSL, SSH, RTP, RTCP and SOCKS, without relying on port numbers. The protocol identification has previously slowed down the analysis quite a bit, which has caused many users to disable this powerful feature. This new release of CapLoader comes with an improved implementation of the port-independent protocol identification feature, which enables PCAP files to be loaded twice as fast as before with the “Identify protocols” feature enabled.
Works in Linux and macOS
One major improvement in CapLoader 1.5 is that this release is compatible with the Mono framework, which makes CapLoader platform independent. This means that you can now run CapLoader on your Mac or Linux machine if you have Mono installed. Please refer to our previous blog posts about how to run NetworkMiner in various flavors of Linux and macOS to find out how to install Mono on your computer. You will, however, notice a performance hit when running CapLoader under Mono instead of using Windows since the Mono framework isn't yet as fast as Microsoft's .NET Framework.
Image: CapLoader 1.5 running in Linux (Xubuntu).
We’d like to thank Sooraj for reporting a bug in the “Open With” context menu of CapLoader’s PCAP icon. This bug has been fixed in CapLoader 1.5 and Sooraj has been awarded an official “PCAP or it didn’t happen” t-shirt for reporting the bug.
Image: PCAP or it didn't happen t-shirt
Have a look at our Bug Bounty Program if you also wanna get a PCAP t-shirt!
Downloading CapLoader 1.5
Everything mentioned in this blog post, except for the protocol identification feature, is available in our free trial version of CapLoader.
To try it out, simply grab a copy here:
https://www.netresec.com/?page=CapLoader#trial (no registration needed)
All paying customers with an older version of CapLoader can download a free update to version 1.5 from our customer portal.
Posted by Erik Hjelmvik on Tuesday, 07 March 2017 09:11:00 (UTC/GMT)
NetworkMiner can reassemble files transferred over protocols such as HTTP, FTP, TFTP, SMB, SMB2, SMTP, POP3 and IMAP simply by reading a PCAP file. NetworkMiner stores the extracted files in a directory called “AssembledFiles” inside of the NetworkMiner directory.
Files extracted by NetworkMiner from the DFRWS 2008 challenge file suspect.pcap
NetworkMiner is a portable tool that is delivered as a zip file. The tool doesn’t require any installation; you simply extract the zip file to your PC. We don’t provide any official guidance regarding where to place NetworkMiner, so users are free to put it wherever they find it most fitting. Some put the tool on the Desktop or in “My Documents” while others prefer “C:\Program Files”. However, please note that normal users usually don’t have write permissions to sub-directories of %programfiles%, which will prevent NetworkMiner from performing file reassembly.
Unfortunately, previous versions of NetworkMiner didn’t alert the user when it failed to write to the AssembledFiles directory. This means that the tool would silently fail to extract any files from a PCAP file. This behavior has been changed with the release of NetworkMiner 2.1. Now the user gets a window titled “Insufficient Write Permissions” with a text like this:
User is unauthorized to access the following file:
File(s) will not be extracted!
Follow these steps to set adequate write permissions to the AssembledFiles directory in Windows:
- Open the Properties window for the AssembledFiles directory
- Open the “Security” tab
- Press “Edit” to change permissions
- Select the user who will be running NetworkMiner
- Check the “Allow” checkbox for Write permissions
- Press the OK button
If you are running NetworkMiner under macOS (OS X) or Linux, then please make sure to follow our installation instructions, which include this command:
sudo chmod -R go+w AssembledFiles/
Once you have set up the appropriate write permissions you should be able to start NetworkMiner and open a PCAP file in order to have the tool automatically extract files from the captured network traffic.
Posted by Erik Hjelmvik on Friday, 03 March 2017 09:44:00 (UTC/GMT)
I released the first version of NetworkMiner on February 16, 2007, which is exactly 10 years ago today.
One of the main uses of NetworkMiner today is to reassemble file transfers from PCAP files and save the extracted files to disk. However, as you can see in the screenshot above, the early versions of NetworkMiner didn’t even have a Files tab. In fact, the task that NetworkMiner was originally designed for was simply to provide an inventory of the hosts communicating on a network.
How it all started
So, why did I start designing a passive asset detection system when I could just as well have used a port scanner like Nmap to fingerprint the devices on a network? Well, at the time I was working with IT security at the R&D department of a major European energy company. As part of my job I occasionally performed IT security audits of power plants. During these audits I typically wanted to ensure that there were no rogue or unknown devices on the network. The normal way of verifying this would be to perform an Nmap scan of the network, but that wasn’t an option for me since I was dealing with live industrial control system networks. I knew from personal experience that a network scan could cause some of the industrial control system devices to drop their network connections or even crash, so active scanning wasn’t a viable option. Instead I chose to set up a SPAN port at a central point of the network, or even install a network TAP, and then capture network traffic to a PCAP file for a few hours. I found the PCAP files to be a great source, not only for identifying the hosts present on a network, but also for discovering misconfigured devices. However, I wasn’t really happy with the tools available for visualizing the devices on the network, which is why I started developing NetworkMiner in my spare time.
As I continued improving NetworkMiner I pretty soon ended up writing my own TCP reassembly engine as well as parsers for HTTP and the CIFS protocol (a.k.a SMB). With these protocols in place I was able to extract files downloaded through HTTP or SMB to disk with NetworkMiner, which turned out to be a killer feature.
Image: Monthly downloads of NetworkMiner from SourceForge
With the ability to extract file transfers from PCAP files NetworkMiner steadily gained popularity as a valuable tool in the field of network forensics, which motivated me to make the tool even better. Throughout these past 10 years I have single-handedly implemented over 60 protocols in NetworkMiner, which has been a great learning experience for me.
- 2007-02-16 First release of NetworkMiner on SourceForge
- 2008-08-01 NetworkMiner is featured in Russ McRee’s toolsmith column for August 2008.
- 2008-09-01 Richard Bejtlich blogs about NetworkMiner 0.85 on the TaoSecurity blog
- 2011-03-04 First release of NetworkMiner Professional and the command line version NetworkMinerCLI.
- 2011-11-11 NetworkMiner ranked #85 among SecTools.Org’s “Top 125 Network Security Tools”
- 2011-11-19 Cross platform support (works in Linux, Mac OSX etc).
- 2011-12-16 REMnux Linux distro includes NetworkMiner
- 2011-12-27 Security Onion Linux distro includes NetworkMiner
- 2013-09-15 Support for PcapNG capture file format in NetworkMiner Professional 1.5
- 2016-02-19 Project hosting moved from SourceForge to netresec.com
People sometimes ask me what I’m planning to add to the next version of NetworkMiner. To be honest; I never really know. In fact, I’ve realized that those with the best ideas for features or protocols to add to NetworkMiner are those who use NetworkMiner as part of their jobs, such as incident responders and digital forensics experts across the globe.
I therefore highly value feedback from users, so if you have requests for new features to be added to the next version, then please feel free to reach out and let me know!
Posted by Erik Hjelmvik on Thursday, 16 February 2017 09:11:00 (UTC/GMT)
The training will touch upon topics relevant for law enforcement as well as incident response, such as investigating a defacement, finding backdoors and dealing with a machine infected with real malware. We will also be carving lots of files, emails and other artifacts from the PCAP dataset as well as perform Rinse-Repeat Intrusion Detection in order to detect covert malicious traffic.
Day 1 - March 20, 2017
The first training day will focus on open source tools that can be used for doing network forensics. We will be using the Security Onion Linux distro for this part, since it contains pretty much all the open source tools you need.
Day 2 - March 21, 2017
We will spend the second day mainly using NetworkMiner Professional and CapLoader, i.e. the commercial tools from Netresec. Each student will be provided with a free 6 month license for the latest version of NetworkMiner Professional (see our recent release of version 2.1) and CapLoader. This is a unique chance to learn all the great features of these tools directly from the guy who develops them (me!).
The Troopers conference and training will be held at the Print Media Academy (PMA) in Heidelberg, Germany.
Print Media Academy, image credit: Alex Hauk
Keeping the class small
The number of seats in the training will be limited in order to provide a high-quality interactive training. However, keep in mind that this means we might run out of seats for the network forensics class!
I would like to recommend those who wanna take the training to also attend the Troopers conference on March 22-24. The conference will have some great talks, like these ones:
- Hunting Them All by @verovaleros
- Ruler - Pivoting Through Exchange by @_staaldraad
- Graph me, I’m famous! – Automated static malware analysis and indicator extraction for binaries by @pinkflawd and @rafi0t
Please note that the tickets to the Troopers conference are also limited, and they seem to sell out quite early each year. So if you are planning to attend the network forensics training, then I recommend that you buy an “All Inclusive” ticket, which includes a two-day training and a conference ticket.
You can read more about the network forensics training at the Troopers website.
The network forensics training at Troopers is now sold out. However, there are still free seats available in our network forensics class at 44CON in London in September.
Posted by Erik Hjelmvik on Tuesday, 24 January 2017 07:20:00 (UTC/GMT)
We are releasing a new version of NetworkMiner today. The latest and greatest version of NetworkMiner is now 2.1.
Yay! /throws confetti in the air
Better Email Parsing
I have spent some time during 2016 talking to digital forensics experts at various law enforcement agencies. I learned that from time to time criminals still fail to use encryption when reading their email. This new release of NetworkMiner therefore comes with parsers for POP3 and IMAP as well as an improved SMTP parser. These are the de facto protocols used for sending and receiving emails, and have been so since way back in the 90’s.
Messages tab in NetworkMiner 2.1 showing extracted emails
Not only does NetworkMiner show the contents of emails within the tool, it also extracts all attachments to disk and even saves each email as an .eml file that can be opened in an external email reader in order to view it as the suspect would.
Extracted email ”Get_free_s.eml” opened in Mozilla Thunderbird
There are several protocols that can be used to provide logical separation of network traffic, in order to avoid using multiple physical networks to keep various network segments, security domains or users apart. Some of these techniques for logical separation rely on tagging or labeling, while others are tunneling the encapsulated traffic. Nevertheless, it’s all pretty much the same thing; an encapsulation protocol is used in order to wrap protocol X inside protocol Y, usually while adding some metadata in the process.
NetworkMiner has been able to parse the classic encapsulation protocols 802.1Q, GRE and PPPoE since 2011, but we now see an increased use of protocols that provide logical separation for virtualization and cloud computing environments. We have therefore added parsers in NetworkMiner 2.1 for VXLAN and OpenFlow, which are the two most widely used protocols for logical separation of traffic in virtualized environments. We have also added decapsulation of MPLS and EoMPLS (Ethernet-over-MPLS) to NetworkMiner 2.1.
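As a reference for what such decapsulation involves, VXLAN (RFC 7348) is particularly simple: an 8-byte header inside a UDP datagram (typically port 4789) carrying a 24-bit VXLAN Network Identifier, followed by the encapsulated Ethernet frame. A minimal parser sketch (ours, for illustration only):

```python
def parse_vxlan(udp_payload):
    """Parse the 8-byte VXLAN header (RFC 7348) from a UDP payload and
    return (vni, inner_frame). The encapsulated Ethernet frame starts
    right after the header."""
    if len(udp_payload) < 8:
        raise ValueError("truncated VXLAN header")
    if not udp_payload[0] & 0x08:   # 'I' flag: VNI field is valid
        raise ValueError("VNI flag not set")
    vni = int.from_bytes(udp_payload[4:7], "big")  # 24-bit network identifier
    return vni, udp_payload[8:]
```

A dissector would then feed `inner_frame` straight back into its Ethernet parser, which is essentially what decapsulation support means in practice.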
The new release additionally comes with support for the SOCKS protocol, which is an old but still widely used proxy protocol.
NetworkMiner 2.1 can read packets directly from a local PacketCache service by clicking ”File > Read from PacketCache”. This eliminates the need to run a PowerShell script in order to dump a PCAP file with packets recently captured by PacketCache.
HTTP Partial Content / Range Requests
Byte serving is a feature in HTTP that makes it possible to retrieve only a segment of a file, rather than the complete file, by using the “Range” HTTP header. This feature is often used by BITS in order to download updates for Windows. But we have also seen malware use byte serving, for example malware droppers that attempt to download malicious payloads in a stealthy manner. See Ursnif and Dridex for examples of malware that utilize this technique.
NetworkMiner has previously only reassembled the individual segments of a partial content download. But as of version 2.1 NetworkMiner has the ability to piece together the respective parts into a complete file.
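In principle, piecing the parts together boils down to placing each partial response at the offset given by its Content-Range header and verifying that no byte is missing. A simplified sketch (not NetworkMiner’s implementation):

```python
def merge_range_responses(parts, total_size):
    """Piece together a file from HTTP 206 partial content responses.
    Each part is (first_byte, data), where first_byte comes from the
    Content-Range header, e.g. 'Content-Range: bytes 500-999/2000'
    gives first_byte=500 and total_size=2000.
    Returns the complete file, or None if any byte is still missing."""
    buf = bytearray(total_size)
    have = [False] * total_size
    for first, data in parts:
        buf[first:first + len(data)] = data
        for i in range(first, first + len(data)):
            have[i] = True
    return bytes(buf) if all(have) else None
```

Returning None for incomplete downloads matters forensically: a file with silent gaps could otherwise be mistaken for the real payload.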
SSL/TLS and X.509 Certificates
NetworkMiner has been able to extract X.509 certificates to disk for many years now, simply by opening a PCAP file with SSL traffic. However, in the 2.1 release we’ve added support for parsing out the SSL details also from FTP’s “AUTH TLS” (a.k.a explicit TLS or explicit SSL) and STARTTLS in SMTP.
NetworkMiner now also extracts details from the SSL handshake and X.509 certificate to the Parameters tab, such as the requested SNI hostname and the Subject CN from the certificate.
SSL handshake details and certificate info passively extracted from captured HTTPS session to mega.co.nz
The new features mentioned so far are all part of the free open source version of NetworkMiner. But we have also added a few additional features to the Professional edition of NetworkMiner as part of the 2.1 release.
The “Browsers” tab of NetworkMiner Professional has been extended with a feature for tracking online ads and web trackers. We are using EasyList and EasyPrivacy from easylist.to in order to provide an up-to-date tracking of ads and trackers. HTTP requests related to ads are colored red, while web tracker requests are blue. These colors also apply to the Files tab and can be modified in the Settings menu (Tools > Settings).
NetworkMiner Professional 2.1 showing advertisements (red) and Internet trackers (blue).
The reason why NetworkMiner Pro now tracks ads and trackers is that these types of requests can make up almost half of the HTTP requests a normal user makes while surfing the web today. Doing forensics on network traffic from a criminal suspect can be a very time consuming task. We therefore hope that being able to differentiate between traffic initiated by the user and traffic triggered by an online advertisement service or internet tracker can save time for investigators.
The RIPE database previously contained a bug that prevented NetworkMiner Professional from properly leveraging netname info from RIPE. This bug has now been fixed, so that the Host Details can be enriched with details from RIPE for IP addresses in Europe. To enable the RIPE database you’ll first have to download the raw data by clicking Tools > Download RIPE DB.
Image: Host Details enriched with RIPE description and netname
We have also extended the exported details about the hosts in the CSV and XML files from NetworkMiner Professional and the command line tool NetworkMinerCLI. The exported information now contains details such as IP Time-to-Live and open ports.
Upgrading to version 2.1
Users who have purchased a license for NetworkMiner Professional 2.0 can download a free update to version 2.1 from our customer portal.
Those who instead prefer to use the free and open source version can grab the latest version of NetworkMiner from the official NetworkMiner page.
There are several people I would like to thank for contributing feature requests and bug reports that have been used to improve NetworkMiner. I would like to thank Dietrich Hasselhorn, Christian Reusch, Jasper Bongertz, Eddi Blenkers and Daniel Spiekermann for feedback that has helped improve the SMB, SMB2 and HTTP parsers as well as the implementation of various encapsulation protocols. I would also like to thank several investigators at the Swedish, German and Dutch police as well as EUROPOL for the valuable feedback they have provided.
Posted by Erik Hjelmvik on Wednesday, 11 January 2017 14:30:00 (UTC/GMT)
Remember the days back in the 90s when you could cripple someone’s Internet connection simply by issuing a few PING commands like “ping -t [target]”? This type of attack was only successful if the victim was on a dial-up modem connection. However, it turns out that a similar form of ICMP flooding can still be used to perform a denial of service attack; even when the victim is on a gigabit network.
The 90's called and wanted their ICMP flood attack back
Analysts at TDC-SOC-CERT (Security Operations Center of the Danish telecom operator TDC) noticed how a certain type of distributed denial-of-service (DDoS) attack was more effective than others. The analysts found that a special type of ICMP flooding attack could disrupt the network throughput for some customers, even if the attack was just using a modest bandwidth (less than 20Mbit/s). It turned out that Destination Unreachable ICMP messages (ICMP type 3), such as “port unreachable” (code 3), were consuming significantly more resources on some firewalls than the more common ICMP Echo messages associated with the Ping command. The TDC team have dubbed this particular ICMP flooding attack method “BlackNurse”.
TDC's own report about BlackNurse says:
“The BlackNurse attack attracted our attention, because in our anti-DDoS solution we experienced that even though traffic speed and packets per second were very low, this attack could keep our customers' operations down. This even applied to customers with large internet uplinks and large enterprise firewalls in place.”
Cisco ASA firewalls are one product line that can be flooded using the BlackNurse attack. Cisco were informed about the BlackNurse attack in June this year, but decided not to classify this vulnerability as a security issue. Because of this, there is no CVE or other vulnerability number associated with BlackNurse.
Evaluation of BlackNurse Denial-of-Service Attacks
Members of the TDC-SOC-CERT set up a lab network to evaluate how effective ICMP type 3 attacks were compared to other ICMP flooding methods. In this setup they used hping3 to send ICMP floods like this:
ICMP net unreachable (ICMP type 3, code 0):
hping3 --icmp -C 3 -K 0 --flood [target]
ICMP port unreachable (ICMP type 3, code 3) a.k.a. “BlackNurse”:
hping3 --icmp -C 3 -K 3 --flood [target]
ICMP Echo (Ping):
hping3 --icmp -C 8 -K 0 --flood [target]
ICMP Echo with code 3:
hping3 --icmp -C 8 -K 3 --flood [target]
The tests showed that Cisco ASA devices used more CPU resources to process the destination unreachable flood attacks (type 3) than the ICMP Echo traffic. As a result, when hit by a BlackNurse attack the firewalls start dropping packets that would otherwise have been forwarded. When the packet drops become significant, the customer behind the firewall basically drops off the internet.
The tests also showed that a single attacking machine running hping3 could, on its own, produce enough ICMP type 3 code 3 packets to consume pretty much all the firewall's resources. Members of the TDC-SOC-CERT shared a few PCAP files from their tests with me, so that their results could be verified. One set of these PCAP files contained only the attack traffic, where the first part was generated using the following command:
hping3 --icmp -C 3 -K 3 -i u200 [target]
The “-i u200” in the command above instructs hping3 to send one packet every 200 microseconds. This packet rate can be verified simply by reading the PCAP file with a command like this:
tshark -c 10 -r attack_record_00001.pcapng -T fields -e frame.time_relative -e frame.time_delta -e frame.len -e icmp.type -e icmp.code
0.000000000 0.000000000 72 3 3
0.000207000 0.000207000 72 3 3
0.000415000 0.000208000 72 3 3
0.000623000 0.000208000 72 3 3
0.000830000 0.000207000 72 3 3
0.001038000 0.000208000 72 3 3
0.001246000 0.000208000 72 3 3
0.001454000 0.000208000 72 3 3
0.001661000 0.000207000 72 3 3
0.001869000 0.000208000 72 3 3
The tshark output confirms that hping3 sent an ICMP type 3 code 3 (a.k.a. “port unreachable”) packet every 208 microseconds, which amounts to roughly 5000 packets per second (pps) or 2.7 Mbit/s. We can also use the capinfos tool from the wireshark/tshark suite to confirm the packet rate and bandwidth like this:
capinfos attack_record_00001.pcapng
Number of packets: 48 k
File size: 5000 kB
Data size: 3461 kB
Capture duration: 9.999656 seconds
First packet time: 2016-06-08 12:25:19.811508
Last packet time: 2016-06-08 12:25:29.811164
Data byte rate: 346 kBps
Data bit rate: 2769 kbps
Average packet size: 72.00 bytes
Average packet rate: 4808 packets/s
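As a quick sanity check of those numbers, the relation between inter-packet delay, packet rate and bandwidth can be verified with a bit of shell arithmetic (the 72-byte packet size is taken from the capinfos output above):

```shell
# Sanity check: 72-byte packets sent every 208 microseconds
pps=$((1000000 / 208))   # integer packets per second, ~4807
bps=$((pps * 72 * 8))    # bits per second, ~2.77 Mbit/s
echo "$pps packets/s, $bps bit/s"
```

This matches the capinfos output of 4808 packets/s and 2769 kbps, give or take rounding.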
A few minutes later they upped the packet rate by using the “--flood” argument instead of the 200 microsecond inter-packet delay, like this:
hping3 --icmp -C 3 -K 3 --flood [target]
Number of packets: 3037 k
File size: 315 MB
Data size: 218 MB
Capture duration: 9.999996 seconds
First packet time: 2016-06-08 12:26:19.811324
Last packet time: 2016-06-08 12:26:29.811320
Data byte rate: 21 MBps
Data bit rate: 174 Mbps
Average packet size: 72.00 bytes
Average packet rate: 303 kpackets/s
The capinfos output reveals that hping3 was able to push a whopping 303,000 packets per second (174 Mbit/s), which is way more than what is needed to overload a network device vulnerable to the BlackNurse attack. Unfortunately the PCAP files I got did not contain enough normal Internet background traffic to reliably measure the degradation of the throughput during the denial of service attack, so I had to resort to alternative methods. The approach I found most useful for detecting disruptions in the network traffic was to look at the round trip times of TCP packets over time.
The graph above measures the time between a TCP data packet and the ACK response of that data segment (called “tcp.analysis.ack_rtt” in Wireshark). The graph shows that the round trip time only rippled a little due to the 5000 pps BlackNurse attack, but then skyrocketed as a result of the 303 kpps flood. This essentially means that “normal” traffic was prevented from getting through the firewall until the 303 kpps ICMP flood was stopped. However, also notice that even a sustained attack of just 37 kpps (21 Mbit/s or 27 μs inter-packet delay) can be enough to take a gigabit firewall offline.
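If you want to do a similar round trip time analysis on your own captures, the “tcp.analysis.ack_rtt” values can be exported with tshark (“-T fields -e frame.time_relative -e tcp.analysis.ack_rtt”) and then filtered with a one-liner like the sketch below. The three sample lines are made-up values rather than data from the TDC captures:

```shell
# Flag ACK round-trip times above 100 ms in a two-column
# "time rtt" listing; the sample values below are fabricated
printf '1.0 0.012\n2.0 0.015\n3.0 2.431\n' |
awk '$2 > 0.1 { print "RTT spike at t=" $1 "s: " $2 "s" }'
```

Plotting the second column over the first gives a graph much like the one above.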
Detecting BlackNurse Attacks
TDC-SOC-CERT have released the following SNORT IDS rules for detecting the BlackNurse attack:
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"TDC-SOC - Possible BlackNurse attack from external source"; itype:3; icode:3; detection_filter:track by_dst, count 250, seconds 1; reference:url,soc.tdc.dk/blacknurse/blacknurse.pdf; metadata:TDC-SOC-CERT,18032016; priority:3; sid:88000012; rev:1;)
alert icmp $HOME_NET any -> $EXTERNAL_NET any (msg:"TDC-SOC - Possible BlackNurse attack from internal source"; itype:3; icode:3; detection_filter:track by_dst, count 250, seconds 1; reference:url,soc.tdc.dk/blacknurse/blacknurse.pdf; metadata:TDC-SOC-CERT,18032016; priority:3; sid:88000013; rev:1;)
Protecting against BlackNurse Attacks
The recommendation from TDC is to deny ICMP type 3 messages sent to the WAN interface of Cisco ASA firewalls in order to prevent the BlackNurse attack. However, before doing so, please read the following excerpt from the Cisco ASA 5500 Series Command Reference:
“We recommend that you grant permission for the ICMP unreachable message type (type 3). Denying ICMP unreachable messages disables ICMP Path MTU discovery, which can halt IPSec and PPTP traffic. See RFC 1195 and RFC 1435 for details about Path MTU Discovery.”
In order to allow Path MTU discovery to function you will need to allow at least ICMP type 3 code 4 packets (fragmentation needed) to be received by the firewall. Unfortunately, filtering or rate-limiting ICMP on the Cisco ASA itself does not seem to have any effect against the BlackNurse attack; the CPU maxes out anyway. Our best recommendation for protecting a Cisco ASA firewall against the BlackNurse attack is therefore to rate-limit incoming ICMP traffic on an upstream router.
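As a rough illustration of such upstream rate-limiting, here is a sketch using the legacy CAR feature on a Cisco IOS router. The interface name and the 64 kbit/s rate are hypothetical examples chosen for this sketch, not tested recommendations, and newer IOS releases would typically use MQC policing instead:

```
! Hypothetical sketch: police inbound ICMP on the internet-facing
! interface of the upstream router, dropping traffic above 64 kbit/s
access-list 100 permit icmp any any
interface GigabitEthernet0/0
 rate-limit input access-group 100 64000 8000 8000 conform-action transmit exceed-action drop
```

Remember to keep the rate high enough that legitimate ICMP, such as Path MTU discovery messages, still gets through under normal conditions.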
Another alternative is to upgrade the Cisco ASA to a more high-end model with multiple CPU cores, since the BlackNurse attack seems to be less effective against multi-core ASAs. A third mitigation option is to use a firewall from a different vendor than Cisco. However, please note that it's likely that other vendors also have products that are vulnerable to the BlackNurse attack.
Update November 12, 2016
Devices verified by TDC to be vulnerable to the BlackNurse attack:
- Cisco ASA 5505, 5506, 5515, 5525 and 5540 (default settings)
- Cisco ASA 5550 (Legacy) and 5515-X (latest generation)
- Cisco 897 router
- Cisco 6500 router (with SUP2T and Netflow v9 on the inbound interface)
- Fortigate 60c and 100D (even with drop ICMP on). See response from Fortinet.
- Fortinet v5.4.1 (one CPU consumed)
- Palo Alto (unless ICMP Flood DoS protection is activated). See advisory from Palo Alto.
- SonicWall (if misconfigured)
- Zyxel NWA3560-N (wireless attack from LAN Side)
- Zyxel Zywall USG50
Update November 17, 2016
There seems to be some confusion/amusement/discussion going on regarding why this attack is called the “BlackNurse”. Also, googling “black nurse” might not be 100 percent safe-for-work, since you risk getting search results with inappropriate videos that have nothing to do with this attack.
The term “BlackNurse”, which has been used within the TDC SOC for some time to denote the “ICMP 3,3” attack, actually refers to the two guys at the SOC who noticed how surprisingly effective this attack was. One of them is a former blacksmith and the other a nurse, which is why a colleague of theirs jokingly came up with the name “BlackNurse”. However, although it was first intended as a joke, the team decided to call the attack “BlackNurse” even when going public about it.
Posted by Erik Hjelmvik on Thursday, 10 November 2016 07:40:00 (UTC/GMT)