Showing blog posts from 2014
It has, so far, been publicly reported that three ICS vendors have spread the Havex Remote Access Tool (RAT) as part of their official downloads. We've covered the six pieces of software from these three vendors in our blog post “Full Disclosure of Havex Trojans”. In this blog post we proceed by analyzing network traffic generated by Havex.
Indicators of Compromise
Before going into details of our analysis we'd like to recommend a few other resources that can be used to detect the Havex RAT. There are three Havex IDS signatures available via Emerging Threats. There are also Yara rules and OpenIOC signatures available for Havex. Additionally, the following domains are known to be used in the later versions (043 and 044) of Havex according to Kaspersky:
The Havex RAT Command-and-Control (C2) protocol is based on HTTP POST requests, which typically look something like this:
As you can see, four variables are sent in the QueryString of this HTTP POST request; namely id, v1, v2 and q. Let's take a closer look to see what data is actually sent to the C2 server in the QueryString.
170393861 (Windows XP)
498073862 (Windows 7)
498139398 (Windows 7, SP1)
q=45474bca5c3a10c8e94e56543c2bd (Havex 043)
q=0c6256822b15510ebae07104f3152 (Havex 043)
q=214fd4a8895e07611ab2dac9fae46 (Havex 044)
q=35a37eab60b51a9ce61411a760075 (Havex 044)
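As an illustration, these parameters can be pulled out of a request URI with Python's standard library. Note that the path and the id/v1/v2 values below are made up for the example; only the four parameter names (id, v1, v2, q) and the q value come from the observed traffic:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Havex C2 request URI: the path and the id/v1/v2 values are
# illustrative, the q value is one of the observed Havex 043 values.
request_uri = ("/example/havex_c2.php"
               "?id=1024&v1=038&v2=170393861&q=45474bca5c3a10c8e94e56543c2bd")

params = parse_qs(urlparse(request_uri).query)

# A simple detection heuristic: flag HTTP POSTs carrying all four C2 parameters
is_havex_like = {"id", "v1", "v2", "q"} <= params.keys()
```

A heuristic like this could complement the Emerging Threats IDS signatures when triaging proxy logs.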
Analyzing a Havex PCAP
I had the pleasure of discussing the Havex malware with Joel Langill when we met at the 4SICS conference in Stockholm last month. Joel was kind enough to provide me with an 800 MB PCAP file from when he executed the Havex malware in an Internet-connected lab environment.
Image: CapLoader transcript of Havex C2 traffic
I used the command line tool NetworkMinerCLI (in Linux) to automatically extract all HTTP downloads from Joel's PCAP file to disk. This way I also got a CSV log file with some useful metadata about the extracted files. Let's have a closer look at what was extracted:
$ mono NetworkMinerCLI.exe -r new-round-09-setup.pcap
Closing file handles...
970167 frames parsed in 1337.807 seconds.
$ cut -d, -f 1,2,3,4,7,12 new-round-09-setup.pcap.FileInfos.csv | head
SourceIP SourcePort DestinationIP DestinationPort FileSize Frame
18.104.22.168 TCP 80 192.168.1.121 TCP 1238 244 676 B 14
22.214.171.124 TCP 80 192.168.1.121 TCP 1261 150 B 1640
126.96.36.199 TCP 80 192.168.1.121 TCP 1286 359 508 B 3079
188.8.131.52 TCP 80 192.168.1.121 TCP 1311 236 648 B 4855
184.108.40.206 TCP 80 192.168.1.121 TCP 1329 150 B 22953
220.127.116.11 TCP 80 192.168.1.121 TCP 1338 150 B 94678
18.104.22.168 TCP 80 192.168.1.121 TCP 1346 150 B 112417
22.214.171.124 TCP 80 192.168.1.121 TCP 1353 150 B 130108
126.96.36.199 TCP 80 192.168.1.121 TCP 1365 150 B 147902
Files downloaded through Havex C2 communication are typically modules to be executed. However, these modules are downloaded in a somewhat obfuscated format; in order to extract them one needs to do the following:
- Base64 decode
- Decompress (bzip2)
- XOR with “1312312”
To be more specific, here's a crude one-liner that I used to calculate MD5 hashes of the downloaded modules:
$ tail -c +95 C2_download.html | base64 -d | bzcat -d | xortool-xor -s "1312312" -f - -n | tail -c +330 | md5sum
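The shell pipeline above can be approximated in pure Python; this is a rough sketch that assumes the same fixed offsets as the one-liner (a 94-byte HTML prefix and a 329-byte header after deobfuscation) and the same XOR key:

```python
import base64
import bz2
import hashlib
from itertools import cycle

def extract_havex_module(html_bytes, key=b"1312312"):
    """Deobfuscate a Havex C2 download and return the module's MD5 hash.

    Mirrors the shell pipeline: skip prefix, base64 decode, bzip2
    decompress, XOR with a repeating key, skip header, hash.
    """
    payload = html_bytes[94:]                  # tail -c +95: skip 94-byte HTML prefix
    decompressed = bz2.decompress(base64.b64decode(payload))
    xored = bytes(b ^ k for b, k in zip(decompressed, cycle(key)))
    module = xored[329:]                       # tail -c +330: skip 329-byte header
    return hashlib.md5(module).hexdigest()
```

Since XOR with a repeating key is its own inverse, the same helper logic can also be used to re-create test payloads for validating the extraction.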
To summarize the output from this one-liner, here's a list of the downloaded modules in Joel's PCAP file:
|Downloaded HTML MD5||Extracted module MD5|
All three extracted modules are known binaries associated with Havex. The third module is one of the Havex OPC scanner modules, let's have a look at what happens on the network after this module has been downloaded!
Analyzing Havex OPC Traffic
In Joel's PCAP file, the OPC module download finished at frame 5117. Less than a second later we see DCOM/MS RPC traffic. To understand this traffic we need to know how to interpret the UUIDs used by MS RPC.
Marion Marschalek has listed 10 UUIDs used by the Havex OPC module in order to enumerate OPC components. However, we've only observed four of these commands actually being used by the Havex OPC scanner module. These commands are:
|MS RPC UUID||OPC-DA Command|
Of these commands, “IOPC Browse” is the ultimate goal for the Havex OPC scanner, since that's the command used to enumerate all OPC tags on an OPC server. Now, let's have a look at the PCAP file to see which OPC commands (i.e. UUIDs) have been issued.
$ tshark -r new-round-09-setup.first6000.pcap -n -Y 'dcerpc.cn_bind_to_uuid != 99fcfec4-5260-101b-bbcb-00aa0021347a' -T fields -e frame.number -e ip.dst -e dcerpc.cn_bind_to_uuid -Eoccurrence=f -Eheader=y
frame.nr ip.dst dcerpc.cn_bind_to_uuid
5140 192.168.1.97 000001a0-0000-0000-c000-000000000046
5145 192.168.1.11 000001a0-0000-0000-c000-000000000046
5172 192.168.1.97 000001a0-0000-0000-c000-000000000046
5185 192.168.1.11 9dd0b56c-ad9e-43ee-8305-487f3188bf7a
5193 192.168.1.97 000001a0-0000-0000-c000-000000000046
5198 192.168.1.11 55c382c8-21c7-4e88-96c1-becfb1e3f483
5212 192.168.1.11 00000143-0000-0000-c000-000000000046
5247 192.168.1.11 000001a0-0000-0000-c000-000000000046
5257 192.168.1.11 00000143-0000-0000-c000-000000000046
5269 192.168.1.11 00000143-0000-0000-c000-000000000046
5274 192.168.1.11 39c13a4d-011e-11d0-9675-0020afd8adb3
5280 192.168.1.11 39c13a4d-011e-11d0-9675-0020afd8adb3
5285 192.168.1.11 39227004-a18f-4b57-8b0a-5235670f4468
5286 192.168.1.11 39227004-a18f-4b57-8b0a-5235670f4468
We can thereby verify that the IOPCBrowse command was sent to one of Joel's OPC servers in frames 5285 and 5286. However, tshark/Wireshark is not able to parse the list of OPC items (tags) that are returned from this function call. Also, in order to find all IOPCBrowse commands more efficiently we'd like to search for the binary representation of this command with tools like ngrep or CapLoader. It would even be possible to generate an IDS signature for IOPCBrowse if we knew what to look for.
The first three fields of an MS RPC UUID are sent in little-endian byte order, which means that the IOPCBrowse UUID is actually sent over the wire as:
04 70 22 39 8f a1 57 4b 8b 0a 52 35 67 0f 44 68
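This byte-swapped wire encoding can be reproduced with Python's standard uuid module, whose bytes_le property swaps exactly the first three fields; the result is the byte pattern one would feed to ngrep or an IDS signature:

```python
import uuid

# The IOPCBrowse interface UUID as it appears in the tshark output above
iopc_browse = uuid.UUID("39227004-a18f-4b57-8b0a-5235670f4468")

# bytes_le gives the little-endian wire encoding used by MS RPC:
# the first three fields are byte-swapped, the last two are not.
wire = iopc_browse.bytes_le
print(wire.hex(" "))  # 04 70 22 39 8f a1 57 4b 8b 0a 52 35 67 0f 44 68
```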
Let's search for that value in Joel's PCAP file:
Image: Searching for IOPCBrowse byte sequence with CapLoader
Image: CapLoader with 169 extracted flows matching IOPCBrowse UUID
Apparently 169 flows contain one or several packets that match the IOPCBrowse UUID. Let's do a “Flow Transcript” and see if any OPC tags have been sent back to the Havex OPC scanner.
Image: CapLoader Transcript of OPC-DA session
Oh yes, the Havex OPC scanner sure received OPC tags from what appears to be a Waterfall unidirectional OPC gateway.
Another way to find scanned OPC tags is to search for a unique tag name, like “Bucket Brigade” in this example.
Posted by Erik Hjelmvik on Wednesday, 12 November 2014 21:09:00 (UTC/GMT)
The Havex backdoor is developed and used by a hacker group called Dragonfly, who are also known as "Energetic Bear" and "Crouching Yeti". Dragonfly is an APT hacker group, who have been reported to specifically target organizations in the energy sector as well as companies in other ICS sectors such as industrial/machinery, manufacturing and pharmaceutical.
In my 4SICS talk I disclosed a previously unpublished comprehensive view of ICS software that has been trojanized with the Havex backdoor, complete with screenshots, version numbers and checksums.
Dale Petersen, founder of Digital Bond, expressed the following request regarding the lack of public information about the software trojanized with Havex:
If the names of the vendors that unwittingly spread Havex were made public, the wide coverage would likely reach most of the affected asset owners.
Following Dale's request we decided to publish the information presented at 4SICS also in this blog post, in order to reach as many affected asset owners as possible. The information published here is based on our own sandbox executions of Havex malware samples, which we have obtained via CodeAndSec and malwr.com. In addition to what I presented at 4SICS, this blog post also includes new findings published by Joel "scadahacker" Langill in version 2.0 of his Dragonfly white paper, which was released just a couple of hours after my talk.
In Symantec's blog post about Havex they write:
Three different ICS equipment providers were targeted and malware was inserted into the software bundles
Trojanized MESA Imaging driver
The first vendor known to have their software trojanized by the Dragonfly group was the Swiss company MESA Imaging, who manufacture industrial grade cameras for range measurements.
Image: Screenshot of trojanized MESA Imaging driver installer from our sandbox execution
|Product:||Swiss Ranger version 188.8.131.526 (libMesaSR)|
|Exposure:||Six weeks in June and July 2013 (source: Symantec)|
eWON / Talk2M
The second vendor to have their software trojanized was the Belgian company eWON, who provide a remote maintenance service for industrial control systems called “Talk2M”.
Back in January 2014, the eWON commercial web site www.ewon.biz had been compromised. A corrupted eCatcherSetup.exe file had been uploaded into the CMS (Content Management System) of www.ewon.biz web site. eCatcher download hyperlinks were rerouted to this corrupted file. The corrupted eCatcherSetup.exe contained a malware which could, under restricted conditions, compromise the Talk2M login of the infected user.
Image: Screenshot of trojanized Talk2M eCatcher installer from our sandbox execution
|Product:||Talk2M eCatcher version 184.108.40.20673|
|Exposure:||Ten days in January 2014, 250 copies downloaded (source: Symantec)|
Prior to version 2.0 of Joel's Dragonfly report, eCatcher was the only product from eWON known to be infected with the Havex backdoor. However, Joel's report also listed a product called “eGrabit”, which we managed to obtain a malware sample for via malwr.com.
Image: Screenshot of trojanized eGrabIt installer from our sandbox execution
|Product:||eGrabIt 220.127.116.11 (version 3.0 Build 82)|
|Backdoor:||Havex RAT 038|
MB Connect Line
The most recent company known to have their software infected with the Havex backdoor was the German company MB Connect Line GmbH, who are known for their industrial router mbNET and VPN service mbCONNECT24.
MB Connect Line published a report about the Dragonfly intrusion in September 2014, where they write:
On 16th of April 2014 our website www.mbconnectline.com has been attacked by hackers. The files mbCHECK (Europe), VCOM_LAN2 and mbCONFTOOL have been replaced with infected files. These files were available from 16th of April 2014 to 23th of April 2014 for download from our website. All of these files were infected with the known Trojan Virus Havex Rat.
Image: Screenshot of trojanized mbCONFTOOL installer from our sandbox execution
|Company:||MB Connect Line GmbH|
|Product:||mbCONFTOOL V 1.0.1|
|Exposure:||April 16 to April 23, 2014 (source: MB Connect Line)|
|Backdoor:||Havex RAT 043|
Image: Screenshot of trojanized mbCHECK application from our sandbox execution
|Company:||MB Connect Line GmbH|
|Product:||mbCHECK (EUROPE) V 1.1.1|
|Exposure:||April 16 to April 23, 2014 (source: MB Connect Line)|
|Backdoor:||Havex RAT 043|
Notice how only mbCHECK for users in Europe was trojanized; there has been no report of the USA/CAN version of mbCHECK being infected with Havex.
We have not been able to get hold of a malware sample for the trojanized version of VCOM_LAN2. The screenshot below is therefore from a clean version of this software.
Image: Screenshot VCOM_LAN2 installer
|Company:||MB Connect Line GmbH|
|Exposure:||April 16 to April 23, 2014 (source: MB Connect Line)|
Conclusions on Havex Trojans
The vendors who have had their software trojanized by Dragonfly are all European ICS companies (from Switzerland, Belgium and Germany). Additionally, only the mbCHECK version for users in Europe was infected with Havex, not the one for the US / Canada. These facts indicate that the Dragonfly / Energetic Bear threat actor seems to primarily target ICS companies in Europe.
Next: Detecting Havex with NSM
Read our follow-up blog post Observing the Havex RAT, which shows how to detect and analyze network traffic from ICS networks infected with Havex.
Posted by Erik Hjelmvik on Monday, 27 October 2014 11:11:00 (UTC/GMT)
GreatFire.org, who monitor the Great Firewall of China (GFW), also published a blog post on their website earlier today saying:
This is clearly a malicious attack on Apple in an effort to gain access to usernames and passwords and consequently all data stored on iCloud such as iMessages, photos, contacts, etc.
Fake SSL Certificate
In their blog post GreatFire also linked a packet capture file, which we have analyzed in order to verify the MITM attack. We loaded the PcapNG file into NetworkMiner Professional and extracted the X.509 SSL certificate.
The extracted certificate can be downloaded from here. Also, here are a few details from this X.509 certificate:
$ openssl x509 -inform DER -in www.icloud.com.cer -noout -issuer -subject -startdate -enddate -fingerprint
notBefore=Oct 4 10:35:47 2014 GMT
notAfter=Oct 4 10:35:47 2015 GMT
As reported elsewhere, the certificate was self signed, which means that browsers and most iPhone apps will either inform the user about the connection being unsafe or simply close the connection (see update at the bottom of this blog post regarding the missing certificate verification in Apple iOS). This use of self signed certificates is consistent with previous SSL MITM attacks performed in China against GitHub, Google, Yahoo and live.com.
Location of the MITM Attack
By looking at the host information provided by NetworkMiner for the fake iCloud SSL server we can see that it is just six router hops away from the client (having an IP TTL value of 58). This indicates that the MITM attack is being performed within China, since we'd expect to see at least three more router hops if the packets were coming from outside China.
The same PCAP file also contains packets from the same IP address on TCP port 80, which have traveled 11 hops (IP TTL 53). We therefore assume that only traffic to TCP port 443 is being MITM'ed.
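The hop estimates above follow from assuming that the sender used one of the common initial TTL defaults; a minimal sketch of that reasoning:

```python
def estimated_hops(observed_ttl, initial_ttls=(64, 128, 255)):
    """Rough hop-count estimate from an observed IP TTL.

    Assumes the sender used the nearest common initial TTL default
    (64 for Linux/macOS, 128 for Windows, 255 for some routers/BSDs)
    at or above the observed value.
    """
    initial = min(t for t in initial_ttls if t >= observed_ttl)
    return initial - observed_ttl

print(estimated_hops(58))  # 6 hops  (the MITM'ed port 443 traffic)
print(estimated_hops(53))  # 11 hops (the untouched port 80 traffic)
```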
This TTL analysis also matches various TCP traceroutes we've seen to the MITM'ed iCloud SSL service on 18.104.22.168:443.
mtr TCP 443 traceroute to 22.214.171.124 (source: http://pastebin.com/8Y6ZwfzG):
My traceroute [v0.85]
siyanmao-k29 (0.0.0.0) Sat Oct 18 19:26:07 2014
Host Loss% Snt Last Avg Best Wrst StDev
1. 192.168.1.1 0.0% 17 0.6 0.7 0.6 0.8 0.0
2. ------------- 0.0% 16 2.8 2.6 1.7 3.3 0.3
3. ------------- 0.0% 16 2.0 2.2 1.4 4.0 0.4
5. 126.96.36.199 0.0% 16 6.4 7.7 4.3 27.0 5.2
6. 188.8.131.52 25.0% 16 168.5 171.4 166.8 201.3 9.4
The mtr TCP traceroute above indicates that MITM attacks are performed in AS4134 (China Telecom).
tcptraceroute to 184.108.40.206 443 (source: bearice on GitHub):
bearice@Bearice-Mac-Air-Haswell ~ % tcptraceroute 220.127.116.11 443
Selected device en0, address 192.168.100.16, port 52406 for outgoing packets
Tracing the path to 18.104.22.168 on TCP port 443 (https), 30 hops max
1 192.168.100.254 1.737 ms 0.793 ms 0.798 ms
2 22.214.171.124 2.893 ms 2.967 ms 2.422 ms
3 126.96.36.199 2.913 ms 2.893 ms 3.968 ms
4 188.8.131.52 4.824 ms 2.658 ms 3.902 ms
5 184.108.40.206 3.626 ms 6.532 ms 3.794 ms
6 220.127.116.11 27.539 ms 26.821 ms 27.661 ms
7 a23-59-94-46.deploy.static.akamaitechnologies.com (18.104.22.168) [open] 30.064 ms 29.899 ms 30.126 ms
The tcptraceroute above indicates that MITM attacks are also performed in AS4837 (China Unicom).
The Tcproute screenshot above shows that the CHINANET backbone network (China Telecom) also seems to be used to carry out the MITM attacks.
Judging from these TCP traceroutes the MITM attacks seem to be taking place at several different locations rather centrally in the Chinese Internet infrastructure. To be more specific, it appears as if the MITM attacks are being performed on backbone networks belonging to China Telecom (CHINANET) as well as China Unicom.
UPDATE (October 22)
A vulnerability notice (CVE-2014-4449) has now been published, where Apple confirm that fake SSL certificates (like the Chinese fake one) were not verified by Apple iOS before 8.1. Apple released the first details about this vulnerability just a few hours after this blog post was published. Here's the text from the CVE description:
iCloud Data Access in Apple iOS before 8.1 does not verify X.509 certificates from TLS servers, which allows man-in-the-middle attackers to spoof servers and obtain sensitive information via a crafted certificate.

This means that the Chinese MITM of iCloud could potentially have revealed a significant number of iCloud credentials as well as private data (images, videos, documents etc.) to the attackers. Or, as @Exploit_This tweeted: "So china wants our nudes?"
Posted by Erik Hjelmvik on Monday, 20 October 2014 13:35:00 (UTC/GMT)
GreatFire.org sent out a tweet yesterday saying that “Yahoo appears to under Man-in-the-middle attack in China. 3rd case of country-wide MITM, after Google, Github”.
Mashable later ran a story called “China Appears to Attack Yahoo in Latest Censorship of Hong Kong Protests”, where Lorenzo Franceschi-Bicchierai writes:
In what's almost unprecedented, China appears to be targeting Yahoo with what's called a "man-in-the-middle attack." With such an attack, connections to Yahoo.com, which are normally encrypted, would be vulnerable to snooping, and Chinese censors could also block search terms or specific Yahoo links with the goal of preventing Chinese netizens from accessing information about the protests in Hong Kong.
In this blog post we verify that there is an ongoing Man-in-the-Middle (MITM) attack by analyzing two different packet capture files.
|Capture Location||Capture Date||Filename||MD5|
Both PCAP files were created with Wireshark/dumpcap using a capture filter of “host 22.214.171.124”, which is the IP address that was reported to be MITM'ed by the Great Firewall of China (GFW). This IP address is located in Hong Kong and is used by Yahoo to host www.yahoo.com, hk.yahoo.com etc. for users in this part of Asia.
Time-To-Live (TTL) Analysis
We estimate the distance between the end users and the Yahoo server in Hong Kong to be at least 10 router hops. However, the IP packets coming back to the users have IP TTL values of 58 (Wuxi) and 57 (Zhengzhou). This implies that the MITM is performed somewhere in China, just 6 or 7 router hops away from the users. This is consistent with what we've observed in previous MITM attacks performed by China against GitHub and Google.
IMAGE: Hosts tab in CapLoader showing TTL 57 for hk.yahoo.com:443
X.509 Certificate Analysis
$ openssl x509 -inform DER -in yahoo.com.cer -noout -issuer -subject -startdate -enddate -fingerprint
notBefore=Sep 23 11:30:17 2014 GMT
notAfter=Sep 23 11:30:17 2015 GMT
The certificate is a self signed certificate for “yahoo.com”. The fact that the MITM uses a self signed certificate makes the attack easily detectable even for the non-technical user, since the web browser will typically display a warning about the site not being trusted.
Some may think it's odd that China can't forge a properly signed certificate for their SSL MITM attack. However, they've used very similar self-signed certificates in their previous MITM attacks against GitHub and Google. The purpose of GFW (a.k.a. “Golden Shield”) is to censor the Internet, so the primary goal of this MITM attack isn't to covertly spy on Chinese Yahoo searches. Regardless of whether the end users notice the MITM or not, a self-signed X.509 cert is enough to see what they are searching for and “kill” their connection to Yahoo when queries like “Umbrella Revolution” and “Tiananmen Square Protests” are observed.
Posted by Erik Hjelmvik on Wednesday, 01 October 2014 21:55:00 (UTC/GMT)
The Chinese are running a MITM attack on SSL encrypted traffic between Chinese universities and Google. We've performed technical analysis of the attack, on request from GreatFire.org, and can confirm that it is a real SSL MITM against www.google.com and that it is being performed from within China.
We were contacted by GreatFire.org yesterday (September 3) with a request to analyze two packet captures from suspected MITM attacks before they finalized their blog post. The conclusions from our analysis are now published as part of GreatFire.org's great blog post titled “Authorities launch man-in-the-middle attack on Google”.
In their blog post GreatFire.org write:
From August 28, 2014 reports appeared on Weibo and Google Plus that users in China trying to access google.com and google.com.hk via CERNET, the country’s education network, were receiving warning messages about invalid SSL certificates. The evidence, which we include later in this post, indicates that this was caused by a man-in-the-middle attack.
While the authorities have been blocking access to most things Google since June 4th, they have kept their hands off of CERNET, China’s nationwide education and research network. However, in the lead up to the new school year, the Chinese authorities launched a man-in-the-middle (MITM) attack against Google.
Our network forensic analysis was performed by investigating the following two packet capture files:
|Capture Location||Client Netname||Capture Date||Filename||MD5|
|Peking University||PKU6-CERNET2||Aug 30, 2014||google.com.pcap||aba4b35cb85ed218 7a8a7656cd670a93|
|Chongqing University||CQU6-CERNET2||Sep 1, 2014||google_fake.pcapng||3bf943ea453f9afa 5c06b9c126d79557|
Client and Server IP addresses
The analyzed capture files contain pure IPv6 traffic (CERNET is an IPv6 network), which made the analysis a bit different than usual. We do not disclose the client IP addresses for privacy reasons, but they both seem legit; one from Peking University (netname PKU6-CERNET2) and the other from Chongqing University (CQU6-CERNET2). Both IP addresses belong to AS23910, named "China Next Generation Internet CERNET2".
The IP addresses received for www.google.com were in both cases also legit, so the MITM wasn't carried out through DNS spoofing. The Peking University client connected to 2607:f8b0:4007:804::1013 (GOOGLE-IPV6 in United States) and the connection from Chongqing University went to 2404:6800:4005:805::1010 (GOOGLE_IPV6_AP-20080930 in Australia).
Time-To-Live (TTL) Analysis
The Time-To-Live (TTL) values received in the IP packets from www.google.com were in both cases 248 or 249 (note: TTL is actually called “Hop Limit” in IPv6 nomenclature, but we prefer to use the well-established term “TTL” anyway). The highest possible TTL value is 255, which means that the received packets haven't made more than 6 or 7 router hops before ending up at the client. However, the expected number of router hops between a server on GOOGLE-IPV6 and the client at Peking University is around 14. The low number of router hops is a clear indication of an IP MITM taking place.
Image: CapLoader with both capture files loaded, showing TTL values
Here is an IPv6 traceroute from AS25795 in Los Angeles towards the IP address at Peking University (generated with ARP Networks' 4or6.com tool):
#traceroute -6 2001:da8:201:[REDACTED]
1 2607:f2f8:1600::1 (2607:f2f8:1600::1) 1.636 ms 1.573 ms 1.557 ms
2 2001:504:13::1a (2001:504:13::1a) 40.381 ms 40.481 ms 40.565 ms
3 * * *
4 2001:252:0:302::1 (2001:252:0:302::1) 148.409 ms 148.501 ms 148.595 ms
5 * * *
6 2001:252:0:1::1 (2001:252:0:1::1) 148.273 ms 147.620 ms 147.596 ms
7 pku-bj-v6.cernet2.net (2001:da8:1:1b::2) 147.574 ms 147.619 ms 147.420 ms
8 2001:da8:1:50d::2 (2001:da8:1:50d::2) 148.582 ms 148.670 ms 148.979 ms
9 cernet2.net (2001:da8:ac:ffff::2) 147.963 ms 147.956 ms 147.988 ms
10 2001:da8:201:[REDACTED] 147.964 ms 148.035 ms 147.895 ms
11 2001:da8:201:[REDACTED] 147.832 ms 147.881 ms 147.836 ms
12 2001:da8:201:[REDACTED] 147.809 ms 147.707 ms 147.899 ms
As can be seen in the traceroute above, seven hops before the client we find the 2001:252::/32 network, which is called “CNGI International Gateway Network (CNGIIGN)”. This network is actually part of CERNET, but on AS23911, which is the network that connects CERNET with its external peers. A reasonable assumption is therefore that the MITM is carried out on the 2001:252::/32 network, or where AS23910 (2001:da8:1::2) connects to AS23911 (2001:252:0:1::1). This means that the MITM attack is being conducted from within China.
Response Time Analysis
The round-trip time between the client and server can be estimated by measuring the time from when the client sends its initial TCP SYN packet to when it receives a TCP SYN+ACK from the server. The expected round-trip time for connecting from CERNET to a Google server overseas would be around 150ms or more. However, in the captures we've analyzed the TCP SYN+ACK packet was received in just 8ms (Peking) and 52ms (Chongqing) respectively. Again, this is a clear indication of an IP MITM taking place, since Google cannot possibly send a response from the US to CERNET within 8ms regardless of how fast they are. The fast response times also indicate that the machine performing the MITM is located fairly close to the network at Peking University.
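A back-of-the-envelope propagation calculation shows why an 8ms reply from a US server is physically impossible; the one-way distance figure used here is a rough assumption:

```python
def min_rtt_ms(distance_km, km_per_s=200_000.0):
    """Lower bound on round-trip time over fiber.

    Light propagates at roughly 2/3 of c in optical fiber,
    i.e. about 200,000 km/s; processing delays are ignored.
    """
    return 2 * distance_km / km_per_s * 1000

# Beijing to the US west coast is roughly 10,000 km one way (an assumption),
# so even a perfect fiber path needs about 100 ms for a round trip:
print(min_rtt_ms(10_000))  # 100.0
```

An observed 8ms SYN+ACK therefore cannot have crossed the Pacific, no matter how fast the server is.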
Even though the machine performing the MITM was very quick at performing the TCP three-way handshake, we noticed that the application layer communication was terribly slow. The specification for the TLS handshake (RFC 2246) defines that a ClientHello message should be responded to with a ServerHello. Google typically send their ServerHello response almost instantly, i.e. the response is received after one round-trip time (150ms in this case). However, in the analyzed captures we noticed ServerHello response times of around 500ms.
X.509 Certificate Analysis
The X.509 certificates were extracted from the two PCAP files to .cer files using NetworkMiner. We noticed that both users received identical certificates, which were both self signed for “google.com”. The fact that the MITM used a self signed certificate makes the attack easily detectable even for the non-technical user, since the web browser will typically display a warning about the site not being trusted. Additionally, the X.509 certificate was created for “google.com” rather than “*.google.com”. This is an obvious miss on the MITM'ers' side, since they were attempting to MITM traffic to “www.google.com” but not to “google.com”.
Image: NetworkMiner showing list of X.509 certificates extracted from the two PCAP files
Certificate SHA1 fingerprint: f6beadb9bc02e0a152d71c318739cdecfc1c085d
Certificate MD5 fingerprint: 66:D5:D5:6A:E9:28:51:7C:03:53:C5:E1:33:14:A8:3B
A copy of the fake certificate is available on Google drive thanks to GreatFire.org.
All evidence indicates that a MITM attack is being conducted against traffic between China’s nationwide education and research network CERNET and www.google.com. It looks as if the MITM is carried out on a network belonging to AS23911, which is the outer part of CERNET that peers with all external networks. This network is located in China, so we can conclude that the MITM was being done within the country.
It's difficult to say exactly how the MITM attack was carried out, but we can dismiss DNS spoofing as the method used. The evidence we've observed instead indicates that the MITM attack is performed either through IP hijacking or by simply reconfiguring a router to forward the HTTPS traffic to a transparent SSL proxy. An alternative to changing the router config would be to add an in-line device that redirects the desired traffic to the SSL proxy. However, regardless of how they did it, the attacker would be able to decrypt and inspect the traffic going to Google.
We can also conclude that the method used to perform the MITM attack was similar to the Chinese MITM on GitHub, but not identical.
Posted by Erik Hjelmvik on Thursday, 04 September 2014 23:55:00 (UTC/GMT)
This guide describes how to get NetworkMiner running on Mac OS X Mavericks (version 10.9.3).
After the download of “Mono MRE installer” has completed, just run the installer:
Press “Continue” to proceed installing the Mono Framework using the guided installer.
When the Mono Framework has been installed you can extract the downloaded NetworkMiner zip archive. Then start NetworkMiner from the terminal like this:
$ mono --arch=32 NetworkMiner.exe
Live sniffing with NetworkMiner on Mac OS X
Live sniffing with WinPcap or Raw Sockets is only available when running NetworkMiner in Windows.
However, live sniffing can still be achieved on Mac OS X (as well as in Linux) by using the PCAP-over-IP functionality.
Press the “Start Receiving” button and then use tcpdump to do live sniffing and forward all captured packets to NetworkMiner like this:
$ sudo tcpdump -i en0 -s0 -U -w - | nc localhost 57012
The preferred way to use NetworkMiner is, however, to load previously captured packets in a PCAP file and let NetworkMiner dig out all interesting details like transmitted files, images, messages, SSL certificates etc.
Microsoft .NET Windows.Forms GUI applications don't run on 64 bit macOS systems running Mono.
This will cause the application to hang/freeze during startup when the GUI window is about to be rendered, throwing errors such as:
- Unable to start NetworkMiner: An exception was thrown by the type initializer for System.Windows.Forms.WindowsFormsSynchronizationContext
- Unhandled Exception: System.TypeInitializationException: An exception was thrown by the type initializer for System.Windows.Forms.ThemeEngine
$ mono --arch=32 /opt/NetworkMiner/NetworkMiner.exe
We'd like to thank Fredrik Pettai for reporting this issue and Joel Langill for suggesting the workaround.
Posted by Jonas Lejon on Tuesday, 24 June 2014 21:25:00 (UTC/GMT)
We've released version 1.6 of NetworkMiner today!

Image credits: Confetti in Toronto by Winnie Surya
The new features in NetworkMiner 1.6 include:
- Drag-and-drop
Reassembled files and images can be opened with external tools by drag-and-dropping items from NetworkMiner's Files or Images tabs onto your favorite editor or viewer.
- Email extraction
Improved extraction of emails and attachments sent over SMTP.
- DNS analysis
Failed DNS lookups that result in NXDOMAIN and SERVFAIL are displayed in the DNS tab along with the flags in the DNS response.
- Live sniffing
Improved live sniffing performance.
Remote live sniffing enabled by bringing the PCAP-over-IP feature into the free open source version of NetworkMiner.
Identifying Malware DNS lookups
DNS traffic from the Kuluoz-Asprox botnet (PCAP file available via Contagio)
Note the NXDOMAIN responses and “No” in the Alexa top 1 million column in the screenshot above; these domains are probably generated by a domain generation algorithm (DGA).
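This heuristic is easy to script outside of NetworkMiner as well; the domains and the tiny Alexa set below are made up for illustration:

```python
# Toy illustration of the DGA heuristic described above: flag a lookup when
# the response is NXDOMAIN and the domain is not in the Alexa top 1 million.
# Both the log entries and the Alexa set here are fabricated examples.
alexa_top_1m = {"google.com", "yahoo.com", "github.com"}

dns_log = [
    ("xkcyqzhw.ru", "NXDOMAIN"),
    ("google.com", "NOERROR"),
    ("qmtbvlpe.biz", "NXDOMAIN"),
]

suspected_dga = [name for name, rcode in dns_log
                 if rcode == "NXDOMAIN" and name not in alexa_top_1m]
print(suspected_dga)  # ['xkcyqzhw.ru', 'qmtbvlpe.biz']
```

In practice the same check could be run over NetworkMinerCLI's CSV export of DNS responses.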
Live Sniffing with Pcap-over-IP
The PCAP-over-IP functionality enables live sniffing also on non-Windows machines, simply by running tcpdump (or dumpcap) and netcat like this:
# tcpdump -i eth0 -s0 -U -w - | nc localhost 57012
To receive the Pcap-over-IP stream in NetworkMiner, simply press Ctrl+R and select a TCP port.
For more information about this feature please see our previous blog post about the PCAP‑over‑IP feature.
The professional version of NetworkMiner additionally contains the following improvements of the command line tool NetworkMinerCLI:
- Enabled reading of PCAP and PcapNG data from standard input (STDIN)
- Full support for PCAP-over-IP
- More detailed DNS logging in NetworkMinerCLI's CSV export of DNS responses
The ability to read PCAP data from STDIN with NetworkMinerCLI makes it really simple to do live extraction of emails and email attachments. Here's an example showing how to do live SMTP extraction in Linux:
# tcpdump -i eth0 -s0 -w - port 25 or 587 | mono NetworkMinerCLI.exe -r - -w /var/log/smtp_extraction/
The syntax for extracting emails and attachments in Windows is very similar:
C:\>dumpcap.exe -i 1 -f "port 25 or 587" -w - | NetworkMinerCLI.exe -r -
The TCP ports 25 and 587, which are used in the capture filter above, are the standard port numbers for SMTP. In order to do live extraction of files sent over HTTP, simply use “port 80” as capture filter instead. Likewise, X.509 certificates can also be extracted from HTTPS sessions simply by using “port 443” as capture filter.
Download NetworkMiner 1.6
The most recent release of the free (open source) version of NetworkMiner can be downloaded from SourceForge or our NetworkMiner product page. Paying customers can download an update for NetworkMiner Professional from our customer portal.
We would like to thank Dan Eriksson (FM CERT) and Lenny Hansson (Danish GovCERT) for submitting bug reports and feature requests.
Posted by Erik Hjelmvik on Monday, 16 June 2014 11:00:00 (UTC/GMT)
The phrase "PCAP or it didn't happen" is often used in the network security field when someone wants proof that an attack or compromise has taken place. One such example is the recent OpenSSL Heartbleed vulnerability, where some claim that the vulnerability was known and exploited even before it was discovered by Google's Neel Mehta and Codenomicon.
Image: PCAP or it didn't happen pwnie, original by Nina
One security researcher asked on Twitter: "Anyone reproduced observations of #Heartbleed attacks from 2013?" and Liam Randall (of Bro fame) tweeted:
"If someone finds historical exploits of #Heartbleed I hope they can report it. Lot's of sites mining now."
It is unfortunately not possible to identify Heartbleed attacks by analyzing log files, as stated by the following Q&A from the heartbleed.com website:
Can I detect if someone has exploited this against me?
Exploitation of this bug does not leave any trace of anything abnormal happening to the logs.
Hence, the only reliable way of detecting early heartbleed attacks (i.e. prior to April 7) is to analyze old captured network traffic from before April 7. In order to do this you should have had a full packet capture running, which was configured to capture and store all your traffic. Unfortunately many companies and organizations haven't yet realized the value that historical packet captures can provide.
Why Full Packet Capture Matters
Some argue that storing netflow data is enough to do incident response. However, detecting events like the Heartbleed attack is impossible with netflow alone, since you need to inspect the actual contents of the network traffic.
Retaining historical full packet captures is not only useful for detecting attacks that took place in the past; it is also extremely valuable when doing any of the following:
- IDS Verification
Investigate IDS alerts to see if they were false positives or real attacks.
- Post Exploitation Analysis
Analyze network traffic from a compromise to see what the attacker did after hacking into a system.
- Exfiltration Analysis
Assess what intellectual property has been exfiltrated by an external attacker or insider.
- Network Forensics
Perform forensic analysis of a suspect's network traffic by extracting files, emails, chat messages, images etc.
Setting up a Full Packet Capture
The first step, when deploying a full packet capture (FPC) solution, is to install a network tap or configure a monitor port in order to get a copy of all packets going in and out from your networks. Then simply sniff the network traffic with a tool like dumpcap or netsniff-ng. Another alternative is to deploy a whole network security monitoring (NSM) infrastructure, preferably by installing the SecurityOnion Linux distro.
A network sniffer will eventually run out of disk, unless captured network traffic is written to disk in a ring buffer manner (use the "-b files" switch in dumpcap) or there is a scheduled job in place to remove the oldest capture files. SecurityOnion, for example, normally runs its "cleandisk" cronjob when disk utilization reaches 90%.
The ratio between disk space and utilized bandwidth determines the maximum retention period for full packet data. We recommend a full packet capture retention period of at least 7 days, but many companies and organizations are able to store several months' worth of network traffic (disk is cheap).
Big Data PCAP Analysis
Okay, you've got a PCAP store with multiple terabytes of data. Then what? How do you go about analyzing such large volumes of captured full content network traffic? Well, tasks like indexing and analyzing PCAP data are complex matters that are beyond the scope of this blog post. We've covered the big data PCAP analysis topic in previous blog posts, and there is more to come. However, capturing the packets to disk is a crucial first step in order to utilize the powers of network forensics. Or as the saying goes: "PCAP or it didn't happen".
We now have T-shirts with "PCAP or it didn't happen" print for sale!
Posted by Erik Hjelmvik on Thursday, 01 May 2014 21:45:00 (UTC/GMT)
A new function in the free version of CapLoader 1.2 is the "Find Keyword" feature. This keyword search functionality makes it possible to search large capture files for a string or byte pattern super fast!
You might say, so what? PCAP string search can already be done with tools like tcpflow, ngrep and even Wireshark; what's the benefit of adding yet another tool to this list? One benefit is that CapLoader doesn't just give you the packet or content that matched the keyword, it will instead extract the whole TCP or UDP flow that contained the match. CapLoader also supports many different encodings, which is demonstrated in this blog post.
Here are a few quick wins with CapLoader's keyword search feature:
- Track User-Agent - Search for a specific user agent string to extract all the HTTP traffic from a particular browser or malware.
- Track Domain Name - Search for a particular domain name to get all DNS lookups as well as web traffic relating to that domain (including HTTP "referer" field matches).
- Extract Messages - Search for a keyword in e-mail or chat traffic to get the whole e-mail or conversation, not just the single packet that matched.
- Extract Files - Search for a unique string or byte sequence in a file (such as a piece of malware) to enable extraction of the complete file transfer.
EXAMPLE: DigitalCorpora M57
As an example, let's search the Digital Corpora file net-2009-12-06-11:59.pcap (149 MB) for the keyword "immortal". Follow these steps in order to verify our analysis using the free edition of CapLoader.
- Start CapLoader and select File -> Open URL, then enter:
- Edit -> Find Keyword (or Ctrl+F), enter "immortal"
- Click the "Find and Select All Matching Flows" button
- One TCP flow is now selected (Flow_ID 5469, 192.168.1.104:2592 -> 192.168.1.1:25)
- Right click the selected flow (ID 5469) and select "Flow Transcript"
Image: CapLoader transcript of SMTP email flow
It looks as if an email has been sent with an attachment named "microscope1.jpg". However, the string "immortal" cannot be seen anywhere in the transcript view. The match that CapLoader found was actually in the contents of the attachment, which has been base64 encoded in the SMTP transfer in accordance with RFC 2045 (MIME).
The email attachment can easily be extracted from the PCAP file using NetworkMiner. However, to keep things transparent, let's just do a simple manual verification of the matched data. The first three lines of the email attachment are:
/9j/4AAQSkZJRgABAQEAkACQAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEB
Decoding this with base64 gives us:
0000000: ffd8 ffe0 0010 4a46 4946 0001 0101 0090 ......JFIF......
0000010: 0090 0000 ffdb 0043 0001 0101 0101 0101 .......C........
0000020: 0101 0101 0101 0101 0101 0101 0101 0101 ................
0000030: 0101 0101 0101 0101 0101 0101 0101 0101 ................
0000040: 0101 0101 0101 0101 0101 0101 0101 0101 ................
0000050: 0101 0101 0101 0101 01ff db00 4301 0101 ............C...
0000060: 0101 0101 0101 0101 0101 0101 0101 0101 ................
0000070: 0101 0101 0101 0101 0101 0101 0101 0101 ................
0000080: 7061 7373 776f 7264 3d69 6d6d 6f72 7461 password=immorta
0000090: 6c01 0101 0101 0101 0101 0101 0101 ffc0 l...............
Tools like ngrep, tcpflow and Wireshark won't find any match for the string "immortal" since they don't support searching in base64 encoded data. CapLoader, on the other hand, supports lots of encodings.
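The reason a naive grep fails is that base64 encodes 3-byte groups into 4 characters, so the encoded form of a keyword depends on its byte offset modulo 3. A search tool therefore needs to look for three phase-shifted encodings of the keyword. The sketch below is our own illustration of the idea, not CapLoader's actual implementation:

```python
import base64

def base64_search_patterns(keyword: bytes):
    """Return the three phase-shifted base64 encodings of a keyword,
    one for each possible start offset (0, 1, 2) within a 3-byte group."""
    patterns = []
    for shift in range(3):
        enc = base64.b64encode(b"\x00" * shift + keyword)
        # Drop leading chars that also encode bits of unknown preceding
        # bytes: 0, 2 or 3 chars for shifts 0, 1 and 2 respectively.
        enc = enc[{0: 0, 1: 2, 2: 3}[shift]:]
        # Drop the padding and the final partially-determined character.
        if b"=" in enc:
            enc = enc[:enc.index(b"=") - 1]
        patterns.append(enc)
    return patterns

def contains_base64_keyword(data: bytes, keyword: bytes) -> bool:
    """True if the base64 encoding of data contains the keyword."""
    encoded = base64.b64encode(data)
    return any(p in encoded for p in base64_search_patterns(keyword))
```

For "immortal" this yields three ASCII patterns, and one of them matches the encoded attachment regardless of where in the MIME body the keyword lands.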
Supported Text Encodings
CapLoader currently supports fast searching of text strings in any of the following encodings:
- Base64 (used in email attachments and HTTP POST's)
- DNS label encoding (RFC 1035)
- Quoted Printable (used in body of email messages)
- URL encoding
CapLoader also supports several local character sets, including the following code pages:
- 437 MS-DOS Latin US
- 850 MS-DOS Latin 1
- 932 Japanese
- 936 Simplified Chinese
- 949 Korean
- 1251 Windows Cyrillic (Slavic)
- 1256 Windows Arabic
Having all these encodings also makes it possible to search network traffic for words like хакер, القراصنة, ハッカー, 黑客 or 해커.
CapLoader is a commercial tool that also comes in a free trial edition. The search feature is available in both versions, so feel free to download CapLoader and try it yourself!
CapLoader is available from the following URL:
Posted by Erik Hjelmvik on Wednesday, 02 April 2014 13:15:00 (UTC/GMT)
A new feature in the recently released CapLoader 1.2 is the ability to carve network packets from any file and save them in the PCAP-NG format. This fusion between memory forensics and network forensics makes it possible to extract sent and received IP frames, with complete payload, from RAM dumps as well as from raw disk images.
CapLoader will basically carve any TCP or UDP packet that is preceded by an IP frame (both IPv4 and IPv6 are supported), and believe me; there are quite a few such packets in a normal memory image!
We've made the packet carver feature available in the free version of CapLoader, so feel free to give it a try!
The packet carving feature makes it possible to do much better analysis of network traffic in memory dumps compared to Volatility's connscan2. With Volatility you basically get the IP addresses and port numbers that communicated, but with CapLoader's packet carver you also get the contents of the communication!
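To illustrate the principle (this sketch is our own drastic simplification, not CapLoader's actual carver logic, and it only handles IPv4), a carver can scan a raw byte buffer for plausible IPv4 headers and validate the header checksum before accepting a hit:

```python
import struct

def ip_checksum(header: bytes) -> int:
    """One's-complement Internet checksum of a 20-byte IPv4 header.
    A header containing a valid checksum field sums to zero."""
    s = sum(struct.unpack(">10H", header))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def carve_ipv4_packets(buf: bytes):
    """Yield (offset, packet) for plausible IPv4 TCP/UDP packets found
    anywhere in a raw byte buffer such as a memory dump."""
    i = 0
    while i <= len(buf) - 20:
        # Version 4 + IHL 5 (byte 0x45), protocol TCP (6) or UDP (17)
        if buf[i] == 0x45 and buf[i + 9] in (6, 17):
            total_len = struct.unpack_from(">H", buf, i + 2)[0]
            if 20 <= total_len <= len(buf) - i and \
                    ip_checksum(buf[i:i + 20]) == 0:
                yield i, buf[i:i + total_len]
                i += total_len
                continue
        i += 1
```

The checksum test is what keeps the false positive rate down: random memory content rarely forms 20 bytes that both look like an IPv4 header and checksum to zero.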
EXAMPLE: Honeynet Banking Troubles Image
I loaded the publicly available “Banking Troubles” memory image from the Honeynet Project into CapLoader to exemplify the packet carver's usefulness in a digital forensics / incident response (DFIR) scenario.
CapLoader 1.2 Carving Packets from HoneyNet Memory Image
22 TCP/UDP Flows were carved from the memory image by CapLoader
Let's look at the network traffic information that was extracted in the Honeynet Project's own solution for the Banking Troubles Challenge:
$ python volatility connscan2 -f images/hn_forensics.vmem
Local Address Remote Address Pid
------------------------- ------------------------- ------
192.168.0.176:1176 126.96.36.199:80 888
192.168.0.176:1189 192.168.0.1:9393 1244
192.168.0.176:2869 192.168.0.1:30379 1244
192.168.0.176:2869 192.168.0.1:30380 4
0.0.0.0:0 188.8.131.52:0 0
127.0.0.1:1168 127.0.0.1:1169 888
192.168.0.176:1172 184.108.40.206:80 888
127.0.0.1:1169 127.0.0.1:1168 888
192.168.0.176:1171 220.127.116.11:80 888
192.168.0.176:1178 18.104.22.168:80 1752
192.168.0.176:1184 22.214.171.124:80 880
192.168.0.176:1185 126.96.36.199:80 880
"This connection [marked in bold above] was opened by AcroRd32.exe (PID 1752) and this represents an additional clue that an Adobe Reader exploit was used in order to download and execute a malware sample."
The solution doesn't provide any evidence regarding what Acrobat Reader actually used the TCP connection for. Additionally, none of the three finalists managed to prove what was sent over this connection.
To view the payload of this TCP connection in CapLoader, I simply right-clicked the corresponding row and selected “Flow Transcript”.
Transcript of TCP flow contents (much like Wireshark's Follow-TCP-Stream)
We can see that the following was sent from 192.168.0.176 to 188.8.131.52:
GET /load.php?a=a&st=Internet%20Explorer%206.0&e=2 HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Notice that the HTTP GET request took place at the end of the TCP session. Odd? Well, CapLoader doesn't know the timestamps of carved packets, so they are simply ordered as they were discovered in the dump file. The timestamp generated for each carved packet represents where in the image/dump the packet was found; to be precise, the number of microseconds since EPOCH (1970-01-01 00:00:00) equals the byte offset from where the packet was carved.
Hence, we know that the HTTP GET request can be found between offset 37068800 and 37507072 in the image (a 428 kB region). To be more exact we can open the generated PcapNG file with Wireshark or Tshark to get the timestamp and length of the actual HTTP GET request packet.
tshark.exe -r Bob.vmem.pcapng -R http.request -T fields -e frame.time_epoch -e frame.len -e http.request.uri
31.900664000 175 *
37.457920000 175 *
37.462016000 286 /load.php?a=a&st=Internet%20Explorer%206.0&e=2
37.509120000 175 *
37.519360000 245 /~produkt/983745213424/34650798253
37.552128000 266 /root.sxml
37.570560000 265 /l3fw.xml
37.591040000 274 /WANCommonIFC1.xml
37.607424000 271 /WANIPConn1.xml
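Since each fake timestamp is just the carve offset expressed as microseconds since EPOCH, converting a tshark timestamp back into a byte offset is a one-liner. A small sketch:

```python
def carved_packet_offset(time_epoch: float) -> int:
    """Convert a carved packet's synthetic timestamp (seconds since
    EPOCH, as reported by tshark's frame.time_epoch) back into the
    byte offset in the dump where the packet was found."""
    return round(time_epoch * 1_000_000)

# The HTTP GET request above was carved at:
offset = carved_packet_offset(37.462016)   # byte offset 37462016
```

Note that 37462016 in hex is 0x23ba000, which is exactly where the xxd output below picks up.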
Now, let's verify that the raw packet data is actually located 37462016 bytes into the memory dump.
xxd -s 37462016 -l 286 Bob.vmem
Yep, that's our HTTP GET packet preceded by an Ethernet, IP and TCP header.
23ba000: 0021 9101 b248 000c 2920 d71e 0800 4500 .!...H..) ....E.
23ba010: 0110 3113 4000 8006 8e1a c0a8 00b0 d496 ..1.@...........
23ba020: a4cb 049a 0050 7799 0550 f33b 7886 5018 .....Pw..P.;x.P.
23ba030: faf0 227e 0000 4745 5420 2f6c 6f61 642e .."~..GET /load.
23ba040: 7068 703f 613d 6126 7374 3d49 6e74 6572 php?a=a&st=Inter
23ba050: 6e65 7425 3230 4578 706c 6f72 6572 2532 net%20Explorer%2
23ba060: 3036 2e30 2665 3d32 2048 5454 502f 312e 06.0&e=2 HTTP/1.
23ba070: 310d 0a41 6363 6570 743a 202a 2f2a 0d0a 1..Accept: */*..
23ba080: 4163 6365 7074 2d45 6e63 6f64 696e 673a Accept-Encoding:
23ba090: 2067 7a69 702c 2064 6566 6c61 7465 0d0a gzip, deflate..
23ba0a0: 5573 6572 2d41 6765 6e74 3a20 4d6f 7a69 User-Agent: Mozi
23ba0b0: 6c6c 612f 342e 3020 2863 6f6d 7061 7469 lla/4.0 (compati
23ba0c0: 626c 653b 204d 5349 4520 362e 303b 2057 ble; MSIE 6.0; W
23ba0d0: 696e 646f 7773 204e 5420 352e 313b 2053 indows NT 5.1; S
23ba0e0: 5631 290d 0a48 6f73 743a 2073 6561 7263 V1)..Host: searc
23ba0f0: 682d 6e65 7477 6f72 6b2d 706c 7573 2e63 h-network-plus.c
23ba100: 6f6d 0d0a 436f 6e6e 6563 7469 6f6e 3a20 om..Connection:
23ba110: 4b65 6570 2d41 6c69 7665 0d0a 0d0a Keep-Alive....
Give it a Try!
Wanna verify the packet carving functionality? Well, that's easy! Just follow these three steps:
- Get hold of a memory dump: download a sample memory image (thanks for the great resource, Volatility Team!), dump your own computer's memory with the free RAM dumper DumpIt, or locate an existing file that already contains parts of your RAM, such as pagefile.sys or hiberfil.sys.
- Download the free version of CapLoader and open the memory dump.
- Select a destination for the generated PcapNG file with carved packets and hit the "Carve" button!
Carving Packets from Proprietary and odd Capture Formats
CapLoader can parse PCAP and PcapNG files, which are the two most widely used packet capture formats. However, the packet carving feature makes it possible to extract packets from pretty much any capture format, including proprietary ones. The drawback is that timestamp information will be lost.
We have successfully verified that CapLoader can carve packets from the following network packet capture / network trace file formats:
- .ETL files created with netsh or logman. These Event Trace Log files can be created without having WinPcap installed.
- .CAP files created with Microsoft Network Monitor
- .ENC files (NA Sniffer) from IBM ISS products like the Proventia IPS (as well as Robert Graham's old BlackICE)
- .ERF files from Endace probes
Posted by Erik Hjelmvik on Monday, 17 March 2014 10:05:00 (UTC/GMT)
CapLoader version 1.2 was released today, with lots of new powerful features.
The most significant additions in CapLoader 1.2 are:
- Network packet carving, i.e. the ability to carve full content network packets from RAM dumps, disk images etc.
- Flows can be hidden/filtered in the user interface.
- Full content keyword search in capture files.
- Flows can be selected based on TCP flags.
- Better handling of broken and corrupt capture files.
In addition to these updates, customers using the commercial edition of CapLoader also get an updated protocol database. This update improves the Port Independent Protocol Identification (PIPI) feature in CapLoader with more protocols and better accuracy. Not only does this help analysts detect services like SSH, FTP and HTTP running on non-standard ports, but the protocol database also includes signatures for malware and APT C2 traffic like ZeroAccess, Zeus, Gh0st RAT and Poison Ivy RAT.
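As a trivial illustration of why port numbers alone can't be trusted: the first bytes sent in a flow often reveal the application-layer protocol regardless of port. CapLoader's PIPI is statistical and far more capable than the magic-byte lookup below; the signature table is our own toy example:

```python
def identify_protocol(first_bytes: bytes) -> str:
    """Guess the application-layer protocol from the first bytes sent
    in a flow, ignoring the TCP port. A real port-independent protocol
    identifier uses statistical flow properties, not just magic bytes."""
    signatures = {
        b"SSH-": "SSH",
        b"GET ": "HTTP",
        b"POST": "HTTP",
        b"220 ": "FTP or SMTP banner",
    }
    for magic, proto in signatures.items():
        if first_bytes.startswith(magic):
            return proto
    return "unknown"
```

An SSH server answering on TCP 443, for example, still announces itself with "SSH-2.0-..." and would be caught by even this naive check, while a port-based classifier would label the flow HTTPS.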
An update for CapLoader to version 1.2 is available for previous customers via our customer portal.
The free trial version of CapLoader can be downloaded from http://www.netresec.com/?page=CapLoader
CapLoader 1.2 with suspect.pcap (from DFRWS 2008) loaded and Transcript window open
Posted by Erik Hjelmvik on Wednesday, 12 March 2014 14:45:00 (UTC/GMT)
NetworkMiner is a network forensics tool primarily developed for Windows, but it actually runs just fine also in other operating systems with the help of the Mono Framework. This guide shows how to install NetworkMiner in three different Linux distros (Ubuntu, Fedora and Arch Linux).
STEP 1: Install Mono
Ubuntu (also other Debian based distros like Xubuntu and Kali Linux)
sudo apt-get install libmono-system-windows-forms4.0-cil
If you're on an old version of Debian/Ubuntu (e.g. Ubuntu 14.04) then you first need to add the Mono Project GPG signing key and the package repository.
sudo apt-get install libmono-system-web4.0-cil
sudo apt-get install libmono-system-net4.0-cil
sudo apt-get install libmono-system-runtime-serialization4.0-cil
sudo apt-get install libmono-system-xml-linq4.0-cil
Fedora (credit Renegade0x6)
sudo yum -y install mono-core
sudo yum -y install mono-basic mono-winforms expect
ArchLinux (credit: Tyler Fisher)
sudo pacman -Sy mono
STEP 2: Install NetworkMiner
wget www.netresec.com/?download=NetworkMiner -O /tmp/nm.zip
sudo unzip /tmp/nm.zip -d /opt/
cd /opt/NetworkMiner*
sudo chmod +x NetworkMiner.exe
sudo chmod -R go+w AssembledFiles/
sudo chmod -R go+w Captures/
STEP 3: Run NetworkMiner
NetworkMiner 1.2 running under Ubuntu Linux, with “day12-1.dmp” from the M57-Patents Scenario loaded.
Live sniffing with NetworkMiner
In order to capture packets (sniff traffic) in Linux you will have to use the “PCAP-over-IP” feature. NetworkMiner is, however, not really designed for packet capturing; it is primarily a tool for parsing and analyzing PCAP files containing previously sniffed traffic.
Posted by Erik Hjelmvik on Saturday, 01 February 2014 20:45:00 (UTC/GMT)