NETRESEC Network Security Blog - Tag: NetFlow


CapLoader 1.8 Released

CapLoader 1.8

We are happy to announce the release of CapLoader 1.8 today!

CapLoader is primarily used to filter, slice and dice large PCAP datasets into smaller ones. This new version contains several new features that improve this filtering functionality even further. To start with, the “Keyword Filter” can now be used to filter the rows in the Flows, Services or Hosts tabs using regular expressions. This enables matching expressions like these:

  • amazon|akamai|cdn
    Show only rows containing any of the strings “amazon” “akamai” or “cdn”.
  • microsoft\.com\b|windowsupdate\.com\b
    Show only servers with domain names ending in “microsoft.com” or “windowsupdate.com”.
  • ^SMB2?$
    Show only SMB and SMB2 flows.
  • \d{1,3}\.\d{1,3}\.\d{1,3}\.255$
    Show only IPv4 addresses ending with “.255”.

For a reference on the full regular expression syntax available in CapLoader, please see Microsoft’s regex “Quick Reference”.
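To get a feel for how these keyword filter expressions behave, here is a minimal Python sketch using the re module (whose regex syntax is close to the .NET flavor that CapLoader uses); the sample row values are made up purely for illustration:

  import re

  # Hypothetical row values, similar to what the Flows/Services/Hosts tabs might show
  rows = [
      "e1234.a.akamaiedge.net",
      "download.windowsupdate.com",
      "SMB2",
      "10.0.2.255",
      "example.org",
  ]

  # The keyword filter patterns from the list above
  patterns = {
      "CDN traffic":       r"amazon|akamai|cdn",
      "Microsoft domains": r"microsoft\.com\b|windowsupdate\.com\b",
      "SMB flows":         r"^SMB2?$",
      ".255 addresses":    r"\d{1,3}\.\d{1,3}\.\d{1,3}\.255$",
  }

  for name, pattern in patterns.items():
      matching = [row for row in rows if re.search(pattern, row)]
      print(f"{name}: {matching}")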

One popular workflow supported by CapLoader is to divide all flows (or hosts) into two separate datasets, for example one “normal” and one “malicious” set. The user can move rows between these two sets, where only one set is visible while the rows in the other set are hidden. To switch which set is visible and which is hidden, the user clicks the [Invert Hiding] button (or presses the [Ctrl]+[Tab] key combination). With this new release we’ve also made the “Invert Hiding” functionality available by clicking the purple bar, which shows the number of rows present in the currently viewed set.

CapLoader Invert Hiding GIF

Readers with a keen eye might also notice that the purple bar charts are now accompanied by a number, indicating how many rows are visible after each filter is applied. The available filters are: Set Selection, BPF and Keyword Filter.

NetFlow + DNS = Great Success!

CapLoader’s main view presents the contents of the loaded PCAP files as a list of netflow records. Since the full PCAP is available, CapLoader also parses the DNS packets in the capture files in order to enrich the netflow view with hostnames. Recently PaC shared a great idea with us: why not show how many failed DNS lookups each client makes? This would enable generic detection of DGA botnets without using blacklists. I’m happy to announce that this great idea made it directly into this new release! The rightmost column in CapLoader’s Hosts tab, called “DNS_Fails”, shows what percentage of a client’s DNS requests have resulted in an NXDOMAIN or SERVFAIL response.
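To illustrate the idea behind the DNS_Fails column, here is a rough Python/Scapy sketch (not CapLoader's actual implementation) that computes the share of NXDOMAIN/SERVFAIL responses per client in a capture file; the file name is just a placeholder:

  from collections import Counter
  from scapy.all import rdpcap, DNS, IP

  responses = Counter()  # DNS responses seen per client IP
  failures = Counter()   # NXDOMAIN/SERVFAIL responses per client IP

  for pkt in rdpcap("capture.pcap"):  # placeholder file name
      if pkt.haslayer(DNS) and pkt.haslayer(IP) and pkt[DNS].qr == 1:  # qr == 1 means DNS response
          client = pkt[IP].dst  # the response is sent back to the client
          responses[client] += 1
          if pkt[DNS].rcode in (2, 3):  # 2 = SERVFAIL, 3 = NXDOMAIN
              failures[client] += 1

  for client, total in responses.items():
      print(f"{client}: {100.0 * failures[client] / total:.2f}% failed DNS lookups")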

CapLoader 1.8

Two packet capture files are loaded into CapLoader in the screenshot above; one PCAP file from a PC infected with the Shifu malware and one PCAP file with “normal traffic” (thanks @StratosphereIPS for sharing these capture files). As you can see, one of the clients (10.0.2.107) has a very high DNS failure ratio (99.81%). Unsurprisingly, this is also the host that was infected with Shifu, which uses a domain generation algorithm (DGA) to locate its C2 servers.

Apart from parsing A and CNAME records from DNS responses, CapLoader now also parses AAAA DNS records (IPv6 addresses). This enables CapLoader to map public domain names to hosts with IPv6 addresses.

Additional Updates

The new CapLoader release also comes with several other new features and updates, such as:

  • Added urlscan.io service for domain and IP lookups (right-click a flow or host to bring up the lookup menu).
  • Flow ID coloring based on 5-tuple, and clearer colors in timeline Gantt chart.
  • Extended the default flow-timeout from 10 minutes to 2 hours for TCP flows.
  • Changed the flow-timeout for non-TCP flows to 60 seconds (a simplified flow-timeout sketch follows this list).
  • Upgraded to .NET Framework 4.7.2.
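As a rough illustration of what these flow-timeout settings mean, here is a simplified Python sketch of 5-tuple flow aggregation with an idle timeout. This is a generic illustration rather than CapLoader's actual algorithm, and it assumes packet objects exposing source/destination addresses, ports, protocol and timestamp attributes:

  # Packets sharing a 5-tuple belong to the same flow as long as the
  # gap between consecutive packets stays below the flow timeout.
  TCP_TIMEOUT = 2 * 60 * 60  # 2 hours for TCP flows
  OTHER_TIMEOUT = 60         # 60 seconds for non-TCP flows (UDP, ICMP, ...)

  def flow_key(pkt):
      # 5-tuple identifying a flow (attribute names are assumptions for this sketch)
      return (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port, pkt.protocol)

  def aggregate_flows(packets):
      flows = {}      # 5-tuple -> list of flows, where each flow is a list of packets
      last_seen = {}  # 5-tuple -> timestamp of the most recent packet
      for pkt in sorted(packets, key=lambda p: p.timestamp):
          key = flow_key(pkt)
          timeout = TCP_TIMEOUT if pkt.protocol == "TCP" else OTHER_TIMEOUT
          idle = pkt.timestamp - last_seen.get(key, pkt.timestamp)
          if key not in flows or idle > timeout:
              flows.setdefault(key, []).append([])  # start a new flow for this 5-tuple
          flows[key][-1].append(pkt)
          last_seen[key] = pkt.timestamp
      return flows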

Updating to the Latest Release

Users who have previously purchased a license for CapLoader can download a free update to version 1.8 from our customer portal. All others can download a free 30-day trial from the CapLoader product page (no registration required).

Credits

We’d like to thank Mikael Harmark, Mandy van Oosterhout and Ulf Holmström for reporting bugs that have been fixed in this release. We’d also like to thank PaC for the DNS failure rate feature request mentioned in this blog post.

Posted by Erik Hjelmvik on Tuesday, 28 May 2019 10:45:00 (UTC/GMT)

Tags: #CapLoader #NetFlow #regex #DNS #DGA #Stratosphere

Short URL: https://netresec.com/?b=1950482


Analyzing Kelihos SPAM in CapLoader and NetworkMiner

This network forensics video tutorial covers how to analyze SPAM email traffic from the Kelihos botnet. The analyzed PCAP file comes from the Stratosphere IPS project, where Sebastian Garcia and his colleagues execute malware samples in sandboxes. The particular malware sample execution we are looking at this time is from the CTU-Malware-Capture-Botnet-149-2 dataset.

Resources

IOCs
990e5daa285f5c9c6398811edc68a659
e4f7fa6a0846e4649cc41d116c40f97835d3bb7d3d0391d3540482f077aa4493
6c55 5545 0310 4840

Check out our series of network forensic video tutorials for more tips and tricks on how to analyze captured network traffic.

Posted by Erik Hjelmvik on Monday, 19 February 2018 06:37:00 (UTC/GMT)

Tags: #Netresec #PCAP #CapLoader #NetworkMiner #videotutorial #video #tutorial #NetFlow #extract #Stratosphere

Short URL: https://netresec.com/?b=182053b


CapLoader 1.3 Released

CapLoader Logo

A new version of our heavy-duty PCAP parser tool CapLoader is now available. There are many new features and improvements in this release, such as the ability to filter flows with BPF, domain name extraction via a passive DNS parser, and matching of domain names against a local whitelist.


Filtering with BPF

The main focus in the work behind CapLoader 1.3 has been to fully support the Rinse-Repeat Intrusion Detection methodology. We've done this by improving the filtering capabilities in CapLoader. For starters, we've added an input filter, which can be used to specify IP addresses, IP networks, protocols or port numbers to be parsed or ignored. The input filter uses the Berkeley Packet Filter (BPF) syntax and is designed to run really fast. So if you want to analyze only HTTP traffic you can simply write “port 80” as your input filter to have CapLoader only parse and display flows going to or from port 80. We have also added a display filter, which, unlike Wireshark's display filter, also uses BPF syntax. Thus, once a set of flows is loaded one can easily apply different display filters, like “host 194.9.94.80” or “net 192.168.1.0/24”, to apply different views on the parsed data.

CapLoader BPF Input Filter and Display Filter
Image: CapLoader with input filter "port 80 or port 443" and display filter "not net 74.125.0.0/16".
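For readers who want to experiment with the same BPF expressions outside of CapLoader, here is a small Python/Scapy sketch that applies a BPF filter while reading a capture file (the file name is a placeholder, and Scapy relies on tcpdump/libpcap to evaluate BPF on offline captures):

  from scapy.all import sniff

  # Read only HTTP/HTTPS traffic from a capture file using a BPF filter,
  # similar to CapLoader's input filter "port 80 or port 443".
  packets = sniff(offline="capture.pcap", filter="port 80 or port 443")

  print(f"Loaded {len(packets)} packets matching the BPF filter")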

The main differences between the input filter and display filter are:

  • The input filter is much faster than the display filter, so if you know beforehand what ports, protocols or IP addresses you are interested in, then make sure to apply them as an input filter. You will notice a delay when applying a display filter to a view of 10,000 flows or more.
  • In order to apply a new input filter CapLoader has to reload all the opened PCAP files (which is done by pressing F5). Modifying display filters, on the other hand, only requires you to press Enter or hit the “Apply” button.
  • Previously applied display filters are accessible in a drop-down menu in the GUI, but no history is kept of previous input filters.


NetFlow + DNS == true

The “Flows” view in CapLoader gives a great overview of all TCP, UDP and SCTP flows in the loaded PCAP files. However, it is usually not obvious to an analyst what every IP address is used for. We have therefore added a DNS parser to CapLoader, so that all DNS packets can be parsed in order to map IP addresses to domain names. The extracted domain names are displayed for each flow, which is very useful when performing Rinse-Repeat analysis in order to quickly remove “known good servers” from the analysis.
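This is essentially passive DNS: collect the answers observed on the wire and use them to annotate flows. A minimal Python/Scapy sketch of such an IP-to-hostname mapping could look like the following (a simplification, not CapLoader's implementation; the file name is a placeholder):

  from scapy.all import rdpcap, DNSRR

  ip_to_name = {}  # passive DNS map: resolved IP address -> queried hostname

  for pkt in rdpcap("capture.pcap"):  # placeholder file name
      if not pkt.haslayer(DNSRR):
          continue  # only DNS responses carry resource records
      index = 1
      while True:
          rr = pkt.getlayer(DNSRR, index)  # walk every resource record in the response
          if rr is None:
              break
          if rr.type in (1, 28):  # 1 = A record, 28 = AAAA record
              name = rr.rrname.decode(errors="replace").rstrip(".")
              ip_to_name[rr.rdata] = name
          index += 1

  # The map can now be used to label server IP addresses in a flow listing
  for ip, name in ip_to_name.items():
      print(f"{ip} -> {name}")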


Leveraging the Alexa top 1M list

As we've shown in our previous blog post “DNS whitelisting in NetworkMiner”, using a list of popular domain names as a whitelist can be an effective method for finding malware. We often use this approach in order to quickly remove lots of known good servers when doing Rinse-Repeat analysis in large datasets.

Therefore, just as we did for NetworkMiner 1.5, CapLoader now includes Alexa's list of the 1 million most popular domain names on the Internet. All domain names, parsed from DNS traffic, are checked against the Alexa list. Domains listed in the whitelist are shown in CapLoader's “Server_Alexa_Domian” column. This makes it very easy to sort on this column in order to remove (hide) all flows going to “normal” servers on the Internet. After removing all those flows, what you're left with is pretty much just:

  • Local traffic (not sent over the Internet)
  • Outgoing traffic to either new or obscure domains

Manually going through the remaining flows can be very rewarding, as it can reveal C2 traffic from malware that has not yet been detected by traditional security products like anti-virus or IDS.
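A conceptual Python sketch of such a whitelist lookup could look like this; the file name and the classic rank,domain CSV layout of the Alexa list are assumptions, and the naive two-label domain reduction ignores multi-part TLDs like .co.uk:

  import csv

  def load_alexa(path):
      # Load the Alexa top 1M CSV (rank,domain per line) into a set of domain names
      with open(path, newline="") as f:
          return {row[1].strip().lower() for row in csv.reader(f) if len(row) > 1}

  def registered_domain(hostname):
      # Naive reduction of a hostname to its last two labels (www.example.com -> example.com)
      return ".".join(hostname.lower().rstrip(".").split(".")[-2:])

  alexa = load_alexa("top-1m.csv")  # placeholder path to the Alexa list

  server_names = ["www.google.com", "1.web-counter.info", "cdn.example.net"]  # example hostnames
  for host in server_names:
      status = "in Alexa top 1M" if registered_domain(host) in alexa else "NOT in Alexa top 1M"
      print(f"{host}: {status}")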

Flows in CapLoader with DNS parsing and Alexa lookup
Image: CapLoader with malicious flow to 1.web-counter[.]info (Miuref/Boaxxe Trojan) singled out due to missing Alexa match.

Many new features in CapLoader 1.3

The new features highlighted above are far from the only additions made to CapLoader 1.3. Here is a more complete list of improvements in this release:

  • Support for “Select Flows in PCAP” to extract and select 5-tuples from a PCAP-file. This can be a Snort PCAP with packets that have triggered IDS signatures. This way you can easily extract the whole TCP or UDP flow for each signature match, instead of just trying to make sense of one single packet per alert.
  • Improved packet carver functionality to better carve IP, TCP and UDP packets from any file. This includes memory dumps as well as proprietary and obscure packet capture formats.
  • Support for SCTP flows.
  • DNS parser.
  • Alexa top 1M matching.
  • Input filter and display filter with BPF syntax.
  • Flow Producer-Consumer Ratio (PCR); see the sketch after this list.
  • Flow Transcript can be opened simply by double-clicking a flow.
  • Find form updated with option to hide non-matching flows instead of just selecting the flows that matched the keyword search criteria.
  • New flow transcript encoding with IP TTL, TCP flags and sequence numbers to support analysis of Man-on-the-Side attacks.
  • Faster loading of previously opened files, since MD5 hashes don't need to be recalculated.
  • A selected set of flows in the GUI can be inverted simply by right-clicking the flow list and selecting “Invert Selection” or by hitting Ctrl+I.
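The Producer-Consumer Ratio (PCR) mentioned above is commonly defined as the difference between bytes sent and bytes received divided by their sum, yielding a value between -1 (pure consumer) and +1 (pure producer). A tiny Python sketch of that calculation (the general formula, not CapLoader's code):

  def producer_consumer_ratio(bytes_sent, bytes_received):
      # PCR = (sent - received) / (sent + received), in the range [-1.0, 1.0]
      total = bytes_sent + bytes_received
      return (bytes_sent - bytes_received) / total if total else 0.0

  # A host that mostly uploads data (e.g. exfiltration) gets a PCR close to +1,
  # while a typical client that mostly downloads gets a PCR close to -1.
  print(producer_consumer_ratio(9_000_000, 100_000))  # ~0.98, producer-like
  print(producer_consumer_ratio(50_000, 2_000_000))   # ~-0.95, consumer-like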


Downloading CapLoader 1.3

All these new features, except for the Alexa lookup of domain names, are available in our free trial version of CapLoader. So to try out these new features in CapLoader, simply grab a trial download here:
https://www.netresec.com/?page=CapLoader#trial (no registration needed)

All paying customers with an older version of CapLoader can grab a free update for version 1.3 at our customer portal.

Posted by Erik Hjelmvik on Monday, 28 September 2015 07:30:00 (UTC/GMT)

Tags: #CapLoader #BPF #Berkeley Packet Filter #Rinse-Repeat #DNS #Alexa #PCAP #Passive DNS #NetFlow #Malware #C2

Short URL: https://netresec.com/?b=15914E3


PCAP or it didn't happen

The phrase "PCAP or it didn't happen" is often used in the network security field when someone want proof that an attack or compromise has taken place. One such example is the recent OpenSSL heartbleed vulnerability, where some claim that the vulnerability was known and exploited even before it was discovered by Google's Neel Mehta and Codenomicon.

Image: PCAP or it didn't happen pwnie, original by Nina on http://n924.deviantart.com

After the Heartbleed security advisory was published, EFF tweeted:

"Anyone reproduced observations of #Heartbleed attacks from 2013?"
and Liam Randall (of Bro fame) tweeted:
"If someone finds historical exploits of #Heartbleed I hope they can report it. Lot's of sites mining now."

Image: Liam Randall (@Hectaman) tweeting about historical Heartbleed searches

It is unfortunately not possible to identify Heartbleed attacks by analyzing log files, as stated by the following Q&A from the heartbleed.com website:

Can I detect if someone has exploited this against me?

Exploitation of this bug does not leave any trace of anything abnormal happening to the logs.

Additionally, IDS signatures for detecting Heartbleed attacks weren't available until after implementations of the exploit code were being actively used in the wild.

Hence, the only reliable way of detecting early Heartbleed attacks (i.e. prior to April 7) is to analyze old captured network traffic from before that date. In order to do this you would need to have had a full packet capture solution running, configured to capture and store all your traffic. Unfortunately, many companies and organizations haven't yet realized the value that historical packet captures can provide.

Why Full Packet Capture Matters

Some argue that just storing netflow data is enough to do incident response. However, detecting events like the Heartbleed attack is impossible with netflow alone, since you need to inspect the contents of the network traffic.

Retaining historical full packet captures is not only useful for detecting attacks that have taken place in the past; it is also extremely valuable for doing any of the following:

  • IDS Verification
    Investigate IDS alerts to see if they were false positives or real attacks.

  • Post Exploitation Analysis
    Analyze network traffic from a compromise to see what the attacker did after hacking into a system.

  • Exfiltration Analysis
    Assess what intellectual property has been exfiltrated by an external attacker or insider.

  • Network Forensics
    Perform forensic analysis of a suspect's network traffic by extracting files, emails, chat messages, images etc.

Setting up a Full Packet Capture

netsniff-ng logo

The first step, when deploying a full packet capture (FPC) solution, is to install a network tap or configure a monitor port in order to get a copy of all packets going in and out from your networks. Then simply sniff the network traffic with a tool like dumpcap or netsniff-ng. Another alternative is to deploy a whole network security monitoring (NSM) infrastructure, preferably by installing the SecurityOnion Linux distro.

A network sniffer will eventually run out of disk space, unless captured network traffic is written to disk in a ring buffer manner (use the "-b files" switch in dumpcap) or there is a scheduled job in place to remove the oldest capture files. SecurityOnion, for example, normally runs its "cleandisk" cronjob when disk utilization reaches 90%.
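As an illustration of the scheduled-cleanup approach, here is a small Python sketch that deletes the oldest capture files once a directory grows past a size limit (the directory path and limit are placeholders; SecurityOnion's actual cleandisk job works on disk utilization percentage instead):

  import os

  CAPTURE_DIR = "/data/pcap"     # placeholder capture directory
  MAX_TOTAL_BYTES = 500 * 10**9  # placeholder retention limit (500 GB)

  def cleanup_oldest(directory, max_total_bytes):
      # Delete the oldest capture files until the directory fits under the size limit
      files = [os.path.join(directory, f) for f in os.listdir(directory)]
      files = [f for f in files if os.path.isfile(f)]
      files.sort(key=os.path.getmtime)  # oldest files first
      total = sum(os.path.getsize(f) for f in files)
      for f in files:
          if total <= max_total_bytes:
              break
          total -= os.path.getsize(f)
          os.remove(f)  # drop the oldest remaining capture file

  cleanup_oldest(CAPTURE_DIR, MAX_TOTAL_BYTES)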

The ratio between available disk space and utilized bandwidth determines the maximum retention period for full packet data. As a rough example, a sustained 100 Mbit/s of traffic amounts to roughly 1 TB per day, so a 10 TB capture store gives a retention period of a little over a week. We recommend a full packet capture retention period of at least 7 days, but many companies and organizations are able to store several months' worth of network traffic (disk is cheap).

Big Data PCAP Analysis

Okay, so you've got a PCAP store with multiple terabytes of data. Then what? How do you go about analyzing such large volumes of captured full content network traffic? Well, tasks like indexing and analyzing PCAP data are complex matters that are beyond the scope of this blog post. We've covered the big data PCAP analysis topic in previous blog posts, and there is more to come. However, capturing the packets to disk is a crucial first step in order to utilize the powers of network forensics. Or as the saying goes: “PCAP or it didn't happen”.

Posted by Erik Hjelmvik on Thursday, 01 May 2014 21:45:00 (UTC/GMT)

Tags: #capture #sniffer #IDS #forensics

Short URL: https://netresec.com/?b=1452D4C
