This video tutorial is a walkthrough of how you can analyze a PCAP file created by Brad Duncan.
The capture file contains a malicious Word Document (macro downloader), Emotet (banking trojan),
TrickBot/Trickster (banking trojan) and an EternalChampion (CVE-2017-0146)
exploit used to perform lateral movement.
The free and open source network forensics tool NetworkMiner now comes with improved extraction of files
and metadata from several protocols as well as a few GUI updates.
But the biggest improvements for version 2.3 are in the commercial tool NetworkMiner Professional,
which now supports VoIP call audio extraction and playback as well as OSINT lookups of file hashes,
IP addresses, domain names and URLs.
I’m happy to announce that NetworkMiner 2.3 now does an even better job than before at extracting files
and metadata from several protocols. Improvements have been made in the parsers for the following protocols:
HTTP, IEC-104, IPv4, Modbus, SIP, SMB, SMB2, SMTP and SSL/TLS.
We have also added support for the SNMP protocol in NetworkMiner 2.3,
so that SNMP community strings can be extracted and displayed on the Parameters and Credentials tabs.
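To give a rough idea of what such extraction involves, here is a minimal stand-alone sketch (not NetworkMiner's actual parser) that pulls the community string out of an SNMPv1/v2c message by walking its BER encoding; it assumes a well-formed packet with short-form lengths:

```python
def snmp_community(msg: bytes) -> str:
    """Extract the community string from an SNMPv1/v2c message.

    An SNMP message is a BER SEQUENCE of: INTEGER version,
    OCTET STRING community, and a PDU. Short-form lengths assumed.
    """
    if msg[0] != 0x30:                 # outer SEQUENCE tag
        raise ValueError("not an SNMP message")
    i = 2                              # skip SEQUENCE tag + length byte
    if msg[i] != 0x02:                 # INTEGER (version) tag
        raise ValueError("missing version field")
    i += 2 + msg[i + 1]                # skip tag, length and value
    if msg[i] != 0x04:                 # OCTET STRING (community) tag
        raise ValueError("missing community field")
    length = msg[i + 1]
    return msg[i + 2:i + 2 + length].decode("ascii")

# A hand-crafted SNMPv1 message carrying the community string "public":
packet = b"\x30\x0e\x02\x01\x00\x04\x06public\xa0\x01\x00"
print(snmp_community(packet))          # public
```

Since SNMPv1/v2c sends the community string in clear text, any tool with access to the traffic can recover it this way.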
Another change is that timestamps are now displayed using the UTC time zone instead of using the local time zone.
We have also fixed a few GUI quirks in order to further improve the usability of the tool.
The commercial version of NetworkMiner, i.e. NetworkMiner Professional,
comes with several additional improvements which are presented below.
VoIP Call Playback
NetworkMiner Professional has received a new tab called “VoIP”,
which enables audio playback of VoIP calls that use SIP and RTP with G.711 μ-law or A-law encoding
(μ-law is primarily used in North America and Japan, while A-law is used in Europe and most other parts of the world).
The audio streams from the VoIP calls are also extracted to disk as .WAV files when these codecs
(μ-law and A-law) are used. NetworkMiner Professional also attempts to reassemble RTP streams encoded with
G.722 into .au files.
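For readers curious about what G.711 decoding entails, here is a minimal sketch of the standard μ-law-to-linear-PCM expansion, which is the step required before the audio can be written out as a .WAV file. This is a hypothetical stand-alone illustration, not NetworkMiner's code:

```python
import struct
import wave

def ulaw_to_pcm16(sample: int) -> int:
    """Expand one 8-bit G.711 mu-law byte to a signed 16-bit PCM value."""
    sample = ~sample & 0xFF            # mu-law bytes are stored inverted
    sign = sample & 0x80
    exponent = (sample >> 4) & 0x07
    mantissa = sample & 0x0F
    pcm = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -pcm if sign else pcm

def write_wav(path: str, ulaw_payload: bytes, rate: int = 8000) -> None:
    """Decode a mu-law RTP payload and write it as a mono 16-bit WAV file."""
    pcm = b"".join(struct.pack("<h", ulaw_to_pcm16(b)) for b in ulaw_payload)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)              # mono
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(rate)           # G.711 uses an 8 kHz sample rate
        w.writeframes(pcm)

print(ulaw_to_pcm16(0xFF))             # 0 (digital silence)
```

A-law expansion follows the same pattern with slightly different constants, and a real implementation would of course first reassemble the RTP stream in sequence-number order.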
OSINT Lookups of IP Addresses, Domains, URLs and File Hashes
Right-clicking a network host in NetworkMiner Professional's Hosts tab brings up a context menu with options for performing lookups of IP addresses
and domain names using external sources. We refer to this method as open-source intelligence (OSINT) because the accessed data resides in
publicly available sources.
Clicking on an OSINT provider, such as mnemonic Passive DNS, brings up a webpage with more detailed information about the selected IP address.
However, if you’re lazy like me, then you’ll probably click the “All above!” option instead,
which will bring up all of the sources in separate tabs in your browser.
The full list of OSINT providers available for IP lookups includes
APNIC Whois, BFK Passive DNS,
ExoneraTor, Google Public DNS, GreenSnow.co, Hurricane Electric, IBM X-Force, Internet Storm Center, mnemonic Passive DNS,
UrlQuery and VirusTotal.
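The "All above!" behaviour is easy to picture: format one lookup URL per provider and open each in a separate browser tab. Here is a rough sketch of the idea; the URL templates below are illustrative placeholders, not necessarily the exact ones NetworkMiner uses:

```python
import webbrowser

# Illustrative lookup-URL templates; the real providers' URL formats may differ.
OSINT_IP_PROVIDERS = {
    "VirusTotal": "https://www.virustotal.com/gui/ip-address/{ip}",
    "Internet Storm Center": "https://isc.sans.edu/ipinfo.html?ip={ip}",
    "Hurricane Electric": "https://bgp.he.net/ip/{ip}",
}

def lookup_urls(ip: str) -> list[str]:
    """Return one lookup URL per OSINT provider for the given IP address."""
    return [template.format(ip=ip) for template in OSINT_IP_PROVIDERS.values()]

def open_all(ip: str) -> None:
    """Open every provider lookup in a separate browser tab ("All above!")."""
    for url in lookup_urls(ip):
        webbrowser.open_new_tab(url)

print(lookup_urls("203.0.113.7")[0])
# https://www.virustotal.com/gui/ip-address/203.0.113.7
```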
Right-clicking a URL in the Browsers tab brings up a similar context menu, which additionally includes the following services for URL lookups:
Google Safe Browsing and IBM X-Force.
Did you know that the malware analysis service Hybrid Analysis provides free API keys to people in the IT security community?
This is a great move by the Hybrid Analysis team, and we’re happy to announce that we have leveraged their API in NetworkMiner Professional
in order to submit files for analysis directly from within the NetworkMiner GUI.
The API integration also enables you to query Hybrid Analysis for an IP address to see which previously submitted samples have communicated
with that particular IP address.
Here are the steps required to enable the Hybrid Analysis API integration:
Start NetworkMiner Professional, open the Tools > Settings menu and enter your API key.
I would like to thank
Chris Sistrunk, Mats Karlsson and Michael Nilsson for suggesting several of the protocol and GUI improvements
that have been incorporated into this new release.
I’d also like to thank Doug Green and Ahmad Nawawi for discovering and reporting bugs in the
IP and SSL parser respectively.
Upgrading to Version 2.3
Users who have purchased a license for NetworkMiner Professional 2.x can download a free update to version 2.3.
Those who instead prefer the free and open source version can grab the latest release of NetworkMiner from the official NetworkMiner page.
⛏ FOR GREAT JUSTICE! ⛏
Posted by Erik Hjelmvik on Tuesday, 03 April 2018 06:27:00 (UTC/GMT)
People sometimes ask me when I will teach my
network forensics class in the United States.
The US is undoubtedly the country with the most advanced and mature DFIR community,
so it would be awesome to be able to give my class there.
However, not being a U.S. person and not working for a U.S. company makes it rather difficult for me to teach in the United States
(remember what happened to Halvar Flake?).
So if you’re from the Americas and would like to take my network forensics class,
then please don’t wait for me to teach my class at a venue close to you – because I probably won’t.
My recommendation is that you instead attend my
upcoming training at 44CON in London this September.
The network forensics training in London will cover topics such as:
Analyzing a web defacement
Investigating traffic from a remote access trojan (njRAT)
The first day of training will focus on analysis using only open source tools.
The second day will primarily cover training on commercial software from Netresec, i.e.
NetworkMiner Professional and CapLoader.
All students enrolling in the class will get a full 6-month license for both of these commercial tools.
The training will touch upon topics relevant for law enforcement as well as incident response,
such as investigating a defacement, finding backdoors and dealing with a machine infected with real malware.
We will also be carving lots of files, emails and other artifacts from the PCAP dataset as well as perform
Rinse-Repeat Intrusion Detection in order to detect covert malicious traffic.
Day 1 - March 20, 2017
The first training day will focus on open source tools that can be used for doing network forensics.
We will be using the Security Onion linux distro for this part,
since it contains pretty much all the open source tools you need in order to do network forensics.
Day 2 - March 21, 2017
We will spend the second day mainly using NetworkMiner Professional and CapLoader,
i.e. the commercial tools from Netresec.
Each student will be provided with a free 6 month license for the latest version of
NetworkMiner Professional (see our recent release of version 2.1) and CapLoader.
This is a unique chance to learn all the great features of these tools directly from the guy who develops them (me!).
The Troopers conference and training will be held at the
Print Media Academy (PMA) in Heidelberg, Germany.
The number of seats in the training will be limited in order to provide a high-quality interactive training.
However, keep in mind that this means we might run out of seats for the network forensics class!
I would like to recommend those who want to take the training to also attend the
Troopers conference on March 22-24.
The conference will have some great talks.
However, my greatest takeaway from last year's Troopers was the awesome hallway track,
i.e. all the great conversations I had with all the smart people who came to Troopers.
Please note that the tickets to the Troopers conference are also limited,
and they seem to sell out quite early each year.
So if you are planning to attend the network forensics training, then I recommend that you buy an “All Inclusive” ticket,
which includes a two-day training and a conference ticket.
I'm happy to announce that I will teach a two-day
Network Forensics class at the upcoming
Troopers conference in March!
The first day of training (March 14) will cover how to use open source tools to analyze intrusions and malware in captured network traffic.
On day two (March 15) I will show attendees some tips and tricks for how to use software developed by us at Netresec, i.e.
NetworkMiner Professional and CapLoader.
This training is a rare opportunity to learn how to use this software directly from the main developer (me).
Everyone taking the class will also get a free 6 month personal license for both NetworkMiner Pro and CapLoader.
Scenario and Dataset
The dataset analyzed in the class has been created using REAL physical machines and a REAL internet connection.
All traffic on the network is captured to PCAP files.
The scenario includes events such as intrusions and malware infections.
I am a long time skeptic when it comes to blacklists and other forms of signature based detection mechanisms.
The information security industry has also declared the signature based anti-virus approach
dead several times during the past 10 years.
Yet, we still rely on anti-virus signatures, IDS rules, IP blacklists, malware domain lists, YARA rules etc.
to detect malware infections and other forms of intrusions in our networks.
This outdated approach puts a high administrative burden on IT and security operations today,
since we need to keep all our signature databases up to date,
both when it comes to end point AV signatures as well as IDS rules and other signature based detection methods
and threat feeds. Many organizations probably spend more time and money on updating all these blacklists
and signature databases than actually investigating the security alerts these detection systems generate.
What can I say; the world is truly upside down...
I would therefore like to use this blog post to briefly describe an effective blacklist-free approach
for detecting malware and intrusions just by analyzing network traffic.
My approach relies on a combination of whitelisting and common sense anomaly detection
(i.e. not the academic statistical anomaly detection algorithms that never seem to work in reality).
I also encourage CERT/CSIRT/SOC/SecOps units to practice Sun Tzu's old ”know yourself”,
or rather ”know your systems and networks” approach.
Know your enemy and know yourself and you can fight a hundred battles without disaster.
My method doesn't rely on any dark magic; it is actually just a simple iterative process
built on the following steps:
Look at network traffic
Define what's normal (whitelist)
Remove the whitelisted traffic and repeat
After looping through these steps a few times you'll be left with some odd network traffic,
which will have a high ratio of maliciousness.
The key here is, of course, to know what traffic to classify as ”normal”.
This is where ”know your systems and networks” comes in.
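The loop above can be sketched in code as repeatedly discarding whatever the current whitelist explains and eyeballing the remainder. This is a toy illustration with made-up flow records, not a real tool:

```python
# Toy flow records: (src_ip, dst_ip, dst_port, protocol)
flows = [
    ("10.0.0.5", "172.217.21.132", 443, "TLS"),          # popular web server
    ("10.0.0.5", "10.0.0.10", 445, "SMB"),               # internal file server
    ("10.0.0.7", "198.51.100.23", 6667, "IRC"),          # odd: IRC to the internet
]

# The whitelist grows with each pass through the rinse-repeat loop.
whitelist = [
    lambda f: f[2] in (80, 443),                         # web traffic on standard ports
    lambda f: f[3] == "SMB" and f[1].startswith("10."),  # SMB to internal servers
]

def rinse(flows, whitelist):
    """Remove everything the whitelist explains; what is left is 'odd'."""
    return [f for f in flows if not any(rule(f) for rule in whitelist)]

remaining = rinse(flows, whitelist)
print(remaining)   # only the IRC flow survives
```

In practice each pass means looking at the surviving traffic, deciding what else is normal for your network, adding a rule for it, and running the filter again.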
What Traffic is Normal?
I recently realized that Mike Poor seems to be thinking
along the same lines, when I read his foreword to Chris Sanders' and Jason Smith's
The next time you are at your console, review some logs. You might think... "I don't know what to look for".
Start with what you know, understand, and don't care about.
Discard those. Everything else is of interest.
Following Mike's advice we might, for example, define "normal" traffic as:
HTTP(S) traffic to popular web servers on the Internet on standard ports (TCP 80 and 443).
SMB traffic between client networks and file servers.
DNS queries from clients to your name server on UDP 53, where the server successfully answers with an A, AAAA, CNAME, MX, NS or SOA record.
...any other traffic which is normal in your organization.
Whitelisting IP ranges belonging to Google, Facebook, Microsoft and Akamai as
”popular web servers” will reduce the dataset a great deal, but that's far from enough.
One approach we use is to perform DNS whitelisting
by classifying all servers with a domain name listed in Alexa's Top 1 Million list as ”popular”.
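A minimal sketch of such DNS whitelisting could look as follows, assuming the Alexa list has been downloaded as a rank,domain CSV (the file format and function names here are assumptions for illustration):

```python
import csv

def load_whitelist(path: str) -> set[str]:
    """Load domains from an Alexa-style 'rank,domain' CSV into a set."""
    with open(path, newline="") as f:
        return {row[1].lower() for row in csv.reader(f) if len(row) > 1}

def is_popular(fqdn: str, whitelist: set[str]) -> bool:
    """Check the FQDN and every parent domain against the whitelist,
    so that www.example.com matches a whitelist entry for example.com."""
    labels = fqdn.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in whitelist for i in range(len(labels) - 1))

wl = {"google.com", "example.com"}
print(is_popular("www.example.com", wl))      # True
print(is_popular("evil-c2.example.org", wl))  # False
```

Checking parent domains rather than exact hostnames keeps the whitelist small while still matching the countless subdomains of popular sites.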
You might argue that such a method just replaces the old blacklist-updating-problem with
a new whitelist-updating-problem. Well yes, you are right to some extent,
but the good part is that the whitelist changes very little over time compared to a blacklist.
So you don't need to update very often. Another great benefit is that the whitelist/rinse-repeat approach
also enables detection of 0-day exploits and C2 traffic of unknown malware,
since we aren't looking for known badness – just odd traffic.
I often use Argus with Racluster to quickly search a large collection of session data via the command line, especially for unexpected entries. Rather than searching for specific data, I tell Argus what to omit, and then I review what’s left.
In his book Richard also mentions that he uses a similar methodology when going on “hunting trips”
(i.e. actively looking for intrusions without having received an IDS alert):
Sometimes I hunt for traffic by telling Wireshark what to ignore so that I can examine what’s left behind. I start with a simple filter, review the results, add another filter, review the results, and so on until I’m left with a small amount of traffic to analyze.
I personally find Rinse-Repeat Intrusion Detection ideal for threat hunting,
especially in situations where you are provided with a big PCAP dataset to answer the classic question
“Have we been hacked?”.
Unfortunately, the “blacklist mentality” is so ingrained among incident responders
that they often choose to crunch these datasets through blacklists and signature databases, only to
then review thousands of alerts that are full of false positives.
In most situations such approaches are just a huge waste of time and computing power,
and I'm hoping to see a change in the incident responders' mindsets in the future.
I teach this “rinse-repeat” threat hunting method in our
Network Forensics Training.
In this class students get hands-on experience with a dataset of 3.5 GB (roughly 40,000 flows),
which is then reduced to just a fraction through a few iterations in the rinse-repeat loop.
The remaining part of the PCAP dataset has a very high ratio of hacking attacks as well as
command-and-control traffic from RATs, backdoors and botnets.
We have now published a blog post detailing how to use dynamic protocol detection
to identify services running on non-standard ports. This is a good example of how to put the Rinse-Repeat methodology into practice.
Posted by Erik Hjelmvik on Monday, 17 August 2015 08:45:00 (UTC/GMT)
Our class is held the days before the SEC-T conference,
which is a great technical information security conference in Stockholm, and at the same venue (Nalen).
Visitors can thereby plan 4 days of training and conferencing in Stockholm without having to transfer between hotels.
The Network Forensics class consists of a mix of theory and hands-on labs, where students will learn to analyze Full Packet Capture (FPC) files.
The scenarios in the labs are primarily focused at network forensics for incident response,
but are also relevant for law enforcement/internal security etc. where the network traffic of a suspect or insider is being monitored.
NetworkMiner is a network forensics tool primarily developed for Windows, but it actually runs just fine on other operating systems as well, with the help of the Mono framework.
This guide shows how to install NetworkMiner in three different Linux distros (Ubuntu, Fedora and Arch Linux).
STEP 1: Install Mono
Ubuntu (also other Debian based distros like Xubuntu and Kali Linux)
NetworkMiner 1.2 running under Ubuntu Linux, with “day12-1.dmp” from the M57-Patents Scenario loaded.
Another way to try out NetworkMiner in Linux is to spin up one of the live CDs that have the tool installed,
such as Security Onion, REMnux or NST.
Live sniffing with NetworkMiner
In order to capture packets (sniff traffic) in Linux you will have to use the “PCAP-over-IP” feature.
NetworkMiner is, however, not really designed for packet capturing; it is primarily a tool for parsing and analyzing PCAP files containing previously sniffed traffic.
We recommend using other tools such as tcpdump, dumpcap or netsniff-ng in order to reliably capture packets to a PCAP file.
You can read more on how to sniff traffic in our Sniffing Tutorial.
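PCAP-over-IP is conceptually simple: the raw bytes of a pcap stream (a global header followed by packet records) are sent over a TCP connection, and NetworkMiner parses them on the receiving end. Below is a rough sketch of what a sending side could look like; the default port 57012 and the script as a whole are illustrative assumptions, not official Netresec code:

```python
import socket
import struct

def pcap_global_header(snaplen: int = 65535, linktype: int = 1) -> bytes:
    """Build a classic libpcap global header: little-endian magic,
    version 2.4, zero timezone/sigfigs, snaplen, and linktype 1 (Ethernet)."""
    return struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, snaplen, linktype)

def stream_pcap(path: str, host: str = "127.0.0.1", port: int = 57012) -> None:
    """Send the raw bytes of an existing capture file to a PCAP-over-IP
    listener, e.g. NetworkMiner waiting for a TCP connection."""
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        while chunk := f.read(65536):
            sock.sendall(chunk)

print(len(pcap_global_header()))   # 24
```

A live-capture variant would write the global header first and then append one record header plus packet data per sniffed frame, which is exactly the byte stream a PCAP-over-IP receiver expects.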
Posted by Erik Hjelmvik on Saturday, 01 February 2014 20:45:00 (UTC/GMT)