NETRESEC Network Security Blog


Analyzing Kelihos SPAM in CapLoader and NetworkMiner

This network forensics video tutorial covers how to analyze SPAM email traffic from the Kelihos botnet. The analyzed PCAP file comes from the Stratosphere IPS project, where Sebastian Garcia and his colleagues execute malware samples in sandboxes. The particular malware sample execution we are looking at this time is from the CTU-Malware-Capture-Botnet-149-2 dataset.

Resources

IOCs
990e5daa285f5c9c6398811edc68a659
e4f7fa6a0846e4649cc41d116c40f97835d3bb7d3d0391d3540482f077aa4493
6c55 5545 0310 4840

Check out our series of network forensic video tutorials for more tips and tricks on how to analyze captured network traffic.


Posted by Erik Hjelmvik on Monday, 19 February 2018 06:37:00 (UTC/GMT)


Antivirus Scanning of a PCAP File

This second video in our series of network forensic video tutorials covers a quick and crude way to scan a PCAP file for malware. It's all done locally without having to run the PCAP through an IDS. Kudos to Lenny Hanson for showing me this little trick!

Antivirus Scanning of a PCAP File

Resources

IOCs
178.62.142.240
soquumaihi.co.vu
9fd51fb05cb0ea89185fc1355ebf047c
8cf7b281a0db4029456e416dbe05d21d17af0cad86f67e054268f5e2c46c43ed
119.238.10.9
96b430041aed13413ec2b5ae91954f39
e79ef634265b9686f90241be0e05940354dc2c2b43d087e09bb846eec34dad35


Posted by Erik Hjelmvik on Monday, 12 February 2018 08:00:00 (UTC/GMT)


Examining an x509 Covert Channel

Jason Reaves gave a talk titled “Malware C2 over x509 certificate exchange” at BSides Springfield 2017, where he demonstrated that the SSL handshake can be abused by malware as a covert command-and-control (C2) channel.

Jason Reaves presenting at BSides Springfield 2017

He got the idea while analyzing the Vawtrak malware after discovering that it read multiple fields in the X.509 certificate provided by the server before proceeding. Jason initially thought these fields were used as a C2 channel, but then realized that Vawtrak performed a variant of certificate pinning in order to discover SSL man-in-the-middle attempts.

Nevertheless, Jason decided to actually implement a proof-of-concept (PoC) that uses the X.509 certificate as a C2 channel. Jason’s code is now available on GitHub along with a PCAP file demonstrating this covert C2 channel. Of course I couldn’t resist having a little look at this PCAP file in NetworkMiner.

The first thing I noticed was that the proof-of-concept PCAP ran the SSL session on TCP 4433, which prevented NetworkMiner from parsing the traffic as SSL. However, I was able to parse the SSL traffic with NetworkMiner Professional just fine thanks to the port-independent protocol identification feature (a.k.a. Dynamic Port Detection), which made the Pro version parse TCP 4433 as SSL/TLS.

X.509 certificates extracted from PCAP with NetworkMiner
Image: X.509 certificates extracted from PCAP with NetworkMiner

A “normal” X.509 certificate is usually around 1kB in size, so certificates that are 11kB should be considered anomalies. Also, opening one of these .cer files reveals an extremely large value in the Subject Key Identifier field.

X.509 certificate with MZ header in the Subject Key Identifier field

Not only is this field very large, it also starts with the familiar “4D 5A” MZ header sequence.
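
As a rough illustration (this is not NetworkMiner's implementation), the following Python sketch uses the cryptography library to flag a DER-encoded .cer file whose Subject Key Identifier is oversized or starts with an MZ header; the 20-byte threshold reflects the fact that a normal SKI is a SHA-1 hash:

from cryptography import x509

def check_ski(cer_path):
    # Load a DER-encoded certificate, e.g. one extracted by NetworkMiner
    with open(cer_path, "rb") as f:
        cert = x509.load_der_x509_certificate(f.read())
    ski = cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier).value
    data = ski.digest
    if len(data) > 20:  # a normal Subject Key Identifier is a 20 byte SHA-1 hash
        print("%s: oversized Subject Key Identifier (%d bytes)" % (cer_path, len(data)))
    if data[:2] == b"MZ":  # 4D 5A, the DOS/PE executable magic bytes
        print("%s: Subject Key Identifier starts with an MZ header" % cer_path)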

NetworkMiner additionally parses details from the certificates that it extracts from PCAP files, so the Subject Key Identifier field is actually accessible from within NetworkMiner, as shown in the screenshot below.

Parameters tab in NetworkMiner showing X.509 certificate details

You can also see that NetworkMiner validates the certificate using the local trusted root certificates. Not surprisingly, this certificate is not trusted (certificate valid = FALSE). It would be most unlikely that anyone would manage to include arbitrary data like this in a signed certificate.


Extracting the MZ Binary from the Covert X.509 Channel

Even though NetworkMiner excels at pulling out files from PCAPs, this is definitely an occasion where manual handling is required. Jason’s PoC implementation actually uses a whopping 79 individual certificates in order to transfer this Mimikatz binary, which is 785 kB.

Here’s a tshark oneliner you can use to extract the Mimikatz binary from Jason's example PCAP file.

tshark -r mimikatz_sent.pcap -Y 'ssl.handshake.certificate_length gt 2000' -T fields -e x509ce.SubjectKeyIdentifier -d tcp.port==4433,ssl | tr -d ':\n' | xxd -r -p > mimikatz.exe
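
The -d tcp.port==4433,ssl argument tells tshark to decode TCP port 4433 as SSL/TLS (since the PoC doesn’t use the standard port 443), while the display filter on ssl.handshake.certificate_length skips the normal-sized certificates; the remaining Subject Key Identifier values are then stripped of colons and newlines and converted from hex back into a binary.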

Detecting x509 Anomalies

Even though covert channels using x509 certificates aren’t a “thing” (yet?), it’s still a good idea to think about how this type of covert signaling can be detected. Just looking for large Subject Key Identifier fields is probably too specific, since there are other fields and extensions in X.509 that could also be used to transmit data. A better approach would be to alert on certificates larger than, let’s say, 3kB. Multiple certificates can also be chained together in a single TLS handshake certificate record, so it would also make sense to look for handshake records larger than 8kB (rough estimate).
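
As a minimal sketch of that size-based approach, assuming the certificates have already been extracted to .cer files (the glob pattern and threshold are placeholders, not a production detection rule):

import glob, os

THRESHOLD = 3 * 1024  # 3kB, per the rough estimate above

for cer_path in glob.glob("*.cer"):
    size = os.path.getsize(cer_path)
    if size > THRESHOLD:
        print("Possible covert channel: %s is %d bytes" % (cer_path, size))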

Bro IDS logo

This type of anomaly-centric intrusion detection is typically best done using the Bro IDS, which provides easy programmatic access to the X.509 certificate and SSL handshake.

There will be false positives when alerting on large certificates in this manner, which is why I recommend also checking whether the certificates have been signed by a trusted root or not. A certificate that is signed by a trusted root is very unlikely to contain malicious data.


Posted by Erik Hjelmvik on Tuesday, 06 February 2018 12:13:00 (UTC/GMT)


Zyklon Malware Network Forensics Video Tutorial

We are releasing a series of network forensics video tutorials throughout the next few weeks. First up is this analysis of a PCAP file containing network traffic from the "Zyklon H.T.T.P." malware.

Analyzing a Zyklon Trojan with Suricata and NetworkMiner

Resources
https://www.malware-traffic-analysis.net/2017/07/22/index.html
https://github.com/Security-Onion-Solutions/security-onion
https://www.arbornetworks.com/blog/asert/wp-content/uploads/2017/05/zyklon_season.pdf
http://doc.emergingthreats.net/2017930

IOCs
service.tellepizza.com
104.18.40.172
104.18.41.172
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3pre) Gecko/20070302 BonEcho/2.0.0.3pre
gate.php
.onion
98:1F:D2:FF:DC:16:B2:30:1F:11:70:82:3D:2E:A5:DC
65:8A:5C:76:98:A9:1D:66:B4:CB:9D:43:5C:DE:AD:22:38:37:F3:9C
E2:50:35:81:9F:D5:30:E1:CE:09:5D:9F:64:75:15:0F:91:16:12:02:2F:AF:DE:08:4A:A3:5F:E6:5B:88:37:D6


Posted by Erik Hjelmvik on Monday, 05 February 2018 07:30:00 (UTC/GMT)


Don't Delete PCAP Files - Trim Them!

We are happy to release TrimPCAP today! TrimPCAP is a free open source tool that reduces the size of capture files in an intelligent way.

TrimPCAP

The retention period of a packet capture solution is typically limited by either legal requirements or available disk space. In the latter case the oldest capture files are simply removed when the storage starts getting full. This means that if there is a long ongoing session, such as a download of a large ISO file, streamed video or a reverse shell backdoor, then the start of this session will likely be removed.

I know from experience that it’s painful to analyze network traffic where the start of a session is missing. The most important and interesting stuff generally happens in the beginning of each session, such as the HTTP GET request for an ISO file. As an analyst you don’t need to look at all the other packets in that ISO download (unless you believe the ISO contains malware), it’s enough to see that there is a GET request for the file and the server responds with a “200 OK”.

CapLoader Transcript of ISO download Image: CapLoader transcript of an ISO download

If that download had been truncated, so that only the last few packets were remaining, then it would be really difficult to know what was being downloaded. The same is true also for other protocols, including proprietary C2 protocols used by botnets and other types of malware.


  TrimPCAP  

TrimPCAP is designed to overcome the issue with truncated sessions by removing data from the end of sessions rather than from the beginning. This also brings a great bonus in terms of disk usage, since the majority of the bytes transferred across the Internet belong to big sessions (a.k.a. “Elephant Flows”). Thus, by trimming a PCAP file so that it only contains the first 100kB of each TCP and UDP session it’s possible to significantly reduce the storage required for that data.

The following command reduces the PCAP dataset used in our Network Forensics Training from 2.25 GB to just 223 MB:

user@so$ python trimpcap.py 102400 /nsm/sensor_data/so-eth1/dailylogs/*/*
Trimming capture files to max 102400 bytes per flow.
Dataset reduced by 90.30% = 2181478771 bytes
user@so$

A maximum session size (or "flow cutoff") of 100kB enables trimpcap.py to reduce the required storage for that dataset to about 10% of its original size, which will significantly extend the maximum retention period.
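
To illustrate the flow-cutoff idea, here is a minimal Python sketch using the dpkt library. It is not the actual trimpcap.py implementation and it assumes Ethernet encapsulation, but it shows the core principle: stop writing packets for a flow once that flow has reached the cutoff.

import dpkt

CUTOFF = 102400  # max bytes to keep per flow, as in the example above

def trim(infile, outfile, cutoff=CUTOFF):
    flow_bytes = {}
    with open(infile, "rb") as f_in, open(outfile, "wb") as f_out:
        writer = dpkt.pcap.Writer(f_out)
        for ts, buf in dpkt.pcap.Reader(f_in):
            ip = dpkt.ethernet.Ethernet(buf).data
            if isinstance(ip, dpkt.ip.IP) and isinstance(ip.data, (dpkt.tcp.TCP, dpkt.udp.UDP)):
                l4 = ip.data
                # sort the endpoints so both directions map to the same flow
                key = tuple(sorted([(ip.src, l4.sport), (ip.dst, l4.dport)]))
                flow_bytes[key] = flow_bytes.get(key, 0) + len(buf)
                if flow_bytes[key] > cutoff:
                    continue  # flow has reached the cutoff, drop the packet
            writer.writepkt(buf, ts)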


Putting TrimPCAP Into Practice

Let’s assume that your organization currently has a maximum full content PCAP retention period of 10 days and that trimming sessions to 100kB reduces the required storage to 10%. TrimPCAP will then enable you to store 5 days with full session data plus 50 additional days with trimmed sessions using the same disk space as before! Five days of full session data consume half of the original disk budget, while the 50 trimmed days, at a tenth of the size each, consume the other half.

TrimPCAP retention period extension

A slightly more advanced scheme would be to have multiple trim limits, such as trimming to 1MB after 3 days, 100kB after 6 days and 10kB after 30 days. Such a setup would probably extend your total retention period from 10 days to over 100 days.
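
A hypothetical way to script such a tiered scheme, assuming daily capture directories named YYYY-MM-DD (as in the Security Onion path used earlier) and the trimpcap.py command line shown above:

import datetime, glob, subprocess

# (min age in days, flow cutoff in bytes), ordered oldest tier first
TIERS = [(30, 10240), (6, 102400), (3, 1048576)]

today = datetime.date.today()
for day_dir in glob.glob("/nsm/sensor_data/so-eth1/dailylogs/*"):
    age = (today - datetime.date.fromisoformat(day_dir.rsplit("/", 1)[-1])).days
    for min_age, cutoff in TIERS:
        if age >= min_age:
            subprocess.run(["python", "trimpcap.py", str(cutoff)] + glob.glob(day_dir + "/*"))
            break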

An even more advanced trimming scheme is implemented in our packet capture agent PacketCache. PacketCache constantly trims its PCAP dataset because it is designed to use only 1 percent of a PC’s RAM to store observed packets, in case they are needed later on for incident response. PacketCache uses a trim limit which declines individually for each observed TCP and UDP session depending on when they were last observed.


Downloading TrimPCAP

TrimPCAP is open source software and is released under the GNU General Public License version 2 (GPLv2). You can download your own copy of TrimPCAP from the official TrimPCAP page:
https://www.netresec.com/?page=TrimPCAP

Happy trimming!

✂- - - - - - - - -


Posted by Erik Hjelmvik on Tuesday, 05 December 2017 12:40:00 (UTC/GMT)


CapLoader 1.6 Released

CapLoader 1.6

CapLoader is designed to simplify complex tasks, such as digging through gigabytes of PCAP data looking for traffic that sticks out or shouldn’t be there. Improved usability has therefore been the primary goal when developing CapLoader 1.6, in order to help our users do their work even more efficiently than before.

Some of the new features in CapLoader 1.6 are:

  • Context aware selection and filter suggestions when right-clicking a flow, session or host.
  • Support for IPv6 addresses in the BPF syntax for Input Filter as well as Display Filter.
  • Flows that are inactive for more than 60 minutes are considered closed. This timeout is configurable in Tools > Settings.


Latency Measurements

CapLoader 1.6 also introduces a new column in the Flows tab labeled “Initial_RTT”, which shows the Round Trip Time (RTT) measured during the start of a session. The RTT is defined as “the time it takes for a signal to be sent plus the time it takes for an acknowledgment of that signal to be received”. RTT is often called “ping time” because the ping utility computes the RTT by sending ICMP echo requests and measuring the delay until a reply is received.

Initial RTT in CapLoader Flows Tab
Image: CapLoader 1.6 showing ICMP and TCP round trip times.

But using a PCAP file to measure the RTT between two hosts isn’t as straightforward as one might think. One complicating factor is that the PCAP might be generated by the client, the server or any device in between. If we know that the sniffing point is at the client then things are simple, because we can then use the delta-time between an ICMP echo request and the returning ICMP echo response as the RTT. In the absence of ping traffic the same thing can be achieved with TCP by measuring the time between a SYN and the returning SYN+ACK packet. However, consider the situation when the sniffer is located somewhere between the client and the server. The previously mentioned method would then ignore the latency between the client and the sniffer; the delta-time would therefore only show the RTT between the sniffer and the server.

This problem is best solved by calculating the Initial RTT (iRTT) as the delta-time between the SYN packet and the final ACK packet in a TCP three-way handshake, as shown here:

Initial Round Trip Time in PCAP Explained
Image: Initial RTT is the total time of the black/bold packet traversal paths.

Jasper Bongertz does a great job of explaining why and how to use the iRTT in his blog post “Determining TCP Initial Round Trip Time”, so I will not cover it in any more detail here. However, keep in mind that iRTT can only be calculated this way for TCP sessions. CapLoader therefore falls back on measuring the delta time between the first packet in each direction when it comes to transport protocols like UDP and ICMP.
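
For those who want to experiment with the calculation, here is a rough Python sketch using dpkt that computes the iRTT as the SYN-to-final-ACK delta for each TCP session. It assumes Ethernet encapsulation and a clean, in-order three-way handshake, and it is of course not CapLoader's implementation:

import dpkt

def initial_rtts(pcap_path):
    syn_time = {}        # (src, dst, sport, dport) -> timestamp of the SYN
    synack_seen = set()  # flows where the SYN+ACK has been observed
    rtts = {}
    with open(pcap_path, "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            ip = dpkt.ethernet.Ethernet(buf).data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(ip.data, dpkt.tcp.TCP):
                continue
            tcp = ip.data
            key = (ip.src, ip.dst, tcp.sport, tcp.dport)
            rkey = (ip.dst, ip.src, tcp.dport, tcp.sport)
            syn = bool(tcp.flags & dpkt.tcp.TH_SYN)
            ack = bool(tcp.flags & dpkt.tcp.TH_ACK)
            if syn and not ack:
                syn_time[key] = ts
            elif syn and ack and rkey in syn_time:
                synack_seen.add(rkey)
            elif ack and key in synack_seen and key not in rtts:
                rtts[key] = ts - syn_time[key]  # iRTT: SYN -> final ACK
    return rtts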


Exclusive Features Not Available in the Free Trial

The new features mentioned so far are all available in the free 30-day CapLoader trial, which can be downloaded from our CapLoader product page (no registration required). But we’ve also added features that are only available in the commercial/professional edition of CapLoader. One such exclusive feature is the matching of hostnames against the Cisco Umbrella top 1 million domain list. CapLoader already had a feature for matching domain names against the Alexa top 1 million list, so the addition of the Umbrella list might seem redundant. But it’s actually not: the two lists are compiled from different data sources and therefore complement each other (see our blog post “Domain Whitelist Benchmark: Alexa vs Umbrella” for more details). Also, the Umbrella list contains subdomains (such as www.google.com, safebrowsing.google.com and accounts.google.com) while the Alexa list only contains main domains (like “google.com”). CapLoader can therefore do more fine-grained domain matching with the Umbrella list (requiring a full match of the Umbrella domain), while the Alexa list enables a rougher “catch ‘em all” approach (allowing *.google.com to be matched).
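
A minimal sketch of the two matching strategies, with made-up set contents (this is not CapLoader's code):

umbrella = {"www.google.com", "accounts.google.com"}  # full hostnames
alexa = {"google.com"}                                # main domains only

def matches_umbrella(hostname):
    return hostname in umbrella  # an exact hostname match is required

def matches_alexa(hostname):
    # match on the main domain, so any subdomain of google.com is accepted
    labels = hostname.split(".")
    return any(".".join(labels[i:]) in alexa for i in range(len(labels) - 1))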

CapLoader Hosts tab with ASN, Alexa and Umbrella details

CapLoader 1.6 also comes with an ASN lookup feature, which presents the autonomous system number (ASN) and organization name for IPv4 and IPv6 addresses in a PCAP file (see image above). The ASN lookup is built using the GeoLite database created by MaxMind. The information gained from the MaxMind ASN database is also used to provide intelligent display filter CIDR suggestions in the context menu that pops up when right-clicking a flow, service or host.

CapLoader Flows tab with context menu for Apply as Display Filter
Image: Context menu suggests Display Filter BPF “net 104.84.152.0/17” based on the server IP in the right-clicked flow.

Users who have previously purchased a license for CapLoader can download a free update to version 1.6 from our customer portal.


Credits and T-shirts

We’d like to thank Christian Reusch for suggesting the Initial RTT feature and Daan from the Dutch Ministry of Defence for suggesting the ASN lookup feature. We’d also like to thank David Billa, Ran Tohar Braun and Stephen Bell for discovering and reporting bugs in CapLoader which now have been fixed. These three guys have received a “PCAP or it didn’t happen” t-shirt as promised in our Bug Bounty Program.

Got a t-shirt for crashing CapLoader

If you too wanna express your view of outlandish cyber attack claims without evidence, then please feel free to send us your bug reports and get rewarded with a “PCAP or it didn’t happen” t-shirt!


Posted by Erik Hjelmvik on Monday, 09 October 2017 08:12:00 (UTC/GMT)


Hunting AdwindRAT with SSL Heuristics

An increasing number of malware families employ SSL/TLS encryption in order to evade detection by Network Intrusion Detection Systems (NIDS). In this blog post I’m gonna have a look at Adwind, which is a cross-platform Remote Access Trojan (RAT) that has been using SSL to conceal its traffic for several years. AdwindRAT typically connects SSL sessions to seemingly random TCP ports on the C2 servers. Hence, a heuristic that could potentially be used to hunt for Adwind RAT malware is to look for SSL traffic going to TCP ports that normally don’t use SSL. However, relying on ONLY that heuristic would generate way too many false positives.

Brad Duncan did an interesting writeup about Adwind RAT back in 2015, where he wrote:

I saw the same certificate information used last week, and it continues this week.
  • commonName = assylias
  • organizationName = assylias.Inc
  • countryName = FR
Currently, this may be the best way to identify Adwind-based post-infection traffic. Look for SSL traffic on a non-standard TCP port using that particular certificate.

Unfortunately, Adwind RAT has evolved to use other CNs in its newer certificates, so looking for “assylias.Inc” will not cut it anymore. However, looking for SSL traffic on non-standard TCP ports still holds for the latest Adwind RAT samples that we’ve analyzed.

The PT Research Attack Detection Team (ADT) sent an email with IDS signatures for detecting AdwindRAT to the Emerging-Sigs mailing list a few days ago, where they wrote:

“We offer one of the ways to detect malicious AdwindRAT software inside the encrypted traffic. Recently, the detection of this malicious program in network traffic is significantly reduced due to encryption. As a result of the research, a stable structure of data fragments was created.”

Not only is it awesome that they were able to detect static patterns in the encrypted data, they also provided 25 PCAP files containing AdwindRAT traffic. I loaded these PCAP files into NetworkMiner Professional in order to have a look at the X.509 certificates. NetworkMiner Professional supports Port-Independent Protocol Identification (PIPI), which means that it will automatically identify the C2 sessions as SSL, regardless of which port is used. It will also automatically extract the X.509 certificates along with any other parameters that can be extracted from the SSL handshake before the session is encrypted.

X.509 certificates extracted from AdwindRAT PCAP by NetworkMiner Image: Files extracted from ADT’s PCAP files that match “Oracle” and “cer”.

In this recent campaign the attackers used X.509 certificates claiming to be from Oracle. The majority of the extracted certificates were exactly 1237 bytes long, so maybe they’re all identical? This is what the first extracted X.509 certificate looks like:

Self-signed Oracle America, Inc. X.509 certificate

The cert claims to be valid for a whopping 100 years!

Self-signed Oracle America, Inc. X.509 certificate

Self-signed, not trusted.

However, after opening a few of the other certificates it's clear that each C2 server is using a unique X.509 certificate. This can be quickly confirmed by opening the parameters tab in NetworkMiner Pro and showing only the Certificate Hash or Subject Key Identifier values.

NetworkMiner Parameters tab showing Certificate Hash values Image: Certificate Hash values found in Adwind RAT’s SSL traffic

I also noted that the CN of the certificates isn’t constant either; these samples use CNs such as “Oracle America”, “Oracle Tanzania”, “Oracle Arusha Inc.”, “Oracle Leonardo” and “Oracle Heaven”.

The CN field is normally used to specify which domain(s) the certificate is valid for, together with any additional Subject Alternative Name field. However, Adwind RAT’s certificates don’t contain any domain name in the CN field, and they don’t have an Alternative Name record. This might very well change in future versions of this malware, but I don’t expect the malware authors to generate a certificate with a CN matching the domain name used by each C2 server. I can therefore use this assumption in order to better hunt for Adwind RAT traffic.

But how do I know what public domain name the C2 server has? One solution is to use passive DNS, i.e. to capture all DNS traffic in order to do passive lookups locally. Another solution is to leverage the fact that the Adwind RAT clients use the Server Name Indication (SNI) when connecting to the C2 servers.

TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT Image: TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT

TLS Server Name (SNI) with matching Subject CN from Google Image: TLS Server Name (SNI) with matching Subject CN from Google.

My conclusion is therefore that Brad’s recommendations from 2015 are still pretty okay, even for the latest wave of Adwind RAT traffic. However, instead of looking for a fixed CN string I’d prefer to use the following heuristics to hunt for this type of C2 traffic:

  • SSL traffic to non-standard SSL port
  • Self signed X.509 certificate
  • The SNI domain name in the Client Hello message does not match the CN or Subject Alternative Name of the certificate.

These heuristics will match more than just Adwind RAT traffic though. You’ll find that the exact same heuristics will also help identify other pieces of SSL-enabled malware as well as Tor traffic.
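
To make the third heuristic concrete, here is a Python sketch using the cryptography library that checks whether the SNI hostname from the Client Hello matches the certificate's CN or Subject Alternative Names; the helper and its naive wildcard handling are simplifications for illustration only:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID, NameOID

def sni_matches_cert(sni, cert_der):
    cert = x509.load_der_x509_certificate(cert_der)
    names = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
        names += san.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        pass  # Adwind RAT certs typically lack a Subject Alternative Name
    for name in names:
        if name == sni:
            return True
        if name.startswith("*.") and sni.endswith(name[1:]):  # naive wildcard match
            return True
    return False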


Posted by Erik Hjelmvik on Monday, 04 September 2017 19:01:00 (UTC/GMT)


NetworkMiner 2.2 Released

NetworkMiner 2.2 = Harder Better Faster Stronger

NetworkMiner 2.2 is faster, better and stronger than ever before! The PCAP parsing speed has more than doubled and even more details are now extracted from analyzed packet capture files.

The improved parsing speed of NetworkMiner 2.2 can be enjoyed regardless of whether NetworkMiner is run on Windows or Linux. Additionally, the user interface is more responsive and flickers far less when capture files are being loaded.

User Interface Improvements

The keyword filter available in the Files, Messages, Sessions, DNS and Parameters tabs has been improved, so that rows can now be filtered on a single column of choice by selecting the desired column in a drop-down list. There is also an “Any column” option, which can be used to search for the keyword in all columns.

Keyword drop-down in NetworkMiner's Parameters tab

The Messages tab has also received an additional feature, which allows the filter keyword to be matched against the text in the message body as well as email headers when the “Any column” option is selected. This allows for an efficient analysis of messages (such as emails sent/received through SMTP, POP3 and IMAP as well as IRC messages and some HTTP based messaging platforms), since the messages can be filtered just like in a normal e-mail client.

We have also given up on using local timestamp formats; timestamps are now instead shown using the yyyy-MM-dd HH:mm:ss format with time zone explicitly stated.

Protocol Parsers

NetworkMiner 2.2 comes with a parser for the Remote Desktop Protocol (RDP), which rides on top of COTP and TPKT. The RDP parser is primarily used in order to extract usernames from RDP cookies and show them on the Credentials tab. This new version also comes with better extraction of SMB1 and SMB2 details, such as NTLM SSP usernames.

RDP Cookies extracted with NetworkMiner 2.2
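
The connection request carries a cleartext routing token of the form “Cookie: mstshash=<username>”. A minimal sketch of pulling the username out of a reassembled TCP payload (this hypothetical helper is not NetworkMiner's parser) could look like this:

import re

def extract_rdp_username(payload: bytes):
    # The X.224 Connection Request contains "Cookie: mstshash=<username>\r\n"
    m = re.search(rb"Cookie: mstshash=([^\r\n]+)", payload)
    return m.group(1).decode("ascii", "replace") if m else None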

One big change that has been made behind the scenes of NetworkMiner is the move from .NET Framework 2.0 to version 4.0. This move doesn’t require any special measures to be taken for most Microsoft Windows users since the 4.0 Framework is typically already installed on these machines. If you’re running NetworkMiner in Linux, however, you might wanna check out our updated blog post on how to install NetworkMiner in Linux.

We have also added an automatic check for new versions of NetworkMiner, which runs every time the tool is started. This update check can be disabled by adding a --noupdatecheck switch to the command line when starting NetworkMiner.

NetworkMiner.exe --noupdatecheck capturefile.pcap

NetworkMiner Professional

Even though NetworkMiner 2.2 now uses ISO-like time representations, NetworkMiner still has to decide which time zone to use for the timestamps. The default decision has always been to use the same time zone as the local machine, but NetworkMiner Professional now additionally comes with an option that allows the user to select whether to use UTC (as nature intended), the local time zone or some other custom time zone for displaying timestamps. The time zone setting can be found in the “Tools > Settings” menu.

The Port-Independent-Protocol-Detection (PIPI) feature in NetworkMiner Pro has been improved for more reliable identification of HTTP, SSH, SOCKS, FTP and SSL sessions running on non-standard port numbers.

CASE / JSON-LD Export

We are happy to announce that the professional edition of NetworkMiner 2.2 now has support for exporting extracted details using the Cyber-investigation Analysis Standard Expression (CASE) format, which is a JSON-LD format for digital forensics data. The CASE export is also available in the command line tool NetworkMinerCLI.

We would like to thank Europol for recommending us to implement the CASE export format in their effort to adopt CASE as a standard digital forensic format. Several other companies in the digital forensics field are currently looking into implementing CASE in their tools, including AccessData, Cellebrite, Guidance, Volatility and XRY. We believe the CASE format will become a popular format for exchanging digital forensic data between tools for digital forensics, log correlation and SIEM solutions.

We will, however, still continue supporting and maintaining the CSV and XML export formats in NetworkMiner Professional and NetworkMinerCLI alongside the new CASE format.

Credits

I would like to thank Sebastian Gebhard and Clinton Page for reporting bugs in the Credentials tab and TFTP parsing code that now have been fixed. I would also like to thank Jeff Carrell for providing a capture file that has been used to debug an issue in NetworkMiner’s OpenFlow parser. There are also a couple of users who have suggested new features that have made it into this release of NetworkMiner. Marc Lindike suggested the powerful deep search of extracted messages and Niclas Hirschfeld proposed a new option in the PCAP-over-IP functionality that allows NetworkMiner to receive PCAP data via a remote netcat listener.

Upgrading to Version 2.2

Users who have purchased a license for NetworkMiner Professional 2.x can download a free update to version 2.2 from our customer portal.

Those who instead prefer to use the free and open source version can grab the latest version of NetworkMiner from the official NetworkMiner page.


Posted by Erik Hjelmvik on Tuesday, 22 August 2017 11:37:00 (UTC/GMT)


