Monitoring network file stores by analysing network traffic!
Network-based file stores have been around for quite some time now, and they continue to be a popular way to share data within organizations. While cloud-based services such as Dropbox and Office 365 are widely used, network-based file stores will be around for a long time.
There are many reasons why organizations choose to store their data locally on their own networks. For many, it comes down to the security risks of storing confidential data outside the network. For others, it is the convenience of locally stored data, which can be accessed easily and remains available even if Internet connectivity is lost.
However, network-based file stores have become the number one target for Ransomware attacks. All it takes is one infected client to encrypt all of the data on your network file shares. For this reason alone, it is vital that you have some level of visibility into what is happening on your network file stores. From my own experience, I know of three approaches:
- Agent/client-based software solutions
- Native logging on file server
- Network traffic analysis
I am not going into any detail on the agent/client options as they are very vendor specific, and I don't know of any that does not impact file server performance.
Server-based file auditing
Native logging on servers may be a viable option if you manage a very small network. Microsoft provides a way to audit access to specific files or folders hosted on Windows servers. The logging can be very noisy: you can get hundreds of individual events logged when a user accesses a single file. The screen-grab below shows a sample event identifying the user who accessed the file.
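To give a flavour of what working with these events looks like, here is a minimal sketch of pulling the user name and file path out of a Windows Security file-access event (event ID 4663). The XML below is a trimmed, illustrative sample rather than a complete event, but the `SubjectUserName` and `ObjectName` field names are the ones you will find in real 4663 events.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative sample of a Windows Security event 4663
# ("An attempt was made to access an object") - not a full event.
SAMPLE_EVENT = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>4663</EventID></System>
  <EventData>
    <Data Name="SubjectUserName">jsmith</Data>
    <Data Name="ObjectName">\\\\fileserver\\share\\budget.xlsx</Data>
    <Data Name="AccessMask">0x1</Data>
  </EventData>
</Event>"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_access_event(xml_text):
    """Return (user, file path) from a 4663 event's EventData fields."""
    root = ET.fromstring(xml_text)
    fields = {d.get("Name"): d.text for d in root.findall(".//e:Data", NS)}
    return fields.get("SubjectUserName"), fields.get("ObjectName")

user, path = parse_access_event(SAMPLE_EVENT)
print(user, path)
```

Note that nothing in the event data tells you *where* the access came from, which leads to the problem described next.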
One piece of information it does not provide is the client IP address. This makes it more difficult to track down which network device is accessing the data - crucial information if you want to find a Ransomware-infected client and disconnect it from the network.
File auditing using network traffic analysis
Microsoft server-based file shares are the most common, followed by NFS file shares, which you typically find on networks with UNIX clients. Microsoft servers use the SMB protocol to communicate with clients. Both SMB and NFS carry the file actions (read, write, delete, rename, etc.) as well as the actual file data itself.
The screengrab below shows a sample NFS packet; the associated file name is clearly visible in the packet payload. This file name is an example of metadata: a small, structured subset of a complete packet capture.
If you want to monitor activity on network file shares, you just need to extract this metadata from the network packets going to and from your file servers. The major advantage of this approach is that you don't need to install anything on your clients or servers; you just monitor the network traffic.
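As a toy illustration of the idea, SMB2 encodes file names as UTF-16LE strings in the packet payload, so even a crude scanner can spot candidate file names in captured bytes. The payload below is fabricated for the example; real analysis tools parse the SMB/NFS protocols properly rather than pattern-matching like this.

```python
import re

def extract_utf16_strings(payload, min_chars=4):
    """Return candidate UTF-16LE strings of at least min_chars characters.

    Looks for runs of printable ASCII characters each followed by a NUL
    byte, which is how SMB2 lays out file names on the wire.
    """
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_chars)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(payload)]

# Fabricated SMB2-like payload: a few header bytes wrapped around a
# UTF-16LE encoded file name.
payload = (b"\xfeSMB\x00\x00"
           + "finance\\budget.xlsx".encode("utf-16-le")
           + b"\x00\x00")
print(extract_utf16_strings(payload))  # ['finance\\budget.xlsx']
```

A production system would decode the protocol fields (SMB2 CREATE requests, NFS LOOKUP calls and so on) to recover the action as well as the file name, but the principle is the same: the metadata is sitting in the packets already.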
There are many ways to get a source of network packets on your network. The most popular are TAPs, SPAN/mirror ports, and dedicated network visibility appliances. Flow-based options (NetFlow, sFlow and others) will not work, as they typically don't export packet content data. In the rare cases where they do, the traffic is sampled, so you will be missing a lot of data.
Once you have your data source in place, you need to connect it to a network traffic analysis system which can extract the relevant metadata from the network packets. There are many options out there, like LANGuardian - which I will admit we develop! Systems like LANGuardian store this metadata so that you have both a real-time and a historical audit trail. Why not check out our online demo at this link and see for yourself how you can get an audit trail of file activity from network packet data.
To finish off, a tip for Ransomware monitoring: no matter which file activity monitoring option you choose, watch out for the rate of file renames. While rename is a normal action, it is not one that network users perform frequently. When Ransomware strikes, the rename rate will increase significantly, so configure your monitoring tool to alert on it. Learn more with this super video - https://www.youtube.com/watch?v=NqbpEvO1zGw
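The rename-rate idea above can be sketched as a simple sliding-window counter: track rename events per client and flag any client that exceeds a threshold within the window. The window size and threshold here are illustrative assumptions; you would tune both to the normal rename rate on your own network.

```python
from collections import defaultdict, deque

class RenameRateMonitor:
    """Flag clients whose rename rate exceeds a threshold in a time window."""

    def __init__(self, window_seconds=60, threshold=50):
        self.window = window_seconds      # sliding window length, seconds
        self.threshold = threshold        # renames allowed per window
        self.events = defaultdict(deque)  # client IP -> rename timestamps

    def record_rename(self, client_ip, timestamp):
        """Record one rename; return True if this client looks suspicious."""
        q = self.events[client_ip]
        q.append(timestamp)
        # Drop timestamps that have fallen outside the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.threshold

mon = RenameRateMonitor(window_seconds=60, threshold=50)
# An infected client renaming a file every second soon trips the alarm.
alerts = [mon.record_rename("10.0.0.42", t) for t in range(120)]
print(any(alerts))
```

Because the monitor keys on client IP, an alert immediately tells you which device to disconnect - exactly the detail that server-side event logs lack.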
Darragh Delaney is the Director of Technical Services at NetFort. Darragh is Cisco CCNA certified and has extensive experience in the IT industry, having previously worked for O2 and Tyco before joining NetFort in 2005. As Director of Technical Services and Customer Support, he interacts on a daily basis with NetFort customers and is responsible for the delivery of a high-quality technical and customer support service.
Editor's comment - Darragh and the NetFort team are leaders in metadata capture and visualization. Look for the next article, "What is Network Metadata", from John Bronson of NetFort. Example - http://apps.americanbar.org/lpm/lpt/articles/tch06061.shtml
Metadata is rapidly becoming the best network analysis method and has been accepted by courts in e-discovery processes.