Don’t Forget The Basics of Network Security (Part 2)

April 26, 2017, by Jamie Gillespie | Category: Cloud Services

In an earlier article, we looked at network security basics and how we can borrow guidance from the PCI Data Security Standard to make sure our firewall rules are documented and reviewed on a regular basis. This article continues our look at the basics of information security, with a focus on the server side.




Patching

It’s so important that the ASD Top 4 mentions patching twice: once for applications and once for operating systems. From a security perspective, if there is a known vulnerability, nothing beats applying the patch that removes it. I won’t go into depth here, as hopefully the concept is already well understood.





Backups

If your data is even remotely important, you need to ensure you have regular backups and that at least some of them are kept offline, where they cannot be affected by the same misadventures that may impact the live data on your servers. Like Schrödinger’s cat, if you don’t regularly test your backups by performing a restore, then your backups are both valid and corrupt at the same time. Murphy’s Law, of course, gives worse odds: untested backups will always turn out to be corrupt when you need them the most.
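The restore test doesn’t need to be elaborate to be valuable. As a minimal sketch in Python (the file names and paths here are illustrative): archive a file, restore it to a separate location, and compare checksums to prove the backup is actually usable.

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so the original and the restored copy can be compared."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, work_dir: Path) -> bool:
    """Archive `source`, restore it elsewhere, and confirm the bytes match."""
    archive = work_dir / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)

    restore_dir = work_dir / "restore"
    restore_dir.mkdir()
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(restore_dir)  # in production, also validate paths/permissions

    restored = restore_dir / source.name
    return sha256(source) == sha256(restored)

# Back up a file, restore it, and prove the restore is byte-identical.
with tempfile.TemporaryDirectory() as tmp:
    tmp_path = Path(tmp)
    data = tmp_path / "important.dat"
    data.write_bytes(b"critical business data")
    result = backup_and_verify(data, tmp_path)
    print(result)
```

The same idea scales up: a scheduled job that restores a sample of last night’s backup and compares checksums turns “we have backups” into “we have backups that restore”.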





Logging

Worthy of an entire article on its own, both system and application logs contain a wealth of information for multiple purposes. Application logs should be descriptive enough to provide not just information for debugging errors, but also insight into security-relevant successes and failures that would otherwise go unnoticed at the operating system level. If they aren’t, it may be worth having a word with the developers so they understand the security benefits of descriptive logs. Operating system logs are quite verbose by default, which leads to the next challenge: managing these logs.

The benefits of centralised logging far outweigh the effort of setting it up. Servers and applications can be configured to keep a local rolling log while forwarding a copy of each log entry to a central server. Once disparate logs are together in one location, you can start looking for activity that would normally fly under the radar on a single server but is actually part of a coordinated, distributed attack across the entire server fleet. You can also filter for only the useful or interesting log entries and forward them on to a SIEM for deeper analysis. This filtering is an important step, as most SIEM products are licensed based on the volume of ingested logs.
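As a sketch of the local-plus-forwarded pattern, an rsyslog drop-in along the following lines (the file path and hostname are placeholders) leaves local logging untouched while sending a copy of every entry to a central collector:

```
# /etc/rsyslog.d/50-forward.conf -- illustrative only
# Local logging continues as configured elsewhere; this only adds forwarding.
*.*  @@loghost.example.com:514
# A single @ forwards over UDP; @@ uses TCP, so entries are less likely to be
# silently dropped in transit.
```

Filtering which facilities and severities are forwarded on to the SIEM can then be done on the central collector, keeping licensing costs under control.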





Time Synchronisation

Something commonly overlooked is ensuring all servers and network devices use a common source for time synchronisation. While on the surface this doesn’t appear to be security related, not having a common time reference makes correlating logs between disparate systems difficult, if not impossible. Synchronised clocks are also a prerequisite for time-based one-time passwords (TOTP), which use the current time as part of the input to a cryptographic hash function.
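To see why TOTP depends on clock accuracy, here is a minimal RFC 6238 sketch in Python using the RFC’s published test secret: the code is an HMAC over the current 30-second time window, so two clocks in the same window agree, while a clock skewed into a neighbouring window produces a different code.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time window."""
    counter = unix_time // step           # both ends derive this from their clocks
    msg = struct.pack(">Q", counter)      # 8-byte big-endian window counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F            # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # the SHA-1 test secret from RFC 6238

# Clocks inside the same 30-second window (t=30..59) produce the same code...
same_window = totp(secret, 59) == totp(secret, 30)
# ...but a clock skewed into the next window does not.
next_window = totp(secret, 61)
print(totp(secret, 59), next_window, same_window)
```

If a server’s clock drifts by more than the validation window the authenticator allows, every TOTP login from that server’s perspective fails — another reason to point everything at a common NTP source.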

Internet Access

While we previously talked about firewalls from a network perspective, you also need to think about firewalling traffic from the server’s perspective. Servers typically don’t need unfettered access to the Internet and should be restricted to the minimum access required. Servers should use internal update servers (e.g. Windows Server Update Services or Red Hat Satellite) and have access to a secure jump server or bastion host for moving data in and out. You can then allow very limited access to official websites for application updates that cannot be proxied internally. Servers with excessive Internet access are commonly used by employees for non-business-related web browsing, which can lead to accidental infection and compromise of the server. And if a server is compromised by any means, open Internet access allows the attacker to easily exfiltrate data to any number of public file transfer services.
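As an illustration of default-deny egress, an iptables-save style fragment might look like the following (all addresses and ports are placeholders for your own update, Satellite, and bastion hosts):

```
# Illustrative egress policy: drop outbound by default,
# then allow only the destinations the server actually needs.
*filter
:OUTPUT DROP [0:0]
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Internal WSUS (10.0.0.10) and Red Hat Satellite (10.0.0.20)
-A OUTPUT -d 10.0.0.10 -p tcp --dport 8530 -j ACCEPT
-A OUTPUT -d 10.0.0.20 -p tcp --dport 443 -j ACCEPT
# Jump/bastion host for moving data in and out
-A OUTPUT -d 10.0.0.30 -p tcp --dport 22 -j ACCEPT
COMMIT
```

A real policy would also need rules for DNS and a matching INPUT chain; the point is simply that outbound traffic is denied unless there is an explicit, documented business reason for it.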




Extraneous Services

Operating systems and applications come loaded with additional services, demo content, and configuration options that are not required to operate a production server. Many of these extras also contain vulnerabilities that increase your attack surface and overall risk exposure. Perform a systematic inspection of everything on your servers and remove or disable anything not absolutely required for the server to fulfil its business function. This should be done for production servers at a minimum, preferably before they go into production. Remember, of course, that vulnerable test or development servers can still be compromised and used as a stepping stone to pivot and attack other servers internally, so don’t forget them in your scanning and mitigation activities. In the same vein, inspect all default configurations to ensure your servers (and network devices) are not vulnerable by default.
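One way to start that inspection is to enumerate what is actually listening. A small Python sketch (Linux-only, reading /proc/net/tcp directly and covering IPv4 only) lists listening TCP ports so each one can be matched against a documented business requirement:

```python
# Sketch: enumerate listening TCP sockets as a first pass at spotting
# extraneous services. Linux-specific: parses /proc/net/tcp directly.
from pathlib import Path

def listening_ports(proc_file: str = "/proc/net/tcp") -> set[int]:
    """Return local TCP ports currently in the LISTEN state (state 0x0A)."""
    ports = set()
    for line in Path(proc_file).read_text().splitlines()[1:]:
        fields = line.split()
        local_addr, state = fields[1], fields[3]
        if state == "0A":  # TCP_LISTEN
            # local_address is hex "ADDR:PORT"; the port is after the colon
            ports.add(int(local_addr.split(":")[1], 16))
    return ports

print(sorted(listening_ports()))
```

Anything in that list without a documented reason to exist is a candidate for disabling — and the same enumerate-then-justify approach applies to installed packages, enabled services, and default configuration options.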

Hopefully this has given you some guidance for improving your server security, or at least served as a reminder to skilled practitioners not to forget the basics when being bombarded with new acronyms and silver-bullet solutions.

If you have any suggestions for improving security basics, or want other areas covered in future articles, please let us know.


Jamie Gillespie

About the author.

Security Architect Jamie Gillespie is responsible for developing and enhancing Macquarie Cloud Services’ cyber security solutions and mission-critical infrastructure. With more than 16 years of experience building CSIRT capabilities worldwide and extending security operations for a large multinational, Jamie is passionate about all areas of security, from physical to human.

See all articles by this author