Server admins must consider potential security problems when deploying infrastructure to ensure it is protected from attackers. For example, admins should harden systems beyond the network perimeter to account for employees who work remotely, and keep data encrypted both in transit and at rest.
David Clinton, a server admin and AWS Certified Solutions Architect, wrote Linux Security Fundamentals to give admins a high-level overview of Linux security best practices. These steps ensure a company’s infrastructure is protected from attackers and stays that way.
“Whether you’re a professional Linux admin, a developer, a data engineer or even just a regular technology consumer, you’ll both be safer and more effective at everything you do if you can understand and apply security best practices,” Clinton wrote.
Editor’s note: This transcript has been edited for length and clarity.
Who do you recommend reads this Linux security book, and what fundamental security lessons should they come away with?
Clinton: I think anybody could use it. It covers a lot of ground; everybody will have seen some of the content before, but I don’t think too many people will be intimately familiar with [the information in] every chapter. And there are some things that everybody should read, just because it’s so desperately important.
Ransomware is exploding, right? It’s a multibillion-dollar industry now — and it’s not disappearing. Ransomware has been increasing exponentially over the last five years. I read a New Yorker article about how there is a whole industry built around ransomware. There are people who make a living working for insurance companies negotiating with the ransomware cartels. And there are people who consult with thousands of companies to beef up their security.
The steps that will protect you from 99% of ransomware attacks are all in my book. And it’s only a two-hour read. It’ll take you longer to implement them all.
My book covers the basics of Linux security best practices that admins need to know. For example, say you’re running a web server or corporate infrastructure. One basic aspect is to have backups and a tested recovery protocol to get those backups in place should something happen. Just that alone will be helpful against ransomware because you can be back up and running quickly. I bricked my own workstation while playing with Python and was back up and running in half an hour thanks to a fresh Ubuntu install from a USB stick and then accessing my archives from [Amazon Simple Storage Service]. If you have a backup and you have a tested recovery protocol, it can save you from almost any mistake or attack. Obviously, if you have a multitiered architecture to recover, it’ll be more complicated — but it can all be done.
You started out as a server admin. What vulnerabilities did you come across the most often?
Clinton: A big problem is too many open ports. You’ll have multiple people working on a single website or deployment; each team has its own applications and test applications. They may try something out and then leave it there. Even though they’ve moved on to some other software, it’s still there, and the port is still open. Any attacker scanning the network is going to see that open port. And, even if the software was properly configured, [attackers] may be able to get in that way.
One way to solve this is to reduce your attack surface by shutting down software when you’re not using it. Have one person, let’s say, in charge of the whole infrastructure, watching and making sure that there’s nothing running that shouldn’t be running. That’s complicated because keeping track of large, sprawling projects is a huge task, and it’s the potential cause of a lot of problems. That’s why regular vulnerability scans are important. Sometimes, companies hire outside organizations to … scan your internet-facing infrastructure and look for open ports that shouldn’t be open, vulnerabilities and unpatched software. This scanning is very valuable but also very expensive. Still, any organization worth more than a half-million dollars really should run vulnerability scans quite regularly.
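The port-audit idea, checking which ports actually accept connections against an approved list, can be sketched with Python's standard library. This is an illustrative example only (the function names and the allowlist are assumptions, and a real scan would use a dedicated tool such as nmap across every internet-facing host):

```python
import socket


def open_ports(host: str, ports: range, timeout: float = 0.2) -> set[int]:
    """Return the subset of ports on host that accept a TCP connection."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.add(port)
    return found


def unexpected(observed: set[int], allowlist: set[int]) -> set[int]:
    """Ports that are open but not on the approved list."""
    return observed - allowlist
```

Anything returned by `unexpected` is exactly the kind of forgotten test service Clinton describes: open, visible to any attacker, and serving no current purpose.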
You mention consultants conducting vulnerability scanning. Would you recommend companies use only external teams, or both internal and external?
Clinton: It’s a mixture. Internal teams … know the system already and should spot bad configurations quicker. You want external consultants to see how vulnerable your infrastructure is to people with no knowledge of your systems. Related to that, you can see how much information is already out there; this is called open source intelligence [OSINT].
There may very well be too much information on the internet about your company’s internal workings that can be accessed freely and legally.
Clinton shares two surprising sources of data for attackers: the career-focused social media site LinkedIn and the tech forum Server Fault.
Another place you can go is Server Fault, a website where developers ask tech questions. Most developers and admins aren’t constantly drawing on their personal memory to code; instead, they often cut and paste from online forums. So, you can follow the questions and answers posted by a company’s employees to work out what software they use. OSINT is so prevalent that there is software that can automate the process of finding out how much information about your company is publicly available.
It makes you realize just how vulnerable your company really is — and that is without phishing attacks added in. [With phishing attacks,] someone can call or email an employee and trick them into revealing their password or other credentials. This is where a lot of ransomware attacks originate: an employee responding improperly to what appears to be a perfectly innocent email.
In your book, you mention the importance of failover backups. Would you say that is the best way to prepare your company against common Linux vulnerabilities, or is it not always the best tool?
Clinton: It depends how urgently you need your system back up and running. If it’s just a workstation, there is less urgency, and you don’t need a failover. But, if it’s a U.S. government website, for example, which needs continuous uptime, then you have no choice but to use a failover system. Failover is a brilliant system. And, with the big cloud providers like AWS, there’s really no excuse not to do it right, especially keeping the failover off-site. If you keep the failover in the same physical location as your production system and something goes wrong at that location, the failover is not going to help you. The cloud really lets you distribute your archives across a lot of external infrastructure.
I use AWS, and you can set up a database replication with their [Relational Database Service] system. You can have a front-end and back-end multitiered website back up and running after only a few minutes following a catastrophic failure. It’s impressive because, if you design it right, it’s almost bulletproof. This would solve a lot of ransomware problems because adversaries can’t hold you for ransom if you have backups and can be up and running again shortly after an attack.
Over your career as a server admin, what is the most important security lesson you’ve learned?
Clinton: Well, I guess it’s the simple security fundamentals: cyber hygiene, patch management, reducing your footprint by closing ports, and using encryption and firewalls. These are all security basics that have been around for decades; you just need to use them. Another aspect to consider is automating your Linux management tasks. There is no way an admin will remember to do a backup every week for more than two weeks. So, automate everything that you can; otherwise, it just doesn’t happen.
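As a rough illustration of that automation advice, here is a hypothetical Python backup job (the names, paths and retention policy are all assumptions) that creates a timestamped archive and prunes old copies, so it can run unattended and the weekly backup never depends on anyone's memory:

```python
import tarfile
import time
from pathlib import Path


def run_backup(source: Path, dest: Path, keep: int = 8) -> Path:
    """Archive source into dest with a timestamped name, then keep
    only the `keep` most recent archives."""
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    # Prune: timestamped names sort chronologically, so the oldest
    # archives are at the front of the sorted list.
    backups = sorted(dest.glob(f"{source.name}-*.tar.gz"))
    for old in backups[:-keep]:
        old.unlink()
    return archive
```

A script like this would typically be scheduled from cron (for example, a weekly entry such as `0 2 * * 0 /usr/bin/python3 /opt/scripts/backup.py`, a hypothetical line), which is exactly the "automate it or it doesn't happen" point.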