About the author
Marco Rottigni is the Chief Technical Security Officer EMEA for Qualys, the cloud platform for IT, security and compliance across an organisation’s global IT assets.
IT security teams have more information at their disposal than ever before. However, the sheer amount of data is not helping to solve problems.
So how can you avoid this problem and keep your teams focused on the biggest priorities? The answers lie in better data consolidation, prioritisation and processes.
Data, data everywhere, but do we stop to think?
First, it’s important to verify the sources of information that you have available today that provide you with IT, security and compliance data.
IT teams with more established processes rely on IT Asset Management (ITAM) or configuration management database (CMDB) systems, while less formalised approaches see data fragmented across a mix of spreadsheets and proprietary databases.
Compliance data is stored mainly in spreadsheets or documents, sometimes originating from auditing or consultancy firms. Some organisations use specialised software to track compliance and execute controls, but this data is often siloed because teams don’t communicate with each other.
Other questions you need to consider include: Do you have too many sources that overlap? Can you consolidate these data sets to make this easier, either by reducing the number of tools you have in place or by bringing the data together in one place?
If you do plan to synchronise multiple data sources together, it is important to confirm that this takes place on a reliable and consistent basis. If this is difficult – or relies on manual work to achieve consistent and timely results – then it might be more convenient and more accurate to consolidate your tools and products where you can. This can simplify the results and help you focus.
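Consolidation of overlapping inventories can be sketched in a few lines. This is a minimal illustration, not any vendor's actual schema: the source names, field names and the "keep the freshest record" rule are all assumptions for the example.

```python
# Hypothetical asset records from two overlapping sources (a CMDB export
# and a vulnerability scanner); field names are illustrative only.
cmdb = [
    {"hostname": "web-01", "os": "Ubuntu 22.04", "last_seen": "2024-05-01"},
    {"hostname": "db-01", "os": "RHEL 8", "last_seen": "2024-04-20"},
]
scanner = [
    {"hostname": "web-01", "os": "Ubuntu 22.04", "last_seen": "2024-05-03"},
    {"hostname": "app-02", "os": "Windows Server 2019", "last_seen": "2024-05-02"},
]

def consolidate(*sources):
    """Merge overlapping inventories, keeping the freshest record per host."""
    merged = {}
    for source in sources:
        for record in source:
            key = record["hostname"]
            current = merged.get(key)
            # ISO-8601 date strings compare correctly as plain strings.
            if current is None or record["last_seen"] > current["last_seen"]:
                merged[key] = record
    return merged

inventory = consolidate(cmdb, scanner)
# web-01 keeps the more recent scanner record; three unique assets remain.
```

Even a simple merge rule like this surfaces the questions that matter: which source wins when records disagree, and how stale a record can be before it is no longer trusted.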
Once you have gone through these sources of data, it’s time to look at how to improve your use of this data. Rather than simply adding more data to the mix, this means looking at the context and the accuracy of your data. In this case, context involves providing you with the right data, filtered to meet a specific goal or requirement; accuracy involves providing more up-to-date information based on what is taking place now, rather than a day or a week ago. Improving accuracy and context can then help you enrich these various data sets.
To achieve this, it’s important to go through your processes for handling and using this data on a day-to-day basis. For example, what does your vulnerable surface prioritisation and remediation process look like today? Is it an effective and efficient approach, or does it require more oversight to provide good results?
Every organisation should strive for accuracy for one simple reason: inaccurate data produces too much information, all of which has to be investigated before it can be dismissed as noise and eliminated.
According to an IDC study, The State of Security Operations, the average security investigation takes one to four hours per incident and involves two SecOps team members. Given the skills shortage in security, the biggest business advantage of accurate data is operational efficiency. Accurate data reduces the number of events to investigate, ensures your team only investigates events that matter and frees up your skilled staff for other tasks.
To achieve greater accuracy and unlock greater operational efficiency, there are a number of sources of data that can be used in tandem, from cyber threat intelligence information for understanding your exposure and exploitability in real time through to IT Asset Management data that can tell you what is installed and the status of those assets in near real time. When combined these two sources can help you gain insight into what new security issues apply to your organisation and how quickly those issues require remediation or where an issue might need another form of mitigation.
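Combining those two sources can be sketched as a simple join between a vulnerability feed and asset inventory. This is a hedged illustration only: the scoring weights, field names and the doubling rule for exploitability are assumptions, not a real prioritisation model.

```python
# Hypothetical join of threat-intelligence and asset-management data;
# all field names and scores below are illustrative assumptions.
vulns = [
    {"cve": "CVE-2024-0001", "asset": "web-01", "exploit_available": True, "cvss": 9.8},
    {"cve": "CVE-2024-0002", "asset": "db-01", "exploit_available": False, "cvss": 7.5},
]
assets = {"web-01": {"criticality": 3}, "db-01": {"criticality": 5}}

def priority(vuln):
    """Weight raw severity by business criticality and real-world exploitability."""
    score = vuln["cvss"] * assets[vuln["asset"]]["criticality"]
    if vuln["exploit_available"]:
        score *= 2  # a working exploit outranks raw severity alone
    return score

ordered = sorted(vulns, key=priority, reverse=True)
```

The point of the sketch is the shape of the decision, not the numbers: neither feed alone ranks these two findings correctly, but joined together they put the exploitable issue first.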
Thinking outside the box
So far, these considerations should help you take a practical approach to managing assets that are connected to the network on a regular basis. However, today’s IT consists of many more devices and services that either don’t join the network frequently or are hosted and managed by third parties. It doesn’t matter whether those services are co-hosted by local organisations or by one of the big public cloud providers, such as Amazon or Microsoft; these are still assets and applications that need to be managed.
For each external platform your company operates or uses, you should have the same granularity of data that you have internally. Equally, this information should be centralised alongside your internal data, so that you can look at everything in context, regardless of which platform is involved. This is essential for achieving a pervasive level of insight across your company’s whole IT landscape.
As more IT moves into the cloud, the volume of data will continue to grow, driven by continuous vulnerability scanning, changes in IT assets and the rapid deployment of new assets over time. Managing all this information in order to spot potential issues is a headache; however, it is essential for working out which items matter most to the business.
Managing this amount of information involves looking at which applications or items are critical to the business, and ensuring they receive attention when any changes or updates take place. There may also be security issues that are so serious that they need immediate attention. By ranking these updates, your team can prioritise their efforts. This data set should also provide alerts for conditions that meet security risk criteria and be searchable for particular issues, so that any unpatched IT assets can be automatically flagged for the team to deal with.
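The flagging step described above amounts to a filter over the centralised data set. A minimal sketch, assuming hypothetical field names for findings and a simple "critical and patchable" alert rule:

```python
# Illustrative findings from a consolidated data set; fields are assumptions.
findings = [
    {"asset": "web-01", "cve": "CVE-2024-0001", "patch_available": True, "severity": "critical"},
    {"asset": "app-02", "cve": "CVE-2024-0003", "patch_available": False, "severity": "low"},
]

def needs_immediate_attention(finding):
    """Example alert rule: a fix exists for a critical issue but isn't applied."""
    return finding["severity"] == "critical" and finding["patch_available"]

flagged = [f for f in findings if needs_immediate_attention(f)]
for f in flagged:
    print(f"ALERT: {f['asset']} has unpatched {f['cve']}")
```

In practice the rule would encode each organisation's own risk criteria, but the mechanism is the same: a searchable data set plus explicit conditions yields automatic flags rather than manual triage.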
Centralisation of data supports the goals of many teams. Ultimately, IT operations and asset management teams, IT security departments and compliance professionals all require the same data about their organisation’s IT landscape. What is different is their perspective and their actions.
Consider, for example, a virtual cloud server instance in an AWS account. To follow these best practices, we would install an agent in the golden image, so that data collection starts the moment any new server instance is generated from it.
For IT staff, the agent will provide valuable information: what resources the server uses; where it is geolocated; when it was last booted; what software is installed; any proprietary or open source software it uses; and any end-of-life information about that software. The security team, in contrast, will want to use this agent data to identify new vulnerabilities, detect signs of compromise and understand whether exploits are available for the vulnerabilities detected. Finally, it should tell the team whether patches are available for remediation and should be deployed.
The compliance team will want to check whether the server is complying with the set of controls included in any applicable audit framework. Examples here would include PCI DSS for payment card data or data covered by GDPR guidelines.
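The same agent record serves all three teams; only the slice each team reads differs. The shape below is purely illustrative, assumed for this example rather than taken from any vendor's actual agent schema:

```python
# Illustrative shape of the data a single endpoint agent might report;
# every field name here is an assumption, not a real product schema.
agent_report = {
    "instance_id": "i-0abc123",
    "region": "eu-west-1",
    "last_boot": "2024-05-01T08:30:00Z",
    "software": [
        {"name": "openssl", "version": "1.1.1k", "end_of_life": True},
    ],
    "vulnerabilities": [
        {"cve": "CVE-2024-0001", "exploit_available": True, "patch_available": True},
    ],
    "compliance": {"pci_dss": {"controls_passed": 42, "controls_failed": 3}},
}

# Each team filters the same record from its own perspective.
it_view = {k: agent_report[k] for k in ("region", "last_boot", "software")}
security_view = [v for v in agent_report["vulnerabilities"] if v["exploit_available"]]
compliance_view = agent_report["compliance"]
```

One record, three perspectives: this is the practical meaning of a shared source of truth.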
As we’ve illustrated, you can help all these teams achieve greater consistency for all the processes involved by creating a single central ‘source of truth’ based on IT asset data while also minimising the required effort for data processing and propagation.
Similarly, this data is very useful for managing other stakeholders within the business when it comes to security issues. When high-profile publications share stories of the latest security breaches or hacks, the number of people interested in security will go up. Being able to provide them with information proactively on these issues – from whether they are issues at all, through to specific data on remediation plans – will go a long way to ensuring that everyone feels confident in the organisation’s security plans. Even when security issues are not pressing, this goes a long way towards building an accurate perception of what actually needs to be remediated.
Looking at the bigger picture around IT asset data
Managing security relies more and more on data. Without this insight, it becomes increasingly difficult to prioritise issues and ensure that all IT assets are secure. However, dealing with the volume of data created across IT is its own problem, if you don’t have the right tools at your disposal.
There may be existing sets of data across the business created by teams all looking to meet their own goals, but building a single source of truth that is accurate and can underpin all these use cases is more efficient. Going back to our previous example of a cloud server instance, we can avoid duplicated manual work when IT decides to decommission a server that is no longer needed. Instead of updating a menagerie of spreadsheets and databases across departments, one change on a centralised platform instantly updates every relevant team: the server disappears from IT’s inventory, its associated risk is removed from the security dashboard, and compliance reporting updates automatically.
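That propagation can be sketched as a shared registry that every team's view reads from, so a single decommission call updates all perspectives at once. The class and method names below are hypothetical, chosen only to illustrate the pattern:

```python
# A minimal sketch of a shared asset registry; names are illustrative.
class AssetRegistry:
    def __init__(self):
        self.assets = {}         # asset_id -> metadata (IT's view)
        self.open_findings = {}  # asset_id -> security findings (Security's view)

    def register(self, asset_id, metadata):
        self.assets[asset_id] = metadata
        self.open_findings.setdefault(asset_id, [])

    def add_finding(self, asset_id, finding):
        self.open_findings[asset_id].append(finding)

    def decommission(self, asset_id):
        # One change, visible to IT, security and compliance alike.
        self.assets.pop(asset_id, None)
        self.open_findings.pop(asset_id, None)

registry = AssetRegistry()
registry.register("srv-42", {"owner": "finance"})
registry.add_finding("srv-42", "CVE-2024-0001")
registry.decommission("srv-42")
# The server vanishes from the inventory and its risk from the security view.
```

The design choice worth noting is that the team views hold no copies of the data; they query the registry, which is why nothing needs to be synchronised after the decommission.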
Centralising all this data and getting a single viewpoint on all IT assets – regardless of where they are at any one point in time – is therefore essential. Consolidating this data should also make it easier to manage, analyse and search through information on assets, software and installed updates. Rather than a morass of data, this should provide you with a more detailed picture of all the security changes that matter and the priorities based on your real-world environment.