Achieving a complete, comprehensive, credible asset inventory remains elusive for many enterprises in 2020. An often overlooked yet key component to achieving asset inventory “nirvana” is ensuring that the data is as near real-time as possible.
Because IT networks change rapidly, an asset inventory simply isn’t credible if it hasn’t been updated within the last 12 to 24 hours.
An effective inventory promotes efficient IT operations by informing processes and decisions. Because many decisions are made in real-time, current and accurate data is essential. An outdated inventory can lead to incomplete and inconclusive findings, poor decisions based on inaccurate data, loss of time, and overall headaches for technology practitioners.
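The 12-to-24-hour rule above can be sketched as a simple staleness check. This is a minimal illustration, not any product’s actual logic; the field names and the 24-hour threshold are assumptions chosen for the example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative freshness window: a record older than 24 hours is
# treated as too stale for real-time decisions (threshold is assumed).
MAX_AGE = timedelta(hours=24)

def is_stale(last_updated: datetime, now: datetime) -> bool:
    """Return True if an inventory record exceeds the freshness window."""
    return now - last_updated > MAX_AGE

def stale_assets(inventory: dict[str, datetime], now: datetime) -> list[str]:
    """List asset IDs whose records are too old to trust."""
    return [asset_id for asset_id, ts in inventory.items() if is_stale(ts, now)]
```

A GRC or operations team could run a check like this before any process that assumes current data, and refresh or rescan whatever it flags.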
Let’s take a look at just a few examples.
There’s no better example than a security operations analyst triaging a security alert.
Security analysts are overwhelmed by a constant barrage of alerts, and it’s their job to make decisions quickly based on the data at hand. Stale data can lead to a wide variety of outcomes, most of them negative.
When analysts use asset inventory data, they can quickly correlate a network-based sensor alert (the attacker activity) with key target system data (patch, vulnerability, open ports). This leads to an important series of determinations, including the criticality and exploitability of the target host and — most importantly — the overall impact and response of the security team.
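The correlation step described above can be sketched in a few lines. Everything here is hypothetical: the record fields, the priority labels, and the decision rules are invented for illustration, not taken from any vendor’s schema.

```python
# Hypothetical triage sketch: join a network alert with the target
# host's inventory record to judge criticality and exploitability.

def triage(alert: dict, inventory: dict[str, dict]) -> str:
    """Return a triage priority for an alert, using asset context."""
    host = inventory.get(alert["target_ip"])
    if host is None:
        # Unknown asset: nothing can be ruled out, so escalate.
        return "escalate"
    exploitable = (
        alert["cve"] in host["open_vulns"]       # vulnerability still unpatched
        and alert["port"] in host["open_ports"]  # targeted service reachable
    )
    if exploitable and host["critical"]:
        return "escalate"
    if exploitable:
        return "investigate"
    return "close"  # patched or port closed: likely a non-event
```

Note that every branch depends on inventory fields being current: if the patch status or open-ports data is stale, the same logic confidently returns the wrong answer.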
Out-of-date information for applied patches, open ports, and existing system vulnerabilities can lead to false determinations.
Ignoring a potentially dangerous situation can lead to significant downtime, data breach, and financial losses.
Incorrectly escalating an alert to a small, overworked incident response team can waste critical resource time and create frustrating working conditions.
Time is the enemy.
Threat and vulnerability management presents us with another example of the need for a near real-time inventory.
Companies spend considerable resources each year on their vulnerability management programs, and program costs are high. Not only are you looking at the expense of acquiring and maintaining the software; you also have to consider the operational costs of managing the vulnerability scanning cycle, analyzing results, applying patches, and rescanning systems across the environment.
Unfortunately, these programs consistently miss a significant number of devices because most enterprises don’t have the latest status for ephemeral devices like containers, VDIs, and virtual machines.
The number of ephemeral devices in the average environment is rapidly growing. That’s risky, because these devices dynamically appear and disappear — often between inventory system updates.
The lack of threat and vulnerability data for unscanned systems leaves potential avenues for threat actors to gain a foothold. Each missed device represents added risk, diluted value from the scanning tool, and ineffective use of personnel resources.
The impact of stale inventory information shows up big in governance, risk, and compliance (GRC) programs.
Companies spend millions each year on preparation for, participation in, and remediation following compliance audits and assessments. Each audit or assessment is generally scoped to specific quantities and types of systems and devices across the enterprise. When the device inventory is missing devices, has incomplete device characterization, or holds out-of-date device disposition, preparation is typically incomplete, audits tend to have negative results, and staff expend extra time and effort.
Football teams don’t use week five injury reports when preparing for their week 12 opponent. The same principle applies to GRC teams preparing for audits and assessments.
Negative outcomes can be avoided. All it takes is proper planning using the latest information about the in-scope assets.
We already know that traditional asset discovery methodologies are proven to fail.
It’s time for a new approach — and the solution is simple: Every company already has all the asset data they will ever need.
Each data source across the enterprise is constantly being updated by the assets themselves. Every time a device communicates on the network, an agent checks in, a scan is conducted, or a patch is deployed, asset information is created or updated in one or more systems.
The latest information for every single device — whether managed or unmanaged, container, virtual machine or physical machine, whether server, laptop or IoT device — exists in at least one (often many) data sources and technology consoles across the typical enterprise.
This is a certainty, and the key to a paradigm shift in thinking about asset management. Taking advantage of this conceptual approach and frequently polling these data sources will find all the assets and provide a near real-time, complete picture of each individual asset.
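A toy version of that poll-and-merge approach might look like the sketch below. The source shapes, field names, and `last_seen` timestamps are assumptions made for illustration; the point is only that records from multiple sources can be folded into one unified view per asset, with the freshest data winning.

```python
# Toy aggregation sketch: fold per-asset records from several sources
# (e.g. agent console, scanner, patch system) into one inventory.

def merge_sources(sources: list[list[dict]]) -> dict[str, dict]:
    """Build a unified inventory keyed by asset ID, newest fields winning."""
    unified: dict[str, dict] = {}
    # Apply records oldest-first so that newer values overwrite older ones
    # while fields seen by only one source are still retained.
    all_records = sorted(
        (rec for source in sources for rec in source),
        key=lambda rec: rec["last_seen"],
    )
    for rec in all_records:
        unified.setdefault(rec["id"], {}).update(rec)
    return unified
```

Polling each source on a short interval and re-running a merge like this is what keeps the unified picture near real-time rather than days old.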