
    What you'll read:

    • Definitions and context for incident response terminology and the stage each term represents
    • Why it's important to have clear definitions when it comes to incident response

     

    Differentiating between various signals and threats in cybersecurity can be as nuanced as trying to understand the body’s response to pain. Imagine this: you wake up one morning with muscle soreness. On its own, this could be a harmless result of a new workout routine – hardly cause for alarm. However, in the right context, this minor symptom might hint at something more serious, perhaps a sign of an underlying health issue that demands a closer look.

    In cybersecurity, we approach challenges with a systematic process like that of a medical doctor. A doctor methodically examines symptoms, orders and interprets tests to diagnose an underlying cause, and, if necessary, refers patients to a specialist. Similarly, security experts navigate through a complex landscape, dealing with events, indicators, investigations, incidents, and breaches. Each of these terms represents a different stage in understanding and responding to a potential threat. Highlighting and documenting the differences between these stages is critical for building a security operations function.

    This post will guide you through defining these terms, unraveling their differences to help you set up a clear framework for when your organization’s cybersecurity disclosure obligations may come into play.

    Something happened, but does it matter?

    At the foundational level, NIST describes an event as “any observable occurrence in a network or system.” Security Operations teams often aggregate events via Security Information and Event Management (SIEM) systems and rely on this event dataset for everything from proactive research and measurement to real-time monitoring and forensics. Many teams also choose to collect details about the state of their assets – via asset inventories and other posture management tools – and generate events from these systems by comparing the states over time.
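    As a concrete illustration, here is a minimal sketch of how state data can be turned into events by diffing two inventory snapshots. The snapshot layout, field names, and event shapes are assumptions for illustration only, not the format of any particular tool.

```python
# Minimal sketch: deriving events from asset-state changes over time.
# The snapshot layout and field names are illustrative placeholders,
# not taken from any specific asset-inventory or posture tool.

def diff_snapshots(previous: dict, current: dict) -> list[dict]:
    """Compare two asset-state snapshots and emit one event per change."""
    events = []
    for asset_id, new_state in current.items():
        old_state = previous.get(asset_id)
        if old_state is None:
            events.append({"asset": asset_id, "type": "asset_added", "state": new_state})
        elif old_state != new_state:
            events.append({
                "asset": asset_id,
                "type": "state_changed",
                "before": old_state,
                "after": new_state,
            })
    for asset_id in previous.keys() - current.keys():
        events.append({"asset": asset_id, "type": "asset_removed"})
    return events


# Example: a new listening port appears on a host between inventory runs.
yesterday = {"host-01": {"open_ports": [22, 443]}}
today = {"host-01": {"open_ports": [22, 443, 3389]}}
print(diff_snapshots(yesterday, today))
```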

    Because raw event or state data is of limited value on its own until it is filtered for a specific purpose, SIEMs and asset inventories usually include some form of “rules,” which can surface certain types of events in the form of alerts. These suspicious events surfaced via alerts are indicators. They are akin to early symptoms with interesting context, like muscle soreness paired with a confirmation of no recent physical activity. Per NIST, indicators are signals that an attack is imminent or currently underway, and they warrant further attention. In particular, “indicators of compromise” (IOCs) – forensic artifacts from previously confirmed intrusions – are commonly used to compare internal events against known bad activity.
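    The sketch below shows one such rule in miniature: it compares raw events against a small list of known-bad IP addresses and surfaces matches as alerts. The IOC list, event fields, and rule name are hypothetical placeholders, not the syntax of any particular SIEM.

```python
# Minimal sketch: a detection "rule" that compares raw events against a
# set of known-bad IP addresses (IOCs) and surfaces matches as alerts.
# The IOC list and event fields are hypothetical placeholders.

KNOWN_BAD_IPS = {"203.0.113.10", "198.51.100.7"}  # stand-in for an IOC feed

def ioc_match_rule(events: list[dict]) -> list[dict]:
    """Return an alert for every event whose destination IP is a known IOC."""
    alerts = []
    for event in events:
        if event.get("dest_ip") in KNOWN_BAD_IPS:
            alerts.append({
                "rule": "outbound-connection-to-known-bad-ip",
                "indicator": event["dest_ip"],
                "event": event,
            })
    return alerts


events = [
    {"src_ip": "10.0.0.5", "dest_ip": "203.0.113.10", "action": "allowed"},
    {"src_ip": "10.0.0.8", "dest_ip": "93.184.216.34", "action": "allowed"},
]
print(ioc_match_rule(events))  # only the first event becomes an alert
```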

    If it matters, what does it mean?

    Once indicators are identified, investigations commence to further contextualize them. This process is akin to a doctor analyzing the results of the tests ordered based on observed symptoms. Comparing internal indicators with external IOCs, or conducting deeper queries via aggregation systems like SIEMs or asset inventories, helps analysts understand the nature of the events the alerts surfaced. Investigations vary in depth and duration, aiming to clarify the context and relevance of the indicators, with the ultimate goal of confidently explaining the activity as benign.

    Finally, an investigation escalates to an incident when it confirms a certain or likely adverse impact on the confidentiality, integrity, or availability of systems. This stage should trigger specific response protocols, as outlined in guidance like NIST SP 800-61.
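    As a rough illustration of how that escalation threshold might be encoded, the sketch below flags an investigation as an incident when any of confidentiality, integrity, or availability has a likely or confirmed adverse impact. The data structures and impact levels are assumptions for illustration, not definitions taken from NIST SP 800-61.

```python
# Minimal sketch: encoding the escalation threshold described above as a
# simple check. The Impact levels and Investigation fields are illustrative.

from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    NONE = "none"
    LIKELY = "likely"
    CONFIRMED = "confirmed"


@dataclass
class Investigation:
    summary: str
    confidentiality: Impact = Impact.NONE
    integrity: Impact = Impact.NONE
    availability: Impact = Impact.NONE


def is_incident(inv: Investigation) -> bool:
    """Escalate when any CIA property has a likely or confirmed adverse impact."""
    return any(
        impact in (Impact.LIKELY, Impact.CONFIRMED)
        for impact in (inv.confidentiality, inv.integrity, inv.availability)
    )


inv = Investigation(summary="Suspicious outbound data transfer",
                    confidentiality=Impact.LIKELY)
print(is_incident(inv))  # True -> trigger the incident response process
```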

    What about a “breach”?

    Because certain types of incidents are defined in (ever-changing) regulations, it’s important to include a legal expert’s opinion on the nature of an incident, including any requirement for public or other notification. Similar to how your general practitioner might refer you to a specialist to make or confirm a diagnosis, a company’s security team is unlikely to possess the legal knowledge required – especially given specific jurisdictional context – to determine whether an incident constitutes a data breach.

    The benefit of clear definitions

    With a growing number of external forces mandating different degrees of cybersecurity disclosure, organizations now face the challenge of defining their own obligations in the face of securities or privacy regulations, or even private contracts with customers. The lack of clear definitions or thresholds in many of these rules means companies need to establish their own internal thresholds and decision-making processes to understand the flow of security operations, including the lifecycle of an event through its possible evolution to a formal incident.

    Developing your own process and definitions can invite skepticism, and you may eventually find yourself having to defend your approach. But defending a well-thought-out approach is a far better position to be in than having to explain why you have no approach at all.
