Skip to content
    Search

    What you'll read:

    • The top takeaways from CISA-NCSC's recent guidance for the secure development and use of artificial intelligence.
    • What the guidance signals about the increasing role AI will play in the development of new tools and solutions. 

    Artificial intelligence is the most headline-grabbing technology of the day. As companies quickly move to incorporate AI into their workflows and product offerings, a subset of technology industry professionals are urging caution. AI, in its current state, is nascent. And while it’s filled with potential, that potential could be good and bad. 

    To address these concerns, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) recently released a set of guidelines for secure development of AI-based products. The guidelines emphasize the importance of sticking to CISA’s Secure by Design principles and are meant to apply to any system that uses AI. 

    How did we get here?

    Since the public launch of ChatGPT in November 2022, it’s been hard to escape talk about how artificial intelligence (AI) is going to be life-changing, industry-changing, and hugely revenue-generating. Corporations big and small, across nearly every industry, are rushing toward an “AI-first” approach to business growth. But even as numerous organizations embrace AI, an equal number have raised questions about its risks. In fact, one survey conducted by the American Psychological Association found that nearly four out of 10 U.S. employees expressed concern that AI might take some, or all, of their job duties in the future.

    Regardless, businesses are forging ahead. Funding for startups that use AI or claim to have a product to secure AI are the only startups still raking in impressive investments – and for good reason. In cybersecurity, AI has already proven to be hugely beneficial in detecting unusual behavior and helping practitioners spot potential cyber attacks before they occur. At Axonius, we’ve incorporated AI in certain areas of our platform to help our customers extract actionable insights. For instance, our Query Assistant is enabled by ChatGPT to help translate users’ natural language questions into Axonius queries, streamlining query creation and helping users save time and increase productivity.

    However, with the rising interest in incorporating AI into tools and solutions, it’s no surprise that lawmakers are starting to issue guidance and propose legislation for the secure development and use of AI. The recent guidance issued by CISA and NCSC is the most notable (and possibly most thorough) to date. 

    Why CISA-NCSC’s guidance is important

    According to the published guidelines, the aim is to assist “providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others.” The primary stated focus of the guidance is “providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs).”

    The guidance, itself, is laid out in four sections meant to span the development lifecycle. They are:

    1. Secure design
    2. Secure development
    3. Secure deployment
    4. Secure operation and maintenance

    You can read the entire document here. But the guidance (which, if you haven’t had a chance to read closely, was “co-sealed by 23 domestic and international cybersecurity organizations”) is significant for several key reasons:

    1. Data Privacy and Security: “AI security” is simply data security — only at a much greater scale and speed. To iterate on why this is important: the new guidelines help ensure that the systems used to generate AI models and the sensitive data used in AI algorithms are secured with multi-layered controls which help protect individuals’ and organizations’ privacy. However, as this guidance is specific to AI development and use, the publication outlines steps for staff awareness of AI-specific threats and risks; the need for threat modeling; assessing the appropriateness of AI system design choices and training models; system monitoring, testing, and documentation; and incident management procedures, just to name a few important processes and procedures.
    2. Robustness and Resilience: Adding on to the Secure by Design principles of the previous bullet, the guidelines state that AI systems and algorithms should be resilient to adversarial attacks and unexpected disruptions. The security principles outlined in the document help developers build resilient systems that can withstand many types of attempts at compromise.
    3. Accountability and Transparency: One of the primary elements of CISA’s Secure by Design principles is embracing “radical transparency and accountability.” While AI providers might worry about exposing intellectual property, they must be forthright about their strategy for and execution of AI models so that individuals and organizations adversely affected by AI (whether that’s copyright infringement, unauthorized data disclosure, or a whole host of other nastiness) have some recourse in the event of a compromise. The guidance says that builders should “release models, applications, or systems only after subjecting them to appropriate and effective security evaluation such as benchmarking and red teaming” and that they “are clear to [your] users about known limitations or potential failure modes.” 
    4. Global Standards: Although CISA and the NCSC were the primary parties responsible for these AI guidelines, many international cybersecurity organizations cooperated on the effort. This type of global effort underscores the necessity for standardization — something missing for much other cybersecurity guidance — and fosters consistency and interoperability across different AI systems. 
    5. Trust and Acceptance: One of the deepest concerns about AI is that it can’t be trusted to protect human interests. As such, the guidelines help providers and users think through how to build systems that are hardened to security threats, misuse, and abuse. Trust is essential for widespread acceptance and adoption of AI technologies, and these guidelines contribute to the establishment of trust and reliability.
    6. Regulatory Compliance: More AI-specific regulatory compliance is likely coming in the new year. Data protection and data privacy laws already exist for numerous industries and geographies. AI protection laws will be similar and, perhaps, even more stringent. Companies that adhere to these guidelines will increase their preparedness for demonstrating compliance when the time comes, and possibly even more importantly, have greater capability to defend against compromises that endanger individuals and organizations.

    What happens next?

    The joint guidance from CISA, the NCSC, and their partners is a big deal because it sets an early precedent. Too often, technology innovators forgo security and privacy implications to speed up delivery, leaving security teams with the arduous task of playing catch-up. 

    Those who take CISA-NCSC’s suggestions and principles into account as they build their AI tools will be better positioned to mitigate risks, protect privacy, and foster trust among users and stakeholders. Because this guidance was formed on a global level, it should help standardize expectations and keep AI providers’ minimum viable on a level playing field.

    But as the guidance suggests, even after incorporating Secure by Design principles into the development cycle, it’s still important to monitor system behavior and input. If you don’t have a good grasp on which users are accessing which devices, software, SaaS apps, and more, then it’ll be that much harder to spot potential intrusions.

     

    Sign up to get first access to our latest resources