The rise of generative AI-powered tools promises a significant surge in productivity. However, the tech leaders deploying these tools are still grappling with the cybersecurity vulnerabilities that come with them.
Take, for instance, Microsoft’s Copilot. This generative AI feature is swiftly becoming a staple within Microsoft’s enterprise software suite. As such technologies become more embedded, it’s incumbent upon corporate leaders to gauge what these new functions entail, especially from a security standpoint.
Historically, companies have leaned on detailed inventories to manage their supply chains, ensuring a clear understanding of each product’s origin. In the software realm, there is a growing emphasis on creating a “software bill of materials”: documentation that gives an in-depth look at a piece of software’s makeup, from open-source elements to proprietary components.
Such listings aim to help companies discern the intricate components inside their software, which in turn makes it easier to identify potential security flaws, such as the infamous Log4j vulnerability, and address them more efficiently. The extensive 2020 cyber breach that spread through compromised SolarWinds software pushed many businesses to rethink their reliance on third-party software providers.
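As a rough illustration of how such an inventory can help, the sketch below checks a simplified, hypothetical bill of materials for Log4j releases in the range affected by the 2021 Log4Shell flaw (CVE-2021-44228). The component list, helper names, and version threshold are illustrative assumptions, not a real SBOM tool or a complete advisory.

```python
# Illustrative sketch only: a hand-rolled check over a simplified bill of
# materials. Component data and helper names are hypothetical.

# Log4j 2 versions up to and including 2.14.1 were affected by CVE-2021-44228;
# the threshold here is illustrative rather than a full advisory.
VULNERABLE_LOG4J_MAX = (2, 14, 1)

def parse_version(version: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) for simple comparisons."""
    return tuple(int(part) for part in version.split("."))

def flag_vulnerable_components(sbom: list) -> list:
    """Return any log4j-core entries at or below the vulnerable version."""
    flagged = []
    for component in sbom:
        if component["name"] == "log4j-core":
            if parse_version(component["version"]) <= VULNERABLE_LOG4J_MAX:
                flagged.append(component)
    return flagged

# Hypothetical inventory of the kind an SBOM would enumerate.
inventory = [
    {"name": "log4j-core", "version": "2.14.1", "supplier": "Apache"},
    {"name": "internal-billing", "version": "5.2.0", "supplier": "in-house"},
]

for hit in flag_vulnerable_components(inventory):
    print(f"Review needed: {hit['name']} {hit['version']} from {hit['supplier']}")
```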
Analysts argue that the intricate nature of large language models makes them daunting to audit comprehensively. Jeff Pollard from Forrester Research voiced the anxiety of many in the sector, pointing to the lack of clarity and control in some of these AI features.
Further, David Johnson of Europol noted at a Brussels conference that generative AI can inadvertently introduce security vulnerabilities, especially if the underlying models were trained on flawed code.
Emerging startups like Protect AI are capitalizing on this burgeoning interest in generative AI. They offer services that allow businesses to monitor the ingredients of their bespoke AI systems, flagging potential security breaches and unauthorized code insertions.
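The idea of tracking an AI system’s “ingredients” can be made concrete with a simple integrity check. The generic sketch below, which is not a depiction of Protect AI’s products, records hashes of model and code artifacts at review time and later compares the deployed files against that manifest to surface unexpected changes; the directory and file names are hypothetical.

```python
# Generic illustration of verifying AI pipeline artifacts against a manifest;
# not based on any vendor's product. Paths and file names are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so later tampering or substitution is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(artifact_dir: Path) -> dict:
    """Record hashes of model weights, code, and configs at approval time."""
    return {p.name: sha256_of(p) for p in artifact_dir.iterdir() if p.is_file()}

def audit(artifact_dir: Path, approved: dict) -> None:
    """Compare the currently deployed artifacts against the approved manifest."""
    for name, expected in approved.items():
        actual = sha256_of(artifact_dir / name)
        status = "OK" if actual == expected else "ALERT: artifact changed since review"
        print(f"{name}: {status}")

# Usage sketch:
# approved = build_manifest(Path("reviewed_ai_system"))
# audit(Path("deployed_ai_system"), approved)
```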
Bryan Wise, CIO at 6sense, suggests that a rigorous vetting process is vital before embracing new AI functionalities. Questions regarding data usage and assurances that data isn’t used to refine external models are becoming paramount for most CIOs. Established vendors, such as Microsoft, do offer some solace, as Rob Franch from Carriage Services notes.
Yet another facet of the cybersecurity conundrum emerges with AI assistants that help write code. Tools like Amazon’s CodeWhisperer and Microsoft’s GitHub Copilot offer code suggestions and technical advice to developers. Their use, however, could inadvertently result in misleading code annotations, insecure coding practices, or the exposure of more system details than intended, warns Pollard of Forrester.
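To illustrate the kind of insecure pattern reviewers should watch for in assistant-suggested code, the hypothetical snippet below contrasts a SQL query built by string interpolation, which invites injection, with a parameterized alternative. It is not drawn from any specific tool’s output.

```python
# Hypothetical example of an insecure pattern an assistant might suggest,
# alongside a safer equivalent; not taken from any real tool's suggestions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure: interpolating input directly into SQL enables injection.
insecure_query = f"SELECT role FROM users WHERE name = '{user_input}'"
print("insecure:", conn.execute(insecure_query).fetchall())  # leaks the admin row

# Safer: a parameterized query treats the input purely as data.
safe_query = "SELECT role FROM users WHERE name = ?"
print("safe:", conn.execute(safe_query, (user_input,)).fetchall())  # returns nothing
```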
As the landscape of generative AI continues to evolve, businesses find themselves in a race to maximize benefits while keeping potential security pitfalls at bay.