Emerging AI Tools Raising Cybersecurity Concerns

August 14, 2023

The rise of generative AI-powered tools promises a significant surge in productivity. However, the tech leaders deploying these tools are still grappling with the cybersecurity vulnerabilities they introduce.

Take, for instance, Microsoft’s Copilot. This generative AI feature is swiftly becoming a staple within Microsoft’s enterprise software suite. As such technologies become more embedded, it’s incumbent upon corporate leaders to gauge what these new functions entail, especially from a security standpoint.

Historically, companies have leaned on detailed inventories to manage their supply chains, ensuring they have a clear understanding of each product’s origin. In the software realm, there’s a growing emphasis on creating a “software bill of materials.” This documentation gives an in-depth look at a piece of software’s makeup, from open-source elements to proprietary components.
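To make the idea concrete, here is a minimal sketch in Python that inventories the packages installed in an environment and emits them in a CycloneDX-style structure. The field names follow the public CycloneDX JSON format, but this is only an illustration; real SBOM generators (such as syft or cyclonedx-bom) capture far more detail, including hashes, licenses, and dependency relationships.

```python
# Minimal SBOM-style inventory of installed Python packages.
# Field names loosely follow the CycloneDX JSON format; real SBOM
# tooling is far more thorough than this sketch.
import json
from importlib.metadata import distributions

def build_sbom():
    components = []
    for dist in distributions():
        components.append({
            "type": "library",
            "name": dist.metadata["Name"],
            "version": dist.version,
        })
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": components,
    }

if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```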

Such detailed listings aim to enable companies to discern their software’s intricate components. This, in turn, makes it easier to identify potential security flaws, like the infamous Log4j vulnerability (Log4Shell), and address them more efficiently. The extensive 2020 breach stemming from compromised SolarWinds software pushed many businesses to rethink their relationships with third-party software providers.
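As a simplified illustration of why that matters, the sketch below checks an SBOM’s component list against a tiny hard-coded table of known-vulnerable versions, using Log4Shell (CVE-2021-44228) as the canonical example. Production scanners instead consume full vulnerability feeds such as NVD or OSV and handle version semantics properly; the comparison here is deliberately naive.

```python
# Flag SBOM components older than a known fixed version.
# Log4j 2.x received its final Log4Shell-era patches in 2.17.1,
# so anything older is flagged in this simplified check.
VULNERABLE = {
    "log4j-core": (2, 17, 1),  # name -> fixed_in version tuple
}

def parse_version(v):
    # Naive dotted-version parser; real tools use proper semantics.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def flag_vulnerable(sbom):
    findings = []
    for comp in sbom.get("components", []):
        fixed_in = VULNERABLE.get(comp["name"])
        if fixed_in and parse_version(comp["version"]) < fixed_in:
            fixed = ".".join(map(str, fixed_in))
            findings.append(f'{comp["name"]} {comp["version"]} (< {fixed})')
    return findings

sbom = {"components": [{"name": "log4j-core", "version": "2.14.1"}]}
print(flag_vulnerable(sbom))  # ['log4j-core 2.14.1 (< 2.17.1)']
```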

Analysts argue that the intricate nature of large language models makes them daunting to audit comprehensively. Jeff Pollard from Forrester Research voiced the anxiety of many in the sector, pointing out the lack of clarity and control in some of these AI features.

Also Read: From ChatGPT to GPT-4, Language Models at Risk

Further, David Johnson from Europol shared at a Brussels conference how generative AI can inadvertently introduce security vulnerabilities, especially if the models were originally trained on flawed code.

Emerging startups like Protect AI are capitalizing on this burgeoning interest in generative AI. They offer services that allow businesses to monitor the ingredients of their bespoke AI systems, flagging potential security breaches and unauthorized code insertions.
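This is not Protect AI’s actual tooling, but a simplified version of the “unauthorized code insertion” problem can be sketched: many ML models are shipped as Python pickle files, and pickle opcodes such as GLOBAL and REDUCE are precisely how arbitrary callables get invoked during deserialization. The illustrative scan below flags those opcodes for human review; the model.pkl path is hypothetical.

```python
# Illustrative scan of a pickled model file for opcodes that can
# trigger code execution when the file is loaded.
import pickletools

# Opcodes that resolve or call arbitrary objects during unpickling.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def audit_pickle(path):
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in RISKY_OPCODES:
                findings.append((opcode.name, arg))
    return findings

# Usage: any non-empty result deserves review before the file is
# ever passed to pickle.load().
# print(audit_pickle("model.pkl"))
```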

Bryan Wise, CIO at 6sense, suggests that a rigorous vetting process is vital before embracing new AI functionalities. Questions regarding data usage and assurances that data isn’t used to refine external models are becoming paramount for most CIOs. Established vendors, such as Microsoft, do offer some solace, as Rob Franch from Carriage Services notes.

Yet a different facet of the cybersecurity conundrum emerges with AI assistants that help write code. Tools like Amazon’s CodeWhisperer and Microsoft’s GitHub Copilot offer code suggestions and technical advice to developers. However, their use could inadvertently result in misleading code annotations, insecure coding practices, or the unintentional exposure of more system details than intended, warns Pollard of Forrester.
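Pollard’s warning is easy to picture in code. In this hypothetical example, an assistant suggests building a SQL query by string interpolation, a classic injection flaw, where the parameterized form is the safe pattern. The table and column names are invented for illustration.

```python
# Contrast between an injectable query an assistant might suggest
# and the parameterized form a developer should use instead.
import sqlite3

def find_user_unsafe(conn, username):
    # String interpolation: username = "x' OR '1'='1" would
    # return every row -- a textbook SQL injection.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping the value.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```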

As the landscape of generative AI continues to evolve, businesses find themselves in a race to maximize benefits while keeping potential security pitfalls at bay.

Lucas Maes

Author

Cybersecurity guru, encryption wizard, safeguarding data with 10+ yrs of IT defense expertise. Speaker & author on digital protection.
