blog post

Indie AI Tools Pose Massive Dangers

The rapid adoption of Artificial Intelligence (AI) is putting Chief Information Security Officers (CISOs) and cybersecurity teams in a familiar, yet challenging position, reminiscent of the SaaS shadow IT era.

Employees, enticed by the efficiency of AI tools, are increasingly using them covertly, bypassing established IT and cybersecurity review procedures. This trend is driven by the astonishing growth of platforms like ChatGPT, which reached 100 million users within just 60 days of launch, signaling an escalating demand for AI tools in the workplace.

A recent study revealed that some workers have boosted their productivity by 40% using generative AI, amplifying the pressure on CISOs and their teams to accelerate AI adoption and, at times, overlook the use of unsanctioned AI tools. However, this approach poses significant risks, especially as employees are drawn to AI tools developed by smaller entities like solopreneurs and indie developers, who often lack the stringent security measures of larger, established companies.

Indie AI app developers typically have less robust security infrastructure, legal oversight, and compliance compared to their enterprise counterparts. The risks associated with these indie AI tools include:

  1. Data Leakage: Generative AI tools, particularly those using large language models (LLMs), often retain user prompts for training or debugging purposes, creating vulnerabilities for data exposure.
  2. Content Quality Issues: LLMs are prone to ‘hallucinations’ – creating outputs that are nonsensical or inaccurate. This poses a risk for organizations relying on AI for content generation without proper human review processes.
  3. Product Vulnerabilities: Smaller organizations developing AI tools are more likely to overlook common product vulnerabilities, making them susceptible to attacks such as prompt injection and traditional vulnerabilities like SSRF, IDOR, and XSS.
  4. Compliance Risks: Many indie AI vendors lack mature privacy policies and internal regulations, potentially leading to non-compliance issues in tightly regulated industries.

The integration of indie AI tools with enterprise SaaS applications is particularly concerning. Employees seeking to enhance productivity often link AI tools to daily-use SaaS systems like Google Workspace, Salesforce, or M365, unwittingly opening backdoors for threat actors to access sensitive company data. This risk is exacerbated by the fact that these indie AI tools often do not meet the security standards of more established SaaS platforms.

In response to these emerging risks, experts recommend several strategies for CISOs and cybersecurity teams:

  1. Standard Due Diligence: Teams should thoroughly review the terms of service for any new AI tools, understanding the legal implications and potential risks involved.
  2. Implementing or Revising Application and Data Policies: Establish clear guidelines on what AI tools are permitted and what types of data can be fed into them.
  3. Regular Employee Training and Education: Educate employees on the risks of data leaks, breaches, and the implications of AI-to-SaaS connections.
  4. Critical Questions in Vendor Assessments: Ensure rigorous security and compliance checks are in place during the vendor assessment process, particularly for indie AI tools.
  5. Building Relationships and Accessibility: CISOs and security teams should work closely with business leaders and employees, presenting themselves as enablers rather than obstacles to the adoption of AI tools.

As AI continues to reshape the cybersecurity landscape, the challenge for CISOs is to balance the potential benefits of these tools with the need to maintain robust security measures and protect against emerging threats. Easy to talk about, but hard to do, yet we really have no choice if we are to continue taking cyberthreats seriously.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 

Get In Touch!

Leave your details and we will get back to you.