Okay, so I just wrapped up a really insightful interview from VentureBeat with Itamar Golan, the co-founder and CEO of Prompt Security (now part of SentinelOne). The topic? Why generative AI security needs its own dedicated space, rather than just being tacked on as another feature. It’s a fascinating deep dive into the world of AI security, and I wanted to share some key takeaways that really resonated with me.
Golan’s journey is pretty compelling. He was working with transformer architectures before they became the bedrock of today’s LLMs. That early experience building AI-powered security features convinced him that these LLM applications were basically opening up a whole new can of worms when it came to potential attacks.
And honestly, the stats he throws out are a bit alarming. VentureBeat research suggests that shadow AI (AI tools used without IT’s knowledge or approval) can cost businesses an average of $4.63 million per breach, which is 16% higher than the average breach cost. IBM’s 2025 data reveals that a staggering 97% of breached organizations are missing basic AI access controls. Furthermore, Cyberhaven data indicates that 73.8% of ChatGPT workplace accounts are unauthorized, with enterprise AI usage having surged by 61x in just 24 months.
Golan highlights a critical point: “We see 50 new AI apps a day, and we’ve already cataloged over 12,000. Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models.” It’s a wild west out there, and businesses need to get a grip.
Building a Category, Not Just a Feature
Golan’s key strategic decision, and one that I think is absolutely brilliant, was to build a category, not just a feature. He framed Prompt Security as the AI security control layer for the entire enterprise. Think about that – instead of competing on specific features like prompt injection defense, he aimed to own the entire AI security space. This allowed them to command a bigger budget, sit at the CISO’s table as a strategic partner, and create long-term value. He wasn’t just trying to win a feature race; he was building a whole new game.
He also made a conscious choice to embrace enterprise complexity early on, prioritizing things like self-hosted and hybrid deployment models, and covering a wider range of enterprise surfaces. This made them “enterprise-ready” before the market even fully understood the need.
The Customer-Facing AI Nightmare Scenario
Golan shared a story that really brought the urgency of AI security home. A large, regulated company launched a customer-facing AI support agent, ticking all the boxes with standard security measures. But within weeks, a user with no technical skills was able to “prompt-inject” the agent, gaining access to other customers’ support tickets and sensitive data. Yikes!
This incident underscored the fact that AI democratizes risk. It makes systems hackable by people who previously lacked the skillset, shrinks the time it takes to find vulnerabilities, and significantly expands the potential damage.
Five Key Takeaways:
- Shadow AI is a Real Threat: Employees are using AI tools you probably don’t even know about, and that creates a huge security blind spot.
- AI Security Needs a Holistic Approach: Point solutions aren’t enough. You need a comprehensive security layer that covers all AI touchpoints across your organization.
- Enable Safe Usage, Don’t Just Restrict: Instead of banning AI, find ways to let employees use it safely. This fosters adoption and trust.
- Think Category, Not Feature: Position AI security as a strategic imperative, not just another tool in the box.
- Customer-Facing AI is a Major Risk: Don’t underestimate the potential for attacks on your customer-facing AI applications.
FAQ: Generative AI Security
1. What is generative AI security, and why is it important?
Generative AI security involves protecting AI applications and models from various threats, such as data leakage, prompt injection, and unauthorized access. It’s crucial to ensure the safety, privacy, and integrity of data and systems.
2. What are the main risks associated with using generative AI in the workplace?
The main risks include shadow AI (unapproved AI tools), data leakage, compliance violations, prompt injection attacks, and exposure of sensitive information.
3. What is prompt injection, and how can it compromise AI systems?
Prompt injection is a technique where malicious prompts are used to manipulate an AI model’s behavior, potentially leading to unauthorized actions, data breaches, or the disclosure of sensitive information.
4. How can businesses gain visibility into their employees’ AI usage?
Businesses can use AI security platforms that offer shadow AI discovery capabilities to identify and monitor the AI tools being used by employees, even if they are not officially sanctioned.
5. What is data leakage in the context of generative AI, and how can it be prevented?
Data leakage occurs when sensitive data is inadvertently exposed or transmitted through AI applications. It can be prevented by implementing real-time data sanitization, which automatically removes sensitive information from prompts before they reach external models.
6. What are some basic AI access controls that organizations should implement?
Basic AI access controls include authentication, authorization, data encryption, and monitoring to ensure that only authorized users can access AI tools and sensitive data.
7. How should CISOs (Chief Information Security Officers) approach securing generative AI within their organizations?
CISOs should frame AI security as a natural extension of existing data protection mandates, focusing on protecting assets like corporate data, IP, and user trust within this rapidly expanding channel.
8. What is the difference between securing internal AI use and securing customer-facing AI applications?
Securing internal AI use involves managing employee access and preventing data leakage, while securing customer-facing AI applications requires protecting against prompt injection attacks, preventing cross-tenant data leakage, and ensuring the privacy of customer data.
9. How can companies ensure compliance when using generative AI tools?
Companies can ensure compliance by implementing data governance policies, monitoring AI usage, and using AI security solutions that help meet regulatory requirements.
10. What role will AI play in the future of cybersecurity?
AI is expected to become an integral part of the defense fabric, not only as something to secure but also as a tool that actively enhances security by detecting threats, automating responses, and improving overall cybersecurity posture.
The market is definitely heating up, with major players like Palo Alto Networks, Tenable, and Cisco acquiring AI security firms. The message is clear: security needs to be baked into your AI strategy from the beginning, or you risk becoming another statistic. I am curious to see how the AI security landscape will evolve. What are your thoughts on this?