In 2023, Air Canada made headlines for its ‘lying AI chatbot’ case, a significant moment for businesses using AI. The airline faced controversy after its AI-powered chatbot gave a customer misleading flight information. The subsequent ruling held the airline fully responsible for the incorrect information.
The case highlighted an important truth: Businesses are accountable for every interaction on their platforms, including AI-powered ones.
Enterprises must proactively secure their AI systems, ensuring they are robust, reliable, and free from vulnerabilities.
Ruchir Fatwa, co-founder of SydeLabs and now VP of Engineering at Protect AI following its acquisition, has observed the AI security landscape. SydeLabs and Protect AI provide tools to ensure the security of AI systems, including an AI Red Teaming System, which tests for vulnerabilities like data leaks, prompt injections, and toxicity. Their AI Firewall, an intent-based solution, monitors user inputs for safe communication with AI systems.
In this episode, Ruchir spoke to Srikrishnan, co-founder of Rocketlane, about the unique security challenges of AI, how companies can improve their security, and why early attention to AI safety is essential for businesses.
Here are the main points from their conversation:
Companies can’t attribute AI-generated responses to machines—these are still seen as coming from the enterprise to the customer.
In recent deployments and testing, the Protect.AI team has observed concerning trends around safety and security, often due to newer AI models being less secure or how companies implement these systems.
Many enterprises, including mid-market customers, fine-tune their models using customer or private data, assuming sensitive information is protected.
However, there are easy ways to leak this data, like role-playing scenarios designed to reveal another customer’s information.
These incidents highlight the need for stronger safeguards in AI deployments.
When companies deploy AI solutions, they must clearly define the scope of responsibility for customer safety—especially given the uncertainty around liability, particularly with open-source models. For example, if a startup builds on a model like Llama and sells a feature that produces biased or unsafe results, the responsibility question becomes important.
Many startups are now being transparent about their processes—such as the fine-tuning, security measures, and safety tests. Sharing this information with customers builds trust and encourages them to ask the right questions of other providers. While no system is 100% secure, demonstrating necessary precautions helps reassure customers and may reduce the likelihood of being held solely responsible for security issues.
AI security is distinct because it creates a new attack surface by merging traditional systems with human-like interactions, making them vulnerable to social engineering.
Key factors making AI security challenging include:
This new model creates a single access point for vast amounts of data and tasks, posing significant security challenges. Access to sensitive systems could result in unintended consequences if not properly managed.
Beyond security, AI raises concerns about safety, brand reputation, and intellectual property, which companies deploying AI systems must manage.
Protecting AI systems from fraud and abuse requires a holistic approach that includes:
Enterprises can be more prepared due to their proactive approach to identifying and addressing potential risks.
One example is AI teams aware of potential issues and continuously try to break their systems to find vulnerabilities. Recently, a single model approach is broken down into smaller, specific models for particular tasks. For instance, one model might only handle greetings, ensuring it doesn't respond beyond that. This allows for stricter guardrails, with an outer model upfront that decides which question goes where, and as confidence grows, more modules can increase capabilities.
Another example is when enterprises carefully consider the models they deploy. For example, if a company switches from OpenAI’s GPT to an open-source model like Llama, this change might seem simple but can come with new security risks. While both models may appear similar in performance, the types of attacks that work on GPT might not work on Llama, and there may be new vulnerabilities to consider. These enterprises don’t just evaluate models based on cost and performance; they also consider non-functional factors like security to ensure their systems remain secure as they scale.
As AI becomes integral to products and solutions, here are a few steps you need to take:
To be better prepared for deploying AI at scale, companies should consider a few best practices to ensure success and minimize risks. These include:
Each category requires a different approach, and even if AI is not deployed directly in your systems, you must consider how employees are using AI tools. For example, many AI providers don’t protect users from issues like copyright infringement, which can expose the company to risks.
Further Reading
2025 goal setting for professional services organization
The 6 pillars of client-centric professional services