AI creates a dilemma for businesses: hold off on adopting it and you risk missing out on productivity gains and other potential benefits; get it wrong and you could expose your business and your customers to serious risk. This is where a new wave of "AI security" startups comes in, premised on the idea that threats such as jailbreaking and prompt injection cannot be ignored.
Like Israeli startup Noma and U.S.-based competitors HiddenLayer and Protect AI, British university spin-off Mindgard is one of them. "AI is still software, so all the cyber risks you've probably heard about also apply to AI," said its CEO and CTO, Professor Peter Garraghan (right in image above). But "when you consider the opaque nature and inherently random behavior of neural networks and systems," he added, a new approach is also warranted.
In Mindgard's case, that approach is Dynamic Application Security Testing for AI (DAST-AI), targeting vulnerabilities that can only be detected at runtime. This involves continuous, automated red teaming: simulated attacks drawn from Mindgard's threat library. For example, it can test the robustness of image classifiers against adversarial inputs.
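To make the idea of an adversarial input concrete, here is a minimal, illustrative sketch in plain Python. It uses a hypothetical toy linear classifier (not Mindgard's actual method, which is not public) and an FGSM-style perturbation: because the gradient of a linear model's score with respect to the input is just the weight vector, nudging each feature slightly against the sign of the corresponding weight can flip the predicted class while barely changing the input.

```python
# Illustrative FGSM-style attack on a toy linear classifier.
# The weights, input, and epsilon below are hypothetical examples.
w = [1.0, 1.0, 1.0, 1.0]  # model weights; class = sign(w . x)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def predict(x):
    return 1 if dot(w, x) > 0 else -1

x = [0.1, 0.1, 0.1, 0.1]  # a "clean" input, classified as +1

# For a linear model, the gradient of the score w.r.t. x is w itself,
# so stepping each feature against sign(w) lowers the score fastest.
eps = 0.2
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # the small perturbation flips the class
```

Real image classifiers are nonlinear and far higher-dimensional, but the same principle holds: small, targeted perturbations that are nearly invisible to humans can change a model's output, which is exactly the kind of runtime weakness automated red teaming probes for.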
On this front and beyond, Mindgard's technology owes much to Garraghan's experience as a professor and researcher focused on AI security. The field is evolving quickly: ChatGPT didn't exist when he entered it, but he sensed that NLP and image models could face new threats, he told TechCrunch.
Since then, what seemed forward-looking has materialized into a fast-growing industry; but LLMs keep evolving, and so do the threats. Garraghan believes his ongoing ties with Lancaster University can help the company keep pace: Mindgard will automatically own the intellectual property from the work of an additional 18 PhD students over the coming years. "No company in the world gets a deal like this."
Despite its research ties, Mindgard is already a commercial product, specifically a SaaS platform, with co-founder Steve Street leading the way as COO and CRO. (One of the original co-founders, Neeraj Suri, who was involved in the research, is no longer with the company.)
Enterprises are a natural customer for Mindgard, as are red teamers and traditional pen testers, but the company also works with AI startups that need to show their customers they are addressing AI risk, Garraghan said.
Since many of these potential customers are based in the United States, the company has added an American touch to its cap table. After raising a £3 million funding round in 2023, Mindgard is now announcing a new $8 million round led by Boston-based .406 Ventures, with participation from Atlantic Bridge, WillowTree Investments, and existing investors IQ Capital and Lakestar.
The funding will help "build the team, product development, R&D and everything else you can expect from a startup," but also fuel expansion into the United States. Its recently appointed VP of marketing, former Next DLP CMO Fergal Glynn, is based in Boston; the company plans to keep R&D and engineering in London, however.
With a headcount of 15, the Mindgard team is relatively small and will remain so, with a goal of reaching 20-25 people by the end of next year. After all, AI security "is not even at its peak yet." But when AI is deployed everywhere and security threats follow suit, Mindgard will be ready. Says Garraghan: "We built this company to do good in the world, and the positive good here is that people can trust and use AI safely."