Seeking to bring greater security to AI systems, Protect AI today announced that it raised $13.5 million in a seed funding round led by Acrew Capital and Boldstart Ventures with participation from Knollwood Capital, Pelion Ventures and Aviso Ventures. Co-founder and CEO Ian Swanson said the capital will be invested in product development and customer outreach as Protect AI emerges from stealth.
Protect AI claims to be one of the few security companies focused entirely on developing tools to protect AI systems and machine learning models from exploitation. Its product suite aims to help developers identify and fix AI and machine learning security vulnerabilities at various stages of the machine learning lifecycle, Swanson said, including vulnerabilities that could expose sensitive data.
“As the use of machine learning models in product applications continues to grow exponentially, we recognized the unique security needs and concerns that AI developers face around the machine learning code powering their products and solutions,” Swanson told TechCrunch in an email interview. “We’ve investigated and discovered specific exploits, and we provide tools to mitigate the risks in [machine learning] pipelines.”
Swanson co-founded Protect AI with Daryan Dehghanpisheh and Badar Ahmed a year ago. Swanson and Dehghanpisheh previously worked together on the AI and machine learning business at Amazon Web Services (AWS), where Swanson was the global leader of the AWS AI Customer Solutions team and Dehghanpisheh was the global leader of Machine Learning Solution Architects. Ahmed met Swanson at Swanson’s previous startup, DataScience.com, which Oracle acquired in 2018. Ahmed and Swanson also worked together at Oracle, where Swanson was VP of AI and machine learning.
Protect AI’s first product, NB Defense, is designed to work with Jupyter Notebook, a digital notebook environment popular among data scientists in the AI community. (A 2018 GitHub analysis found that more than 2.5 million public Jupyter notebooks were in use at the time of the report’s release; that number has certainly risen since.) NB Defense scans notebooks, including the code, libraries and frameworks needed to train, run and test AI systems, for security issues and suggests fixes.
What problematic elements might an AI project notebook contain? Swanson points to authentication tokens and other credentials hardcoded for internal use. NB Defense also flags personally identifiable information (such as names and phone numbers) and open source code with nonpermissive licenses that could prohibit its use in a commercial system.
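To illustrate the kind of scanning described above, here is a minimal sketch of credential detection in a notebook file. This is not Protect AI's actual implementation; the rule names, regex patterns and `scan_notebook` function are illustrative assumptions. It relies only on the fact that a .ipynb file is JSON with a `cells` list, each code cell carrying its `source` lines:

```python
import json
import re

# Hypothetical detection rules; NB Defense's real rule set is not public.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan_notebook(notebook_json: str) -> list:
    """Scan a Jupyter notebook's code cells for credential-like strings."""
    nb = json.loads(notebook_json)
    findings = []
    for idx, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue  # skip markdown and raw cells
        source = "".join(cell.get("source", []))
        for rule, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(source):
                findings.append({"cell": idx, "rule": rule, "match": match.group(0)})
    return findings


if __name__ == "__main__":
    # A toy notebook with a fake AWS-style access key embedded in a code cell.
    demo = json.dumps(
        {"cells": [{"cell_type": "code",
                    "source": ["AWS_KEY = 'AKIAABCDEFGHIJKLMNOP'\n"]}]}
    )
    for finding in scan_notebook(demo):
        print(finding)
```

A production scanner would go well beyond this, covering notebook outputs and metadata, entropy-based secret detection, and license analysis of imported packages, but the cell-walking structure is the same.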
Jupyter notebooks are typically used as scratchpads rather than production environments, and most are locked away from prying eyes; according to a Dark Reading analysis, less than 1% of the approximately 10,000 Jupyter notebooks on the public web are configured for open access. But the exploits are not merely theoretical. Last December, security firm Lightspin revealed a technique that could allow an attacker to run arbitrary code in a victim’s notebook in Amazon SageMaker, AWS’s fully managed machine learning service.
Other research firms, including Aqua Security, have found that improperly secured Jupyter notebooks are vulnerable to Python-based ransomware and cryptocurrency-mining attacks. And in a 2020 Microsoft survey of businesses using AI, most said they don’t have the right tools in place to protect their machine learning models.
Sounding the alarm bells may be premature. A Gartner report predicts an increase in AI-targeted cyberattacks by the end of this year, but there is no indication that attacks have yet occurred at that scale. Still, Swanson stands by the company’s pitch.
“[Many] existing security code-scanning solutions are not compatible with Jupyter notebooks. These vulnerabilities and more persist due to a lack of focus and innovation from current cybersecurity solution providers, and Protect AI’s biggest differentiator is its focus on the real threats and vulnerabilities in AI systems,” said Swanson.
In addition to Jupyter notebooks, Protect AI works with common AI development tools like Amazon SageMaker, Azure ML and Google Vertex AI Workbench, Swanson says. NB Defense is available for free to start, and paid options will be introduced in the future.
“Machine learning is complex, and the pipelines that deliver machine learning at scale create and multiply cybersecurity blind spots, preventing important threats from being adequately understood and mitigated. Additionally, emerging compliance and regulatory frameworks will continue to drive the need for AI systems to harden their data sources, models and software supply chains to meet governance, risk-management and compliance requirements,” Swanson continued. “Protect AI delivers unique capabilities and deep domain knowledge to help enterprises of all sizes keep today’s and tomorrow’s AI-powered digital experiences safe and secure.”
That’s a lot of promise. But Protect AI has the advantage of entering a market with relatively few direct competitors. Perhaps the closest is Resistant AI, which is developing AI systems to protect algorithms from automated attacks.
Pre-revenue, Protect AI isn’t disclosing how many customers it has today. But Swanson said the company has onboarded “enterprises in the Fortune 500” spanning finance, healthcare and life sciences, as well as energy, gaming, digital businesses and fintech.
“We will use our funding to add team members in software development, engineering, security and go-to-market roles in 2023 as we grow our customer base and build out partners and value-chain participants,” Swanson said. “We have a multi-year financial runway to execute on this plan.”