As the use of AI becomes more common throughout the enterprise, the demand for products that make it easier to discover, inspect and fix critical AI errors is increasing. After all, AI is costly — Gartner predicted in 2021 that a third of tech providers would invest $1 million or more in AI by 2023 — and debugging an algorithm gone wrong threatens to inflate the development budget. A separate Gartner report found that only 53% of projects make it from prototype to production, presumably due in part to errors — a substantial loss, if one were to total up the spending.
Fed up with the high failure rate — and the fact that menial (if important) data preparation tasks, like loading and cleaning data, still take up the bulk of data scientists’ time — Vikram Chatterji, Atindriyo Sanyal and Yash Sheth cofounded Galileo, a service designed to act as a collaborative system of record for AI model development. Galileo monitors the AI development process, leveraging statistical algorithms to pinpoint potential points of system failure.
“There were no purpose-built machine learning data tools in the market, so [we] started Galileo to build the machine learning data tooling stack, beginning with a [specialization in] unstructured data,” Chatterji told TechCrunch via email. “[The service] helps machine learning teams improve their data sets … by surfacing critical cohorts of data that may be underrepresented or erroneous, while being an all-round solution that encourages data scientists to proactively track data changes in production and [mitigates] mistakes and gaps in their models from leaking into the real world.”
Chatterji has a background in data science, having worked for three years at Google AI. Sanyal was a senior software engineer at Apple, focusing mainly on Siri-related products, before becoming an engineering lead on Uber’s AI team. As for Sheth, he also worked at Google as a staff software engineer, managing the Google Speech Recognizer platform.
With Galileo, which today emerged from stealth with $5.1 million in seed funding, Chatterji, Sanyal and Sheth set out to create a product that could scale across the entire AI workflow — from pre-development to post-production — as well as data modalities like text, speech, and vision. Available in private beta and built to be deployable in an on-premises environment, Galileo aims to systematize pipelines across teams using “auto-loggers” and algorithms that spotlight system-breaking issues.
Finding these issues is often a major pain point for data scientists. According to one recent survey (from MLOps Community), 84.3% of data scientists and machine learning engineers say that the time required to detect and diagnose model problems is an issue for their teams, while over one in four (26.2%) admit that it takes them a week or more to detect and fix issues.
“The discussion around machine learning within the enterprise has shifted from ‘What do I use this for?’ to ‘How can I make my machine learning workflows faster, better, cheaper?’” Chatterji said. “Galileo … enforces the necessary rigor and the proactive application of research-backed techniques every step of the way in productionizing machine learning models … [It] leads to an order of magnitude improvement on how teams deal with the messy, mind-numbing task of improving their machine learning datasets.”
Galileo fits into the emerging practice of MLOps, which combines machine learning, DevOps and data engineering to deploy and maintain AI models in production environments. The market for MLOps services could reach $4 billion by 2025, by one estimation, and includes startups like Databricks, DataRobot, Algorithmia and incumbents like Google Cloud and Amazon Web Services.
While investor interest in MLOps is on the rise, cash doesn’t necessarily translate to success. Even the best MLOps platforms today can’t solve every common problem associated with AI workflows, particularly when business executives aren’t able to quantify the return on investment of these initiatives. The MLOps Community poll found that convincing stakeholders when a new model is better, for example, remains an issue “at least sometimes” for over 80% of machine learning practitioners.
Chatterji points to Kaggle CEO Anthony Goldbloom’s investment in Galileo — The Factory led the round with participation from Goldbloom — as a sign of the company’s differentiation. Chatterji says that Galileo currently has “dozens” of paying customers ranging from Fortune 500 companies to early-stage startups — revenue that Galileo plans to use to triple the size of its 14-person team by the end of the year.
“Galileo has focused on flipping the otherwise painstaking task of machine learning data inspection, to make it easy and provide intelligent data insights fast,” Chatterji said. “The user only has to add a few lines of code.”
To date, Galileo has raised $5.1 million in total venture capital.