Making foundation models accessible: The battle between closed and open source AI

The explosion of generative AI models for text and images has been impossible to miss lately. As these models become increasingly capable, the term “foundation model” is being tossed around more and more. So what exactly is a foundation model?

The term remains somewhat vague. Some define it by the number of parameters, and therefore how large the neural network is; others define it by the number of unique and difficult tasks the model can perform. But is making AI models ever larger, and able to tackle multiple tasks, really that exciting? If you take away all the hype and marketing language, what is truly exciting about this new generation of AI models is this: They fundamentally change the way we interface with computers and data. Think of companies like Cohere, Covariant, Hebbia and You.com.

We’ve now entered a critical phase of AI in which who gets to build and serve these powerful models has become an important discussion point, particularly as ethical issues begin to swirl: who has a right to what data, whether models violate reasonable assumptions of privacy, whether consent for data usage is a factor, what constitutes “inappropriate behavior” and much more. With questions like these on the table, it is reasonable to assume that those who control AI models may be the most important decision-makers of our time.

Is there a play for open source foundation models?

Because of the ethical issues associated with AI, the call to open source foundation models is gaining momentum. But building foundation models isn’t cheap: It takes tens of thousands of state-of-the-art GPUs and many machine learning engineers and scientists. To date, building foundation models has been accessible only to the cloud giants and extremely well-funded startups sitting on war chests of hundreds of millions of dollars.

Almost all the models and services built by these few self-selected companies have been closed source. But closed source entrusts an awful lot of power and decision-making to a small number of companies that will define our future, which can be quite unsettling.

The open sourcing of Stable Diffusion by Stability AI, however, posed a serious threat to the foundation model builders determined to keep the secret sauce to themselves. Developer communities around the world cheered Stability’s move because it liberates these systems, putting control in the hands of the masses rather than a select few companies that may be more interested in profit than in what’s good for humanity. The release is already changing the way insiders think about the current paradigm of closed source AI systems.

Potential hurdles

The biggest obstacle to open sourcing foundation models continues to be money. For open source AI systems to be profitable and sustainable, they still require tens of millions of dollars to run and manage properly. Though that is a fraction of what the large companies are investing in their own efforts, it’s still a significant sum for a startup.

Stability AI’s attempt at open sourcing GPT-Neo and turning it into a real business fell flat, as it was outclassed by companies like OpenAI and Cohere. The company now also has to deal with the Getty Images lawsuit, which threatens to distract it and further drain its resources, both financial and human. Meta’s counter to closed source systems, LLaMA, has poured gas on the open source movement, but it’s still too early to tell whether the company will live up to its commitment.
