IBM, like pretty much every tech giant these days, is betting big on AI.
At its annual Think conference, the company announced IBM Watsonx, a new platform that delivers tools to build AI models and provides access to pretrained models for generating computer code, text and more.
It’s a bit of a slap in the face to IBM’s back-office managers, who just recently were told that the company will pause hiring for roles it thinks could be replaced by AI in the coming years.
But IBM says the launch was motivated by the challenges many businesses still experience in deploying AI within the workplace. Thirty percent of business leaders responding to an IBM survey cite trust and transparency issues as barriers holding them back from adopting AI, while 42% cite privacy concerns — specifically around generative AI.
“AI may not replace managers, but the managers that use AI will replace the managers that do not,” Rob Thomas, chief commercial officer at IBM, said in a roundtable with reporters. “It really does change how people work.”
Watsonx solves this, IBM asserts, by giving customers access to the toolset, infrastructure and consulting resources they need to create their own AI models or fine-tune and adapt available AI models on their own data. Using Watsonx.ai, which IBM describes in fluffy marketing language as an “enterprise studio for AI builders,” users can also validate and deploy models as well as monitor models post-deployment, ostensibly consolidating their various workflows.
But wait, you might say, don’t rivals like Google, Amazon and Microsoft already provide this or something fairly close to it? The short answer is yes. Amazon’s comparable product is SageMaker Studio, Google’s is Vertex AI, and Microsoft’s is Azure Machine Learning.
IBM makes the case, however, that Watsonx is the only AI tooling platform in the market that provides a range of pretrained, developed-for-the-enterprise models and “cost-effective infrastructure.”
“You still need a very large organization and team to be able to bring [AI] innovation in a way that enterprises can consume,” Dario Gil, SVP at IBM, told reporters during the roundtable. “And that is a key element of the horizontal capability that IBM is bringing to the table.”
That remains to be seen. In any case, IBM is offering seven pretrained models to businesses using Watsonx.ai, a few of which are open source. It’s also partnering with Hugging Face, the AI startup, to include thousands of Hugging Face–developed models, datasets and libraries. (For its part, IBM is pledging to contribute open source AI dev software to Hugging Face and make several of its in-house models accessible from Hugging Face’s AI development platform.)
The three that the company is highlighting at Think are fm.model.code, which generates code; fm.model.NLP, a collection of large language models; and fm.model.geospatial, a model built on climate and remote sensing data from NASA. (Awkward naming scheme? You betcha.)
Similar to code-generating models like GitHub’s Copilot, fm.model.code lets a user give a command in natural language and then builds the corresponding coding workflow. The fm.model.NLP family comprises text-generating models for specific and industry-relevant domains, like organic chemistry. And fm.model.geospatial makes predictions to help plan for changes in natural disaster patterns, biodiversity and land use, in addition to other geophysical processes.
These might not sound novel on their face. But IBM claims that the models are differentiated by a training dataset containing “multiple types of business data, including code, time-series data, tabular data and geospatial data and IT events data.” We’ll have to take its word for it.
“We allow an enterprise to use their own code to adapt [these] models to how they want to run their playbooks and their code,” Arvind Krishna, the CEO of IBM, said in the roundtable. “It’s for use cases where people want to have their own private instance, whether on a public cloud or on their own premises.”
IBM is using the models itself, it says, across its suite of software products and services. For example, fm.model.code powers Watson Code Assistant, IBM’s answer to Copilot, which allows developers to generate code using plain English prompts across programs including Red Hat’s Ansible. As for fm.model.NLP, those models have been integrated with AIOps Insights, Watson Assistant and Watson Orchestrate — IBM’s AIOps toolkit, smart assistant and workflow automation tech, respectively — to provide greater visibility into performance across IT environments, resolve IT incidents in a more expedient way and improve customer service experiences — or so IBM promises.
The fm.model.geospatial model, meanwhile, underpins IBM’s EIS Builder Edition, a product that lets organizations create solutions addressing environmental risks.
Alongside Watsonx.ai, under the same Watsonx brand umbrella, IBM unveiled Watsonx.data, a “fit-for-purpose” data store designed for both governed data and AI workloads. Watsonx.data allows users to access data through a single point of entry while applying query engines, IBM says, plus governance, automation and integrations with an organization’s existing databases and tools.
Complementing Watsonx.ai and Watsonx.data is Watsonx.governance, a toolkit that — in IBM’s rather vague words — provides mechanisms to protect customer privacy, detect model bias and drift, and help organizations meet ethics standards.
New tools and infrastructure
In an announcement related to Watsonx, IBM showcased a new GPU offering in the IBM cloud optimized for compute-intensive workloads — specifically training and serving AI models.
The company also showed off the IBM Cloud Carbon Calculator, an “AI-informed” dashboard that enables customers to measure, track, manage and help report carbon emissions generated through their cloud usage. IBM says it was developed in collaboration with Intel, based on tech from IBM’s research division, and can help visualize greenhouse gas emissions across workloads down to the cloud service level.
It could be said that both products, in addition to the new Watsonx suite, represent something of a doubling down on AI for IBM. The company recently built an AI-optimized supercomputer, known as Vela, in the cloud. And it has announced collaborations with companies such as Moderna and SAP Hana to investigate ways to apply generative AI at scale.
The company estimates that AI could add $16 trillion to the global economy by 2030 and that 30% of back-office tasks will be automated within the next five years.
“When I think of classic back-office processes, not just customer care — whether it’s doing procurement, whether it’s elements of supply chain [management], whether it’s elements of IT operations, or elements of cybersecurity … we see AI easily taking anywhere from 30% to 50% of that volume of tasks, and being able to do them with much better proficiency than even people can do them,” Gil said.
Those might be optimistic (or pessimistic, if you’re humanist-leaning) predictions, but Wall Street has historically rewarded the outlook. IBM’s automation solutions — part of the company’s software segment — grew revenue by 9% year over year in Q4 2022. Meanwhile, revenue from data and AI solutions, which focuses more on analytics, customer care and supply chain management, grew by 8%.
But as a piece in Seeking Alpha notes, there’s reason to lower expectations. IBM has a difficult history with AI, having been forced to sell its Watson Health division at a substantial loss after technical problems led high-profile customer partnerships to deteriorate. And rivalry in the AI space is intensifying; IBM faces competition not only from tech giants like Microsoft and Google but also from startups like Cohere and Anthropic that have massive capital backing.
Will IBM’s new apps, tools and services make a dent? IBM’s hoping so. But we’ll have to wait and see.
IBM intros a slew of new AI services, including generative models by Kyle Wiggers originally published on TechCrunch