Make no mistake: enterprise AI is big business, especially for IBM.
IBM already has a $2 billion generative AI business and is now looking to accelerate its growth. Today, IBM announced the third generation of its Granite large language models (LLMs), expanding its enterprise AI business. A core element of the new generation is a continued focus on truly open-source enterprise AI. Going a step further, IBM is using its InstructLab capabilities to help enterprises fine-tune the models for their own needs.
The new models announced today include general-purpose options in 2-billion- and 8-billion-parameter sizes: Granite 3.0 2B and Granite 3.0 8B. There are also Mixture-of-Experts (MoE) models, including Granite 3.0 3B A800M Instruct, Granite 3.0 1B A400M Instruct, Granite 3.0 3B A800M Base and Granite 3.0 1B A400M Base. Rounding out the update, IBM also has a new group of models optimized for guardrails and safety, including Granite Guardian 3.0 8B and Granite Guardian 3.0 2B. The new models are available on IBM’s watsonx service, as well as on Amazon Bedrock, Amazon SageMaker and Hugging Face.
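The MoE model names encode two different parameter counts: the total size of the model and the much smaller number of parameters activated per token (so "3B A800M" reads as 3 billion total, roughly 800 million active). As a minimal illustration of that naming convention — the helper function below is hypothetical, not IBM code — one could parse a model ID like this:

```python
import re

def parse_granite_moe_name(model_id: str) -> dict:
    """Parse a Granite 3.0 MoE model ID such as 'granite-3.0-3b-a800m-instruct'
    into total and active parameter counts (illustrative helper, not an IBM API)."""
    m = re.search(r"(\d+)b-a(\d+)m", model_id.lower())
    if not m:
        raise ValueError(f"unrecognized Granite MoE model ID: {model_id}")
    total_b, active_m = int(m.group(1)), int(m.group(2))
    return {
        "total_params": total_b * 1_000_000_000,  # all weights stored
        "active_params": active_m * 1_000_000,    # weights used per token
    }
```

The gap between the two numbers is the point of the MoE design: inference cost tracks the active parameters, while capacity tracks the total.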
“As I said on our last earnings call, the book of business that we’ve built on generative AI, which now includes technology and consulting together, is more than $2 billion,” Rob Thomas, IBM’s senior vice president and chief commercial officer, said during a briefing with press and analysts. “Looking back over my 25 years at IBM, I don’t think we’ve ever seen a business scale at this pace.”
How IBM is advancing enterprise AI with Granite 3.0
Granite 3.0 introduces a range of AI models tailored to enterprise applications.
IBM expects the new model to help support a wide range of enterprise use cases, including customer service, IT automation, business process outsourcing (BPO), application development, and cybersecurity.
The new Granite 3.0 models were trained by IBM’s centralized data model factory team, which is responsible for sourcing and curating the data used for training.
Dario Gil, IBM’s senior vice president and director of research, explained that the training process involved 12 trillion tokens of data, including both language and code data across multiple languages. He noted that the key differences from the previous generation are the quality of the data and the architectural innovations used in the training process.
Thomas added that it also matters where the training data comes from.
“Part of our advantage in model building is that we have IBM’s proprietary datasets,” Thomas said. “We have a unique advantage in the industry, where we are the first customer for everything we build, and we also have an advantage in terms of how we build our models.”
IBM claims high-performance benchmarks for Granite 3.0
According to Gil, the Granite models deliver strong results across a wide range of tasks, outperforming the latest comparable models from Google, Anthropic and others.
“What you’re looking at here is an incredibly high-performance model, it’s truly state-of-the-art, and we’re very proud of it,” Gil said.
But it’s not just the raw performance that sets Granite apart. IBM also focuses on safety and reliability and has developed an advanced “Guardian” model that can be used to prevent core models from being jailbroken or generating harmful content. Different model size options are also an important factor.
“We’re very focused on the lessons we’ve learned from scaling AI, namely that inference cost is essential,” Gil said. “That’s why we’ve focused so much on this size category of models, because it offers a combination of performance and inference cost that is very attractive for scaling use cases in the enterprise.”
Why true open source matters for enterprise AI
A key differentiator for Granite 3.0 is IBM’s decision to release the model under the Apache 2.0 open source license, approved by the Open Source Initiative (OSI).
Many other open models on the market, such as Meta’s Llama, are not actually available under OSI-approved licenses, and that is an important distinction for some enterprises.
“We decided to be very rigorous on this point and went with the Apache 2.0 license, which gives our enterprise partners maximum flexibility to do what they need to do with the technology,” Gil explained.
The permissive Apache 2.0 license allows IBM partners to build their own brand and intellectual property on top of the Granite model. This will help foster a robust ecosystem of solutions and applications powered by Granite technology.
“Having permissive licensing that enables contributions, enables communities and ultimately enables broad distribution completely changes the notion of how quickly businesses can adopt AI,” Thomas said.
Looking beyond generative AI to generative computing
Looking ahead, IBM is thinking about the next major paradigm shift, which Gil calls generative computing.
Essentially, generative computing refers to programming a computer by providing examples and prompts rather than explicitly writing step-by-step instructions. This aligns with how LLMs such as Granite generate text, code and other output based on the input they receive.
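A minimal sketch of the idea: rather than coding the task explicitly, you hand the model a handful of input/output examples and let it infer the rule. The function below only assembles a few-shot prompt string; the function name and prompt format are illustrative assumptions, not a Granite or IBM API.

```python
def build_few_shot_prompt(examples, query):
    """Compose a few-shot prompt: the 'program' is a list of (input, output)
    examples, and the model is expected to infer the task from them."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final output blank for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# The two examples implicitly 'program' an antonym task.
prompt = build_few_shot_prompt(
    [("cold", "hot"), ("up", "down")],
    "fast",
)
```

Sending such a prompt to an LLM replaces the explicit step-by-step program: the examples are the specification.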
“This paradigm of programming computers by example rather than by writing instructions is fundamental, and we are just beginning to get a feel for it through our interactions with LLMs,” Gil said. “With this paradigm of generative computing, you will see us invest and work very aggressively toward being able to deliver things like next-generation models and agentic frameworks. It’s a new way to program computers that comes out of the generative AI revolution.”