
Open vs. closed models: AI leaders from GM, Zoom and IBM weigh trade-offs for enterprise use


Choosing an AI model is as much a strategic decision as a technical one. Whether an enterprise opts for an open, closed or hybrid model, each path comes with trade-offs.

Speaking at VB Transform this year, model architecture experts from General Motors, Zoom and IBM discussed how businesses and their customers might think about choosing an AI model.

Barak Turovsky, who became GM’s first chief AI officer in March, said there is a lot of noise with every new model release and every leaderboard shuffle. Long before leaderboards were a mainstream topic, Turovsky helped launch one of the first large language models (LLMs), and he recalled how open-sourcing the model’s weights and training data led to major breakthroughs.

“It was, frankly, one of the biggest breakthroughs that helped OpenAI and others get started,” Turovsky said. “So it’s actually an interesting anecdote: open source actually helped create something closed.”

The factors behind the decision vary, including cost, performance, reliability and safety. According to Turovsky, companies sometimes prefer mixed strategies: an open model for internal use and a closed model for production and customer-facing work, or vice versa.

https://www.youtube.com/watch?v=zvn4yj1i2kg

>> See all Transform 2025 coverage here <<

IBM’s AI Strategy

Armand Ruiz, IBM’s vice president of AI platform, said IBM initially launched its platform with its own LLMs, but realized that wasn’t enough. The company has since expanded to offer integrations with platforms such as Hugging Face, allowing customers to choose open source models. (The company recently debuted a new model gateway that gives enterprises an API to switch between LLMs.)
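IBM hasn’t detailed the gateway’s interface here, but the general pattern is straightforward: a thin routing layer exposes a single completion API and dispatches each request to whichever registered backend (an in-house model, a Hugging Face-hosted open model, a closed vendor API) the caller selects. The Python below is a minimal sketch of that pattern; the backend names and the generate() signature are hypothetical, not IBM’s actual gateway.

```python
# Minimal sketch of a "model gateway": one API, many interchangeable LLM backends.
# Backend names and the generate() signature are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class LLMBackend:
    name: str
    generate: Callable[[str], str]  # prompt -> completion


class ModelGateway:
    """Routes each request to whichever registered backend the caller names."""

    def __init__(self) -> None:
        self._backends: Dict[str, LLMBackend] = {}

    def register(self, backend: LLMBackend) -> None:
        self._backends[backend.name] = backend

    def complete(self, model: str, prompt: str) -> str:
        if model not in self._backends:
            raise KeyError(f"Unknown model '{model}'")
        return self._backends[model].generate(prompt)


# Stub backends stand in for real clients: an open model served in-house,
# a hosted open model, and a closed vendor API.
gateway = ModelGateway()
gateway.register(LLMBackend("open-local", lambda p: f"[open-local] {p[:40]}..."))
gateway.register(LLMBackend("open-hosted", lambda p: f"[open-hosted] {p[:40]}..."))
gateway.register(LLMBackend("closed-vendor", lambda p: f"[closed-vendor] {p[:40]}..."))

# Switching LLMs is just a change of the model argument; application code stays the same.
print(gateway.complete("open-local", "Summarize this contract clause ..."))
print(gateway.complete("closed-vendor", "Summarize this contract clause ..."))
```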

More companies are also choosing to buy models from multiple vendors. In an Andreessen Horowitz survey of 100 CIOs, 37% of respondents said they were using five or more models, up from 29% a year earlier.

Choice is important, but too much choice creates confusion, Ruiz said. To help customers navigate the options, IBM doesn’t worry much about which LLM they use during the proof-of-concept or pilot phase; the main goal is feasibility. Only later does it consider whether to distill or customize a model based on the customer’s needs.

“First, we try to simplify all the analysis paralysis with all these options and focus on the use case,” Ruiz said. “Then we figure out the best path to production.”

How Zoom approaches AI

Zoom customers can choose between two configurations for its AI Companion, said Zoom CTO Xuedong Huang. One pairs the company’s own LLM with other, larger foundation models. The other, for customers worried about using too many models, relies on Zoom’s models alone. (The company recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion in enterprise workflows.)

The company created its own small language model (SLM) without using customer data, Huang said. At 2 billion parameters, the model is very small, yet it can outperform other industry-specific models on certain tasks. The SLM works best on complex tasks when it is paired with a larger model.

“This is really the power of a hybrid approach,” Huang said. “Our philosophy is very simple: it’s like Mickey Mouse and the elephant dancing together. The small model performs a very specific task. We’re not saying the small model is good enough on its own… Mickey Mouse and the elephant work together as one team.”
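Huang didn’t walk through Zoom’s routing logic on stage, but the “Mickey Mouse and the elephant” division of labor can be sketched as a simple dispatcher: narrow, well-scoped tasks go to the small specialized model, and anything outside its lane falls back to a larger foundation model. The Python below illustrates that pattern with assumed task labels and stub models; it is not Zoom’s implementation.

```python
# Illustrative hybrid routing: a small specialized model handles narrow tasks,
# a larger foundation model handles everything else. Task labels and model
# stubs are assumptions for the example, not Zoom's actual system.
from typing import Callable

# Tasks the (hypothetical) 2B-parameter SLM is tuned for.
SLM_TASKS = {"meeting_summary", "action_items", "chat_reply_draft"}


def small_model(prompt: str) -> str:
    return f"[SLM, 2B params] {prompt[:40]}..."


def large_model(prompt: str) -> str:
    return f"[large foundation model] {prompt[:40]}..."


def route(task: str, prompt: str) -> str:
    """Send well-scoped tasks to the small model; fall back to the large one."""
    model: Callable[[str], str] = small_model if task in SLM_TASKS else large_model
    return model(prompt)


print(route("meeting_summary", "Summarize today's standup ..."))     # handled by the SLM
print(route("open_ended_research", "Compare vendor contracts ..."))  # falls back to the large model
```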
