Pruna AI, a European startup working on compression algorithms for AI models, is making its optimization framework open source on Thursday.
Pruna AI has been building a framework that applies several efficiency methods, such as caching, pruning, quantization, and distillation, to a given AI model.
“We standardize saving and loading the compressed models, apply combinations of these compression methods, and evaluate the compressed model after compression,” Pruna AI co-founder and CTO John Rachwan told TechCrunch.
In particular, Pruna AI’s framework can evaluate whether there is significant quality loss after compressing a model, as well as the performance gains you get in return.
“If I were to use a metaphor, we are similar to how Hugging Face standardized transformers and diffusers: how to call them, how to save them, how to load them, and so on. We are doing the same, but for efficiency methods.”
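As a rough illustration of what that standardization looks like in practice, here is a minimal sketch of combining two efficiency methods on a Hugging Face diffusion pipeline through the open-source package. The `smash()` and `SmashConfig` names follow Pruna’s public repository, but the specific option keys and algorithm names used below are assumptions and may differ from the actual API.

```python
# Minimal sketch of combining compression methods via Pruna's open-source
# framework. The smash()/SmashConfig interface follows the public repo;
# the exact option names ("cacher", "deepcache", "hqq") are assumptions.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load a standard Hugging Face diffusion pipeline as the base model.
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"
)

# Select a combination of compression methods in a single config object.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"   # cache intermediate diffusion steps
smash_config["quantizer"] = "hqq"      # quantize weights to lower precision

# Apply all selected methods at once; the result is a drop-in replacement
# that can be saved, loaded, and called like the original pipeline.
smashed_pipeline = smash(model=pipeline, smash_config=smash_config)
```

The point of the design, per Rachwan’s metaphor, is that the compressed model is saved, loaded, and evaluated the same way no matter which combination of methods produced it.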
Big AI labs have already been using various compression methods. OpenAI, for example, relies on distillation to create faster versions of its flagship models. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. Similarly, the Flux.1-schnell image generation model is a distilled version of the Flux.1 model from Black Forest Labs.
Distillation is a technique used to extract knowledge from a large AI model with a “teacher-student” setup. Developers send requests to the teacher model and record the outputs. The answers are sometimes compared against a dataset to check how accurate they are. These outputs are then used to train the student model, which learns to approximate the teacher’s behavior.
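For readers who want to see the mechanics, here is a minimal, self-contained PyTorch sketch of that teacher-student loop. The toy linear models, the random inputs standing in for recorded requests, and the temperature value are illustrative assumptions, not Pruna’s or OpenAI’s actual recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" and a smaller "student" (illustrative only).
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

teacher.eval()
for _ in range(100):                      # training loop over random batches
    inputs = torch.randn(32, 64)          # "send requests to the teacher"
    with torch.no_grad():
        teacher_logits = teacher(inputs)  # "...and record the outputs"
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()                       # the student learns to mimic the teacher
    optimizer.step()
```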
“For large companies, what they usually do is build something like this in-house. And what you can find in the open-source world is usually based on a single method: for example, one quantization method for LLMs, or one caching method for diffusion models,” says Rachwan. “But you cannot find a tool that aggregates them all, makes them all easy to use, and combines them together. This is the big value that Pruna brings right now.”
Pruna AI supports all kinds of models, from large language models to diffusion models, speech-to-text models, and computer vision models, but the company is currently focusing on image and video generation models.
Existing users of Pruna AI include Scenario and PhotoRoom. In addition to the open-source edition, Pruna AI offers an enterprise product with advanced optimization features, including an optimization agent.
“The most exciting feature we’re releasing soon is a compression agent,” Rachwan said. “Essentially, you give it your model and say: ‘I need more speed, but don’t drop my accuracy by more than 2%.’ And the agent finds the best combination for you.”
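To make that constraint concrete, here is a hypothetical sketch of what such an agent-style search might do under the hood: try candidate configurations and keep the fastest one that stays inside the accuracy budget. The function names and the simple exhaustive search are illustrative assumptions; the article does not describe how Pruna’s agent is actually implemented.

```python
import time

def measure_latency(infer, n_runs=10):
    """Average wall-clock time of one inference call (illustrative)."""
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) / n_runs

def find_best_config(model, candidate_configs, compress, evaluate, infer_fn,
                     max_acc_drop=0.02):
    """Hypothetical agent loop: return the fastest compressed model whose
    accuracy drops by no more than max_acc_drop (e.g. 2%) from the baseline."""
    baseline_acc = evaluate(model)
    best_model = model
    best_latency = measure_latency(lambda: infer_fn(model))

    for config in candidate_configs:      # e.g. combos of quantization/caching
        candidate = compress(model, config)
        if evaluate(candidate) < baseline_acc - max_acc_drop:
            continue                      # violates the accuracy constraint
        latency = measure_latency(lambda: infer_fn(candidate))
        if latency < best_latency:
            best_model, best_latency = candidate, latency
    return best_model
```

In practice a real agent would presumably search the configuration space more cleverly than this brute-force loop, but the constraint logic (speed subject to an accuracy floor) matches what the quote describes.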
Pruna AI charges by the hour for its pro version. “It’s similar to how you would think of a GPU when you rent one on AWS or any cloud service,” says Rachwan.
And if your model is a critical part of your AI infrastructure, an optimized model can save a lot of money on inference. For example, Pruna AI has used its compression framework to make a Llama model eight times smaller without much loss. Pruna AI hopes customers will think of its compression framework as an investment that pays for itself.
Pruna AI raised a $6.5 million seed round a few months ago. Investors in the startup include EQT Ventures, Daphni, Motier Ventures, and Kima Ventures.