Bria Next-Gen Text to Image Model Beats Other Models With Only 4B Parameters, Built Entirely on Licensed Data for Enterprise Development
Bria, the responsible visual generative AI platform, announced the release of its third-generation model, now available as Bria’s first fully open-source text-to-image model. This release continues Bria’s commitment to training exclusively on fully licensed data.
Research shows that Bria’s new open-source model delivers performance comparable to leading open-source models with 66% fewer parameters, while training only on licensed data. The same study shows that Bria’s models are twice as easy to fine-tune, requiring roughly 50% less compute and data to achieve the same results.
To evaluate model performance effectively, Bria selected all market-available models that met at least one of the following criteria: (1) built as open source, or (2) built only using licensed data. Two leading open-source models and one licensed model met the study’s criteria. The test consisted of blind preference evaluations among thousands of design professionals, who selected which model’s output they preferred when the same prompts were used across all four models.
As part of the evaluation, participants were asked to assess each model’s aesthetic quality, prompt adherence, and text rendering accuracy, and then to rate the models’ fine-tuning effectiveness and computational performance. A statistical analysis was then performed to cluster the models across these criteria.
“While the industry races to build ever-larger models using scraped web data, we’ve proven that smaller, ethically trained models can deliver equivalent performance,” said Yair Adato, CEO of Bria. “Our new source-available model demonstrates that respecting creators’ rights and building efficient AI aren’t mutually exclusive; they’re complementary strategies that benefit everyone in the ecosystem.”
The third-generation model is available immediately through Hugging Face, including a complete development framework with ControlNets, IP-Adapters, and other auxiliary models, as well as through Bria’s platform-as-a-service offering. Bria’s platform optimizes for developer velocity with built-in content moderation, enterprise security standards, and support for multiple access methods, including MCP (Model Context Protocol) servers and plugins for Figma and Adobe Creative Suite.
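For developers evaluating the Hugging Face route, the snippet below is a minimal sketch of how a text-to-image model published there is typically loaded with the open-source diffusers library. The repository ID is a placeholder rather than a confirmed Bria model name, and the GPU and precision settings are illustrative assumptions.

```python
# Minimal sketch: loading a Hugging Face text-to-image model with diffusers.
# The repository ID below is a placeholder, not a confirmed Bria model name.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "briaai/<model-repo-id>",  # placeholder; substitute the actual repository
    torch_dtype=torch.float16,  # assumed half-precision for GPU inference
)
pipe.to("cuda")

# Generate an image from a text prompt and save it to disk.
image = pipe("a studio photo of a ceramic mug on a wooden desk").images[0]
image.save("output.png")
```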
Addressing the Creator Economy Challenge
Bria’s approach represents a fundamental shift in how AI companies engage with the creative community. By partnering directly with artists, photographers, and content creators, Bria ensures that those whose work powers AI innovation share in its economic benefits.
Unlike competitors that rely on internet-scraped data of unclear origin, Bria uses only licensed content, eliminating the copyright and trademark risks that have prevented many corporations from adopting AI technology. The company ensures fair compensation for all content creators contributing to its training data.
“Every image generated by Bria represents a vote for a sustainable creative ecosystem,” added Vered Horesh, CSO at Bria. “We’re proving that AI can amplify artistic work rather than exploit it, creating value for all intellectual property holders, including the entire premium content ecosystem that surrounds Hollywood.”