Freepik AI Image Model Launches with Licensed Training Data and Developer Access


Freepik has officially launched F Lite, a new Freepik AI image model developed in collaboration with AI startup Fal.ai. Announced Tuesday, F Lite is positioned as a transparent and customizable generative AI tool, trained solely on commercially licensed, safe-for-work images. The model enters a competitive landscape amid growing concern over copyright and data use in AI training.

Built with Ethics in Mind: Licensed Training at Scale

F Lite was trained using a dataset of approximately 80 million images, all internally sourced and commercially cleared. With 10 billion parameters, the model was built over a two-month period using 64 Nvidia H100 GPUs, according to Freepik.

The company offers two distinct versions of the model:

  • Standard – more accurate and faithful to prompts.
  • Texture – richer visuals, with a slightly higher margin of error.

Offering two variants gives developers the flexibility to choose between predictability and creativity, depending on their use case.

A Response to Copyright Controversies

F Lite enters the market at a time when generative AI companies like OpenAI and Midjourney are under legal scrutiny for training their models on publicly scraped — and often copyrighted — content. In contrast, Freepik aims to avoid such issues entirely by building its model from the ground up using only licensed media.

This makes Freepik one of a growing list of companies — including Adobe, Shutterstock, and Getty Images — that are prioritizing ethical training data in generative AI development.

Image Credits: Freepik

Developer-Friendly, But Demands High-End Hardware

Although Freepik is not positioning F Lite as a direct competitor to high-end tools like Midjourney V7 or Black Forest Labs’ Flux, it is openly available for developers to fine-tune, explore, and innovate on.

However, there’s a catch — the model requires a GPU with at least 24GB of VRAM to run effectively, which could be a limiting factor for casual users or smaller teams.
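For developers who want to experiment, the snippet below is a minimal sketch of what loading the model might look like. It assumes the weights are published on Hugging Face under a repo id such as Freepik/F-Lite and that the diffusers library can load them through its standard DiffusionPipeline interface with trust_remote_code enabled; both the repo id and the pipeline call are assumptions rather than Freepik's documented usage. The VRAM check reflects the 24GB requirement mentioned above.

    # Minimal sketch: verify available VRAM, then attempt to load the model.
    # The repo id "Freepik/F-Lite" and the DiffusionPipeline loading path are
    # assumptions based on how similar open-weight image models are distributed.
    import torch
    from diffusers import DiffusionPipeline

    MIN_VRAM_GB = 24  # Freepik's stated requirement for running the model effectively

    def has_enough_vram(device_index: int = 0) -> bool:
        """Return True if the selected CUDA device reports at least MIN_VRAM_GB of memory."""
        if not torch.cuda.is_available():
            return False
        total_bytes = torch.cuda.get_device_properties(device_index).total_memory
        return total_bytes / (1024 ** 3) >= MIN_VRAM_GB

    if has_enough_vram():
        # trust_remote_code lets diffusers load a custom pipeline class, if the repo ships one.
        pipe = DiffusionPipeline.from_pretrained(
            "Freepik/F-Lite",            # assumed repo id
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
        ).to("cuda")
        image = pipe(prompt="an isometric illustration of a mountain cabin at dusk").images[0]
        image.save("f_lite_sample.png")
    else:
        print(f"A GPU with at least {MIN_VRAM_GB} GB of VRAM is recommended for this model.")

In practice, teams without a 24GB-class GPU would need to rely on hosted inference rather than running the weights locally.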
