Mistral FT refers to fine-tuned variants of Mistral AI base language models, exposed via the Mistral API for domain- or task-specific use. These models inherit architecture, context window, and most capabilities from their underlying base models (e.g., Mistral Small, Mistral Medium, Mistral Large) while adapting behavior to customer data. There is no single canonical "Mistral FT" checkpoint with unified public benchmarks; performance depends on the chosen base model and fine-tuning setup.
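As a minimal sketch of what "exposed via the Mistral API" means in practice: a request to a fine-tuned model is shaped exactly like a request to a base model, differing only in the `model` field, which carries an account-specific fine-tuned model ID. The ID and endpoint below are illustrative placeholders (fine-tuned IDs are generated per job), not a real deployment:

```python
import json

# Mistral's chat completions endpoint; requests to fine-tuned models go to
# the same URL as requests to base models.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_ft_request(model_id: str, user_prompt: str, temperature: float = 0.2) -> dict:
    """Build the JSON body for a chat completion against a fine-tuned model.

    Fine-tuned model IDs are account-specific (the "ft:..." value below is a
    placeholder, not a real deployment); everything else is identical to a
    base-model request.
    """
    return {
        "model": model_id,
        "temperature": temperature,
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_ft_request("ft:open-mistral-7b:my-org:demo:xxxxxxxx",
                           "Summarize this support ticket in one sentence.")
# To actually send it (requires an API key):
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {api_key}"})
print(json.dumps(payload, indent=2))
```

Because only the `model` field changes, existing client code written for a base model can usually be pointed at a fine-tuned variant without other modifications.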
An open-weight, instruction-tuned base model from Mistral, with public benchmarks and mature community tooling.
A sparse mixture-of-experts (MoE) model from Mistral with open weights and strong performance per dollar.
A fine-tuned variant of OpenAI's flagship GPT-4o model, backed by strong tooling and a large ecosystem.