Mistral 7B and Mixtral 8x22B are open-weight large language models from Mistral AI, designed for efficient, high-quality text generation and reasoning. Mistral 7B is a dense 7B-parameter model, while Mixtral 8x22B is a sparse Mixture-of-Experts (MoE) model: each layer routes every token to 2 of its 8 experts, so only about 39B of its roughly 141B total parameters are active per token, giving strong performance at a much lower inference cost than a dense model of the same size. Both models target general-purpose use cases such as chat, coding, and knowledge-intensive tasks.
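
To make the routing idea concrete, here is a minimal PyTorch sketch of a top-k MoE layer in the style Mixtral describes: a linear router scores all experts per token, only the top-k experts run, and their outputs are combined with renormalized gate weights. This is illustrative only; the class name, dimensions, and feed-forward expert design are hypothetical simplifications (real Mixtral experts use SwiGLU feed-forward blocks and training adds load-balancing objectives).

```python
import torch
import torch.nn.functional as F
from torch import nn


class TopKMoELayer(nn.Module):
    """Toy sparse Mixture-of-Experts layer with top-k routing.

    Illustrative sketch only: names and sizes are made up for clarity,
    not taken from the Mixtral implementation.
    """

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each expert for each token.
        self.router = nn.Linear(dim, num_experts, bias=False)
        # Experts: independent feed-forward nets; only top_k run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Keep the top_k expert scores per token.
        logits = self.router(x)                       # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Run each expert only on the tokens routed to it.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = TopKMoELayer(dim=64)
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)  # torch.Size([10, 64])
```

The key design point the sketch captures is that compute scales with `top_k`, not `num_experts`: adding experts grows the parameter count (and model capacity) while the per-token FLOPs stay roughly those of two feed-forward passes, which is why Mixtral 8x22B can hold ~141B parameters yet cost about as much to run as a ~39B dense model.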