Mistral 7B is a dense 7.3B-parameter open-weight language model from Mistral AI, and Mixtral 8x7B is its sparse Mixture-of-Experts sibling: each layer contains eight expert feed-forward blocks, and a router activates two of them per token, giving roughly 46.7B total parameters but only about 12.9B active per token. Both target strong reasoning, coding, and general-purpose text generation while remaining efficient to serve relative to dense models of comparable quality, and both are widely used as high-performance open alternatives to larger proprietary LLMs.
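To make the sparse-MoE idea concrete, below is a minimal sketch of top-2 expert routing, the mechanism that lets a Mixtral-style layer hold many experts while activating only two per token. The class name, dimensions, and expert structure here are illustrative placeholders, not Mixtral's actual implementation or configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Illustrative sparse MoE layer (not Mixtral's real code):
    a learned router scores all experts per token, keeps the top 2,
    and mixes their outputs. Most parameters stay inactive per token."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):             # weighted sum of the 2 picks
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e           # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = Top2MoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```

With eight experts and top-2 routing, only a quarter of the expert parameters participate in any given token's forward pass, which is the source of the total-vs-active parameter gap described above.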