The models/mistral.yaml file provides comprehensive configuration for all Mistral AI models available through their API. This configuration follows the same structure as the Gemini models configuration, ensuring consistency across the MindX system.
The following models are configured:

- mistral-large-latest - Latest large model with excellent reasoning and writing capabilities
- mistral-large-2402 - Specific version from February 2024
- mistral-8x22b-instruct - Mixture-of-experts model with 22B parameters per expert
- mistral-small-latest - Latest small model optimized for speed and efficiency
- mistral-small-2402 - Specific version from February 2024
- mistral-nemo-latest - Ultra-fast model for high-throughput applications
- mistral-nemo-12b-latest - Larger Nemo variant with 12B parameters
- codestral-latest - Latest code generation model
- codestral-22b-latest - Larger code model with 22B parameters
- codestral-2405 - Specific version from May 2024
- mistral-7b-instruct - Original 7B instruction-tuned model
- mistral-7b-instruct-v0.3 - Version 0.3 of the 7B model
- mistral-8x7b-instruct - Mixture-of-experts model with 7B parameters per expert
- mistral-8x7b-instruct-v0.1 - Version 0.1 of the 8x7B model
- mistral-embed - Original embedding model
- mistral-embed-v2 - Improved embedding model with better performance

Each model entry includes the following parameters:
Capability scores:

- reasoning - Logical reasoning and problem-solving ability
- code_generation - Code writing and programming capability
- writing - Creative and technical writing quality
- simple_chat - Conversational and chat capabilities
- data_analysis - Data processing and analysis skills
- speed_sensitive - Performance in time-critical applications

Cost and operational parameters:

- cost_per_kilo_input_tokens - Cost for input tokens
- cost_per_kilo_output_tokens - Cost for output tokens
- max_context_length - Maximum context window size
- supports_streaming - Whether the model supports streaming responses
- supports_function_calling - Whether the model supports function calling
- api_name - The actual API identifier for the model

Capability tags:

- text - General text processing
- reasoning - Advanced reasoning capabilities
- code_generation - Code writing and programming
- multilingual - Multi-language support
- fill_in_middle - Fill-in-the-middle code completion
- embedding - Text embedding generation

Model selection recommendations:

- General use: mistral-large-latest - Excellent balance of capabilities; mistral-small-latest - Good performance with lower cost; mistral-nemo-latest - Maximum speed for high-throughput needs
- Code tasks: codestral-latest - Specialized for programming tasks; codestral-22b-latest - More powerful for complex code. The codestral- models support FIM for code completion.
- Embeddings: mistral-embed-v2 - Improved performance and quality; mistral-embed - Original embedding model
- Cost efficiency: mistral-nemo-latest - Lowest cost per token; mistral-small-latest - Good performance-to-cost ratio; mistral-large-latest - Best quality for complex tasks

Top models by capability score:

- Reasoning: mistral-large-latest (0.92), mistral-8x22b-instruct (0.90)
- Code generation: codestral-22b-latest (0.96), codestral-latest (0.95), mistral-large-latest (0.94), mistral-8x22b-instruct (0.93)
- Writing: mistral-large-latest (0.96), mistral-small-latest (0.94)
- Data analysis: mistral-8x22b-instruct (0.91), mistral-large-latest (0.89)

Default models can be configured via environment variables:

```bash
# In .env file
MINDX_LLM__MISTRAL__DEFAULT_MODEL="mistral-large-latest"
MINDX_LLM__MISTRAL__DEFAULT_MODEL_FOR_CODING="codestral-latest"
MINDX_LLM__MISTRAL__DEFAULT_MODEL_FOR_REASONING="mistral-large-latest"
```
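Putting the parameters above together, a single entry in models/mistral.yaml might look like the following sketch. The field names mirror those documented above, but the grouping keys (`capabilities`, `assessed_capabilities`) and all numeric values are illustrative placeholders, not the shipped configuration:

```yaml
mistral-large-latest:
  api_name: mistral-large-latest          # identifier sent to the Mistral API
  capabilities: [text, reasoning, code_generation, multilingual]
  assessed_capabilities:                  # illustrative scores, not benchmarks
    reasoning: 0.92
    writing: 0.96
  cost_per_kilo_input_tokens: 0.004       # placeholder pricing
  cost_per_kilo_output_tokens: 0.012
  max_context_length: 128000
  supports_streaming: true
  supports_function_calling: true
```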
```python
from llm.llm_factory import LLMFactory

# Get a Mistral model
model = await LLMFactory.create_llm_handler(
    provider="mistral",
    model="mistral-large-latest"
)

# Use Codestral for code generation
code_model = await LLMFactory.create_llm_handler(
    provider="mistral",
    model="codestral-latest"
)

# Use Nemo for high-throughput tasks
fast_model = await LLMFactory.create_llm_handler(
    provider="mistral",
    model="mistral-nemo-latest"
)
```
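Beyond picking a model by name, the per-model scores and costs in mistral.yaml allow programmatic selection, e.g. "cheapest model that meets a capability threshold". The sketch below is a hypothetical illustration of that idea: the field names match the configuration schema documented above, but the scores and prices in `MODELS` are made-up sample values, and `pick_model` is not part of the MindX API.

```python
# Sample data shaped like parsed mistral.yaml entries; scores and costs
# are illustrative placeholders, not official benchmarks or pricing.
MODELS = {
    "mistral-large-latest": {"reasoning": 0.92, "cost_per_kilo_input_tokens": 0.004},
    "mistral-small-latest": {"reasoning": 0.80, "cost_per_kilo_input_tokens": 0.001},
    "mistral-nemo-latest": {"reasoning": 0.70, "cost_per_kilo_input_tokens": 0.0003},
}

def pick_model(models, capability, min_score):
    """Return the cheapest model whose capability score meets the threshold."""
    eligible = {
        name: cfg for name, cfg in models.items()
        if cfg.get(capability, 0.0) >= min_score
    }
    if not eligible:
        raise ValueError(f"no model meets {capability} >= {min_score}")
    return min(eligible, key=lambda n: eligible[n]["cost_per_kilo_input_tokens"])

print(pick_model(MODELS, "reasoning", 0.75))  # mistral-small-latest
```

With a lower threshold such as 0.5, the same call would return mistral-nemo-latest, matching the "lowest cost per token" recommendation above.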
The mistral.yaml file will be updated as new Mistral models are released. As a rule of thumb, try mistral-small-latest first for general tasks, and reach for the codestral- models for code tasks. This configuration ensures optimal model selection and performance within the MindX system while maintaining cost efficiency and quality standards.
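The "try mistral-small-latest first" guidance can be sketched as a simple fallback chain. This is an assumption about how a caller might wire it up, not MindX code: `create_handler` is a stand-in for whatever factory call the application uses (such as LLMFactory.create_llm_handler), and the order in `FALLBACK_ORDER` is illustrative.

```python
# Hypothetical fallback order; the real choice depends on the task at hand.
FALLBACK_ORDER = ["mistral-small-latest", "mistral-large-latest", "mistral-nemo-latest"]

def create_with_fallback(create_handler, order=FALLBACK_ORDER):
    """Return the first handler that initializes; raise if every model fails."""
    failures = {}
    for name in order:
        try:
            return create_handler(name)
        except Exception as exc:  # e.g. model unavailable or quota exhausted
            failures[name] = exc
    raise RuntimeError(f"all fallback models failed: {failures}")
```

For example, if the stand-in factory rejects mistral-small-latest, the helper transparently falls through to mistral-large-latest.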