Enhance your workflow with extensions
Tools from the community and partners to simplify tasks and automate processes
OpenAI o3-mini
o3-mini includes the o1 features with significant cost efficiencies for scenarios requiring high performance.
Text-embedding-3 series
The text-embedding-3 series models are OpenAI's latest and most capable embedding models.
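Embedding models like the text-embedding-3 series map text to numeric vectors, and downstream applications typically compare those vectors with cosine similarity. A minimal sketch of that comparison step; the vectors below are invented toy values, not real text-embedding-3 output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real text-embedding-3 vectors have
# 1536 or 3072 dimensions; these values are made up for illustration).
query = [0.1, 0.3, 0.5, 0.1]
doc_a = [0.1, 0.29, 0.48, 0.12]  # similar direction -> high score
doc_b = [0.9, -0.2, 0.05, 0.0]   # different direction -> low score

print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```

In practice the vectors come from an embeddings API call; only the similarity arithmetic is shown here because it is the same regardless of which embedding model produced the vectors.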
OpenAI o1-preview
Focused on advanced reasoning and solving complex problems, including math and science tasks. Ideal for applications that require deep contextual understanding and agentic workflows.
OpenAI o1-mini
Smaller, faster, and 80% cheaper than o1-preview; performs well at code generation and small-context operations.
Command R
Command R is a scalable generative model targeting RAG and tool use to enable production-scale AI for enterprise.
Command R+
Command R+ is a state-of-the-art RAG-optimized model designed to tackle enterprise-grade workloads.
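A RAG (retrieval-augmented generation) workflow retrieves the most relevant documents for a query and places them in the prompt so the model answers from those sources. A minimal sketch under simplifying assumptions: retrieval here is naive word overlap rather than a real embedding search, and the prompt template is illustrative, not a specific Command R format:

```python
def retrieve(query, docs, k=2):
    """Rank docs by naive word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt from the retrieved sources."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "Command R supports a 128k token context window.",
    "The cafeteria closes at 3 pm on Fridays.",
    "Command R is optimized for retrieval-augmented generation.",
]
prompt = build_prompt("What is Command R optimized for?", docs)
print("retrieval-augmented" in prompt)  # True
```

The assembled prompt would then be sent to the model; production systems replace the word-overlap scorer with an embedding-based retriever.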
Llama 3.3 70B Instruct
Llama 3.3 70B Instruct offers enhanced reasoning, math, and instruction following with performance comparable to Llama 3.1 405B.
Llama 4 Maverick 17B 128E Instruct FP8
Llama 4 Maverick 17B 128E Instruct FP8 is great at precise image understanding and creative writing, offering high quality at a lower price than Llama 3.3 70B.
Codestral 25.01
Codestral 25.01 by Mistral AI is designed for code generation, supports 80+ programming languages, and is optimized for tasks like code completion and fill-in-the-middle.
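Fill-in-the-middle (FIM) gives the model the code before and after a gap and asks it to generate the missing middle, which is how editor completions work mid-file. A sketch of how such a prompt is arranged; the sentinel tokens below are hypothetical placeholders, not Codestral's actual special tokens:

```python
def fim_prompt(prefix, suffix, pre_tok="<PRE>", suf_tok="<SUF>", mid_tok="<MID>"):
    """Arrange prefix and suffix around sentinel tokens; the model generates
    the middle after mid_tok. Tokens here are placeholders -- each FIM-capable
    model defines its own sentinels."""
    return f"{pre_tok}{prefix}{suf_tok}{suffix}{mid_tok}"

# The cursor sits between "return " and the final print call.
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(2, 3))"
prompt = fim_prompt(prefix, suffix)
print(prompt.startswith("<PRE>def add"))  # True
```

A FIM-tuned model given this prompt would be expected to complete the gap (here, `a + b`) so that the prefix, generated middle, and suffix form valid code.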
Llama 4 Scout 17B 16E Instruct
Llama 4 Scout 17B 16E Instruct is great at multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases.
Ministral 3B
Ministral 3B is a state-of-the-art small language model (SLM) optimized for edge computing and on-device applications. Designed for low-latency, compute-efficient inference, it is also well suited to standard GenAI applications.
Mistral Small 3.1
Enhanced Mistral Small 3 with multimodal capabilities and a 128k context length.
Phi-4
Phi-4 is a highly capable 14B-parameter model for low-latency scenarios.
Phi-4-mini-instruct
A 3.8B-parameter small language model that outperforms larger models in reasoning, math, coding, and function calling.
Phi-4-mini-reasoning
A lightweight math-reasoning model optimized for multi-step problem solving.
OpenAI gpt-5-mini
gpt-5-mini is a lightweight version of gpt-5 for cost-sensitive applications.
OpenAI gpt-5-nano
gpt-5-nano is optimized for speed, ideal for applications requiring low latency.
OpenAI o4-mini
o4-mini includes significant improvements in quality and safety while supporting the existing features of o3-mini and delivering comparable or better performance.
OpenAI o3
o3 includes significant improvements in quality and safety while supporting the existing features of o1 and delivering comparable or better performance.