DeepSeek
Develops cost-effective large language models like R1, optimized for performance and efficiency.
Total Models: 3
DeepSeek-V3-0324
DeepSeek-V3-0324 improves notably on its predecessor, DeepSeek-V3, in several key areas: reasoning, function calling, and code generation.
chat-completion
DeepSeek-V3
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
chat-completion
DeepSeek-R1
DeepSeek-R1 is trained with a step-by-step reasoning process and excels at reasoning tasks such as language, scientific reasoning, and coding.
chat-completion
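All three models expose a chat-completion interface. As a minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint at https://api.deepseek.com and the `openai` Python SDK, a request might look like the following; the model ID and base URL are assumptions and vary by hosting platform:

```python
from openai import OpenAI

# Assumed endpoint and key; substitute the values for your hosting platform.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.deepseek.com",
)

# Assumed model ID; some platforms expose the catalog names directly,
# e.g. "DeepSeek-V3-0324" or "DeepSeek-R1" -- check your platform's model list.
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Mixture-of-Experts in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

The same request shape works for all three models; only the model identifier changes.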