DeepSeek R1 Distill (Groq)
DeepSeek R1's reasoning capabilities distilled into a Llama 70B architecture, running on Groq's ultra-fast LPU hardware. This model combines DeepSeek's chain-of-thought reasoning with Groq's speed advantage.
Key Features
Chain-of-thought reasoning distilled from DeepSeek R1
Ultra-fast inference on Groq hardware
Strong analytical and math capabilities
70B parameter base for broad knowledge
Ideal Use Cases
Fast reasoning tasks requiring quick analytical responses
Math and logic problems with speed constraints
Real-time tutoring and educational applications
Technical Specifications
| Specification | Details |
| --- | --- |
| Context Window | 128K tokens |
| Modality | Text → Text |
| Provider | Groq |
| Category | Reasoning |
| Reasoning Style | Chain-of-thought (distilled) |
| Latency | Ultra-low (Groq LPU) |
API Usage
```shell
curl -X POST https://api.vincony.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "groq/deepseek-r1-distill-llama-70b",
    "messages": [
      { "role": "user", "content": "Hello, DeepSeek R1 Distill (Groq)!" }
    ]
  }'
```
Replace YOUR_API_KEY with your Vincony API key. The endpoint is OpenAI-compatible, so it works with any OpenAI SDK.
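The same call can be sketched in Python using only the standard library. This sketch builds the request shown in the curl example without sending it (actually sending requires a valid key); the endpoint URL and model id are taken from the example above, and `build_request` is an illustrative helper, not part of any SDK.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your Vincony API key


def build_request(prompt: str) -> urllib.request.Request:
    """Build the same POST request as the curl example above."""
    payload = {
        "model": "groq/deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.vincony.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("Hello, DeepSeek R1 Distill (Groq)!")
# To actually send it: urllib.request.urlopen(req) -- needs a valid key.
```

Because the endpoint follows the OpenAI chat-completions shape, you can also point an official OpenAI SDK at `https://api.vincony.com/v1` via its `base_url` option instead of hand-building requests.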
Try DeepSeek R1 Distill (Groq) now
Start using DeepSeek R1 Distill (Groq) instantly — 100 free credits, no credit card required. Access 343+ AI models through one platform.