Longcat 128K is Meituan's ultra-long context model, purpose-built for tasks that require processing massive amounts of text in a single request. With a 128K token context window, it can handle entire books, large codebases, and extensive document collections without losing coherence.
Longcat excels at tasks where understanding the full scope of a large document is essential — contract analysis, research synthesis, and codebase understanding.
Key Features
128K token context for massive document processing
Strong long-range coherence and recall
Excellent at cross-document synthesis
Good bilingual support (Chinese + English)
Optimized for document-heavy workflows
Ideal Use Cases
Full-book analysis and summarization
Large codebase understanding and documentation
Cross-document research synthesis
Contract and legal document review
Technical Specifications
| Specification | Value |
| --- | --- |
| Context Window | 128K tokens |
| Modality | Text → Text |
| Provider | Meituan |
| Category | Text Generation |
| Max Output | 16K tokens |
| Best For | Ultra-long document processing |
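With a 128K-token window and 16K-token max output, roughly 112K tokens remain for input. A minimal sketch of budgeting input text, assuming the common ~4-characters-per-token heuristic (actual counts depend on the model's tokenizer, which is not specified here):

```python
# Rough input-budget check for a 128K-context model.
# CHARS_PER_TOKEN is a heuristic assumption, not an exact tokenizer figure.
CONTEXT_WINDOW = 128_000
MAX_OUTPUT = 16_000
CHARS_PER_TOKEN = 4  # heuristic only

def fits_in_context(text: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """Return True if the text likely fits in the input budget."""
    input_budget = CONTEXT_WINDOW - reserved_output  # ~112K tokens
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= input_budget

# A ~600K-character book is ~150K estimated tokens: too large in one request.
print(fits_in_context("x" * 600_000))  # False
print(fits_in_context("x" * 400_000))  # True (~100K estimated tokens)
```

For texts over the budget, chunking or summarize-then-synthesize pipelines are the usual workaround.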
API Usage
```shell
curl -X POST https://api.vincony.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meituan/longcat-128k",
    "messages": [
      { "role": "user", "content": "Hello, Longcat 128K!" }
    ]
  }'
```
Replace YOUR_API_KEY with your Vincony API key. The endpoint is OpenAI-compatible, so it works with any OpenAI SDK.
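Because the endpoint is OpenAI-compatible, the same request can be built from Python. A minimal stdlib sketch, reusing the base URL and model ID from the curl example (YOUR_API_KEY remains a placeholder):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- substitute your Vincony API key

# Same payload shape as the curl example above.
payload = {
    "model": "meituan/longcat-128k",
    "messages": [
        {"role": "user", "content": "Hello, Longcat 128K!"},
    ],
}

request = urllib.request.Request(
    "https://api.vincony.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To send it (requires a valid key):
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Any OpenAI SDK can be pointed at the same base URL instead; only the API key and model name change.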
Try Longcat 128K now
Start using Longcat 128K instantly — 100 free credits, no credit card required. Access 343+ AI models through one platform.