01-ai / Descartes
☆112 · Updated last year
Alternatives and similar repositories for Descartes
Users who are interested in Descartes are comparing it to the libraries listed below.
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval Augmented Generation). The system su…☆49 · Updated 6 months ago
- Easy, fast, and cheap pretraining, finetuning, and serving for everyone☆315 · Updated last month
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy…☆86 · Updated last year
- Akcio is a demonstration project for Retrieval Augmented Generation (RAG). It leverages the power of LLM to generate responses and uses v…☆259 · Updated last year
- ☆30 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39 · Updated last year
- ☆32 · Updated last year
- Mixture-of-Experts (MoE) Language Model☆189 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆247 · Updated last year
- xllamacpp - a Python wrapper of llama.cpp☆54 · Updated 2 weeks ago
- Qwen GRPO Graph Extraction RL Finetune☆55 · Updated 5 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆140 · Updated last year
- bisheng-unstructured library☆55 · Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆264 · Updated last month
- Efficient AI Inference & Serving☆477 · Updated last year
- GLM Series Edge Models☆149 · Updated 2 months ago
- It's an open-source LLM based on the MoE structure.☆58 · Updated last year
- vLLM Documentation in Simplified Chinese / vLLM 中文文档☆95 · Updated last week
- ☆106 · Updated last year
- A Toolkit for Running On-device Large Language Models (LLMs) in Apps☆80 · Updated last year
- [ACL 2025 demo track] ROGRAG: A Robustly Optimized GraphRAG Framework☆172 · Updated this week
- Delta-CoMe can achieve near-lossless 1-bit compression; it has been accepted by NeurIPS 2024.☆56 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆137 · Updated 9 months ago
- SUS-Chat: Instruction tuning done right☆49 · Updated last year
- AGI module library architecture diagram☆77 · Updated 2 years ago
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities.☆39 · Updated 8 months ago
- Puck is a high-performance ANN search engine☆363 · Updated 3 months ago
- Imitate OpenAI with Local Models☆89 · Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆266 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs☆132 · Updated last year