FreedomIntelligence / ApolloMoE
[ICLR'25] ApolloMoE: Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts
☆40 · Updated 5 months ago
Alternatives and similar repositories for ApolloMoE:
Users interested in ApolloMoE are comparing it to the libraries listed below.
- FuseAI Project ☆85 · Updated 3 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 5 months ago
- ☆38 · Updated 4 months ago
- This repository contains the code for the paper: SirLLM: Streaming Infinite Retentive LLM ☆57 · Updated 11 months ago
- Multilingual Medicine: Model, Dataset, Benchmark, Code ☆185 · Updated 6 months ago
- Source code of the paper: RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering [F… ☆62 · Updated 11 months ago
- PGRAG ☆48 · Updated 9 months ago
- ☆24 · Updated 7 months ago
- ☆45 · Updated 7 months ago
- ☆61 · Updated 9 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation ☆72 · Updated last month
- ☆51 · Updated 9 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆65 · Updated last week
- [Preprint] An inference-time decoding strategy with adaptive foresight sampling ☆90 · Updated 2 weeks ago
- Code and data for CoachLM, an automatic instruction revision approach for LLM instruction tuning ☆61 · Updated last year
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆77 · Updated last year
- ☆63 · Updated last month
- The official repo for the code and data of the paper SMART ☆25 · Updated 2 months ago
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆76 · Updated 6 months ago
- Reformatted Alignment ☆115 · Updated 7 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated last month
- ☆84 · Updated 6 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 7 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 3 months ago
- Data preparation code for Amber 7B LLM ☆89 · Updated 11 months ago
- Implementation of "SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models" ☆27 · Updated 2 months ago
- ☆49 · Updated last year
- ☆44 · Updated 11 months ago