pangu-tech / pangu-ultra
☆64 · Updated last month
Alternatives and similar repositories for pangu-ultra
Users interested in pangu-ultra are comparing it to the repositories listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆110 · Updated 2 months ago
- ☆77 · Updated 3 months ago
- ☆75 · Updated last week
- Efficient Agent Training for Computer Use ☆114 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆179 · Updated 3 weeks ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆63 · Updated 3 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆167 · Updated last month
- ☆285 · Updated last month
- A High-Efficiency System of Large Language Model Based Search Agents ☆66 · Updated 2 weeks ago
- Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. ☆87 · Updated 3 weeks ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆33 · Updated last year
- ☆63 · Updated 3 weeks ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ☆412 · Updated 2 months ago
- ☆72 · Updated last month
- ☆82 · Updated 6 months ago
- FuseAI Project ☆87 · Updated 5 months ago
- Repo of the ACL 2025 main-conference paper "Quantification of Large Language Model Distillation" ☆88 · Updated last month
- ☆90 · Updated 2 months ago
- ☆94 · Updated 7 months ago
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆74 · Updated 9 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆112 · Updated 8 months ago
- ☆59 · Updated last month
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆152 · Updated last week
- ☆48 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models. ☆216 · Updated 3 weeks ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆96 · Updated last month
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆38 · Updated 4 months ago
- [ACL 2025] An official PyTorch implementation of the paper "Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement" ☆31 · Updated last month
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆88 · Updated this week