RUCAIBox / QuantizedEmpirical
☆15Updated 2 years ago
Alternatives and similar repositories for QuantizedEmpirical
Users interested in QuantizedEmpirical are comparing it to the libraries listed below.
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection☆52Updated last year
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec…☆29Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models”☆125Updated 10 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models☆78Updated last year
- ☆121Updated last year
- ☆13Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings☆167Updated last year
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024.☆21Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”.☆37Updated 11 months ago
- ☆23Updated 7 months ago
- ☆20Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train…☆58Updated last year
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)"☆48Updated 5 months ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales☆32Updated 2 years ago
- ☆108Updated 4 months ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same…☆60Updated 3 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper☆33Updated last year
- ☆29Updated 6 months ago
- ☆26Updated last year
- ☆18Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning☆36Updated last year
- ☆19Updated 10 months ago
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models"☆66Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models☆68Updated last year
- ☆15Updated last year
- ☆50Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes"☆28Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023)☆40Updated last year
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training improves …☆36Updated 6 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal…☆56Updated 2 years ago