huangyuxiang03 / Locret
☆12 · Updated 6 months ago
Alternatives and similar repositories for Locret:
Users interested in Locret are comparing it to the repositories listed below.
- ☆18 · Updated 4 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆47 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆55 · Updated 9 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆42 · Updated 5 months ago
- ☆74 · Updated this week
- Official PyTorch implementation of "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact" ☆43 · Updated 11 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆81 · Updated 4 months ago
- Official code for GliDe with a CaPE ☆18 · Updated 8 months ago
- Repository of LV-Eval Benchmark ☆63 · Updated 7 months ago
- ☆39 · Updated 5 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆36 · Updated last year
- OpenBA-V2: a 3B LLM (Large Language Model) with a T5 architecture, utilizing model pruning techniques and continued pretraining from OpenBA-1… ☆25 · Updated 11 months ago
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆72 · Updated 3 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆26 · Updated 8 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆82 · Updated 5 months ago
- ☆76 · Updated last week
- More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression ☆11 · Updated 3 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆45 · Updated 6 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- FR-Spec: Frequency-Ranked Speculative Sampling ☆17 · Updated last month
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆129 · Updated 2 months ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆41 · Updated last year
- A regularly updated paper list for LLM reasoning in latent space ☆72 · Updated this week
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆18 · Updated 8 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 10 months ago
- LongMIT: Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets ☆36 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆51 · Updated 10 months ago