Jellyfish042 / uncheatable_eval
Evaluating LLMs with Dynamic Data
☆72 · Updated last week
Related projects
Alternatives and complementary repositories for uncheatable_eval
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆74 · Updated 10 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆199 · Updated 6 months ago
- Official repository for Inheritune. ☆105 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (Official Code) ☆135 · Updated last month
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆52 · Updated last week
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆89 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated 2 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- Expert Specialized Fine-Tuning ☆145 · Updated last month
- Layer-Condensed KV cache with a 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆139 · Updated this week
- ☆49 · Updated 6 months ago
- A pipeline for LLM knowledge distillation ☆78 · Updated 3 months ago
- Experiments on speculative sampling with Llama models ☆118 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆76 · Updated last month
- LongRoPE: a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆103 · Updated 2 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆112 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆133 · Updated 3 months ago
- ☆103 · Updated last month
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆95 · Updated 4 months ago
- ☆40 · Updated this week
- Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆79 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆126 · Updated 5 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆135 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆127 · Updated 2 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆61 · Updated 11 months ago
- ☆84 · Updated last week
- Low-bit optimizers for PyTorch ☆119 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆97 · Updated last year
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆176 · Updated this week
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆128 · Updated last month