Jellyfish042 / uncheatable_eval
Evaluating LLMs with Dynamic Data
☆91 Updated last month
Alternatives and similar repositories for uncheatable_eval
Users interested in uncheatable_eval are comparing it to the libraries listed below.
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆206 Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 Updated 11 months ago
- Fast modular code to create and train cutting-edge LLMs ☆68 Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆80 Updated last year
- ☆144 Updated 3 weeks ago
- The official repository for Inheritune. ☆113 Updated 7 months ago
- Experiments on speculative sampling with Llama models ☆128 Updated 2 years ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆155 Updated 5 months ago
- Longitudinal Evaluation of LLMs via Data Compression ☆32 Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆170 Updated last year
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆207 Updated this week
- Low-bit optimizers for PyTorch ☆131 Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 Updated last year
- Reformatted Alignment ☆113 Updated 11 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆96 Updated 2 years ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆138 Updated 2 years ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆149 Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 Updated last week
- RWKV-7: Surpassing GPT ☆95 Updated 9 months ago
- RWKV, in easy-to-read code ☆71 Updated 5 months ago
- Code repository for the c-BTM paper ☆107 Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 Updated 3 months ago
- ☆39 Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention… ☆102 Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆107 Updated 6 months ago
- A toolkit for scaling law research ⚖ ☆51 Updated 7 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆149 Updated 6 months ago