tval2 / contextual-pruning
Library to facilitate pruning of LLMs based on context
☆32 · Updated last year
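The repository's one-line description is terse; the general idea behind contextual pruning is to score model units by how active they are on domain-specific calibration data and drop the least active ones. Below is a minimal, hedged sketch of that idea for a single linear layer — the function name, signature, and NumPy-only setup are illustrative assumptions, not the library's actual API:

```python
import numpy as np

def contextual_prune(weight, calib_inputs, keep_ratio=0.5):
    """Prune hidden units of a linear layer by their average activation
    magnitude over domain-specific calibration inputs.

    weight: (hidden, in_dim) matrix of a linear layer
    calib_inputs: (n_samples, in_dim) calibration batch from the target domain
    keep_ratio: fraction of hidden units to keep

    NOTE: hypothetical sketch; the contextual-pruning repo's real interface
    and scoring rule may differ.
    """
    # Score each hidden unit by its mean absolute activation on the calibration set.
    activations = calib_inputs @ weight.T        # (n_samples, hidden)
    scores = np.abs(activations).mean(axis=0)    # (hidden,)
    # Keep the most active units for this context; drop the rest.
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], keep

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))        # toy layer: 8 hidden units, 4 inputs
X = rng.normal(size=(32, 4))       # toy calibration batch
W_pruned, kept = contextual_prune(W, X, keep_ratio=0.25)
print(W_pruned.shape)              # (2, 4): only the 2 most active units remain
```

In a real setting the calibration batch would be drawn from the target domain (e.g. medical or legal text), so the surviving units are the ones that matter for that context.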
Alternatives and similar repositories for contextual-pruning
Users interested in contextual-pruning are comparing it to the libraries listed below.
- ☆49 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 6 months ago
- ☆23 · Updated last year
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- ☆47 · Updated 9 months ago
- ☆79 · Updated 9 months ago
- This is the official repository for Inheritune. ☆111 · Updated 3 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Repo hosting code and materials related to speeding up LLMs' inference using token merging. ☆36 · Updated last year
- ☆83 · Updated 5 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆33 · Updated 8 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 6 months ago
- ☆51 · Updated 7 months ago
- Code for ExploreTom ☆83 · Updated 5 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ☆76 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆85 · Updated last year
- ☆87 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated last year