tval2 / contextual-pruning
Library to facilitate pruning of LLMs based on context
☆32 · Updated last year
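As a rough illustration of the idea only (not the repository's actual API), the sketch below shows one common form of context-based structured pruning in PyTorch: hidden units that stay inactive on a domain-specific batch are removed. The module names, shapes, and top-k criterion are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy "language model" block: an MLP layer whose hidden units we will prune.
torch.manual_seed(0)
hidden = nn.Linear(16, 64)
output = nn.Linear(64, 16)

# Hypothetical "context" dataset: activations are measured only on domain data.
context_batch = torch.randn(128, 16)

# 1. Measure the mean absolute activation of each hidden unit on the context data.
with torch.no_grad():
    acts = torch.relu(hidden(context_batch))   # (128, 64)
    importance = acts.abs().mean(dim=0)        # (64,)

# 2. Keep the top-k most active units and drop the rest (structured pruning).
keep = importance.topk(k=32).indices.sort().values

pruned_hidden = nn.Linear(16, 32)
pruned_output = nn.Linear(32, 16)
with torch.no_grad():
    pruned_hidden.weight.copy_(hidden.weight[keep])
    pruned_hidden.bias.copy_(hidden.bias[keep])
    pruned_output.weight.copy_(output.weight[:, keep])
    pruned_output.bias.copy_(output.bias)

print(f"Kept {len(keep)} of 64 hidden units based on context activations.")
```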
Alternatives and similar repositories for contextual-pruning
Users interested in contextual-pruning are comparing it to the libraries listed below.
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 7 months ago
- ☆51 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ☆35 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated last year
- This is the official repository for Inheritune. ☆111 · Updated 4 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 11 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- ☆86 · Updated 5 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆61 · Updated 3 weeks ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Code for ExploreTom ☆84 · Updated 6 months ago
- ☆63 · Updated 9 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆45 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆85 · Updated last year
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- ☆23 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ☆63 · Updated 2 years ago
- ☆66 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated 10 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- ☆76 · Updated last year