tval2 / contextual-pruning
Library to facilitate pruning of LLMs based on context
☆32 · Updated last year
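As a rough illustration of what context-based pruning can mean, below is a minimal sketch that scores the hidden units of a feed-forward block by their mean activation magnitude on a domain-specific calibration batch and zeroes out the least-used ones. This is not the tval2/contextual-pruning API; the module, function names, and keep ratio are illustrative assumptions only.

```python
# Minimal sketch of activation-based "contextual" pruning, assuming a plain
# PyTorch MLP stands in for one transformer feed-forward block. NOT the
# tval2/contextual-pruning API; names and thresholds are illustrative.
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, d_model=64, d_hidden=256):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.act = nn.GELU()
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.down(self.act(self.up(x)))

def prune_by_context(block, calib_batch, keep_ratio=0.5):
    """Zero out hidden units whose mean |activation| on the calibration
    (context) batch is smallest, keeping `keep_ratio` of them."""
    with torch.no_grad():
        hidden = block.act(block.up(calib_batch))   # (batch, d_hidden)
        importance = hidden.abs().mean(dim=0)       # per-neuron score
        k = int(keep_ratio * importance.numel())
        keep = torch.zeros_like(importance, dtype=torch.bool)
        keep[importance.topk(k).indices] = True
        # Mask the rows that produce each pruned neuron and the columns
        # that consume it downstream.
        block.up.weight[~keep] = 0.0
        block.up.bias[~keep] = 0.0
        block.down.weight[:, ~keep] = 0.0
    return keep

block = FeedForward()
calib = torch.randn(128, 64)  # stands in for domain-specific activations
kept = prune_by_context(block, calib, keep_ratio=0.25)
print(f"kept {kept.sum().item()} of {kept.numel()} hidden units")
```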
Alternatives and similar repositories for contextual-pruning
Users interested in contextual-pruning are comparing it to the libraries listed below.
- A repository for research on medium-sized language models. ☆77 · Updated last year
- This is the official repository for Inheritune. ☆113 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- ☆54 · Updated 10 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 10 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆48 · Updated 5 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 9 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 ☆35 · Updated 11 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- Memoria is a human-inspired memory architecture for neural networks. ☆75 · Updated 11 months ago
- ☆81 · Updated 2 weeks ago
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆85 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆88 · Updated last year
- ☆69 · Updated last year
- ☆48 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated 2 months ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆64 · Updated 2 years ago
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆119 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆225 · Updated this week
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆27 · Updated 9 months ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 5 months ago
- ☆23 · Updated 2 years ago