sfeucht / footprints
https://footprints.baulab.info
☆17 · Updated 8 months ago
Alternatives and similar repositories for footprints
Users interested in footprints are comparing it to the repositories listed below.
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 9 months ago
- MergeBench: A Benchmark for Merging Domain-Specialized LLMs ☆14 · Updated last month
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 4 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 5 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- ☆43 · Updated 2 months ago
- Exploration of automated dataset selection approaches at large scales. ☆45 · Updated 3 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- ☆26 · Updated 4 months ago
- Applies ROME and MEMIT on Mamba-S4 models ☆14 · Updated last year
- Efficient scaling laws and collaborative pretraining. ☆16 · Updated 4 months ago
- Tasks for describing differences between text distributions. ☆16 · Updated 10 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆20 · Updated 11 months ago
- ☆20 · Updated last month
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆37 · Updated 7 months ago
- ☆14 · Updated last year
- Lottery Ticket Adaptation ☆39 · Updated 7 months ago
- ACL24 ☆9 · Updated last year
- ☆14 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark. ☆16 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆95 · Updated 2 weeks ago
- [arXiv] EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees ☆21 · Updated 3 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- ☆25 · Updated last year
- ☆32 · Updated 5 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆27 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year