CannyLab / anthology
[EMNLP 2024 Main] Virtual Personas for Language Models via an Anthology of Backstories (☆35, updated last year)
Alternatives and similar repositories for anthology
Users interested in anthology are comparing it to the repositories listed below.
- The official repo for "LLoCo: Learning Long Contexts Offline" (☆118, updated last year)
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. (☆189, updated 10 months ago)
- A repository for research on medium-sized language models. (☆77, updated last year)
- Evaluating LLMs with fewer examples (☆169, updated last year)
- ☆47, updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" (☆91, updated last year)
- Memoria is a human-inspired memory architecture for neural networks. (☆83, updated last year)
- Experiments to assess SPADE on different LLM pipelines. (☆17, updated last year)
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. (☆96, updated 8 months ago)
- Understanding the correlation between different LLM benchmarks (☆29, updated 2 years ago)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] (☆60, updated last year)
- Lottery Ticket Adaptation (☆39, updated last year)
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment (☆61, updated last year)
- ☆112, updated last year
- A repository for transformer critique learning and generation (☆89, updated 2 years ago)
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models (☆40, updated last year)
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode…" (☆63, updated 3 months ago)
- The repository contains code for Adaptive Data Optimization (☆32, updated last year)
- Cascade Speculative Drafting (☆32, updated last year)
- LM engine is a library for pretraining/finetuning LLMs (☆113, updated this week)
- ☆91, updated last month
- ☆55, updated last year
- Accompanying material for the sleep-time compute paper (☆119, updated 9 months ago)
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" (☆30, updated 10 months ago)
- ☆71, updated last year
- Code for "RATIONALYST: Pre-training Process-Supervision for Improving Reasoning" (https://arxiv.org/pdf/2410.01044) (☆35, updated last year)
- Mixing Language Models with Self-Verification and Meta-Verification (☆112, updated last year)
- ☆91, updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆38, updated 7 months ago)
- Learning to Retrieve by Trying - source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" (☆51, updated last year)