ltgoslo / bert-in-context
Official implementation of "BERTs are Generative In-Context Learners"
☆32 · Updated 10 months ago
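The paper's premise is that a masked language model can act as a generative in-context learner by producing text one token at a time through repeated mask filling. The following is a minimal sketch of that idea only, not the official implementation; the model name (`bert-base-uncased`), greedy decoding, and fixed generation length are illustrative assumptions.

```python
# Sketch: pseudo-autoregressive generation with a masked LM by repeatedly
# appending a [MASK] token and filling it (illustrative, not ltgoslo's code).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any masked LM with an MLM head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

prompt = "Translate English to French: sea otter => loutre de mer, cheese =>"
# Encode and drop the trailing [SEP] so we can keep extending the sequence.
ids = tokenizer(prompt, return_tensors="pt").input_ids[0, :-1]

for _ in range(8):  # generate up to 8 tokens, one mask-fill per step
    tail = torch.tensor([tokenizer.mask_token_id, tokenizer.sep_token_id])
    inp = torch.cat([ids, tail])
    with torch.no_grad():
        logits = model(inp.unsqueeze(0)).logits[0]
    next_id = logits[len(ids)].argmax()  # greedy choice at the [MASK] position
    ids = torch.cat([ids, next_id.unsqueeze(0)])

print(tokenizer.decode(ids, skip_special_tokens=True))
```

Roughly, this is the generative reformulation the title refers to: an encoder-only model answering a few-shot prompt token by token, much like a causal LM would.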
Alternatives and similar repositories for bert-in-context
Users interested in bert-in-context are comparing it to the repositories listed below.
- ☆29 · Updated 2 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆87 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆69 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆91 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated 11 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆222 · Updated last month
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 2 months ago
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆56 · Updated 5 months ago
- A repository for research on medium sized language models. ☆77 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 11 months ago
- PyTorch implementation for MRL ☆20 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 2 months ago
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆49 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆60 · Updated 6 months ago
- ☆33 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- PyTorch library for Active Fine-Tuning ☆96 · Updated 3 months ago
- ☆92 · Updated 3 weeks ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago