ltgoslo / bert-in-context
Official implementation of "BERTs are Generative In-Context Learners"
☆30 · Updated 4 months ago
Alternatives and similar repositories for bert-in-context
Users interested in bert-in-context are comparing it to the libraries listed below.
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆90 · Updated 8 months ago
- ☆27 · Updated 11 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- A repository for research on medium-sized language models ☆77 · Updated last year
- ☆69 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 9 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 10 months ago
- PyTorch library for Active Fine-Tuning ☆84 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 7 months ago
- ☆68 · Updated 11 months ago
- ☆81 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated 2 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆82 · Updated 10 months ago
- ☆52 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 · Updated 8 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆86 · Updated 6 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆68 · Updated 3 weeks ago
- ☆45 · Updated 3 months ago
- ☆124 · Updated 9 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 9 months ago
- ☆33 · Updated 6 months ago
- ☆66 · Updated last year