ltgoslo / bert-in-context
Official implementation of "BERTs are Generative In-Context Learners"
☆31 · Updated 4 months ago
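To illustrate what the paper's title refers to, here is a minimal, hypothetical sketch of using a masked LM generatively: append a [MASK] token after the prompt and fill it greedily, one token at a time. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint, and it is only a toy illustration of the idea, not the repository's actual method or API.

```python
# Toy sketch only (NOT the repository's method): pseudo-autoregressive generation
# with a masked LM, assuming Hugging Face `transformers` and bert-base-uncased.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

prompt = "Paris is the capital of"
ids = tokenizer(prompt, return_tensors="pt")["input_ids"][0, :-1]  # keep [CLS], drop trailing [SEP]

for _ in range(5):  # generate five tokens, one at a time
    # Append [MASK] + [SEP], run the model, and keep the most likely filler token.
    batch = torch.cat([ids, torch.tensor([tokenizer.mask_token_id, tokenizer.sep_token_id])]).unsqueeze(0)
    with torch.no_grad():
        logits = model(input_ids=batch).logits            # (1, seq_len, vocab)
    next_id = logits[0, -2].argmax(dim=-1, keepdim=True)  # prediction at the [MASK] position
    ids = torch.cat([ids, next_id])

print(tokenizer.decode(ids, skip_special_tokens=True))
```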
Alternatives and similar repositories for bert-in-context
Users interested in bert-in-context are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆81 · Updated 9 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 8 months ago
- ☆27 · Updated 11 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 6 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- ☆69 · Updated 11 months ago
- Learning to Retrieve by Trying: source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆49 · Updated 9 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- PyTorch library for Active Fine-Tuning ☆87 · Updated 5 months ago
- ☆73 · Updated 2 weeks ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆41 · Updated 9 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 2 weeks ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 6 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆207 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 11 months ago
- ☆83 · Updated 11 months ago
- ☆45 · Updated 4 months ago
- ☆51 · Updated 4 months ago
- ☆81 · Updated last year
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 6 months ago
- ☆125 · Updated 10 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- ☆27 · Updated 5 months ago
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆54 · Updated last year