rovle / gpt3-in-context-fitting
Experiments on GPT-3's ability to fit numerical models in-context.
☆14 · Updated 3 years ago
Alternatives and similar repositories for gpt3-in-context-fitting
Users interested in gpt3-in-context-fitting are comparing it to the repositories listed below.
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models Just In-Context Learning?" ☆33 · Updated 11 months ago
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- Minimum Description Length probing for neural network representations ☆20 · Updated 10 months ago
- ☆56 · Updated 2 years ago
- Google Research ☆46 · Updated 3 years ago
- Code for "Counterfactual Token Generation in Large Language Models", arXiv 2024. ☆31 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆44 · Updated last month
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 3 years ago
- A library to create and manage configuration files, especially for machine learning projects. ☆79 · Updated 3 years ago
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆64 · Updated 3 years ago
- ☆44 · Updated last year
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- Some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆60 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- ☆67 · Updated 3 years ago
- Code for running the experiments in "Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT" ☆17 · Updated 2 years ago
- Understanding how features learned by neural networks evolve throughout training ☆40 · Updated last year
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆20 · Updated last month
- Ranking of fine-tuned HF models as base models. ☆36 · Updated 2 months ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation with a Unidirectional Reward Model" ☆45 · Updated 2 months ago
- Measuring whether attention is explanation with ROAR ☆22 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- A weak supervision framework for (partial) labeling functions ☆16 · Updated last year
- ☆43 · Updated 4 years ago