krandiash / quinine
A library to create and manage configuration files, especially for machine learning projects.
☆80 · Updated 3 years ago
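As a quick illustration of the workflow quinine targets, the sketch below loads a YAML file into a nested, attribute-accessible config object. The `Quinfig` class and its `config_path` argument follow the project's README, but treat the exact API (and the example file contents) as assumptions to verify against the repository.

```python
# Minimal sketch of loading a config with quinine.
# ASSUMPTION: `Quinfig` and the `config_path` keyword are taken from the
# project's README; verify against the repository before relying on them.
from quinine import Quinfig

# Hypothetical config.yaml:
#   model:
#     name: gpt2
#     lr: 3.0e-4

quinfig = Quinfig(config_path="config.yaml")

# Parameters can be read with attribute or dictionary syntax.
print(quinfig.model.name)      # "gpt2"
print(quinfig["model"]["lr"])  # 0.0003
```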
Alternatives and similar repositories for quinine
Users interested in quinine are comparing it to the libraries listed below.
- ☆67 · Updated 3 years ago
- ☆55 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆137 · Updated last year
- Utilities for the Hugging Face Transformers library ☆72 · Updated 2 years ago
- ☆76 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆39 · Updated last year
- Python library that enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆215 · Updated 5 months ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- Mechanistic interpretability for Transformer models ☆53 · Updated 3 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆85 · Updated last year
- A diff tool for language models ☆44 · Updated last year
- The official code of "SCROLLS: Standardized CompaRison Over Long Language Sequences" (EMNLP 2022) ☆69 · Updated last year
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- ☆44 · Updated 11 months ago
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale (TACL 2022) ☆132 · Updated 4 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆215 · Updated last year
- For experiments involving InstructGPT; currently used for documenting open research questions. ☆70 · Updated 3 years ago
- Simple and scalable tools for data-driven pretraining data selection. ☆28 · Updated 5 months ago
- ☆166 · Updated 2 years ago
- Train very large language models in JAX. ☆210 · Updated 2 years ago