krandiash / quinine
A library to create and manage configuration files, especially for machine learning projects.
☆79 · Updated 3 years ago
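The page gives no example of what such a library does, so here is a minimal sketch of the pattern quinine targets: loading a YAML config and validating it against a schema. It uses plain PyYAML rather than quinine itself; the `SCHEMA` layout, section names, and `load_config` helper are hypothetical illustrations, not quinine's actual API.

```python
import yaml  # PyYAML

# Hypothetical schema: section -> {field: expected type}. This mirrors
# the validate-on-load pattern quinine provides, not quinine's own API.
SCHEMA = {
    "model": {"name": str, "n_layers": int},
    "optimizer": {"lr": float},
}

def load_config(path):
    """Load a YAML config file and type-check it against SCHEMA."""
    with open(path) as f:
        config = yaml.safe_load(f)
    for section, fields in SCHEMA.items():
        if section not in config:
            raise KeyError(f"missing config section: {section!r}")
        for field, expected in fields.items():
            value = config[section].get(field)
            if not isinstance(value, expected):
                raise TypeError(
                    f"{section}.{field} should be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
    return config

# Usage, assuming a config.yaml with matching sections:
# cfg = load_config("config.yaml")
# print(cfg["optimizer"]["lr"])
```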
Alternatives and similar repositories for quinine
Users interested in quinine are comparing it to the libraries listed below:
- ☆67 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- ☆38 · Updated last year
- ☆56 · Updated 2 years ago
- Utilities for the HuggingFace transformers library. ☆74 · Updated 3 years ago
- ☆77 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training. ☆51 · Updated 2 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper. ☆59 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network. ☆35 · Updated 3 years ago
- ☆63 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023). ☆137 · Updated last year
- Automatically take good care of your preemptible TPUs. ☆37 · Updated 2 years ago
- Python library that enables complex compositions of language models, such as scratchpads, chain of thought, tool use, selection-inference… (see the sketch after this list). ☆216 · Updated 3 weeks ago
- Amos optimizer with the JEstimator lib. ☆82 · Updated last year
- Parallel data preprocessing for NLP and ML. ☆34 · Updated last year
- ☆44 · Updated last year
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Repository for the code of the paper "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" (NAACL 2022). ☆66 · Updated 3 years ago
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022). ☆134 · Updated last week
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆181 · Updated 3 years ago
- Train very large language models in Jax. ☆210 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- ☆31 · Updated last week
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets". ☆83 · Updated 3 years ago
- ☆72 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models. ☆53 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522). ☆63 · Updated 4 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
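Picking up the language-model composition entry above: the sketch below shows the simplest such composition, a chain-of-thought call whose output conditions a second answer-extraction call. The `generate` function is a hypothetical stub standing in for a real model call; this illustrates the idea only and is not the listed library's API.

```python
# Minimal sketch of composing language-model calls: a scratchpad /
# chain-of-thought step followed by an answer-extraction step.
def generate(prompt: str) -> str:
    """Hypothetical stub for a real LM call (e.g. a request to a model server)."""
    raise NotImplementedError

def chain_of_thought(question: str) -> str:
    # Step 1: ask the model to reason step by step (the "scratchpad").
    reasoning = generate(f"Q: {question}\nLet's think step by step.\n")
    # Step 2: condition a second call on that reasoning to extract the answer.
    answer = generate(
        f"Q: {question}\nReasoning: {reasoning}\nTherefore, the answer is"
    )
    return answer.strip()
```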