krandiash / quinine
A library to create and manage configuration files, especially for machine learning projects.
☆79 · Updated 3 years ago
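For orientation, here is a minimal sketch of the kind of config loading quinine provides. The `Quinfig` entry point and `config_path` keyword follow the project's README as best I recall; treat the exact names, and the config keys used below, as assumptions rather than a verified API:

```python
# Assumed quinine API sketch; names are taken from the project's README
# from memory and may differ from the current library.
from quinine import Quinfig  # assumed entry point

# Load a YAML experiment config; nested keys become attribute-accessible.
quinfig = Quinfig(config_path="conf/train.yaml")
print(quinfig.model.lr)  # hypothetical keys defined in conf/train.yaml
```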
Alternatives and similar repositories for quinine
Users interested in quinine are comparing it to the libraries listed below.
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- ☆55 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- ☆38 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆215 · Updated 6 months ago
- ☆76 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Simple and scalable tools for data-driven pretraining data selection. ☆29 · Updated 5 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Utilities for the HuggingFace transformers library ☆72 · Updated 2 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- ☆62 · Updated 3 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆66 · Updated 3 years ago
- ☆44 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- ☆85 · Updated last year
- Parallel data preprocessing for NLP and ML. ☆34 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆36 · Updated 2 years ago
- Train very large language models in Jax. ☆210 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- A diff tool for language models ☆44 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- ☆31 · Updated 2 weeks ago
- git extension for {collaborative, communal, continual} model development ☆216 · Updated last year
- The official code for the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated last year