krandiash / quinine
A library to create and manage configuration files, especially for machine learning projects.
☆78 · Updated 3 years ago
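Configuration libraries for ML projects typically let a base config be layered with per-experiment overrides. As a minimal sketch of that pattern (plain Python dicts stand in for config files, and the `merge_configs` helper is illustrative, not quinine's own API):

```python
def merge_configs(base, override):
    """Recursively merge `override` into `base`, returning a new dict.

    Nested dicts are merged key by key; any other value in `override`
    replaces the corresponding value in `base`.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged


# Base config shared across experiments, plus one experiment-specific change.
base = {"model": {"name": "gpt2", "n_layers": 12}, "optimizer": {"lr": 1e-4}}
override = {"model": {"n_layers": 24}}

config = merge_configs(base, override)
# config["model"] is {"name": "gpt2", "n_layers": 24};
# untouched keys like config["optimizer"]["lr"] carry over from the base.
```

In practice a library like quinine reads these dicts from files and can validate them against a schema; the recursive merge above is only the core inheritance idea.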
Alternatives and similar repositories for quinine:
Users interested in quinine are comparing it to the libraries listed below.
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- Simple and scalable tools for data-driven pretraining data selection. ☆23 · Updated 2 months ago
- Utilities for the HuggingFace transformers library. ☆67 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models. ☆50 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs". ☆28 · Updated 3 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper. ☆58 · Updated last year
- Automatically take good care of your preemptible TPUs. ☆36 · Updated last year
- Some common HuggingFace transformers in maximal update parametrization (µP). ☆80 · Updated 3 years ago
- Automatic metrics for GEM tasks. ☆65 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks. ☆96 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training. ☆50 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling. ☆53 · Updated 3 years ago
- The evaluation pipeline for the 2024 BabyLM Challenge. ☆30 · Updated 5 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network. ☆34 · Updated 2 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆106 · Updated last year