krandiash / quinine
A library to create and manage configuration files, especially for machine learning projects.
☆77 · Updated 3 years ago
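For context, here is a minimal sketch of the kind of workflow such a configuration library manages: loading a YAML file into a dot-accessible object and selecting it from the command line. It deliberately uses plain PyYAML and argparse rather than quinine's own API, and the config file and keys shown are hypothetical.

```python
# Illustrative sketch only; not quinine's API.
# Loads a YAML config (e.g. train.yaml containing `model: {lr: 3.0e-4}`)
# into a dot-accessible namespace.
import argparse
from types import SimpleNamespace

import yaml  # pip install pyyaml


def to_namespace(obj):
    """Recursively convert nested dicts into dot-accessible namespaces."""
    if isinstance(obj, dict):
        return SimpleNamespace(**{k: to_namespace(v) for k, v in obj.items()})
    if isinstance(obj, list):
        return [to_namespace(v) for v in obj]
    return obj


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to a YAML config file")
    args = parser.parse_args()

    with open(args.config) as f:
        cfg = to_namespace(yaml.safe_load(f))

    print(cfg.model.lr)  # hypothetical key from the example config above
```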
Alternatives and similar repositories for quinine:
Users interested in quinine are comparing it to the libraries listed below.
- ☆73 · Updated 10 months ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- ☆38 · Updated 10 months ago
- ☆44 · Updated 3 months ago
- ☆67 · Updated 2 years ago
- Utilities for the HuggingFace transformers library ☆64 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ☆50 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- ☆34 · Updated 11 months ago
- ☆53 · Updated last year
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆64 · Updated 2 years ago
- ☆81 · Updated 7 months ago
- Simple and scalable tools for data-driven pretraining data selection. ☆16 · Updated last month
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- ☆21 · Updated 5 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". ☆67 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- ☆26 · Updated 8 months ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- ☆33 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆35 · Updated 2 years ago
- ☆13 · Updated last month
- A library for efficient patching and automatic circuit discovery. ☆56 · Updated 3 weeks ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories", by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. ☆90 · Updated 3 years ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆203 · Updated 2 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax. ☆66 · Updated 6 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago