stanford-crfm / mistral
Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face 🤗 Transformers.
☆569 · Updated last year
Alternatives and similar repositories for mistral:
Users interested in mistral are comparing it to the libraries listed below
- Reproduce results and replicate training for T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆461 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆333 · Updated 5 months ago
- Expanding natural instructions ☆975 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ☆469 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆255 · Updated last year
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆311 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web ☆175 · Updated last year
- Build, evaluate, understand, and fix LLM-based apps ☆485 · Updated last year
- Task-based datasets, preprocessing, and evaluation for sequence models ☆568 · Updated this week
- Used for adaptive human-in-the-loop evaluation of language and embedding models ☆306 · Updated last year
- ☆237 · Updated 4 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆857 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods ☆144 · Updated 9 months ago
- Interpretable Evaluation for AI Systems ☆361 · Updated last year
- Find and fix bugs in natural language machine learning models using adaptive testing ☆181 · Updated 9 months ago
- Interpretability for sequence generation models 🐛 🔍 ☆401 · Updated 3 months ago
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ☆310 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆443 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆474 · Updated 8 months ago
- A framework for few-shot evaluation of autoregressive language models ☆102 · Updated last year
- The pipeline for the OSCAR corpus ☆166 · Updated last year
- Adversarial Natural Language Inference Benchmark ☆396 · Updated 2 years ago
- A prize for finding tasks that cause large language models to show inverse scaling ☆608 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data ☆984 · Updated 6 months ago
- git extension for {collaborative, communal, continual} model development ☆207 · Updated 3 months ago
- Scalable training for dense retrieval models ☆275 · Updated last year
- Fast Inference Solutions for BLOOM ☆563 · Updated 4 months ago
- ☆338 · Updated 10 months ago
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆526 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆780 · Updated 9 months ago