allenai / fm-cheatsheet
Website for hosting the Open Foundation Models Cheat Sheet.
☆267 · Updated 3 weeks ago
Alternatives and similar repositories for fm-cheatsheet
Users interested in fm-cheatsheet are comparing it to the libraries listed below.
- Manage scalable open LLM inference endpoints in Slurm clusters ☆257 · Updated 10 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆254 · Updated last year
- A repository for research on medium-sized language models. ☆495 · Updated 3 weeks ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- ☆159 · Updated this week
- ☆517 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- PyTorch building blocks for the OLMo ecosystem ☆222 · Updated this week
- ☆130 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆241 · Updated 6 months ago
- git extension for {collaborative, communal, continual} model development ☆212 · Updated 6 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- A puzzle to learn about prompting ☆127 · Updated 2 years ago
- ☆121 · Updated last month
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆201 · Updated 3 weeks ago
- RuLES: a benchmark for evaluating rule-following in language models ☆224 · Updated 3 months ago
- experiments with inference on llama ☆104 · Updated 11 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆584 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆195 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated last year
- A comprehensive deep dive into the world of tokens ☆223 · Updated 11 months ago
- Erasing concepts from neural representations with provable guarantees ☆227 · Updated 4 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆220 · Updated last year
- ☆188 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year
- ☆166 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 2 months ago
- Fast bare-bones BPE for modern tokenizer training ☆156 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆126 · Updated 3 weeks ago