Cerebras / modelzoo
☆1,045 · Updated last month
Alternatives and similar repositories for modelzoo
Users interested in modelzoo are comparing it to the libraries listed below.
- Alpaca dataset from Stanford, cleaned and curated ☆1,554 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆564 · Updated 7 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,478 · Updated 9 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,727 · Updated 5 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆586 · Updated last year
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,014 · Updated 2 months ago
- A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer ☆1,632 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆821 · Updated 2 years ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,489 · Updated last year
- Large-scale model inference. ☆630 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,568 · Updated last year
- ☆864 · Updated last year
- ☆1,025 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆997 · Updated 10 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated last year
- 4 bits quantization of LLaMA using GPTQ ☆3,050 · Updated 10 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,524 · Updated this week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆689 · Updated 9 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,119 · Updated last year
- Quantized inference code for LLaMA models ☆1,048 · Updated 2 years ago
- ☆542 · Updated 5 months ago
- Ongoing research training transformer models at scale ☆386 · Updated 9 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,249 · Updated 2 months ago
- ☆1,471 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,391 · Updated last year
- A tiny library for coding with large language models. ☆1,232 · Updated 10 months ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,145 · Updated 3 weeks ago