Cerebras / modelzoo
☆1,084 · Updated last week
Alternatives and similar repositories for modelzoo
Users interested in modelzoo are comparing it to the libraries listed below.
- Alpaca dataset from Stanford, cleaned and curated ☆1,579 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length. ☆722 · Updated 9 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆732 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆2,499 · Updated last year
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,143 · Updated last month
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆822 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆564 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,631 · Updated 2 years ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,849 · Updated 11 months ago
- An open-source implementation of Google's PaLM models ☆817 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆597 · Updated 2 years ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,074 · Updated 4 months ago
- ☆1,494 · Updated 2 years ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,076 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,634 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆729 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated 2 years ago
- ☆1,028 · Updated last year
- ☆864 · Updated last year
- Ongoing research training transformer models at scale ☆391 · Updated last year
- C++ implementation for BLOOM ☆806 · Updated 2 years ago
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆771 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,050 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆52 · Updated 2 years ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,671 · Updated 5 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,217 · Updated last year
- ☆1,551 · Updated 2 weeks ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,343 · Updated last week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,796 · Updated 4 months ago