Cerebras / modelzoo
☆1,112 · Updated 3 weeks ago
Alternatives and similar repositories for modelzoo
Users interested in modelzoo are comparing it to the libraries listed below.
- Alpaca dataset from Stanford, cleaned and curated ☆1,581 · Updated 2 years ago
- ☆866 · Updated 2 years ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,507 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆734 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆823 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length ☆723 · Updated 11 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆630 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,664 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models ☆4,922 · Updated last year
- Fast Inference Solutions for BLOOM ☆566 · Updated last year
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆598 · Updated 2 years ago
- Dromedary: towards helpful, ethical and reliable LLMs ☆1,144 · Updated 4 months ago
- An open-source implementation of Google's PaLM models ☆819 · Updated last year
- Tune any FALCON in 4-bit ☆463 · Updated 2 years ago
- ☆551 · Updated last year
- ☆1,560 · Updated this week
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,065 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed ☆2,090 · Updated 6 months ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library ☆1,876 · Updated last year
- ☆1,505 · Updated 2 years ago
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆737 · Updated last year
- Ongoing research training transformer models at scale ☆395 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆975 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture ☆776 · Updated last year
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,039 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,715 · Updated 2 months ago
- ☆1,630 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆3,077 · Updated last year