abacaj / mpt-30B-inference
Run inference on MPT-30B using CPU
☆575 · Updated 2 years ago
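As a rough illustration of what CPU inference against a chat-tuned MPT checkpoint involves, here is a minimal sketch. The prompt-formatting function assumes the ChatML-style template that MPT-30B-chat variants use; the commented generation step, the `ctransformers` library, the model path, and the sampling parameters are all assumptions for illustration, not taken from the repository.

```python
# A minimal sketch of CPU inference against a quantized MPT chat model.
# Assumption: the model expects a ChatML-style prompt template.

def format_prompt(user_message: str,
                  system_message: str = "You are a helpful assistant.") -> str:
    """Build a ChatML-style prompt string for a chat-tuned MPT model."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_prompt("Write a haiku about CPUs.")

# Hypothetical generation step (requires `pip install ctransformers` and a
# locally downloaded quantized checkpoint; shown as a sketch only):
#
#   from ctransformers import AutoModelForCausalLM
#   llm = AutoModelForCausalLM.from_pretrained(
#       "models/mpt-30b-chat.q4_0.bin",  # assumed local path
#       model_type="mpt")
#   print(llm(prompt, max_new_tokens=128, temperature=0.7))
```

The quantized-checkpoint approach is what makes a 30B-parameter model feasible on commodity CPUs: 4-bit weights cut memory to roughly a quarter of fp16.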
Alternatives and similar repositories for mpt-30B-inference
Users interested in mpt-30B-inference are comparing it to the libraries listed below.
- C++ implementation for 💫StarCoder (☆455, updated 2 years ago)
- (☆597, updated 2 years ago)
- Evaluation tool for LLM QA chains (☆1,086, updated 2 years ago)
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions (☆822, updated 2 years ago)
- Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks (☆605, updated 2 years ago)
- (☆275, updated 2 years ago)
- C++ implementation for BLOOM (☆806, updated 2 years ago)
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… (☆411, updated 2 years ago)
- An autonomous LLM agent that runs on WizardCoder-15B (☆333, updated last year)
- LLaMa retrieval plugin script using OpenAI's retrieval plugin (☆323, updated 2 years ago)
- A school for camelids (☆1,208, updated 2 years ago)
- Locally hosted tool that connects documents to LLMs for summarization and querying, with a simple GUI. (☆797, updated 2 years ago)
- Scale LLM Engine public repository (☆814, updated this week)
- LLM-based tool for parsing information and chatting with it (☆214, updated 2 years ago)
- kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023) (☆594, updated this week)
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. (☆771, updated last year)
- howdoi.ai (☆256, updated 2 years ago)
- A voice chat app (☆1,172, updated 5 months ago)
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT… (☆476, updated 2 years ago)
- OpenAI-compatible Python client that can call any LLM (☆372, updated 2 years ago)
- (☆591, updated last year)
- (☆534, updated last year)
- A tiny implementation of an autonomous agent powered by LLMs (OpenAI GPT-4) (☆439, updated 2 years ago)
- Large Language Models for All, 🦙 Cult and More, Stay in touch! (☆447, updated 2 years ago)
- Finetuning Large Language Models on One Consumer GPU in 2 Bits (☆732, updated last year)
- Run inference on replit-3B code instruct model using CPU (☆159, updated 2 years ago)
- Directly Connecting Python to LLMs via Strongly-Typed Functions, Dataclasses, Interfaces & Generic Types (☆398, updated 8 months ago)
- Tune any FALCON in 4-bit (☆464, updated 2 years ago)
- Build robust LLM applications with true composability (☆421, updated last year)
- Fine-tune mistral-7B on 3090s, a100s, h100s (☆714, updated 2 years ago)