abacaj / mpt-30B-inference
Run inference on MPT-30B using CPU
☆575 · Updated last year
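For orientation, here is a minimal sketch of how MPT-30B could be loaded for CPU inference with Hugging Face transformers. The checkpoint name `mosaicml/mpt-30b` is the upstream model and the generation settings are illustrative assumptions; this is not necessarily the repository's own pipeline, which may rely on quantized weights to fit in far less memory.

```python
# Minimal sketch of CPU inference with the upstream MPT-30B checkpoint.
# Assumes enough system RAM for bf16 weights (roughly 60 GB); the repo itself
# may use a quantized variant instead, so treat this as orientation only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-30b"  # assumed upstream checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # halves memory versus fp32, still CPU-only
    trust_remote_code=True,      # MPT ships custom modeling code
)
model.eval()

prompt = "Explain what MPT-30B is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```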
Alternatives and similar repositories for mpt-30B-inference:
Users who are interested in mpt-30B-inference are comparing it to the libraries listed below.
- C++ implementation for BLOOM · ☆810 · Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin · ☆324 · Updated last year
- Directly Connecting Python to LLMs via Strongly-Typed Functions, Dataclasses, Interfaces & Generic Types · ☆394 · Updated last week
- A school for camelids · ☆1,210 · Updated last year
- Evaluation tool for LLM QA chains · ☆1,070 · Updated last year
- C++ implementation for 💫StarCoder · ☆452 · Updated last year
- howdoi.ai · ☆255 · Updated last year
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… · ☆408 · Updated last year
- An Autonomous LLM Agent that runs on Wizcoder-15B · ☆336 · Updated 4 months ago
- ☆586 · Updated last year
- Agent techniques to augment your LLM and push it beyond its limits · ☆1,569 · Updated 9 months ago
- Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks · ☆601 · Updated last year
- Locally hosted tool that connects documents to LLMs for summarization and querying, with a simple GUI. · ☆787 · Updated last year
- OpenAI-compatible Python client that can call any LLM · ☆370 · Updated last year
- AI agents that automatically generate and use Langchain Tools and ChatGPT plugins · ☆530 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU. · ☆311 · Updated last year
- A tiny implementation of an autonomous agent powered by LLMs (OpenAI GPT-4) · ☆443 · Updated last year
- ☆587 · Updated 5 months ago
- ☆276 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions · ☆820 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. · ☆717 · Updated last month
- A command-line interface to generate textual and conversational datasets with LLMs. · ☆294 · Updated last year
- ☆345 · Updated last year
- Get 100% uptime and reliability from OpenAI; handles rate limit, timeout, and API key errors · ☆646 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA · ☆123 · Updated last year
- INSIGHT is an autonomous AI that can do medical research! · ☆407 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. · ☆760 · Updated 4 months ago
- Officially supported Python bindings for llama.cpp + gpt4all · ☆1,020 · Updated last year
- LLM that combines the principles of wizardLM and vicunaLM · ☆715 · Updated last year
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… · ☆1,450 · Updated last year