abacaj / replit-3B-inference
Run inference on the replit-3B code-instruct model on CPU
☆160 · Updated 2 years ago
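The repo's focus is CPU-only inference on a quantized Replit code-instruct checkpoint. The snippet below is a minimal sketch of that pattern using the ctransformers library, not the repo's actual script; the GGML weight path, the instruction prompt template, and the sampling settings are illustrative assumptions.

```python
# Minimal sketch of CPU inference on a GGML-quantized Replit code-instruct model
# via ctransformers. The weight path and prompt template below are assumptions.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "models/replit-v2-codeinstruct-3b.q4_1.bin",  # hypothetical local GGML file
    model_type="replit",
)

# Alpaca-style instruction prompt (assumed; check the model card for the exact format).
prompt = (
    "### Instruction:\n"
    "Write a Python function that reverses a string.\n"
    "### Response:\n"
)

# Generate on CPU; tune max_new_tokens / temperature / top_p as needed.
print(llm(prompt, max_new_tokens=256, temperature=0.2, top_p=0.9))
```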
Alternatives and similar repositories for replit-3B-inference
Users interested in replit-3B-inference are comparing it to the libraries listed below.
- ☆135 · Updated 2 years ago
- llama.cpp with the BakLLaVA model describes what it sees ☆380 · Updated 2 years ago
- Command-line script for running inference on models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- An mlx project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆173 · Updated 2 years ago
- 🔓 The open-source autonomous agent LLM initiative 🔓 ☆91 · Updated last year
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆333 · Updated last year
- A collection of LLM services you can self-host via Docker or Modal Labs to support your application development ☆198 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- The code we currently use to fine-tune models. ☆117 · Updated last year
- ☆119 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆125 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆105 · Updated 2 years ago
- ☆215 · Updated 2 years ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆42 · Updated 2 years ago
- ☆132 · Updated 2 years ago
- Command-line script for running inference on models such as falcon-7b-instruct ☆75 · Updated 2 years ago
- A Simple Discord Bot for the Alpaca LLM ☆99 · Updated 2 years ago
- Small fine-tuned LLMs for a diverse set of useful tasks ☆127 · Updated 2 years ago
- Automatically generate @openai plugins by specifying your API in markdown in smol-developer style ☆119 · Updated 2 years ago
- A Personalised AI Assistant Inspired by 'Diamond Age', Powered by SMS ☆92 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated 4 months ago
- ☆274 · Updated last year
- Convert all of libgen to high-quality markdown ☆254 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- This is our own implementation of 'Layer-Selective Rank Reduction' ☆240 · Updated last year
- ☆112 · Updated 2 years ago
- LLaMA Cog template ☆303 · Updated last year
- ☆36 · Updated 2 years ago