abacaj / replit-3B-inference
Run inference on replit-3B code instruct model using CPU
☆154 · Updated last year
Alternatives and similar repositories for replit-3B-inference:
Users interested in replit-3B-inference are comparing it to the libraries listed below.
- ☆136 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆162 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆93 · Updated last year
- 🔓 The open-source autonomous agent LLM initiative 🔓 ☆91 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated 9 months ago
- Command-line script for running inference with models such as falcon-7b-instruct ☆76 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆123 · Updated last year
- llama.cpp with the BakLLaVA model, describing what it sees ☆381 · Updated last year
- A fast batching API for serving LLMs ☆180 · Updated 9 months ago
- A collection of LLM services you can self-host via Docker or Modal Labs to support your application development ☆186 · Updated 9 months ago
- ☆111 · Updated 2 months ago
- ☆78 · Updated 11 months ago
- Automatic fine-tuning of models with synthetic data ☆75 · Updated last year
- ☆38 · Updated 11 months ago
- An autonomous LLM agent that runs on WizardCoder-15B ☆337 · Updated 3 months ago
- Automatically generate OpenAI plugins by specifying your API in Markdown, smol-developer style ☆121 · Updated last year
- A simple wrapper for OpenAI that logs inputs and outputs ☆106 · Updated last year
- Modified Stanford Alpaca trainer for training Replit's code model ☆40 · Updated last year
- Scripts to create your own MoE models using MLX ☆86 · Updated 11 months ago
- WebGPU LLM inference tuned by hand ☆148 · Updated last year
- A simple Discord bot for the Alpaca LLM ☆101 · Updated last year
- LLaVA server (llama.cpp) ☆177 · Updated last year
- The code we currently use to fine-tune models ☆113 · Updated 9 months ago
- Inference and serving for local LLMs using the MLX framework ☆94 · Updated 10 months ago
- The one who calls upon functions: a function-calling language model ☆36 · Updated last year
- Falcon LLM GGML framework with CPU and GPU support ☆246 · Updated last year
- ☆35 · Updated last year
- GPT-2 small trained on phi-like data ☆65 · Updated last year
- A Python package for developing AI applications with local LLMs ☆146 · Updated last month