the-crypt-keeper / tiny_starcoder
Python examples using the bigcode/tiny_starcoder_py 159M model to generate code
☆44 · Updated 2 years ago
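The repository's focus is plain code generation with the small tiny_starcoder_py checkpoint. A minimal sketch of that kind of usage with the Hugging Face transformers library is shown below; the model ID is the real bigcode/tiny_starcoder_py checkpoint, while the prompt and generation settings are illustrative assumptions rather than the repository's exact scripts.

```python
# Minimal sketch (not the repo's exact code): generate Python completions
# with bigcode/tiny_starcoder_py via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/tiny_starcoder_py"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Illustrative prompt: ask the model to complete a function body.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is so small, greedy decoding on CPU is generally fast enough for quick experiments like this.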
Alternatives and similar repositories for tiny_starcoder
Users interested in tiny_starcoder are comparing it to the libraries listed below.
- PyGPTPrompt: A CLI tool that manages context windows for AI models, facilitating user interaction and data ingestion for optimized long-t… ☆29 · Updated last year
- ☆16 · Updated 2 years ago
- Local LLM inference & management server with a built-in OpenAI-compatible API (see the client sketch after this list) ☆31 · Updated last year
- ☆16 · Updated last year
- ☆73 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- run ollama & gguf easily with a single command ☆51 · Updated last year
- Host a GPTQ model using AutoGPTQ as an API that is compatible with the text generation UI API ☆91 · Updated 2 years ago
- Plug n Play GBNF Compiler for llama.cpp ☆25 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- A repository to store helpful information and emerging insights in regard to LLMs ☆20 · Updated last year
- ☆31 · Updated last year
- Simple, Fast, Parallel Huggingface GGML model downloader written in python ☆24 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆35 · Updated last year
- Prompt-Promptor is a python library for automatically generating prompts using LLMs ☆75 · Updated last year
- Falcon40B and 7B (Instruct) with streaming, top-k, and beam search ☆40 · Updated 2 years ago
- ☆49 · Updated last year
- Complex RAG backend ☆28 · Updated last year
- Our Process for Llama2 Finetuning ☆16 · Updated last year
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- An auto generated wiki. ☆21 · Updated last year
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ ☆66 · Updated 7 months ago
- LLM finetuning ☆42 · Updated last year
- a lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- "a towel is about the most massively useful thing an interstellar AI hitchhiker can have" ☆48 · Updated 8 months ago
- Python package wrapping llama.cpp for on-device LLM inference ☆69 · Updated last week
- ☆28 · Updated 9 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Large-Language-Model to Machine Interface project. ☆19 · Updated last year
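For the local inference server with an OpenAI-compatible API mentioned in the list above, a hedged sketch of how a client might query such an endpoint with the official openai Python package is shown below; the base URL, port, and model name are placeholder assumptions that depend on the specific server being run.

```python
# Sketch only: querying a local OpenAI-compatible server with the openai client.
# The base_url, port, and model name are placeholders; check the server's docs.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed",                 # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical name; list models via client.models.list()
    messages=[{"role": "user", "content": "Write a haiku about GGUF files."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```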