AlexBuz / llama-zip
LLM-powered lossless compression tool
☆288 · Updated last year
Alternatives and similar repositories for llama-zip
Users interested in llama-zip are comparing it to the libraries listed below.
- A fast batching API to serve LLMs ☆187 · Updated last year
- Experimental adventure game with AI-generated content ☆111 · Updated 5 months ago
- ☆315 · Updated 2 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆145 · Updated this week
- ☆134 · Updated 5 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 8 months ago
- Train your own small bitnet model ☆75 · Updated 11 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆357 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆79 · Updated last year
- Automatically quantize GGUF models ☆210 · Updated last week
- LLM-based code completion engine ☆190 · Updated 8 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- ☆162 · Updated 2 months ago
- Web UI for ExLlamaV2 ☆510 · Updated 8 months ago
- Experimental LLM Inference UX to aid in creative writing ☆122 · Updated 9 months ago
- AI management tool ☆121 · Updated 11 months ago
- A benchmark for emotional intelligence in large language models ☆365 · Updated last year
- Open source LLM UI, compatible with all local LLM providers. ☆175 · Updated last year
- Dictionary-based SLOP detector and analyzer for ShareGPT JSON and text ☆76 · Updated 11 months ago
- A multimodal, function-calling-powered LLM web UI. ☆215 · Updated last year
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆225 · Updated 4 months ago
- Inference of Mamba models in pure C ☆191 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆80 · Updated last week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 7 months ago
- Mistral7B playing DOOM ☆137 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆42 · Updated last month