kolinko / effort
An implementation of bucketMul LLM inference
☆215 · Updated 8 months ago
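The repository describes itself as an implementation of bucketMul inference, which (per the project's own write-up) approximates matrix multiplications by computing only the most significant weight/input products, with an adjustable "effort" fraction. The actual implementation is in MLX/Metal; the following is only a hypothetical, simplified NumPy sketch of the general idea (keep the top-scoring fraction of products, skip the rest), with `approx_matvec` and its parameters being illustrative names, not the repo's API:

```python
import numpy as np

def approx_matvec(W, x, effort=0.25):
    """Hypothetical sketch of effort-style approximate mat-vec.

    W: (out, in) weight matrix, x: (in,) input vector.
    Only the `effort` fraction of weight/input products with the
    largest expected magnitude are actually applied; the rest are
    treated as zero. effort=1.0 recovers the exact result.
    """
    scores = np.abs(W) * np.abs(x)            # magnitude of each product
    k = max(1, int(effort * W.size))          # how many products to keep
    thresh = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    mask = scores >= thresh                   # keep only top-scoring entries
    return (W * mask) @ x
```

At `effort=1.0` this reduces to a plain `W @ x`; lowering `effort` trades accuracy for fewer multiplications, which is the knob the repository's benchmarks explore.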
Alternatives and similar repositories for effort:
Users who are interested in effort are comparing it to the libraries listed below:
- Mistral7B playing DOOM · ☆130 · Updated 8 months ago
- Tiny inference-only implementation of LLaMA · ☆92 · Updated 11 months ago
- Fast parallel LLM inference for MLX · ☆174 · Updated 8 months ago
- Run PaliGemma in real time · ☆131 · Updated 10 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… · ☆601 · Updated 3 months ago
- Dead Simple LLM Abliteration · ☆211 · Updated last month
- WebGPU LLM inference tuned by hand · ☆149 · Updated last year
- GGUF implementation in C as a library and a tools CLI program · ☆261 · Updated 2 months ago
- Enforce structured output from LLMs 100% of the time · ☆248 · Updated 8 months ago
- (no description) · ☆89 · Updated 5 months ago
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces · ☆136 · Updated 8 months ago
- (no description) · ☆242 · Updated last year
- Visualize the intermediate output of Mistral 7B · ☆344 · Updated 2 months ago
- 1.58-bit LLM on Apple Silicon using MLX · ☆192 · Updated 10 months ago
- Felafax is building AI infra for non-NVIDIA GPUs · ☆555 · Updated last month
- Stop messing around with finicky sampling parameters and just use DRµGS! · ☆347 · Updated 9 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … · ☆201 · Updated 4 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention · ☆118 · Updated last year
- Run GGML models with Kubernetes · ☆174 · Updated last year
- Full finetuning of large language models without large memory requirements · ☆93 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 · ☆728 · Updated this week
- Tensor library for machine learning · ☆278 · Updated last year
- Long-context evaluation for large language models · ☆202 · Updated 2 weeks ago
- Dynamically structure language models to produce outputs that adhere to specific requirements without sacrificing their creative capabili… · ☆118 · Updated last week
- Revealing example of self-attention, the building block of transformer AI models · ☆130 · Updated last year
- (no description) · ☆163 · Updated 9 months ago
- LLM-based code completion engine · ☆181 · Updated last month
- Finetune llama2-70b and codellama on MacBook Air without quantization · ☆448 · Updated 11 months ago
- Action library for AI agents · ☆211 · Updated this week
- Ultra-low-overhead NVIDIA GPU telemetry plugin for telegraf, with memory temperature readings · ☆63 · Updated 8 months ago