kolinko / effort
An implementation of bucketMul LLM inference
☆217 · Updated 11 months ago
Alternatives and similar repositories for effort
Users interested in effort are comparing it to the repositories listed below.
- ☆116 · Updated 4 months ago
- ☆340 · Updated this week
- PyTorch script hot swap: change code without unloading your LLM from VRAM · ☆126 · Updated 2 months ago
- Visualize the intermediate output of Mistral 7B · ☆367 · Updated 5 months ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines) · ☆252 · Updated last year
- ☆248 · Updated last year
- Fast parallel LLM inference for MLX · ☆193 · Updated 11 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! · ☆349 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… · ☆615 · Updated 3 months ago
- Mistral 7B playing DOOM · ☆132 · Updated 11 months ago
- WebGPU LLM inference tuned by hand · ☆151 · Updated 2 years ago
- Hierarchical Navigable Small Worlds · ☆97 · Updated 2 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … · ☆206 · Updated 7 months ago
- Bayesian Optimization as a coverage tool for evaluating LLMs: accurate evaluation (benchmarking) that's 10 times faster with just a few l… · ☆285 · Updated 3 weeks ago
- Tiny inference-only implementation of LLaMA · ☆93 · Updated last year
- Felafax is building AI infra for non-NVIDIA GPUs · ☆562 · Updated 5 months ago
- Dead Simple LLM Abliteration · ☆219 · Updated 4 months ago
- Run paligemma in real time · ☆131 · Updated last year
- PyTorch implementation of models from the Zamba2 series · ☆182 · Updated 5 months ago
- Run GGML models with Kubernetes · ☆173 · Updated last year
- Inference of Mamba models in pure C · ☆187 · Updated last year
- ☆210 · Updated 3 months ago
- ☆163 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention · ☆119 · Updated last year
- Full finetuning of large language models without large memory requirements · ☆94 · Updated last year
- ☆89 · Updated 8 months ago
- ☆196 · Updated last month
- 1.58-bit LLM on Apple Silicon using MLX · ☆214 · Updated last year
- Applying the ideas of DeepSeek R1 to computer use · ☆214 · Updated 4 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆154 · Updated 8 months ago