kolinko / effort
An implementation of bucketMul LLM inference
☆220 · Updated last year
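The description above points at the core idea of effort-style inference: approximating each matrix-vector product by using only the largest-magnitude weights, with an "effort" parameter controlling what fraction participates. A minimal pure-Python sketch of that principle (the function name and the global thresholding scheme are illustrative assumptions, not the repository's actual API):

```python
# Sketch of bucketMul-style "effort" inference (illustrative only;
# names and the thresholding scheme are assumptions, not effort's API).
# `effort` selects the fraction of largest-magnitude weights that
# participate in the matrix-vector product.

def approx_matvec(W, x, effort=0.25):
    # Rank all weights by magnitude and keep the top `effort` fraction.
    magnitudes = sorted((abs(w) for row in W for w in row), reverse=True)
    k = max(1, int(effort * len(magnitudes)))
    threshold = magnitudes[k - 1]
    # Multiply using only the weights that survive the threshold.
    return [sum(w * xi for w, xi in zip(row, x) if abs(w) >= threshold)
            for row in W]

W = [[3.0, 0.1], [0.2, 4.0]]
x = [1.0, 1.0]
print(approx_matvec(W, x, effort=0.5))  # only the 3.0 and 4.0 weights survive
print(approx_matvec(W, x, effort=1.0))  # all weights used: exact product
```

The real implementation precomputes weight orderings into buckets and runs on Apple Silicon GPUs; this sketch only shows the approximation principle, not its data layout or kernels.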
Alternatives and similar repositories for effort
Users interested in effort are comparing it to the libraries listed below.
- Visualize the intermediate output of Mistral 7B ☆366 · Updated 5 months ago
- Mistral7B playing DOOM ☆132 · Updated last year
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆126 · Updated 2 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆349 · Updated last year
- Live-bending a foundation model’s output at neural network level. ☆262 · Updated 3 months ago
- ☆363 · Updated this week
- ☆116 · Updated 5 months ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- Fast parallel LLM inference for MLX ☆198 · Updated last year
- run paligemma in real time ☆131 · Updated last year
- ☆89 · Updated 9 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated 2 weeks ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆214 · Updated last year
- Implement recursion using English as the programming language and an LLM as the runtime. ☆238 · Updated 2 years ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆279 · Updated last month
- ☆248 · Updated last year
- Run GGML models with Kubernetes. ☆173 · Updated last year
- a small code base for training large models ☆304 · Updated 2 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆206 · Updated 7 months ago
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆137 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆274 · Updated 6 months ago
- ☆215 · Updated 4 months ago
- Applying the ideas of Deepseek R1 to computer use ☆214 · Updated 5 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆618 · Updated 3 months ago
- JS tokenizer for LLaMA 1 and 2 ☆354 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆447 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Enforce structured output from LLMs 100% of the time ☆249 · Updated 11 months ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆566 · Updated 5 months ago