FL33TW00D / wattkit
★14 · Updated 5 months ago
Alternatives and similar repositories for wattkit
Users interested in wattkit are comparing it to the libraries listed below.
- Find out why your CoreML model isn't running on the Neural Engine! ★25 · Updated 10 months ago
- Profile your CoreML models directly from Python ★27 · Updated 6 months ago
- ModernBERT model optimized for Apple Neural Engine. ★25 · Updated 4 months ago
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ★48 · Updated last year
- Tensor library for Zig ★12 · Updated 5 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ★52 · Updated 2 weeks ago
- ★26 · Updated 5 months ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ★59 · Updated 11 months ago
- TOPLOC is a novel method for verifiable inference that enables users to verify that LLM providers are using the correct model configurat… ★24 · Updated 3 weeks ago
- ANE-accelerated embedding models! ★16 · Updated 5 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ★64 · Updated 6 months ago
- MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video ★57 · Updated last year
- ★22 · Updated 11 months ago
- Code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback ★84 · Updated 2 months ago
- ★19 · Updated 7 months ago
- ★17 · Updated last month
- MLX binary vectors and associated algorithms. ★14 · Updated 2 months ago
- ★91 · Updated last month
- Using modal.com to process FineWeb-edu data ★20 · Updated last month
- Experimental compiler for deep learning models ★65 · Updated 3 weeks ago
- Editor with LLM generation tree exploration ★66 · Updated 3 months ago
- mlx image models for Apple Silicon machines ★78 · Updated last month
- Structured outputs for LLMs ★46 · Updated 9 months ago
- ★22 · Updated 7 months ago
- mlx implementations of various transformers, speedups, training ★34 · Updated last year
- This repo maintains a "cheat sheet" for LLMs that are undertrained on mlx ★17 · Updated last month
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ★102 · Updated last year
- See the device (CPU/GPU/ANE) and estimated cost for every layer in your CoreML model. ★22 · Updated 11 months ago
- Access fireworks.ai models via API ★11 · Updated last year
- Rust crate for some audio utilities ★23 · Updated 2 months ago