omkaark / simple-federated-learning
☆97 · Updated last year
Alternatives and similar repositories for simple-federated-learning
Users interested in simple-federated-learning are comparing it to the repositories listed below.
- Simple Transformer in JAX ☆138 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆63 · Updated last month
- A tiny vector store implementation built with NumPy. ☆62 · Updated last year
- Following master Karpathy with a GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆173 · Updated 11 months ago
- 🌲 A 3D, interactive semantic graph of hacker interests at TreeHacks, scraped from Slack intro messages ☆73 · Updated last year
- A really tiny autograd engine ☆94 · Updated last month
- papers.day ☆91 · Updated last year
- A small autograd engine inspired by Karpathy's micrograd and PyTorch ☆272 · Updated 7 months ago
- Run GGML models with Kubernetes. ☆173 · Updated last year
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated this week
- ☆89 · Updated 9 months ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆81 · Updated 2 months ago
- I will automate Factorio ☆106 · Updated 11 months ago
- A highly efficient compression algorithm for the N1 implant (Neuralink's compression challenge) ☆47 · Updated last year
- Look how they massacred my boy ☆63 · Updated 8 months ago
- Run PaliGemma in real time ☆131 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 10 months ago
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆110 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆64 · Updated 8 months ago
- Stream of my favorite papers and links ☆42 · Updated 3 months ago
- ☆111 · Updated last year
- SIMD quantization kernels ☆73 · Updated last week
- Fast parallel LLM inference for MLX ☆198 · Updated last year
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆168 · Updated last year
- ☆93 · Updated 9 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 8 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆70 · Updated 5 months ago
- This repository contains a simple Llama 3 implementation in pure JAX. ☆67 · Updated 4 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆214 · Updated last year