aletheap / ai_on_threads
☆38 · Updated last year
Alternatives and similar repositories for ai_on_threads
Users interested in ai_on_threads are comparing it to the libraries listed below.
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Drop-in replacement for OpenAI, but with open models. ☆152 · Updated 2 years ago
- ☆93 · Updated last year
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- AI sends pull requests for features you request in natural language ☆112 · Updated 2 years ago
- ☆143 · Updated 2 years ago
- ☆211 · Updated 2 years ago
- Highly commented implementations of Transformers in PyTorch ☆137 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- ☆166 · Updated 2 years ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated last year
- Notes from the Latent Space paper club. Follow along or start your own! ☆235 · Updated last year
- Convert all of libgen to high quality markdown ☆254 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- ☆170 · Updated last year
- Simple Transformer in Jax ☆138 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 11 months ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆168 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆179 · Updated last month
- ☆78 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated last month
- ☆22 · Updated last year
- Small finetuned LLMs for a diverse set of useful tasks ☆128 · Updated 2 years ago
- ☆50 · Updated last year
- ☆71 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- Simple embedding -> text model trained on a small subset of Wikipedia sentences. ☆156 · Updated 2 years ago
- ☆95 · Updated 2 years ago