isamu-isozaki / huggingface-reading-group
This repository's goal is to compile all past presentations of the Huggingface reading group
☆48 · Updated last year
Alternatives and similar repositories for huggingface-reading-group
Users interested in huggingface-reading-group are comparing it to the repositories listed below.
- An introduction to LLM Sampling ☆79 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆233 · Updated last year
- Let's build better datasets, together! ☆267 · Updated last year
- Set of scripts to finetune LLMs ☆38 · Updated last year
- ☆125 · Updated last year
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆294 · Updated 10 months ago
- Collection of autoregressive model implementations ☆85 · Updated this week
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆174 · Updated 6 months ago
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Website for hosting the Open Foundation Models Cheat Sheet ☆269 · Updated 8 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆83 · Updated 2 years ago
- ☆137 · Updated last year
- Hub for researchers exploring VLMs and Multimodal Learning :) ☆59 · Updated last week
- ☆138 · Updated 4 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆113 · Updated last year
- ☆38 · Updated last year
- ☆150 · Updated 4 months ago
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- ☆68 · Updated last year
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Prune transformer layers ☆74 · Updated last year
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆181 · Updated 6 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆149 · Updated 3 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding ☆174 · Updated 11 months ago
- I learn about and explain quantization ☆26 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- Google TPU optimizations for transformers models ☆133 · Updated 3 weeks ago