joey00072 / TinyLora
A clean implementation of Low-Rank Adaptation (LoRA) for Large Language Models
☆9 · Updated last year
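For context, LoRA freezes the pretrained weights and trains only a small low-rank update. The sketch below is purely illustrative and assumes PyTorch; the class name `LoRALinear` and the `r`/`alpha` hyperparameters are hypothetical and not taken from this repository's code.

```python
# Minimal LoRA sketch (illustrative, not TinyLora's actual code).
# Idea: freeze the pretrained weight W and learn a low-rank update B @ A,
# so the effective weight becomes W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.scale = alpha / r
        # A projects down to rank r, B projects back up; B starts at zero
        # so training begins exactly at the pretrained model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection layer and train only A and B.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 10, 768))
```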
Related projects
Alternatives and complementary repositories for TinyLora
- ☆22 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Training hybrid models for dummies. ☆15 · Updated 3 weeks ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated 8 months ago
- A JAX-like function transformation engine, but micro: microjax ☆26 · Updated 3 weeks ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated 10 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆44 · Updated 2 weeks ago
- LLM training in simple, raw C/CUDA ☆12 · Updated last month
- Implementation of https://arxiv.org/pdf/2312.09299 ☆19 · Updated 4 months ago
- Code accompanying the paper "A Language Model's Guide Through Latent Space". It contains functionality for training and using concept vec… ☆16 · Updated 8 months ago
- ☆41 · Updated 2 weeks ago
- ☆27 · Updated last year
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated 5 months ago
- ☆57 · Updated 11 months ago
- ☆24 · Updated last year
- Modified beam search with periodic restart ☆12 · Updated 2 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆23 · Updated last week
- Collection of autoregressive model implementations ☆67 · Updated this week
- ☆33 · Updated 6 months ago
- ☆16 · Updated last month
- An alternative way of calculating self-attention ☆18 · Updated 5 months ago
- 🚀🤗 A collection of templates for Hugging Face Spaces ☆35 · Updated last year
- A sample pattern for running CI tests on Modal ☆13 · Updated 2 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆29 · Updated 2 weeks ago
- Public reports detailing responses to sets of prompts by Large Language Models. ☆26 · Updated last year
- Efficient Dictionary Learning with Switch Sparse Autoencoders (SAEs) ☆12 · Updated last month
- ☆54 · Updated 10 months ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen during pre-training extends the model's context limit ☆63 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- DPO, but faster 🚀 ☆23 · Updated 3 weeks ago