Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit
☆63 · Jun 21, 2023 · Updated 2 years ago
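The claim above rests on a property of rotary position embeddings: the rotation is defined for any integer position, so finetuning on longer sequences exposes the model to larger rotation angles rather than requiring new position parameters. A minimal NumPy sketch of standard RoPE (function names are illustrative, not from the repository):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE frequencies: theta_i = base^(-2i/dim) for pair index i
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)  # shape: (len(positions), dim // 2)

def apply_rope(x, positions, base=10000.0):
    # x: (seq, dim) with dim even; rotates each consecutive feature pair
    # (x[2i], x[2i+1]) by the angle for its position and frequency.
    seq, dim = x.shape
    ang = rope_angles(positions, dim, base)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# The same function works for positions far beyond a short pretraining
# window -- nothing in the formula depends on a maximum length.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
short = apply_rope(x, np.arange(4))           # positions inside a short window
long_ = apply_rope(x, np.arange(4096, 4100))  # positions far past it
```

Because each step is a pure rotation, vector norms are preserved at any position; what changes out-of-window is only the angle distribution the model has seen, which is what finetuning on longer sequences adapts.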
Alternatives and similar repositories for cutoff-len-is-context-len
Users that are interested in cutoff-len-is-context-len are comparing it to the libraries listed below
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset ☆12 · Mar 29, 2023 · Updated 2 years ago
- ☆18 · Mar 18, 2024 · Updated last year
- UTAUTAI (Unrestricted Tune Automated Technology Artificial Interigence) ☆15 · Oct 27, 2023 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Sep 26, 2023 · Updated 2 years ago
- ☆39 · Oct 3, 2022 · Updated 3 years ago
- PyTorch interface for TrueGrad Optimizers ☆43 · Aug 8, 2023 · Updated 2 years ago
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Dec 22, 2023 · Updated 2 years ago
- A set of visualization engines ☆14 · Updated this week
- QLoRA with Enhanced Multi-GPU Support ☆38 · Aug 8, 2023 · Updated 2 years ago
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings ☆63 · Jul 8, 2024 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆180 · May 2, 2024 · Updated last year
- ☆10 · Aug 9, 2023 · Updated 2 years ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆17 · Nov 4, 2025 · Updated 4 months ago
- ☆11 · Feb 25, 2024 · Updated 2 years ago
- Experiments for efforts to train a new and improved T5 ☆76 · Apr 15, 2024 · Updated last year
- Experimental GPU language with meta-programming ☆26 · Sep 6, 2024 · Updated last year
- A bunch of LLaMA model investigations, including recreating generative agents (from the paper Generative Agents: Interactive Simulacra of… ☆23 · May 31, 2023 · Updated 2 years ago
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Aug 29, 2023 · Updated 2 years ago
- Rust bindings for CTranslate2 ☆14 · Jun 21, 2023 · Updated 2 years ago
- ☆12 · Jan 4, 2024 · Updated 2 years ago
- ☆259 · Jun 6, 2025 · Updated 9 months ago
- Image-diffusion block-merging technique applied to transformer-based language models ☆56 · May 8, 2023 · Updated 2 years ago
- ☆29 · Oct 24, 2025 · Updated 4 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Sequences (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Code associated with the WANLI dataset in Liu et al., 2022 ☆31 · May 24, 2023 · Updated 2 years ago
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆15 · Feb 12, 2026 · Updated 3 weeks ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Mar 31, 2024 · Updated last year
- ☆16 · Jul 2, 2025 · Updated 8 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆57 · Mar 10, 2025 · Updated 11 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · May 6, 2023 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆409 · May 17, 2024 · Updated last year
- Conversion script adapting the Vicuna dataset into Alpaca format for use with oobabooga's trainer ☆13 · Jun 21, 2023 · Updated 2 years ago
- Fast and differentiable hidden Markov model in C++ ☆19 · Jan 20, 2023 · Updated 3 years ago
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆19 · Mar 9, 2025 · Updated 11 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆138 · Apr 30, 2024 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆70 · Aug 27, 2023 · Updated 2 years ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆137 · Mar 14, 2024 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Dec 2, 2023 · Updated 2 years ago
- ☆34 · May 14, 2025 · Updated 9 months ago