Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit
☆63 · Jun 21, 2023 · Updated 2 years ago
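The headline repo's claim rests on a property of rotary position embeddings: RoPE computes each position's rotation angles from a closed-form formula, so positions beyond the pretraining window are still well-defined, and finetuning on longer sequences simply teaches the model to use them. A minimal NumPy sketch of that idea (function names and dimensions are illustrative, not taken from the repo):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    # theta_{p,i} = p / base^(2i/dim): defined for ANY position p,
    # not just those seen during pretraining.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)

def apply_rope(x, positions):
    # Rotate each (even, odd) feature pair of x by its position's angle.
    angles = rope_angles(positions, x.shape[-1])
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

x = np.random.randn(8, 64)
short = apply_rope(x, np.arange(0, 8))        # positions inside a 2k pretraining window
long = apply_rope(x, np.arange(4096, 4104))   # positions far beyond it: still valid
print(short.shape, long.shape)                # both (8, 64)
```

Because each pair is rotated (an orthogonal transform), the embedding norms are unchanged at any position; nothing in the formula breaks past the pretrained length, which is what makes extension-by-finetuning possible.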
Alternatives and similar repositories for cutoff-len-is-context-len
Users that are interested in cutoff-len-is-context-len are comparing it to the libraries listed below.
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset ☆12 · Mar 29, 2023 · Updated 3 years ago
- ☆15 · Oct 6, 2022 · Updated 3 years ago
- A bunch of LLaMA model investigations, including recreating generative agents (from the paper Generative Agents: Interactive Simulacra of… ☆23 · May 31, 2023 · Updated 2 years ago
- ☆18 · Mar 18, 2024 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings ☆63 · Jul 8, 2024 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆181 · May 2, 2024 · Updated last year
- QLoRA with enhanced multi-GPU support ☆38 · Aug 8, 2023 · Updated 2 years ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆18 · Nov 4, 2025 · Updated 4 months ago
- Image-diffusion block-merging technique applied to transformer-based language models ☆56 · May 8, 2023 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Sep 26, 2023 · Updated 2 years ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Feb 8, 2026 · Updated last month
- ☆39 · Oct 3, 2022 · Updated 3 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆412 · May 17, 2024 · Updated last year
- Experimental GPU language with meta-programming ☆27 · Sep 6, 2024 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Mar 31, 2024 · Updated last year
- Multi-Domain Expert Learning ☆67 · Jan 23, 2024 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Mar 25, 2025 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · May 6, 2023 · Updated 2 years ago
- Zeus LLM Trainer, a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆70 · Aug 27, 2023 · Updated 2 years ago
- Experiments in training a new and improved T5 ☆76 · Apr 15, 2024 · Updated last year
- A library for squeakily cleaning and filtering language datasets ☆50 · Jul 10, 2023 · Updated 2 years ago
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆20 · Mar 9, 2025 · Updated last year
- Conversion script adapting the Vicuna dataset into Alpaca format for use with oobabooga's trainer ☆13 · Jun 21, 2023 · Updated 2 years ago
- Example of applying CUDA graphs to LLaMA-v2 ☆12 · Aug 25, 2023 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆11 · Jun 1, 2023 · Updated 2 years ago
- ☆23 · Sep 19, 2024 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Dec 2, 2024 · Updated last year
- ☆16 · Jul 2, 2025 · Updated 8 months ago
- Rust bindings for CTranslate2 ☆14 · Jun 21, 2023 · Updated 2 years ago
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Aug 29, 2023 · Updated 2 years ago
- GPT*: training faster small transformers using ALiBi, parallel residual connections, and more ☆21 · Oct 29, 2022 · Updated 3 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Dec 2, 2023 · Updated 2 years ago
- ☆259 · Jun 6, 2025 · Updated 9 months ago
- Public reports detailing the responses of Large Language Models to sets of prompts ☆33 · Jan 4, 2025 · Updated last year
- ☆34 · May 14, 2025 · Updated 10 months ago
- ☆457 · Oct 15, 2023 · Updated 2 years ago
- New optimizer ☆20 · Aug 4, 2024 · Updated last year
- [Oral, NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆16 · Feb 12, 2026 · Updated last month