iwiwi / epochraft
Checkpointable dataset utilities for foundation model training
☆32 · Updated last year
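The core idea behind a checkpointable dataset — saving the position of a streaming data pipeline so training can resume mid-epoch after an interruption — can be sketched in plain Python. This is an illustrative toy under assumed names (`CheckpointableIterator`, `state_dict`, `load_state_dict` are hypothetical here), not epochraft's actual API:

```python
class CheckpointableIterator:
    """Toy resumable iterator: wraps a list of samples and tracks how
    many have been consumed, so iteration can restart exactly where a
    saved checkpoint left off."""

    def __init__(self, samples, start_index=0):
        self.samples = samples
        self.index = start_index  # number of samples already consumed

    def __iter__(self):
        while self.index < len(self.samples):
            sample = self.samples[self.index]
            self.index += 1  # advance cursor before yielding resumes
            yield sample

    def state_dict(self):
        # Everything needed to resume: just the cursor position here.
        return {"index": self.index}

    def load_state_dict(self, state):
        self.index = state["index"]


# Consume part of the stream, checkpoint, then resume from the checkpoint.
it = CheckpointableIterator(["a", "b", "c", "d"])
stream = iter(it)
first_two = [next(stream), next(stream)]  # ["a", "b"]
ckpt = it.state_dict()                    # {"index": 2}

resumed = CheckpointableIterator(["a", "b", "c", "d"])
resumed.load_state_dict(ckpt)
rest = list(resumed)                      # ["c", "d"]
```

A real implementation must also capture shuffle-buffer contents, RNG state, and tokenizer/packing state, which is what makes libraries in this space non-trivial.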
Alternatives and similar repositories for epochraft:
Users interested in epochraft are comparing it to the libraries listed below.
- ☆20 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆12 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆67 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆31 · Updated last year
- ☆72 · Updated 9 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆15 · Updated 5 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- This repository contains code for removing benchmark data from your training data, to help combat data snooping. ☆25 · Updated last year
- ☆15 · Updated 2 months ago
- ☆14 · Updated 10 months ago
- sigma-MoE layer ☆18 · Updated last year
- ☆46 · Updated 2 years ago
- Japanese Massive Multitask Language Understanding Benchmark ☆32 · Updated 2 months ago
- ☆29 · Updated 2 years ago
- An implementation of "Subspace Representations for Soft Set Operations and Sentence Similarities" (NAACL 2024) ☆10 · Updated 8 months ago
- Japanese LLaMa experiment ☆52 · Updated 2 months ago
- Repository for Skill Set Optimization ☆12 · Updated 6 months ago
- ☆25 · Updated 3 months ago
- Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes ☆12 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that solely relies on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- ☆58 · Updated 8 months ago
- ☆48 · Updated last year
- ☆18 · Updated 8 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆35 · Updated last year
- A toolkit for scaling law research ⚖ ☆47 · Updated 3 weeks ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆11 · Updated 2 months ago