LLM360 / amber-data-prep
Data preparation code for Amber 7B LLM
☆91 · Updated last year
Alternatives and similar repositories for amber-data-prep
Users interested in amber-data-prep are comparing it to the repositories listed below.
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated last year
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated last year
- Open Implementations of LLM Analyses ☆103 · Updated 7 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆78 · Updated last year
- ☆76 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆258 · Updated 10 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆203 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆124 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated 7 months ago
- A pipeline for LLM knowledge distillation ☆104 · Updated 2 months ago
- Experiments on speculative sampling with Llama models ☆126 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 11 months ago
- FuseAI Project ☆87 · Updated 4 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆142 · Updated 7 months ago
- ☆47 · Updated 9 months ago
- ☆49 · Updated 7 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆190 · Updated 9 months ago
- EvolKit is a framework for automatically enhancing the complexity of instructions used for fine-tuning Large Language Models ☆221 · Updated 7 months ago
- My fork of Allen AI's OLMo for educational purposes ☆30 · Updated 6 months ago
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- ☆120 · Updated 8 months ago
- ☆34 · Updated 11 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆221 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆110 · Updated 8 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆151 · Updated last year