cindysridykhan / instruct_storyteller_tinyllama2
Training and fine-tuning an LLM in Python and PyTorch.
☆42 · Updated last year
Alternatives and similar repositories for instruct_storyteller_tinyllama2
Users that are interested in instruct_storyteller_tinyllama2 are comparing it to the libraries listed below
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆67 · Updated last year
- A pipeline for LLM knowledge distillation ☆104 · Updated 2 months ago
- ☆87 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters trained from scratch on a $500 budget ☆151 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆199 · Updated 10 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated last year
- ☆95 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆76 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss ☆124 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆203 · Updated last year
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Official implementation of "Extending LLMs' Context Window with 100 Samples" ☆78 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆190 · Updated 9 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆275 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- ☆51 · Updated 7 months ago
- A bagel, with everything. ☆321 · Updated last year
- My fork of Allen AI's OLMo for educational purposes ☆30 · Updated 6 months ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 2 weeks ago
- Minimal LLM scripts for 24GB VRAM GPUs: training, inference, whatever ☆39 · Updated 2 weeks ago
- Experiments with inference on LLaMA ☆104 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 10 months ago
- RWKV in nanoGPT style ☆191 · Updated 11 months ago
- ☆53 · Updated last year