cindysridykhan / instruct_storyteller_tinyllama2
Training and fine-tuning an LLM in Python and PyTorch.
☆43 · Updated 2 years ago
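As a rough illustration of what "training and fine-tuning an LLM in PyTorch" involves, here is a minimal, generic next-token training loop. This is a sketch, not code from this repository: the model, vocabulary size, and random token data are hypothetical placeholders.

```python
# Minimal sketch (not from this repository): a generic next-token
# prediction loop of the kind LLM training/fine-tuning code builds on.
# Model size, vocab size, and the random "data" are assumptions.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 256, 64, 32  # toy sizes, assumed

class TinyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                # tokens: (batch, seq)
        return self.proj(self.embed(tokens))  # logits: (batch, seq, vocab)

model = TinyCausalLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(100):
    batch = torch.randint(0, vocab_size, (8, seq_len + 1))   # fake token ids
    inputs, targets = batch[:, :-1], batch[:, 1:]            # shift by one
    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real run would swap the toy model for a pretrained checkpoint and the random ids for tokenized text, but the loss and update steps stay the same.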
Alternatives and similar repositories for instruct_storyteller_tinyllama2
Users interested in instruct_storyteller_tinyllama2 are comparing it to the repositories listed below.
- Pre-training code for Amber 7B LLM ☆170 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- ☆86 · Updated last year
- Lightweight demos for fine-tuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- Experiments with inference on Llama ☆103 · Updated last year
- A pipeline for LLM knowledge distillation ☆112 · Updated 9 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆164 · Updated 5 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Simple GRPO scripts and configurations. ☆59 · Updated 11 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated 2 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆184 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆142 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- A bagel, with everything. ☆325 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆50 · Updated 2 weeks ago
- ☆95 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · Updated 2 years ago
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆246 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆233 · Updated last year
- A list of LLM benchmark frameworks. ☆73 · Updated last year
- A compact LLM pretrained in 9 days using high-quality data ☆340 · Updated 9 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆203 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago