cindysridykhan / instruct_storyteller_tinyllama2
Training and fine-tuning an LLM in Python and PyTorch.
☆41 · Updated last year
Alternatives and similar repositories for instruct_storyteller_tinyllama2:
Users interested in instruct_storyteller_tinyllama2 are comparing it to the libraries listed below.
- Experiments on speculative sampling with Llama models · ☆125 · Updated last year
- A pipeline for LLM knowledge distillation · ☆99 · Updated this week
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… · ☆63 · Updated last year
- Data preparation code for the Amber 7B LLM · ☆86 · Updated 10 months ago
- Small and Efficient Mathematical Reasoning LLMs · ☆71 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget · ☆144 · Updated 11 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss · ☆117 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs · ☆186 · Updated 7 months ago
- ☆87 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… · ☆147 · Updated last year
- Simple GRPO scripts and configurations · ☆58 · Updated last month
- Data preparation code for the CrystalCoder 7B LLM · ☆44 · Updated 10 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients · ☆196 · Updated 8 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ · ☆99 · Updated last year
- Lightweight demos for fine-tuning LLMs, powered by 🤗 Transformers and open-source datasets · ☆73 · Updated 5 months ago
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub · ☆158 · Updated last year
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs · ☆79 · Updated 2 weeks ago
- A list of LLM benchmark frameworks · ☆65 · Updated last year
- ☆36 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free · ☆230 · Updated 4 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes · ☆82 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆77 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models · ☆69 · Updated last year
- A bagel, with everything · ☆317 · Updated 11 months ago
- ☆94 · Updated last year
- ☆194 · Updated 3 months ago
- Simple implementation of speculative sampling in NumPy for GPT-2 · ☆92 · Updated last year
- ☆117 · Updated 7 months ago
- ☆74 · Updated last year
- ☆84 · Updated last year