cindysridykhan / instruct_storyteller_tinyllama2
Training and Fine-tuning an LLM in Python and PyTorch.
☆41 · Updated last year
Alternatives and similar repositories for instruct_storyteller_tinyllama2:
Users interested in instruct_storyteller_tinyllama2 are comparing it to the repositories listed below
- A pipeline for LLM knowledge distillation ☆83 · Updated 5 months ago
- ☆74 · Updated last year
- Experiments on speculative sampling with Llama models ☆122 · Updated last year
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆115 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated last month
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆56 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated 11 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆141 · Updated 9 months ago
- experiments with inference on llama ☆104 · Updated 7 months ago
- inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated last year
- ☆94 · Updated last year
- Train your own small bitnet model ☆64 · Updated 2 months ago
- Data preparation code for Amber 7B LLM ☆84 · Updated 8 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆149 · Updated 10 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- ☆87 · Updated 11 months ago
- ☆74 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆198 · Updated 2 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 9 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 7 months ago
- A bagel, with everything. ☆315 · Updated 9 months ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆156 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆224 · Updated 2 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- ☆108 · Updated 3 months ago
- minimal LLM scripts for 24GB VRAM GPUs. training, inference, whatever ☆35 · Updated this week
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- Set of scripts to finetune LLMs ☆36 · Updated 9 months ago