cindysridykhan / instruct_storyteller_tinyllama2
Training and fine-tuning an LLM in Python and PyTorch.
☆42 · Updated last year
Alternatives and similar repositories for instruct_storyteller_tinyllama2
Users interested in instruct_storyteller_tinyllama2 are comparing it to the repositories listed below.
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆152 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆129 · Updated last year
- Data preparation code for the Amber 7B LLM ☆91 · Updated last year
- ☆95 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆198 · Updated 11 months ago
- ☆76 · Updated last year
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆40 · Updated last week
- Experiments with inference on Llama ☆104 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆69 · Updated last year
- Parameter-efficient fine-tuning script for Phi-3-vision, the strong multimodal language model by Microsoft ☆58 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention: PyTorch implementation + QwenMoE implementation + training script + 1M cont… ☆83 · Updated last year
- A pipeline for LLM knowledge distillation ☆104 · Updated 2 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆45 · Updated 9 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 4 months ago
- ☆198 · Updated 6 months ago
- Small and efficient mathematical-reasoning LLMs ☆71 · Updated last year
- Minimal GRPO implementation from scratch ☆90 · Updated 3 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆277 · Updated last year
- ☆53 · Updated last year
- My fork of Allen AI's OLMo for educational purposes ☆30 · Updated 6 months ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆87 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆68 · Updated last year
- Set of scripts to fine-tune LLMs ☆37 · Updated last year
- Pre-training code for the Amber 7B LLM ☆166 · Updated last year
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆105 · Updated last month