cindysridykhan / instruct_storyteller_tinyllama2
Training and fine-tuning an LLM in Python and PyTorch.
☆41 · Updated last year
Alternatives and similar repositories for instruct_storyteller_tinyllama2:
Users interested in instruct_storyteller_tinyllama2 are comparing it to the libraries listed below.
- ☆94 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆146 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆64 · Updated last year
- ☆75 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Data preparation code for Amber 7B LLM ☆88 · Updated 11 months ago
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ☆82 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆274 · Updated last year
- Pre-training code for Amber 7B LLM ☆166 · Updated 11 months ago
- A pipeline for LLM knowledge distillation ☆100 · Updated 3 weeks ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- Evaluating LLMs with CommonGen-Lite ☆89 · Updated last year
- Experiments with inference on Llama ☆104 · Updated 10 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆120 · Updated last year
- A bagel, with everything. ☆320 · Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆100 · Updated 9 months ago
- ☆87 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 4 months ago
- RWKV in nanoGPT style ☆189 · Updated 10 months ago
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆135 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 9 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Sequences (ICLR 2024) ☆205 · Updated 11 months ago
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated 11 months ago
- ☆31 · Updated 10 months ago
- ☆32 · Updated last year
- ☆197 · Updated 4 months ago