keeeeenw / TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
☆11 · Updated last year
Alternatives and similar repositories for TinyLlama
Users interested in TinyLlama are comparing it to the libraries listed below.
- ☆53 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 7 months ago
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- Simple examples using Argilla tools to build AI ☆53 · Updated 7 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper ☆24 · Updated 3 months ago
- ☆66 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆56 · Updated 9 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆80 · Updated last month
- Easy-to-use, high-performance knowledge distillation for LLMs ☆86 · Updated last month
- Modified Beam Search with periodic restarts ☆12 · Updated 9 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 7 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ☆24 · Updated 3 months ago
- ☆28 · Updated 9 months ago
- ☆68 · Updated this week
- ☆51 · Updated 7 months ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆62 · Updated 10 months ago
- A pipeline parallel training script for LLMs ☆149 · Updated last month
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Easily convert HuggingFace models to GGUF format for llama.cpp (see the sketch after this list) ☆21 · Updated 10 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- A repository for research on medium-sized language models ☆76 · Updated last year
- Mycomind Daemon: A mycelium-inspired, advanced Mixture-of-Memory-RAG-Agents (MoMRA) cognitive assistant that combines multiple AI models … ☆34 · Updated 11 months ago
- ☆36 · Updated 2 years ago
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated last year
- Self-host LLMs with LMDeploy and BentoML ☆20 · Updated 2 weeks ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
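
For the HuggingFace-to-GGUF conversion entry above, here is a minimal sketch of the workflow such converter repos typically wrap: download a model from the Hugging Face Hub, then run llama.cpp's `convert_hf_to_gguf.py` script. The model id, output filename, and llama.cpp checkout path below are illustrative assumptions, not taken from any of the listed repos.

```python
# Sketch: convert a HuggingFace model to GGUF for llama.cpp.
# Assumes `pip install huggingface_hub` and a local clone of
# https://github.com/ggerganov/llama.cpp (path below is an assumption).
import subprocess
from huggingface_hub import snapshot_download

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example model (assumed)
local_dir = snapshot_download(model_id)          # fetch model files locally

subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        local_dir,
        "--outfile", "tinyllama-1.1b.gguf",
        "--outtype", "f16",  # keep fp16; quantize afterwards with llama-quantize
    ],
    check=True,
)
```

The resulting `.gguf` file can then be loaded directly by llama.cpp; further size reduction (e.g. 4-bit quantization) is done as a separate step with llama.cpp's quantization tool.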