YuvrajSingh-mist / SmolLlama
So, I trained a 130M-parameter Llama architecture that I coded from the ground up to build a small instruct model from scratch. It was trained on the FineWeb dataset from HuggingFace, consisting of roughly 15M texts (the 10BT snapshot), for a total of 3 full epochs.
☆15 · Updated 6 months ago
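As a rough illustration of the data pipeline the description above implies, the sketch below streams the public FineWeb 10BT sample from HuggingFace and tokenizes it. The `HuggingFaceFW/fineweb` and `sample-10BT` identifiers are the dataset's public names; the GPT-2 tokenizer and the 1024-token length are stand-in assumptions for illustration, not details taken from SmolLlama itself.

```python
# A minimal data-loading sketch, not the repo's actual training code.
# "HuggingFaceFW/fineweb" and "sample-10BT" are the public FineWeb names;
# the GPT-2 tokenizer and 1024-token length are assumptions for illustration.
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream the 10BT snapshot so the ~15M documents never need to sit on disk at once.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",
    split="train",
    streaming=True,
)

# Stand-in byte-level BPE tokenizer (SmolLlama may use a different vocabulary).
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(example):
    # FineWeb exposes each raw document under the "text" column.
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = fineweb.map(tokenize)

# Sanity-check: peek at the first few token IDs of one document.
print(next(iter(tokenized))["input_ids"][:20])
```

Everything past this point (the 130M Llama-style model definition and the training loop itself) is specific to the repository and not reproduced here.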
Alternatives and similar repositories for SmolLlama
Users interested in SmolLlama are comparing it to the repositories listed below
- ☆89 · Updated 6 months ago
- ☆157 · Updated 6 months ago
- ☆75 · Updated last year
- A repository consisting of paper/architecture replications of classic/SOTA AI/ML papers in PyTorch ☆383 · Updated 2 weeks ago
- One click templates for inferencing Language Models ☆213 · Updated 2 months ago
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- Simple examples using Argilla tools to build AI ☆56 · Updated 11 months ago
- Solving data for LLMs - Create quality synthetic datasets! ☆151 · Updated 9 months ago
- A compact LLM pretrained in 9 days by using high quality data ☆330 · Updated 6 months ago
- A lightweight evaluation suite tailored specifically for assessing Indic LLMs across a diverse range of tasks ☆38 · Updated last year
- ☆68 · Updated 4 months ago
- Various installation guides for Large Language Models ☆73 · Updated 5 months ago
- RL from zero pretrain, can it be done? Yes. ☆275 · Updated 3 weeks ago
- MLX port for xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 11 months ago
- Chrome & Firefox extension to chat with webpages: local LLMs ☆127 · Updated 10 months ago
- Fast parallel LLM inference for MLX ☆220 · Updated last year
- ☆46 · Updated 6 months ago
- Repository of implementations of classic and SOTA RL algorithms from scratch in PyTorch ☆199 · Updated last month
- 🦾💻🌐 Distributed training & serverless inference at scale on RunPod ☆18 · Updated last year
- Collection of resources for RL and reasoning ☆26 · Updated 8 months ago
- ☆116 · Updated 10 months ago
- ☆86 · Updated last year
- Verifiers for LLM reinforcement learning ☆75 · Updated last month
- Testing LLM reasoning abilities with family relationship quizzes ☆62 · Updated 8 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆125 · Updated 2 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- Inference, fine-tuning, and many more recipes with the Gemma family of models ☆271 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆84 · Updated last month
- ☆146 · Updated last year