YuvrajSingh-mist / SmolLlama
I trained a 130M-parameter Llama architecture, coded from the ground up, to build a small instruct model from scratch. It was trained on the FineWeb dataset from HuggingFace, consisting of 15M texts (the 10BT snapshot), for a total of 3 full epochs.
17 · Mar 26, 2025 · Updated last year
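As a rough sanity check on the "130M" figure, the parameter count of a Llama-style decoder can be estimated from its configuration: token embeddings, per-layer attention and SwiGLU MLP weights, and RMSNorm scales. The configuration values below (hidden size, layer count, FFN width, vocabulary size) are illustrative assumptions, not the repository's actual settings:

```python
def llama_param_count(vocab: int, d: int, n_layers: int, d_ff: int,
                      tied_embeddings: bool = False) -> int:
    """Approximate parameter count for a Llama-style decoder.

    Counts: token embeddings, per-layer attention (Q, K, V, O projections),
    SwiGLU MLP (gate, up, down projections), RMSNorm scales, and the
    LM head (shared with embeddings if tied_embeddings is True).
    """
    embed = vocab * d                      # token embedding table
    attn = 4 * d * d                       # Q, K, V, O projections per layer
    mlp = 3 * d * d_ff                     # gate, up, down projections (SwiGLU)
    norms = 2 * d                          # two RMSNorm scales per layer
    per_layer = attn + mlp + norms
    head = 0 if tied_embeddings else vocab * d  # untied LM head
    return embed + n_layers * per_layer + d + head  # +d for final RMSNorm


# Hypothetical config chosen to land near 130M parameters; the real
# SmolLlama hyperparameters may differ.
total = llama_param_count(vocab=32000, d=768, n_layers=12, d_ff=2048)
print(f"{total / 1e6:.1f}M parameters")
```

With these assumed values the estimate comes out in the low-130M range, showing that a 12-layer, 768-wide model with a 32k vocabulary is a plausible shape for a model of this size.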
