broskicodes / slms
Experimenting with small language models
☆75 · Updated last year
Alternatives and similar repositories for slms
Users interested in slms are comparing it to the libraries listed below.
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- ☆136 · Updated last year
- Train your own small bitnet model ☆75 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes. ☆63 · Updated 10 months ago
- Combining ViT and GPT-2 for image captioning. Trained on MS-COCO. The model was implemented mostly from scratch. ☆46 · Updated 2 years ago
- ☆75 · Updated last year
- ☆127 · Updated 9 months ago
- Various installation guides for Large Language Models ☆77 · Updated 7 months ago
- One-click templates for running inference with Language Models ☆222 · Updated last month
- Experimental BitNet Implementation ☆73 · Updated 3 weeks ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we recreate its architecture in a simpler manner. ☆194 · Updated last year
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆315 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- Set of scripts to finetune LLMs ☆38 · Updated last year
- A little (lil) Language Model (LM). A tiny reproduction of LLaMA 3's model architecture. ☆53 · Updated 7 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- ☆208 · Updated last year
- ☆120 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆163 · Updated 4 months ago
- This repository's goal is to compile all past presentations of the Huggingface reading group ☆48 · Updated last year
- Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation … ☆190 · Updated last year
- Toolkit for attaching, training, saving and loading new heads for transformer models ☆293 · Updated 9 months ago
- ☆88 · Updated 2 years ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆112 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆169 · Updated last year
- A compact LLM pretrained in 9 days by using high-quality data ☆336 · Updated 8 months ago
- Collection of autoregressive model implementations ☆85 · Updated 7 months ago