broskicodes / slms
Experimenting with small language models
☆67 · Updated last year
Alternatives and similar repositories for slms
Users interested in slms are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- 1.58-bit LLaMa model (a ternary-quantization sketch follows this list) ☆81 · Updated last year
- ☆36 · Updated 2 weeks ago
- Video+code lecture on building nanoGPT from scratch ☆67 · Updated 11 months ago
- ☆130 · Updated 9 months ago
- ☆127 · Updated 2 months ago
- Train your own small bitnet model ☆71 · Updated 7 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆63 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 7 months ago
- ☆121 · Updated 2 months ago
- Collection of autoregressive model implementations ☆85 · Updated last month
- ☆66 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss (a SLERP sketch follows this list) ☆124 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆279 · Updated 3 months ago
- Experimental BitNet Implementation ☆65 · Updated 2 weeks ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆46 · Updated last year
- ☆54 · Updated 3 months ago
- Function Calling Benchmark & Testing ☆87 · Updated 10 months ago
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆163 · Updated last year
- This repository's goal is to precompile all past presentations of the Huggingface reading group ☆48 · Updated 9 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- So, I trained a 130M-parameter Llama architecture I coded from the ground up to build a small instruct model from scratch. Trained on the FineWeb dataset… ☆15 · Updated 2 months ago
- The training notebooks that were similar to the original script used to train TinyMistral. ☆21 · Updated last year
- ☆48 · Updated 3 months ago
- ☆118 · Updated 9 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch with a $500 budget ☆151 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models (an SVD-based extraction sketch follows this list) ☆171 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
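
Several entries above revolve around 1.58-bit ("b1.58") models, where weights are constrained to {-1, 0, +1}. As a rough orientation, here is a minimal sketch of the absmean ternary quantizer described in the BitNet b1.58 paper; it is an illustration only and not code from any of the linked repositories.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, +1} with a per-tensor scale (BitNet b1.58 style)."""
    scale = w.abs().mean().clamp(min=eps)   # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weights
    return w_q, scale

# Dequantized approximation used in the forward pass: w_hat ≈ w_q * scale
w = torch.randn(1024, 1024)
w_q, scale = absmean_ternary_quantize(w)
w_hat = w_q * scale
```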
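The spherical-merge entry refers to SLERP-style merging, which interpolates along the arc between two weight vectors instead of averaging them linearly. A minimal sketch follows, assuming both checkpoints share the same architecture and state-dict keys; the function names are illustrative, not that repository's API.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shape weight tensors."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    dot = torch.clamp(torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)  # angle between the two weight vectors
    if theta.abs() < 1e-4:
        # Nearly colinear: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * a + (torch.sin(t * theta) / sin_theta) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts parameter by parameter (t=0.5 weights both models equally)."""
    return {k: slerp(sd_a[k], sd_b[k], t) for k in sd_a}
```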
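The low-rank adapter extraction entry works in the opposite direction from LoRA training: given a base model and a full fine-tune, the weight delta is approximated with a rank-r factorization via truncated SVD. A minimal sketch under that assumption, again with illustrative names rather than the repository's actual interface:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Approximate the fine-tune delta (w_tuned - w_base) as a rank-r product lora_b @ lora_a."""
    delta = (w_tuned - w_base).float()
    # Truncated SVD keeps the r largest singular directions of the delta.
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    sqrt_s = torch.diag(s[:rank].sqrt())
    lora_b = u[:, :rank] @ sqrt_s   # (out_features, rank)
    lora_a = sqrt_s @ vh[:rank, :]  # (rank, in_features)
    return lora_a, lora_b

# Reconstruction: tuned layer ≈ w_base + lora_b @ lora_a
```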