soketlabs / coomLinks
The COOM Training Framework is a Megatron-Core-based training framework for large-scale language models, designed to handle extensive model training efficiently and inspired by DeepSeek's HAI-LLM optimizations.
☆21 · Updated last week
Alternatives and similar repositories for coom
Users interested in coom are comparing it to the libraries listed below.
- ☆28 · Updated 9 months ago
- This repository contains the code for dataset curation and fine-tuning of the instruct variant of the Bilingual OpenHathi model. The resultin… ☆23 · Updated last year
- A lightweight evaluation suite tailored specifically for assessing Indic LLMs across a diverse range of tasks ☆37 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated 2 months ago
- ☆46 · Updated 4 months ago
- ☆43 · Updated 2 months ago
- This repo has all the basic things you'll need in order to understand the complete vision transformer architecture and its various implementa… ☆228 · Updated 7 months ago
- A blueprint for creating Pretraining and Fine-Tuning datasets for Indic languages ☆107 · Updated 10 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆125 · Updated last year
- A repository consisting of paper/architecture replications of classic/SOTA AI/ML papers in PyTorch ☆318 · Updated 2 weeks ago
- ☆44 · Updated last month
- "LLM from Zero to Hero: An End-to-End Large Language Model Journey from Data to Application!" ☆30 · Updated 2 weeks ago
- Complete implementation of Llama 2 with/without KV cache & inference 🚀 ☆48 · Updated last year
- A small autograd engine inspired by Karpathy's micrograd and PyTorch ☆274 · Updated 8 months ago
- rl from zero pretrain, can it be done? we'll see. ☆66 · Updated 2 weeks ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆284 · Updated 5 months ago
- ☆209 · Updated last month
- lancedb-myntra-fashion-search ☆28 · Updated last year
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆573 · Updated this week
- ☆89 · Updated 4 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆108 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- [ACL'25] Official code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆313 · Updated 3 weeks ago
- Code for training & evaluating Contextual Document Embedding models ☆196 · Updated 2 months ago
- Learnings and programs related to CUDA ☆414 · Updated last month
- List of resources, libraries, and more for developers who would like to build with open-source machine learning off the shelf ☆199 · Updated last year
- Following master Karpathy with a GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated 10 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆381 · Updated this week