fkodom / python-repo-template
Template repo for Python projects, especially those focusing on machine learning and/or deep learning.
☆12 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for python-repo-template
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated this week
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆80 · Updated 11 months ago
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆179 · Updated 5 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆157 · Updated 10 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆252 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆48 · Updated 7 months ago
- Collection of autoregressive model implementations ☆67 · Updated this week
- Code for training & evaluating Contextual Document Embedding models ☆117 · Updated this week
- Public Inflection Benchmarks ☆69 · Updated 8 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆59 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆85 · Updated 2 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆128 · Updated 3 weeks ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆127 · Updated 6 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆95 · Updated 2 weeks ago
- Code repository for the c-BTM paper ☆105 · Updated last year