kyegomez / Python-Package-Template
An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting, and much more
☆199 · Updated last week
Alternatives and similar repositories for Python-Package-Template
Users interested in Python-Package-Template are comparing it to the libraries listed below
- An extension of the nanoGPT repository for training small MoE models ☆233 · Updated 10 months ago
- An open-source implementation of AlphaEvolve ☆69 · Updated 8 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- ☆206 · Updated last year
- ☆207 · Updated 3 weeks ago
- An open source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆203 · Updated last week
- A minimal GRPO implementation from scratch ☆102 · Updated 10 months ago
- LoRA and DoRA from Scratch Implementations (a minimal LoRA sketch follows this list) ☆215 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 11 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆344 · Updated last month
- Code repository for Black Mamba ☆261 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI (a hypersphere-normalized layer sketch follows this list) ☆293 · Updated 8 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆205 · Updated 3 weeks ago
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆59 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆168 · Updated 2 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆294 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆445 · Updated last week
- Build high-performance AI models with modular building blocks ☆576 · Updated this week
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆334 · Updated last year
- ☆137 · Updated last year
- Repository for code used in the xVal paper ☆148 · Updated last year
- All credits go to HuggingFace's Daily AI papers (https://huggingface.co/papers) and the research community. 🔉Audio summaries here (https… ☆211 · Updated 3 months ago
- Google TPU optimizations for transformers models ☆135 · Updated 2 weeks ago
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆144 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆198 · Updated last year
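For the LoRA and DoRA entry above, a minimal sketch of a LoRA linear layer may help place it: the pretrained weight is frozen and a trainable low-rank update is added on top. This assumes PyTorch; `LoRALinear`, `rank`, and `alpha` are illustrative names, not that repository's API.

```python
# A minimal LoRA sketch, assuming PyTorch. Illustrative names only,
# not the API of the "LoRA and DoRA from Scratch" repository.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        self.base.bias.requires_grad_(False)    # freeze pretrained bias
        # Low-rank factors: A maps down to `rank`, B maps back up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = Wx + scaling * B(Ax); only lora_A and lora_B get gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


layer = LoRALinear(768, 768)
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

Zero-initializing `lora_B` makes the adapter a no-op at the start of fine-tuning, so training begins exactly from the pretrained model's behavior.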
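Several entries above revolve around nGPT's "learning on the hypersphere". A minimal sketch of that core idea, again assuming PyTorch (`NormalizedLinear` is an illustrative name, not any of these repositories' APIs): both the weight rows and the hidden state are renormalized to unit norm, so the layer computes cosine similarities with a learned rescaling.

```python
# A minimal sketch of the hypersphere idea behind nGPT, assuming PyTorch.
# Illustrative only; the listed repositories differ in details.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NormalizedLinear(nn.Module):
    """Linear layer whose weight rows are renormalized to unit norm,
    so inputs and weights interact as points on the unit hypersphere."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.scale = nn.Parameter(torch.ones(out_features))  # learned rescaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.normalize(self.weight, dim=-1)  # each weight row on the unit sphere
        x = F.normalize(x, dim=-1)            # hidden state on the unit sphere
        return (x @ w.T) * self.scale         # cosine similarities, rescaled


h = torch.randn(4, 64)
print(NormalizedLinear(64, 64)(h).shape)  # torch.Size([4, 64])
```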