josehoras / Knowledge-Distillation
☆11 · Updated 5 years ago
Alternatives and similar repositories for Knowledge-Distillation
Users interested in Knowledge-Distillation are comparing it to the libraries listed below.
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ☆171 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch ☆243 · Updated last year
- Naively combining transformers and Kolmogorov-Arnold Networks to learn and experiment ☆35 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆116 · Updated 2 years ago
- Implementation of DoRA ☆301 · Updated last year
- Conference schedule, top papers, and analysis of the data for NeurIPS 2023! ☆119 · Updated last year
- Implementation of xLSTM in PyTorch from the paper "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated this week
- Combining ViT and GPT-2 for image captioning. Trained on MS-COCO. The model was implemented mostly from scratch. ☆44 · Updated 2 years ago
- ☆134 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆206 · Updated last month
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆131 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆277 · Updated 7 months ago
- Unofficial NoisyNN implementation ☆50 · Updated last year
- Trying out the Mamba architecture on small examples (CIFAR-10, Shakespeare char-level, etc.) ☆47 · Updated last year
- Notes on quantization in neural networks ☆104 · Updated last year
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆190 · Updated this week
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆211 · Updated last year
- Documentation, notes, links, etc. for streams ☆82 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆327 · Updated 6 months ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence) ☆123 · Updated 11 months ago
- Distributed training (multi-node) of a Transformer model ☆84 · Updated last year
- Implementation of Agent Attention in PyTorch ☆91 · Updated last year
- PyTorch implementation of the xLSTM model by Beck et al. (2024) ☆174 · Updated last year
- This repository contains an overview of important follow-up works based on the original Vision Transformer (ViT) by Google ☆183 · Updated 3 years ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆362 · Updated last year
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last month
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆123 · Updated last year
- Build high-performance AI models with modular building blocks ☆555 · Updated this week
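For context on the topic this listing centers on: knowledge distillation trains a small student model to match the temperature-softened output distribution of a larger teacher (Hinton et al., 2015). A minimal sketch of that loss follows, written with NumPy for self-containment rather than PyTorch; the function names (`softmax`, `distillation_loss`) and the temperature value are illustrative choices, not taken from any repository above.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    The T^2 factor keeps gradient magnitudes comparable across temperatures,
    as suggested in the original distillation paper.
    """
    p = softmax(teacher_logits, T)               # teacher soft targets
    log_q = np.log(softmax(student_logits, T))   # student log-probabilities
    kl = np.sum(p * (np.log(p) - log_q), axis=-1)
    return (T ** 2) * np.mean(kl)
```

In practice this term is usually mixed with the ordinary cross-entropy on hard labels via a weighting coefficient; when the student's logits exactly match the teacher's, the loss is zero.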