cozheyuanzhangde / Forward-Forward
Hinton's Forward-Forward Algorithm for Deep Learning
☆10 · Updated 2 years ago
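For context on what this repository implements: the sketch below is a minimal, illustrative take on the Forward-Forward idea, in which each layer is trained with a local "goodness" objective on positive and negative data rather than by backpropagating an end-to-end loss. It assumes PyTorch; the layer class, threshold value, and training loop are illustrative assumptions, not code taken from this repository.

```python
# Minimal sketch of the Forward-Forward idea (Hinton, 2022), assuming PyTorch.
# Names, the threshold, and the loop are illustrative, not from the repo above.
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One layer trained with a local goodness objective (no cross-layer backprop)."""
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize the input so goodness from the previous layer is removed.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = mean of squared activations; push it above the threshold for
        # positive data and below it for negative data.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = torch.log1p(torch.exp(torch.cat([
            self.threshold - g_pos,  # positive samples: goodness should exceed threshold
            g_neg - self.threshold,  # negative samples: goodness should fall below threshold
        ]))).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach the outputs so the next layer learns independently of this one.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Usage: stack layers and train them greedily, layer by layer.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(32, 784)  # e.g. images with the correct label embedded
x_neg = torch.rand(32, 784)  # e.g. images with an incorrect label embedded
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

Because each layer detaches its output before passing it on, no gradient ever crosses a layer boundary; that locality is the core difference from backpropagation.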
Alternatives and similar repositories for Forward-Forward
Users interested in Forward-Forward are comparing it to the repositories listed below.
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ☆20 · Updated 3 years ago
- Flexible simulator for mixed precision and format simulation of LLMs and vision transformers. ☆51 · Updated 2 years ago
- ACL 2023 ☆39 · Updated 2 years ago
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆114 · Updated 2 years ago
- Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation. ☆26 · Updated 3 years ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated 2 years ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- Binarize convolutional neural networks using PyTorch ☆149 · Updated 3 years ago
- RWKV in nanoGPT style ☆197 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- In this repository, we explore model compression for transformer architectures via quantization. We specifically explore quantization awa… ☆24 · Updated 4 years ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆50 · Updated 4 months ago
- FastFeedForward Networks ☆20 · Updated 2 years ago
- Benchmarking different models with PyTorch 2.0 ☆20 · Updated 2 years ago
- [TMLR] Official PyTorch implementation of paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆46 · Updated last year
- E2E AutoML Model Compression Package ☆45 · Updated 10 months ago
- ☆70 · Updated last year
- The official, proof-of-concept C++ implementation of PocketNN. ☆35 · Updated 3 months ago
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" on arxiv☆89Updated 3 years ago
- Reorder-based post-training quantization for large language models ☆197 · Updated 2 years ago
- Code for High-Capacity Expert Binary Networks (ICLR 2021). ☆27 · Updated 4 years ago
- PyTorch Lightning implementation of the paper Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and H… ☆35 · Updated last year
- The official PyTorch implementation of the NeurIPS2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆49 · Updated 3 years ago
- Implementation of the "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch ☆114 · Updated 2 years ago
- ☆40 · Updated last year
- Based on BrainTransformers, BrainGPTForCausalLM is a Large Language Model (LLM) implemented using Spiking Neural Networks (SNN). We are e… ☆33 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆305 · Updated last year
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms ☆12 · Updated 2 years ago
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" ☆21 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year