vineeths96 / Compressed-Transformers

In this repository, we explore model compression for transformer architectures via quantization. Specifically, we apply quantization-aware training to the linear layers and evaluate performance under 8-bit, 4-bit, 2-bit, and 1-bit (binary) quantization.
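The sketch below illustrates the general idea of quantization-aware training for a linear layer, assuming the common fake-quantization approach with a straight-through estimator; it is a minimal illustration, not the repository's actual implementation, and the class and parameter names are hypothetical.

```python
# Minimal sketch of quantization-aware training for a linear layer.
# Weights are fake-quantized to num_bits in the forward pass; gradients
# flow through unchanged via a straight-through estimator.
import torch
import torch.nn as nn


class QuantizedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.num_bits = num_bits

    def quantize(self, w):
        if self.num_bits == 1:
            # Binary quantization: sign of the weight scaled by its mean magnitude.
            q = w.sign() * w.abs().mean()
        else:
            # Symmetric uniform quantization with 2^(num_bits) levels.
            qmax = 2 ** (self.num_bits - 1) - 1
            scale = w.abs().max() / qmax
            q = torch.round(w / scale).clamp(-qmax, qmax) * scale
        # Straight-through estimator: forward uses q, backward treats it as identity.
        return w + (q - w).detach()

    def forward(self, x):
        w_q = self.quantize(self.linear.weight)
        return nn.functional.linear(x, w_q, self.linear.bias)


# Example: an 8-bit quantization-aware linear layer.
layer = QuantizedLinear(512, 512, num_bits=8)
out = layer(torch.randn(4, 512))
```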
