BlinkDL / LM-Trick-Questions
Here we collect trick questions and failed tasks for open-source LLMs, to help improve them.
☆32 · Updated 2 years ago
Alternatives and similar repositories for LM-Trick-Questions
Users interested in LM-Trick-Questions are comparing it to the libraries listed below.
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- ☆106 · Updated last year
- Structural Pruning for LLaMA ☆54 · Updated 2 years ago
- ☆32 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last month
- ☆34 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- ☆52 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 8 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- Utilities for Training Very Large Models ☆58 · Updated 10 months ago
- RWKV model implementation ☆38 · Updated 2 years ago
- ☆20 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- ☆37 · Updated last year
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- JAX Scalify: end-to-end scaled arithmetics ☆16 · Updated 9 months ago
- ☆37 · Updated 2 years ago
- ☆24 · Updated last week
- Reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆71 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆37 · Updated 5 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch… ☆56 · Updated last week