abacaj / transformers
Understanding large language models
☆118 · Updated last year
Alternatives and similar repositories for transformers:
Users interested in transformers are comparing it to the libraries listed below.
- Simple embedding -> text model trained on a small subset of Wikipedia sentences. ☆153 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- A comprehensive deep dive into the world of tokens ☆220 · Updated 7 months ago
- 📝 Reference-free automatic summarization evaluation with potential hallucination detection ☆101 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆93 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆100 · Updated 10 months ago
- Repository containing awesome resources regarding Hugging Face tooling. ☆46 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 6 months ago
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆230 · Updated 3 months ago
- Convert all of libgen to high-quality markdown