tianlinxu312 / Everything-about-LLMs
A work in progress that aims to cover the interesting and essential pieces of current LLM and generative AI development, with more topics added gradually.
☆199 · Updated 2 years ago
Alternatives and similar repositories for Everything-about-LLMs
Users interested in Everything-about-LLMs are comparing it to the repositories listed below
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆126 · Updated last year
- A fine-tuned LLaMA that is good at arithmetic tasks ☆178 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆482 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆248 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- Pre-training code for the Amber 7B LLM ☆170 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- Implementation of DoRA ☆307 · Updated last year
- A repository collecting the literature on long-context large language models, including methodologies and evaluation benchmarks ☆272 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- ☆274 · Updated 2 years ago
- [MathCoder, MathCoder-VL] Family of LLMs/LMMs for mathematical reasoning ☆338 · Updated 3 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆98 · Updated last year
- An implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆294 · Updated last year
- Scaling Data-Constrained Language Models ☆343 · Updated 6 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆259 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- ☆150 · Updated 2 years ago
- Training code for Baby-Llama, a submission to the strict-small track of the BabyLM challenge ☆85 · Updated 2 years ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 4 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated 2 years ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆666 · Updated last year
- LongQLoRA: Extend the Context Length of LLMs Efficiently ☆168 · Updated 2 years ago