huggingface / blog
Public repo for HF blog posts
☆3,070 · Updated this week
Alternatives and similar repositories for blog
Users interested in blog are comparing it to the libraries listed below.
- Accessible large language models via k-bit quantization for PyTorch. ☆7,450 · Updated this week
- Train transformer language models with reinforcement learning. ☆14,989 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,252 · Updated last week (see the quantization + LoRA sketch after this list)
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,285 · Updated last month (see the metric example after this list)
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,511 · Updated 7 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,010 · Updated this week
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,590 · Updated 2 months ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,448 · Updated 2 years ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,750 · Updated this week
- Example models using DeepSpeed ☆6,618 · Updated 2 weeks ago
- ☆1,260 · Updated 5 months ago
- Instruction Tuning with GPT-4 ☆4,322 · Updated 2 years ago
- A framework for few-shot evaluation of language models. ☆9,802 · Updated this week
- An open-source framework for training large multimodal models. ☆3,995 · Updated 11 months ago
- ☆2,858 · Updated 2 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,692 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,134 · Updated 3 weeks ago
- ☆1,532 · Updated last week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,406 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,101 · Updated last year
- Ongoing research training transformer models at scale ☆13,130 · Updated this week
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,082 · Updated last month
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,772 · Updated last month
- Reference implementation for DPO (Direct Preference Optimization) ☆2,692 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,889 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,054 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,270 · Updated last year
- Fast and memory-efficient exact attention ☆18,776 · Updated last week
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,405 · Updated this week
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,470 · Updated 2 years ago
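
Several entries above (k-bit quantization, 🤗 PEFT, and LoRA) describe pieces of one common workflow: load a base model with quantized weights, then attach small trainable low-rank adapters. A minimal sketch of that pattern is below; the model name, target modules, and LoRA hyperparameters are illustrative assumptions, not recommendations.

```python
# Sketch: 4-bit loading (bitsandbytes via transformers) + LoRA adapters (PEFT).
# Model name and hyperparameters are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-350m"  # placeholder; any causal LM on the Hub

# Quantize the base weights to 4-bit NF4 and compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters; only these parameters are trainable.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: OPT-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```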
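
The 🤗 Evaluate entry is the metrics library; a minimal example of its load/compute pattern, using a built-in metric name and toy values for illustration:

```python
# Sketch of the 🤗 Evaluate API: load a metric and compute it on toy data.
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75} — 3 of 4 predictions match
```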