microsoft / GODEL
Large-scale pretrained models for goal-directed dialog
☆873 · Updated last year
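For orientation, a minimal sketch of querying GODEL through Hugging Face Transformers. The checkpoint name `microsoft/GODEL-v1_1-base-seq2seq` and the `[CONTEXT]`/`[KNOWLEDGE]` prompt format follow the published model card, not this page; treat both as assumptions if the hub layout has changed.

```python
# Minimal GODEL usage sketch (assumes the microsoft/GODEL-v1_1-base-seq2seq
# checkpoint and the prompt format from its Hugging Face model card).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "microsoft/GODEL-v1_1-base-seq2seq"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def generate(instruction: str, knowledge: str, dialog: list[str]) -> str:
    # GODEL conditions on an instruction, optional grounding knowledge,
    # and the dialog history joined with ' EOS '.
    if knowledge:
        knowledge = "[KNOWLEDGE] " + knowledge
    query = f"{instruction} [CONTEXT] {' EOS '.join(dialog)} {knowledge}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8,
                             top_p=0.9, do_sample=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate(
    "Instruction: given a dialog context, you need to respond empathically.",
    "",  # no grounding knowledge for this turn
    ["Does money buy happiness?",
     "It buys you a lot of things, but not happiness."],
))
```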
Alternatives and similar repositories for GODEL
Users interested in GODEL are comparing it to the libraries listed below.
- Large-scale pretraining for dialogue ☆2,393 · Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch, with RLHF added in the style of ChatGPT ☆471 · Updated last year
- Repo for fine-tuning causal LLMs ☆457 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆820 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆538 · Updated 9 months ago
- An open-source implementation of Google's PaLM models ☆819 · Updated last year
- ☆1,583 · Updated 2 years ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,561 · Updated 2 years ago
- Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM: Scaling Language Modeling with Pathways ☆823 · Updated 2 years ago
- Expanding natural instructions ☆1,008 · Updated last year
- ☆1,529 · Updated last week
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,637 · Updated last year
- Implementation of ChatGPT RLHF (Reinforcement Learning from Human Feedback) on any generation model in Hugging Face's transformers (blommz-… ☆562 · Updated last year
- Ongoing research training transformer models at scale ☆389 · Updated 11 months ago
- Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models. ☆536 · Updated 2 months ago
- A dataset containing human-human knowledge-grounded open-domain conversations. ☆657 · Updated 11 months ago
- ChatLLaMA 📢 Open-source implementation of a LLaMA-based ChatGPT, runnable on a single GPU, with a 15x faster training process than ChatGPT ☆1,203 · Updated 6 months ago
- Code for "Learning to summarize from human feedback" ☆1,033 · Updated last year
- simpleT5, built on top of PyTorch Lightning ⚡️ and Transformers 🤗, lets you quickly train your T5 models (see the sketch after this list). ☆395 · Updated 2 years ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,484 · Updated 11 months ago
- Ask Me Anything language model prompting ☆547 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated 2 years ago
- Tune any FALCON in 4-bit ☆467 · Updated last year
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI ☆508 · Updated 5 months ago
- Fast Inference Solutions for BLOOM ☆563 · Updated 9 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆463 · Updated 2 years ago
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆951 · Updated 8 months ago
- A method to fix GPT-3 after deployment with user feedback, without re-training. ☆329 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length. ☆719 · Updated 5 months ago
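As referenced at the simpleT5 entry above, a minimal fine-tuning sketch, assuming simpleT5's documented `from_pretrained`/`train`/`predict` API and its expected DataFrame schema (`source_text`/`target_text` columns); the CSV paths and the checkpoint directory name are hypothetical placeholders.

```python
# Minimal simpleT5 workflow sketch (assumes the library's documented API;
# file paths and the checkpoint folder name are illustrative placeholders).
import pandas as pd
from simplet5 import SimpleT5

# simpleT5 expects DataFrames with 'source_text' and 'target_text' columns.
train_df = pd.read_csv("train.csv")  # hypothetical path
eval_df = pd.read_csv("eval.csv")    # hypothetical path

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-base")
model.train(train_df=train_df, eval_df=eval_df,
            source_max_token_len=128, target_max_token_len=64,
            batch_size=8, max_epochs=3, use_gpu=True)

# Inference with the fine-tuned weights; simpleT5 saves checkpoints under
# outputs/ by default (the exact folder name below is illustrative).
model.load_model("t5", "outputs/simplet5-epoch-2", use_gpu=True)
print(model.predict("summarize: your long input text here"))
```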