young-geng / koala_data_pipeline
The data processing pipeline for the Koala chatbot language model
☆118 · Updated 2 years ago
Alternatives and similar repositories for koala_data_pipeline
Users interested in koala_data_pipeline are comparing it to the libraries listed below.
- ☆273 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation · ☆219 · Updated 2 years ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA · ☆302 · Updated 2 years ago
- The test set for Koala · ☆45 · Updated 2 years ago
- ☆172 · Updated 2 years ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" · ☆229 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… · ☆222 · Updated 2 years ago
- CodeGen2 models for program synthesis · ☆271 · Updated 2 years ago
- ☆127 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project · ☆124 · Updated last year
- ☆180 · Updated 2 years ago
- Weekly visualization report of Open LLM model performance based on 4 metrics · ☆87 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks · ☆209 · Updated last year
- Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools · ☆142 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite · ☆91 · Updated last year
- ☆371 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs · ☆71 · Updated last year
- A Fine-tuned LLaMA that is Good at Arithmetic Tasks · ☆177 · Updated 2 years ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆94 · Updated 2 years ago
- Simple next-token-prediction for RLHF · ☆227 · Updated last year
- SAIL: Search Augmented Instruction Learning · ☆157 · Updated last month
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆116 · Updated 2 months ago
- Pre-training code for CrystalCoder 7B LLM · ☆55 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning · ☆200 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆225 · Updated last year
- Evaluating tool-augmented LLMs in conversation settings · ☆88 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 · ☆291 · Updated 7 months ago
- A repository for transformer critique learning and generation · ☆90 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper · ☆119 · Updated 2 years ago