shm007g / LLaMA-Cult-and-More
Large Language Models for All, 🦙 Cult and More, Stay in touch!
⭐427 · Updated last year
Related projects
Alternatives and complementary repositories for LLaMA-Cult-and-More
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ⭐1,080 · Updated 10 months ago
- ⭐565 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ⭐812 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ⭐582 · Updated last year
- Tune any FALCON in 4-bit ⭐468 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ⭐886 · Updated 3 weeks ago
- Customizable implementation of the self-instruct paper. ⭐1,024 · Updated 8 months ago
- ⭐411 · Updated last year
- A joint community effort to create one central leaderboard for LLMs. ⭐285 · Updated 2 months ago
- Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks ⭐595 · Updated last year
- A command-line interface to generate textual and conversational datasets with LLMs. ⭐293 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ⭐488 · Updated last year
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning ⭐295 · Updated 3 weeks ago
- ⭐454 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ⭐718 · Updated 11 months ago
- Dromedary: towards helpful, ethical and reliable LLMs. ⭐1,128 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ⭐675 · Updated 7 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ⭐707 · Updated 5 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ⭐765 · Updated 7 months ago
- A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research … ⭐893 · Updated 5 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ⭐623 · Updated 9 months ago
- ⭐263 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) ⭐474 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ⭐348 · Updated last year
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ⭐1,088 · Updated 10 months ago
- ⭐275 · Updated last year
- An open-source implementation of Google's PaLM models ⭐816 · Updated 4 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ⭐301 · Updated last year
- ⭐534 · Updated 11 months ago
- A tiny library for coding with large language models. ⭐1,215 · Updated 4 months ago