shm007g / LLaMA-Cult-and-More
Large Language Models for All, 🦙 Cult and More, Stay in touch!
☆446 · Updated last year
Alternatives and similar repositories for LLaMA-Cult-and-More:
Users who are interested in LLaMA-Cult-and-More are comparing it to the repositories listed below.
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆821 · Updated last year
- ☆591 · Updated last year
- Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks ☆604 · Updated 2 years ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆722 · Updated 11 months ago
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,114 · Updated last year
- A command-line interface to generate textual and conversational datasets with LLMs. ☆296 · Updated last year
- A joint community effort to create one central leaderboard for LLMs. ☆295 · Updated 8 months ago
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,129 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆586 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,043 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- ☆1,028 · Updated last year
- A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research… ☆958 · Updated 4 months ago
- ☆356 · Updated 2 years ago
- PaL: Program-Aided Language Models (ICML 2023) ☆488 · Updated last year
- A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer ☆1,631 · Updated last year
- A school for camelids ☆1,209 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- Evaluation tool for LLM QA chains ☆1,075 · Updated last year
- A tiny library for coding with large language models. ☆1,228 · Updated 9 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆762 · Updated 6 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆791 · Updated last year
- ☆277 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ☆717 · Updated 2 months ago
- ☆1,468 · Updated last year
- ☆535 · Updated last year
- Decoupling Reasoning from Observations for Efficient Augmented Language Models ☆902 · Updated last year
- ☆451 · Updated last year