teknium1 / RawTransform
A repository of prompts and Python scripts for intelligent transformation of raw text into diverse formats.
☆30 · Updated last year
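The repository's own scripts are not shown on this page; purely as a rough illustration of the idea in the description above, here is a minimal Python sketch of prompt-templated transformation of raw text into another format. The template wording, `build_transform_prompt` name, and record layout are assumptions for illustration, not code from RawTransform.

```python
# Minimal sketch (not from RawTransform): wrap raw text in an instruction-style
# prompt template so a downstream LLM call can rewrite it into another format.
# The template wording and record layout below are illustrative assumptions.

QA_TEMPLATE = (
    "Rewrite the following raw text as a question-and-answer pair.\n\n"
    "Raw text:\n{raw_text}\n\n"
    "Question and answer:"
)

def build_transform_prompt(raw_text: str, template: str = QA_TEMPLATE) -> dict:
    """Return a record holding the original text and the prompt to send to an LLM."""
    return {
        "raw_text": raw_text,
        "prompt": template.format(raw_text=raw_text.strip()),
    }

if __name__ == "__main__":
    record = build_transform_prompt("Rule 110 is a one-dimensional cellular automaton.")
    print(record["prompt"])
```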
Alternatives and similar repositories for RawTransform:
Users interested in RawTransform are comparing it to the libraries listed below.
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆59 · Updated 11 months ago
- ☆48 · Updated last year
- A strongly typed Python DSL for developing message-passing multi-agent systems ☆52 · Updated last year
- never forget anything again! combine AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆37 · Updated 11 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated last month
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… ☆24 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Public reports detailing responses to sets of prompts by Large Language Models. ☆30 · Updated 3 months ago
- ☆112 · Updated 4 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆24 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated last year
- 🔓 The open-source autonomous agent LLM initiative 🔓 ☆91 · Updated last year
- An Apache 2.0 licensed starter kit for making Discord bots which converse via direct address (@) and LLMs. ☆32 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- ☆20 · Updated 5 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- This repository explains and provides examples for "concept anchoring" in GPT4. ☆72 · Updated last year
- Flexible, efficient, and context-aware generation from large unstructured knowledge sources. ☆16 · Updated 11 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆25 · Updated 10 months ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated last year
- 5X faster 60% less memory QLoRA finetuning ☆21 · Updated 10 months ago
- ☆38 · Updated 9 months ago
- The original BabyAGI, updated with LiteLLM and no vector database reliance (csv instead) ☆21 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated 6 months ago
- ☆22 · Updated last year
- Eh, simple and works. ☆27 · Updated last year
- ☆38 · Updated last year
- ☆47 · Updated last year