sshh12 / llm_optimize
LLM Optimize is a proof-of-concept library for LLM (large language model)-guided blackbox optimization.
☆53 · Updated last year
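The idea behind LLM-guided blackbox optimization can be sketched in a short loop: the model proposes candidate parameters, a blackbox objective scores them, and the scored history is fed back into the next prompt. The sketch below only illustrates that loop and is not llm_optimize's actual API; the OpenAI client, the `gpt-4o-mini` model name, the toy objective, and the helper names are assumptions.

```python
# Minimal sketch of LLM-guided blackbox optimization (illustration only, not
# llm_optimize's actual API). Assumes the `openai` package and an OPENAI_API_KEY.
import json

from openai import OpenAI

client = OpenAI()


def objective(x: float, y: float) -> float:
    """Blackbox function to maximize (a toy analytic stand-in here)."""
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)


def propose_candidate(history: list[dict]) -> dict:
    """Ask the LLM for the next point to try, given past (params, score) pairs."""
    prompt = (
        "You are optimizing a blackbox function of parameters x and y.\n"
        f"Past evaluations (params and score): {json.dumps(history)}\n"
        'Propose the next candidate to try as JSON, e.g. {"x": 1.0, "y": 2.0}.'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)


history: list[dict] = []
best = None
for _ in range(10):
    params = propose_candidate(history)
    score = objective(params["x"], params["y"])
    history.append({"params": params, "score": score})
    if best is None or score > best["score"]:
        best = {"params": params, "score": score}

print("best found:", best)
```

The interesting part in practice is prompt design: feeding the model the full evaluation history lets it reason about which region of the parameter space to try next, rather than sampling blindly.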
Alternatives and similar repositories for llm_optimize:
Users interested in llm_optimize are comparing it to the libraries listed below.
- A repository re-creating the PromptBreeder Evolutionary Algorithm from the DeepMind paper in Python using LMQL as the backend. ☆27 · Updated last year
- ☆74 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated 7 months ago
- ☆48 · Updated 2 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- A re-implementation of Meta-Prompt in LangChain for building self-improving agents. ☆63 · Updated last year
- Small, simple agent task environments for training and evaluation ☆18 · Updated 2 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆114 · Updated 7 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated 8 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 8 months ago
- LLM finetuning ☆42 · Updated last year
- An LLM reads a paper and produces a working prototype ☆48 · Updated last month
- Create workflows with LLMs ☆51 · Updated 5 months ago
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆64 · Updated last year
- Open Implementations of LLM Analyses ☆98 · Updated 3 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 8 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆24 · Updated 2 months ago
- ☆56 · Updated last week
- Score LLM pretraining data with classifiers ☆54 · Updated last year
- LLMs as Collaboratively Edited Knowledge Bases ☆43 · Updated 11 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆46 · Updated 7 months ago
- ☆52 · Updated 9 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 2 months ago
- ☆20 · Updated last year
- ☆27 · Updated 5 months ago
- 🧠 Mindstorm in Natural Language-based Societies of Mind ☆51 · Updated 3 months ago
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- Never forget anything again! Combine AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆36 · Updated 8 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated 10 months ago
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year