cognitivecomputations / q-star
☆9 · Updated last year
Alternatives and similar repositories for q-star:
Users interested in q-star are comparing it to the libraries listed below.
- entropix-style sampling + GUI ☆25 · Updated 5 months ago
- ☆28 · Updated last year
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆22 · Updated 2 weeks ago
- ☆112 · Updated 3 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 2 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆49 · Updated 2 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated 6 months ago
- ☆48 · Updated last year
- Very minimal (and stateless) agent framework ☆42 · Updated 3 months ago
- ☆17 · Updated 2 months ago
- ☆50 · Updated 4 months ago
- ☆48 · Updated 5 months ago
- Never forget anything again! Combine AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆37 · Updated 11 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 4 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆27 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆31 · Updated last month
- The original BabyAGI, updated with LiteLLM and no vector-database reliance (CSV instead) ☆21 · Updated 6 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆23 · Updated 11 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated 2 months ago
- ☆66 · Updated 10 months ago
- BH hackathon ☆14 · Updated last year
- look how they massacred my boy ☆63 · Updated 6 months ago
- ☆20 · Updated 4 months ago
- How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training ☆27 · Updated 3 weeks ago
- Simple examples using Argilla tools to build AI ☆52 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆95 · Updated last month
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆26 · Updated last year
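Two of the repositories above (the entropix-style sampler and the entropy-based sampling + parallel CoT decoding project) revolve around entropy-gated token selection. The following is a minimal sketch of that general idea only, not code from either repository; the function names and the threshold value of 1.0 nat are assumptions made for illustration:

```python
import math
import random

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_gated_sample(probs, threshold=1.0, rng=None):
    """Illustrative entropy gate: take the argmax when the model is
    confident (low entropy), otherwise sample from the full distribution.
    `threshold` is an arbitrary value chosen for this sketch."""
    rng = rng or random.Random(0)
    if entropy(probs) < threshold:
        # Low-entropy (confident) step: decode greedily.
        return max(range(len(probs)), key=lambda i: probs[i])
    # High-entropy (uncertain) step: sample proportionally to probability.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# A peaked distribution falls below the threshold, so decoding is greedy.
print(entropy_gated_sample([0.97, 0.01, 0.01, 0.01]))  # -> 0

# A flat distribution exceeds the threshold, so a token is sampled.
print(entropy_gated_sample([0.25, 0.25, 0.25, 0.25]))
```

Real entropy-based samplers operate on logits over a full vocabulary and often branch into parallel chains of thought at high-entropy steps; this sketch only shows the gating decision itself.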