llmonpy / needle-in-a-needlestack
☆113 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for needle-in-a-needlestack
- A curated list of data for reasoning AI (☆113, updated 3 months ago)
- An implementation of Self-Extend, which expands the context window via grouped attention (☆118, updated 10 months ago)
- ChatData 🔍 📖 brings RAG to real applications with FREE✨ knowledge bases. Now enjoy your chat with 6 million Wikipedia pages and 2 milli… (☆155, updated last week)
- Mistral7B playing DOOM (☆122, updated 4 months ago)
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (☆203, updated 6 months ago)
- Enforce structured output from LLMs 100% of the time (☆241, updated 4 months ago)
- Implement recursion using English as the programming language and an LLM as the runtime (☆128, updated last year)
- Action library for AI agents (☆191, updated 2 weeks ago)
- GRDN.AI app for garden optimization (☆69, updated 9 months ago)
- Client code examples, use cases, and benchmarks for the Enterprise h2oGPTe RAG-based GenAI platform (☆81, updated this week)
- Visualize the intermediate output of Mistral 7B (☆316, updated 9 months ago)
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI (☆221, updated 6 months ago)
- A simple Python sandbox for helpful LLM data agents (☆170, updated 5 months ago)
- Efficient vector database for hundreds of millions of embeddings (☆200, updated 6 months ago)
- Function calling benchmark & testing (☆74, updated 4 months ago)
- A repo for question answering, especially multi-hop question answering (☆64, updated 11 months ago)
- Generate ideal question-answer pairs for testing RAG (☆123, updated 4 months ago)
- Routing on Random Forest (RoRF) (☆84, updated last month)
- 📝 Reference-free automatic summarization evaluation with potential hallucination detection (☆98, updated 10 months ago)
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces (☆131, updated 4 months ago)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes (☆81, updated last year)
- Inference code for mixtral-8x7b-32kseqlen (☆98, updated 11 months ago)
- Notus is a collection of LLMs fine-tuned using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… (☆161, updated 10 months ago)
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens (☆113, updated 3 weeks ago)
- Tutorial for building an LLM router (☆163, updated 4 months ago)
- Let's create synthetic textbooks together :) (☆70, updated 9 months ago)
- An implementation of bucketMul LLM inference (☆214, updated 4 months ago)