kurakurai / Luth
Luth is a state-of-the-art series of fine-tuned LLMs for French.
☆37 · Updated 2 weeks ago
Alternatives and similar repositories for Luth
Users interested in Luth are comparing it to the repositories listed below.
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- ☆68 · Updated 5 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆103 · Updated 6 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆204 · Updated 2 weeks ago
- ☆50 · Updated 8 months ago
- ☆62 · Updated 3 months ago
- Testing paligemma2 finetuning on reasoning dataset ☆18 · Updated 10 months ago
- Train your own SOTA deductive reasoning model ☆109 · Updated 7 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated 11 months ago
- ☆113 · Updated last week
- ☆55 · Updated 11 months ago
- ☆158 · Updated 6 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆79 · Updated 7 months ago
- ☆67 · Updated last year
- Efficient non-uniform quantization with GPTQ for GGUF ☆52 · Updated last month
- High-level library for batched embeddings generation, blazingly fast web-based RAG and quantized index processing ⚡ ☆67 · Updated 11 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆111 · Updated 6 months ago
- Verifiers for LLM Reinforcement Learning ☆77 · Updated last month
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 5 months ago
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated last year
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆59 · Updated 5 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆57 · Updated 5 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆117 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- Dataset Viber is your chill repo for data collection, annotation and vibe checks. ☆46 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 10 months ago
- ☆96 · Updated 7 months ago
- Marketplace ML experiment - training without backprop ☆27 · Updated last month
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year