rayliuca / T-Ragx
Enhancing Translation with RAG-Powered Large Language Models
☆81 · Updated 3 months ago
Alternatives and similar repositories for T-Ragx
Users interested in T-Ragx are comparing it to the libraries listed below.
- ☆108 · Updated 3 weeks ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆156 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆88 · Updated 2 months ago
- ☆128 · Updated 3 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆64 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- State-of-the-art LLM-based translation models. ☆539 · Updated 3 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆26 · Updated 4 months ago
- An unsupervised model merging algorithm for Transformer-based language models. ☆105 · Updated last year
- This repo handles question answering, especially multi-hop question answering. ☆67 · Updated last year
- ☆52 · Updated last year
- Tokun to can tokens ☆18 · Updated 3 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆217 · Updated last month
- ☆76 · Updated last year
- A pipeline-parallel training script for LLMs. ☆153 · Updated 2 months ago
- Let's build better datasets, together! ☆260 · Updated 6 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 Transformers and open-source datasets. ☆77 · Updated 8 months ago
- ☆156 · Updated 2 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 8 months ago
- Simple, hackable, and fast implementation for training/finetuning medium-sized LLaMA-based models ☆176 · Updated this week
- ☆17 · Updated last year
- ☆158 · Updated 2 years ago
- LLM-Training-API: including embeddings & rerankers, mergekit, LaserRMT ☆27 · Updated last year
- ☆205 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- A compact LLM pretrained in 9 days using high-quality data ☆318 · Updated 3 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆81 · Updated last month
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models. ☆137 · Updated 11 months ago