rayliuca / T-Ragx
Enhancing Translation with RAG-Powered Large Language Models
☆83 · Updated last month
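T-Ragx's tagline describes retrieval-augmented translation: before asking an LLM to translate, retrieve similar source/target pairs (for example from a translation memory or glossary) and include them in the prompt as in-context examples. As a rough, generic illustration of that idea (not T-Ragx's actual API; the toy translation memory and the naive `difflib` similarity below are placeholder assumptions), a minimal sketch might look like this:

```python
import difflib

# Toy "translation memory": previously translated (source, target) pairs.
# In a real RAG setup this would be an indexed datastore (BM25 / embeddings).
TRANSLATION_MEMORY = [
    ("The cat sat on the mat.", "Le chat était assis sur le tapis."),
    ("The weather is nice today.", "Il fait beau aujourd'hui."),
    ("Please close the door.", "Veuillez fermer la porte."),
]

def retrieve_similar(source: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k memory entries whose source text is most similar to `source`."""
    scored = [
        (difflib.SequenceMatcher(None, source.lower(), src.lower()).ratio(), (src, tgt))
        for src, tgt in TRANSLATION_MEMORY
    ]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [entry for _, entry in scored[:k]]

def build_prompt(source: str) -> str:
    """Assemble a translation prompt with retrieved pairs as in-context examples."""
    lines = ["Translate the following English text into French."]
    for src, tgt in retrieve_similar(source):
        lines.append(f"English: {src}\nFrench: {tgt}")
    lines.append(f"English: {source}\nFrench:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # The assembled prompt would then be sent to any instruction-tuned LLM.
    print(build_prompt("The dog sat on the sofa."))
```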
Alternatives and similar repositories for T-Ragx
Users interested in T-Ragx are comparing it to the libraries listed below.
- Easy-to-use, high-performance knowledge distillation for LLMs ☆95 · Updated 5 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 7 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- ☆136 · Updated 2 months ago
- AnyModal is a flexible multimodal language model framework for PyTorch ☆102 · Updated 10 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆68 · Updated last year
- ☆51 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- A pipeline-parallel training script for LLMs. ☆159 · Updated 6 months ago
- ☆119 · Updated last year
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 11 months ago
- ☆221 · Updated last month
- A repository for question answering, especially multi-hop question answering ☆67 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ, and EXL2 models ☆76 · Updated 10 months ago
- Testing LLM reasoning abilities with family-relationship quizzes. ☆62 · Updated 9 months ago
- Simple, fast, parallel Huggingface GGML model downloader written in Python ☆24 · Updated 2 years ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆43 · Updated this week
- A massively multilingual modern encoder language model ☆104 · Updated 2 weeks ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆95 · Updated 5 months ago
- ☆78 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- ☆157 · Updated 2 years ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Train your own small bitnet model ☆75 · Updated last year
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models. ☆140 · Updated last year
- Parkiet is a 1.6B-parameter Dutch text-to-speech (TTS) model ☆50 · Updated last month
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated last year