rayliuca / T-Ragx
Enhancing Translation with RAG-Powered Large Language Models
☆ 80 · Updated 3 months ago
Alternatives and similar repositories for T-Ragx
Users interested in T-Ragx are comparing it to the libraries listed below.
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆ 64 · Updated last year
- ☆ 53 · Updated last year
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation ☆ 79 · Updated last month
- 💬 Language Identification with Support for More Than 2000 Labels -- EMNLP 2023 ☆ 138 · Updated 2 weeks ago
- Lightweight continuous-batching, OpenAI-compatible server built on Hugging Face Transformers, including T5 and Whisper ☆ 24 · Updated 3 months ago
- ☆ 9 · Updated last year
- ☆ 124 · Updated 2 months ago
- Low-rank adapter extraction for fine-tuned Transformers models ☆ 173 · Updated last year
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆ 78 · Updated last month
- ☆ 118 · Updated 9 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers ☆ 27 · Updated last month
- ☆ 96 · Updated 6 months ago
- Let's create synthetic textbooks together :) ☆ 75 · Updated last year
- Open language modeling toolkit based on PyTorch ☆ 129 · Updated last month
- ☆ 242 · Updated last year
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling ☆ 59 · Updated 10 months ago
- Fine-tune ModernBERT on a large dataset with custom tokenizer training ☆ 65 · Updated 4 months ago
- An unsupervised model-merging algorithm for Transformers-based language models ☆ 105 · Updated last year
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆ 66 · Updated 7 months ago
- Triton backend for https://github.com/OpenNMT/CTranslate2 ☆ 35 · Updated last year
- Tokun to can tokens ☆ 17 · Updated this week
- Tune MPTs ☆ 84 · Updated 2 years ago
- ☆ 68 · Updated last week
- AnyModal: a flexible multimodal language model framework for PyTorch ☆ 96 · Updated 6 months ago
- Question answering, with a focus on multi-hop question answering ☆ 67 · Updated last year
- Lightweight demos for fine-tuning LLMs, powered by 🤗 Transformers and open-source datasets ☆ 77 · Updated 8 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆ 85 · Updated last month
- Testing LLM reasoning abilities with family-relationship quizzes ☆ 62 · Updated 4 months ago
- Spherical (SLERP) merging of PyTorch/HF-format language models with minimal feature loss ☆ 129 · Updated last year
- A fully self-hosted, on-premise transcription solution ☆ 52 · Updated 6 months ago