pacman100 / openhathi_instruct
This repository contains the code for dataset curation and finetuning of the instruct variant of the bilingual OpenHathi model. The resulting model is meant to follow instructions and chat in Hindi and Hinglish.
☆23 · Updated last year
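As a rough illustration of the kind of instruction finetuning the repository performs, the sketch below shows LoRA-based supervised finetuning on top of an OpenHathi base checkpoint with Hugging Face `transformers` and `peft`. This is not the repository's actual training pipeline; the base model ID, prompt template, dataset fields, and hyperparameters are assumptions for illustration only.

```python
# Minimal instruction-finetuning sketch (illustrative, not the repo's script).
# Assumed base checkpoint, prompt format, and hyperparameters.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

BASE_MODEL = "sarvamai/OpenHathi-7B-Hi-v0.1-Base"  # assumed base model ID

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
# Wrap the base model with LoRA adapters so only a small set of weights train.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Toy Hindi/Hinglish instruction-response pairs; a real run would use a
# curated instruction dataset.
examples = [
    {"instruction": "भारत की राजधानी क्या है?",
     "response": "भारत की राजधानी नई दिल्ली है।"},
    {"instruction": "Ek chhoti si kahani sunao.",
     "response": "Ek gaon mein ek kisan rehta tha..."},
]

def to_text(ex):
    # Assumed Alpaca-style prompt template.
    return {"text": f"### Instruction:\n{ex['instruction']}\n\n"
                    f"### Response:\n{ex['response']}"}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=512)

ds = (Dataset.from_list(examples)
      .map(to_text)
      .map(tokenize, remove_columns=["instruction", "response", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="openhathi-instruct-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```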
Alternatives and similar repositories for openhathi_instruct:
Users who are interested in openhathi_instruct are comparing it to the libraries listed below.
- Code repository for "Introducing Airavata: Hindi Instruction-tuned LLM" ☆59 · Updated 6 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆102 · Updated last month
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆198 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆46 · Updated 11 months ago
- A blueprint for creating Pretraining and Fine-Tuning datasets for Indic languages ☆106 · Updated 7 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆107 · Updated 7 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆136 · Updated 9 months ago
- Fine-tune an LLM to perform batch inference and online serving. ☆110 · Updated last week
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆34 · Updated 4 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 6 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ☆78 · Updated 11 months ago
- Repository for fine-tuning gemma models using unsloth for indic languages ☆92 · Updated last year
- experiments with inference on llama ☆104 · Updated 11 months ago
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆131 · Updated 4 months ago
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated last year
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆103 · Updated last year
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and library created by… ☆30 · Updated 8 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆255 · Updated 9 months ago
- Generalist and Lightweight Model for Text Classification ☆124 · Updated last week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆99 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆124 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 9 months ago
- ☆143 · Updated 9 months ago
- Chunk your text using gpt4o-mini more accurately ☆44 · Updated 9 months ago
- ☆43 · Updated 2 months ago
- ☆123 · Updated 6 months ago
- ☆48 · Updated last year
- A lightweight evaluation suite tailored specifically for assessing Indic LLMs across a diverse range of tasks ☆35 · Updated 10 months ago