KhoomeiK / sanskrit-ocr
☆45 · Updated 6 months ago
Alternatives and similar repositories for sanskrit-ocr
Users interested in sanskrit-ocr are comparing it to the repositories listed below.
- A lightweight evaluation suite tailored specifically for assessing Indic LLMs across a diverse range of tasks ☆38 · Updated last year
- RL from zero pretrain, can it be done? Yes. ☆286 · Updated 3 months ago
- This repository contains the code for dataset curation and finetuning of the instruct variant of the Bilingual OpenHathi model. The resultin… ☆23 · Updated 2 years ago
- Following Karpathy with GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- Simple Transformer in Jax ☆140 · Updated last year
- ☆116 · Updated this week
- ☆92 · Updated last year
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 9 months ago
- Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments. ☆31 · Updated 8 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆76 · Updated 7 months ago
- Curated collection of community environments ☆200 · Updated this week
- An introduction to LLM sampling ☆79 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- A simple, consistent and extendable toolkit for IndicTrans2. (PyPI: https://pypi.org/project/indictranstoolkit) ☆37 · Updated 5 months ago
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Simple UI for debugging correlations of text embeddings ☆306 · Updated 7 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆61 · Updated last year
- ☆136 · Updated 9 months ago
- ☆68 · Updated 7 months ago
- Small auto-grad engine inspired by Karpathy's micrograd and PyTorch ☆276 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆454 · Updated last year
- AI eXplainable Inference & Search. Open-sourcing on-premise, ultra-low-latency intelligence to all. ☆35 · Updated 10 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆148 · Updated 8 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆112 · Updated last week
- A training framework for large-scale language models based on Megatron-Core; the COOM Training Framework is designed to efficiently handl… ☆24 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆233 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆315 · Updated 6 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Updated 4 months ago
- MoE training for Me and You and maybe other people ☆315 · Updated last week