samlhuillier / spider-sql-finetune
☆17 · Updated last year
Alternatives and similar repositories for spider-sql-finetune
Users interested in spider-sql-finetune are comparing it to the libraries listed below.
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- ☆20 · Updated last year
- ☆12 · Updated last year
- ☆16 · Updated last month
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆35 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆34 · Updated last year
- Microsoft Phi 2 Streamlit App, deployed on HuggingFace Spaces, based on the Microsoft Phi 2 small language model (SLM) for text generat… ☆14 · Updated last year
- Official repository for RAGViz: Diagnose and Visualize Retrieval-Augmented Generation [EMNLP 2024] ☆85 · Updated 6 months ago
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 3 weeks ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated 3 weeks ago
- FuseAI Project ☆87 · Updated 6 months ago
- ☆26 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Open implementations of LLM analyses ☆106 · Updated 10 months ago
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- Self-Controlled Memory System for LLMs ☆50 · Updated last year
- Verifiers for LLM reinforcement learning ☆69 · Updated 3 months ago
- ☆59 · Updated 8 months ago
- Official repo of Respond-and-Respond: data, code, and evaluation ☆103 · Updated last year
- Zeus LLM Trainer, a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆70 · Updated last year
- ☆76 · Updated 6 months ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- ☆48 · Updated 10 months ago
- Code from "SlideCoder: Layout-aware RAG-enhanced Hierarchical Slide Generation from Design" ☆19 · Updated 2 months ago
- ☆17 · Updated last year
- ☆94 · Updated 8 months ago
- Minimal scripts for 24 GB VRAM GPUs: training, inference, whatever ☆41 · Updated last month
- Fast LLM training codebase with dynamic strategy choosing [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernel + compiler] ☆41 · Updated last year
- Data preparation code for the Amber 7B LLM ☆91 · Updated last year