bigcode-project / bigcode-tokenizer
☆15 Updated last year
Alternatives and similar repositories for bigcode-tokenizer
Users interested in bigcode-tokenizer are comparing it to the libraries listed below.
- A library for squeakily cleaning and filtering language datasets. ☆47 Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 Updated last year
- ☆23 Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 Updated last year
- ☆19 Updated 2 years ago
- Using short models to classify long texts ☆21 Updated 2 years ago
- QLoRA for Masked Language Modeling ☆22 Updated last year
- ☆37 Updated 2 years ago
- QLoRA with Enhanced Multi GPU Support ☆37 Updated last year
- ☆47 Updated 4 months ago
- 🚀🤗 A collection of templates for Hugging Face Spaces ☆35 Updated last year
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 Updated last year
- This repository contains code for cleaning your training data of benchmark data to help combat data snooping. ☆25 Updated 2 years ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 Updated last month
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 Updated 9 months ago
- ☆31 Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆58 Updated last month
- ☆22 Updated 4 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 Updated last year
- ☆57 Updated 8 months ago
- Code for our paper "Resources and Evaluations for Multi-Distribution Dense Information Retrieval" ☆14 Updated last year
- Model implementation for the contextual embeddings project ☆33 Updated 3 weeks ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 Updated last year
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 Updated 2 years ago
- ☆51 Updated 2 weeks ago
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆35 Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆59 Updated last year
- BPE modification that implements removal of intermediate tokens during tokenizer training. ☆25 Updated 6 months ago