neuralwork / arxiver
Codebase for the arxiver dataset
☆14 · Updated 10 months ago
Alternatives and similar repositories for arxiver
Users interested in arxiver are comparing it to the repositories listed below.
- ☆62 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆116 · Updated 2 years ago
- ☆63 · Updated last year
- ☆22 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 4 months ago
- ☆49 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- Code repository for Black Mamba ☆255 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- ☆82 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- ☆32 · Updated last year
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 9 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 8 months ago
- Set of scripts to finetune LLMs ☆38 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Manage histories of LLM-based applications ☆91 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆178 · Updated last year
- Community Open Source Implementation of GPT4o in PyTorch ☆29 · Updated 2 weeks ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆55 · Updated 3 weeks ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 2 months ago
- ☆135 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- ☆43 · Updated 2 years ago
- ☆14 · Updated last year