erfanzar / OST-OpenSourceTransformers
OST Collection: an AI-powered suite of text-generative models that predict the next word with remarkable accuracy. OST Collection is based on a novel approach to building a full, intelligent NLP model.
☆15 · Updated last year
Alternatives and similar repositories for OST-OpenSourceTransformers
Users interested in OST-OpenSourceTransformers are comparing it to the libraries listed below.
- EasyDel Former: a utility library designed to simplify and enhance development in JAX ☆28 · Updated last week
- Xerxes: a highly advanced Persian AI assistant developed by InstinctAI, a cutting-edge AI startup. Its primary function is to assist users wi… ☆11 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆317 · Updated last week
- Agents for intelligence and coordination ☆21 · Updated 3 weeks ago
- A cutting-edge text-to-image generator model that leverages a state-of-the-art Stable Diffusion model to produce high-quality, realist… ☆13 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆28 · Updated 7 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO (a minimal DPO loss sketch appears after this list) ☆116 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last month
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 4 months ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆20 · Updated 2 years ago
- ☆32 · Updated last year
- ☆15 · Updated last year
- Mixture of A Million Experts ☆48 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆167 · Updated 8 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆21 · Updated 11 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the sketch after this list) ☆58 · Updated 3 years ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- ☆88 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated 2 years ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆33 · Updated 2 years ago
- ☆81 · Updated last year
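
As referenced in the Sakura-SOLAR-DPO entry above, Direct Preference Optimization (DPO) trains a policy directly on preference pairs instead of fitting a separate reward model. Below is a minimal PyTorch sketch of the published DPO loss (Rafailov et al., 2023); it is illustrative only, not code from the listed repository, and the function name and `beta` default are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each input is the summed log-probability of a full response,
    shape (batch,). `beta` scales the implicit reward.
    """
    # Implicit rewards: log-ratio of policy vs. frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: prefer chosen over rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Larger `beta` values penalize drifting from the reference model more strongly; in practice the reference log-probabilities are computed once with gradients disabled.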
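
The LayerNorm(SmallInit(Embedding)) entry refers to a convergence trick: initialize the token-embedding matrix with very small values and apply LayerNorm to its output, so the first layers receive well-scaled activations from the start of training. Here is a minimal PyTorch sketch under those assumptions; the module name and `init_scale` value are hypothetical, not taken from the listed repository.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Token embedding with tiny uniform init, followed by LayerNorm."""

    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Small init: keep initial embeddings near zero ...
        nn.init.uniform_(self.embed.weight, -init_scale, init_scale)
        # ... and let LayerNorm rescale their outputs to unit variance.
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.norm(self.embed(token_ids))
```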