evintunador / templateGPT
Customizable template GPT code designed for easy experimentation with novel architectures
☆26 · Updated 9 months ago
Alternatives and similar repositories for templateGPT
Users interested in templateGPT are comparing it to the repositories listed below.
- A compact LLM pretrained in 9 days on high-quality data ☆337 · Updated 8 months ago
- ☆131 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- ☆137 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆174 · Updated 6 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- Long-context evaluation for large language models ☆224 · Updated 9 months ago
- ☆104 · Updated 2 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆180 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated last year
- Draw more samples ☆198 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated 10 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆163 · Updated 4 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆780 · Updated this week
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆343 · Updated last year
- An open-source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆114 · Updated last year
- Training small GPT-2-style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- Set of scripts to fine-tune LLMs ☆38 · Updated last year
- Normalized Transformer (nGPT) ☆194 · Updated last year
- Curated collection of community environments ☆196 · Updated this week
- Collection of autoregressive model implementations ☆85 · Updated 8 months ago
- SmolLM with the Entropix sampler in PyTorch ☆149 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆362 · Updated last year
- RL from zero pretrain: can it be done? Yes. ☆282 · Updated 3 months ago
- Our solution for the ARC Challenge 2024 ☆186 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- ☆120 · Updated last year