CarperAI / nmmo-environment
Neural MMO - A Massively Multiagent Environment for Artificial Intelligence Research
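For context, a minimal sketch of driving the environment with random actions, assuming the PettingZoo-style parallel API (`nmmo.Env`, per-agent `action_space`) that recent `nmmo` releases expose; exact reset/step signatures vary by version, so treat the names below as assumptions rather than this repo's documented interface:

```python
# Minimal sketch, not from this repo: step Neural MMO's multiagent
# simulation with random actions, assuming a PettingZoo-style parallel
# API (dict-in, dict-out, keyed by agent id).
import nmmo

env = nmmo.Env()
obs = env.reset()  # dict of observations keyed by agent id

for _ in range(16):
    # One random action per live agent.
    actions = {agent: env.action_space(agent).sample() for agent in obs}
    obs, rewards, dones, infos = env.step(actions)
```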
☆15 · Updated last year
Alternatives and similar repositories for nmmo-environment
Users interested in nmmo-environment are comparing it to the libraries listed below
- Fast inference of Instruct-tuned LLaMA on your personal devices. ☆23 · Updated 2 years ago
- Code base for internal reward models and PPO training ☆24 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019) ☆37 · Updated 2 years ago
- Intrinsic Motivation from Artificial Intelligence Feedback ☆134 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- ☆22 · Updated 2 years ago
- The application is an end-user training and evaluation system for standard knowledge graph embedding models. It was developed to optimise … ☆18 · Updated 7 months ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). … ☆121 · Updated 2 years ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. ☆17 · Updated 3 years ago
- Downloads 2020 English Wikipedia articles as plaintext ☆25 · Updated 2 years ago
- Inference code for LLaMA 2 models ☆30 · Updated last year
- Code from our practical deep dive on using Mamba for information extraction ☆57 · Updated 2 years ago
- Code for the Distill article "Understanding RL Vision" ☆25 · Updated 2 years ago
- ☆12 · Updated 7 months ago
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! ☆40 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Clean RL implementation using MLX ☆34 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆171 · Updated 3 months ago
- ☆27 · Updated last year
- PyTorch implementation of OpenAI's Procgen PPO baseline, built from scratch. ☆14 · Updated last year
- 🏥 Health monitor for a Petals swarm ☆40 · Updated last year
- Documentation for dynamic machine learning systems. ☆29 · Updated last year
- ☆62 · Updated 2 years ago
- Train a production-grade GPT in less than 400 lines of code. Better than Karpathy's version and GIGAGPT ☆16 · Updated 3 weeks ago
- ☆35 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- A small and fast image rescaling library with SIMD support ☆22 · Updated 5 months ago