ivy-llc / demos
Demos and Tutorials with Ivy
☆18 · Updated last year
Alternatives and similar repositories for demos
Users interested in demos are comparing it to the libraries listed below.
- Off-the-shelf models for a variety of domains ☆23 · Updated last year
- A JAX-based library for building transformers, including implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT and more. ☆297 · Updated last year
- A collection of algorithms, tools, repos, and products related to the different layers of the AI deployment stack ☆19 · Updated last year
- Notebooks for the Practicals at the Deep Learning Indaba 2022. ☆178 · Updated last year
- Supplementary materials for the Deep Learning Book by Ian Goodfellow et al. ☆54 · Updated 3 years ago
- All about the fundamental blocks of TF and JAX! ☆275 · Updated 4 years ago
- Neural networks for JAX ☆84 · Updated last year
- ☆26 · Updated 2 years ago
- Source of the FSDL 2022 labs, which are at https://github.com/full-stack-deep-learning/fsdl-text-recognizer-2022-labs ☆83 · Updated last year
- Lightning Bits: Engineering for Researchers repo ☆132 · Updated 3 years ago
- Introductory lecture on PyTorch ☆17 · Updated 3 years ago
- A Python library for cats and hypercats ☆24 · Updated 4 months ago
- An assignment for CMU CS11-711 Advanced NLP, building NLP systems from scratch ☆171 · Updated 3 years ago
- Functional local implementations of the main model-parallelism approaches ☆95 · Updated 2 years ago
- ML/DL math and method notes ☆65 · Updated 2 years ago
- Automatic gradient descent ☆215 · Updated 2 years ago
- A JAX-native, high-performance eval metrics library ☆53 · Updated 2 weeks ago
- ☆24 · Updated 2 years ago
- A deep learning framework created from scratch with Python and NumPy ☆238 · Updated 3 years ago
- Quick demo of setting up a deep learning Python environment using conda and pip-tools. ☆39 · Updated 4 years ago
- NYU Artificial Intelligence, Spring 2024 ☆61 · Updated last year
- Graph neural networks in JAX ☆68 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable ☆174 · Updated 2 years ago
- Practical Deep Learning at Scale with MLflow, published by Packt ☆163 · Updated 2 weeks ago
- Basic guidance on how to contribute to Papers with Code ☆24 · Updated 3 years ago
- ML research paper summaries, annotated papers, and implementation walkthroughs ☆114 · Updated 3 years ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Slide decks, coding exercises, and quick references for learning the JAX AI Stack ☆205 · Updated last month
- Test LLMs automatically with Giskard and CI/CD ☆31 · Updated last year