DeepGenX / CodeGenX
Code Generation using GPT-J!
⭐516 · Updated 3 years ago
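CodeGenX generates code with EleutherAI's GPT-J. For readers who want the gist of the underlying technique before browsing the alternatives below, here is a minimal sketch (not CodeGenX's actual implementation) of prompting GPT-J for code completion with Hugging Face transformers; the prompt and generation parameters are illustrative assumptions:

```python
# Minimal sketch: code completion with GPT-J via Hugging Face transformers.
# This is NOT CodeGenX's implementation, just the general technique it builds on.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
# float16 roughly halves memory; a large GPU is still required (or run on CPU, slowly).
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(
    **inputs,
    max_new_tokens=64,           # illustrative budget for the completion
    do_sample=True,
    temperature=0.2,             # low temperature keeps code completions focused
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The model continues the function body from the signature and docstring; this prompt-completion loop is the core that the editor extensions listed below wrap in a VSCode UI.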
Alternatives and similar repositories for CodeGenX
Users interested in CodeGenX are comparing it to the libraries listed below.
- API for the GPT-J language model 🦜, including a FastAPI backend and a Streamlit frontend (see the server sketch after this list) ⭐336 · Updated 4 years ago
- A GPT-J API to use with python3 to generate text, blogs, code, and more ⭐204 · Updated 2 years ago
- VSCode extension for code suggestion ⭐481 · Updated 2 years ago
- An alternative to GitHub Copilot for VSCode until you get access to GitHub Copilot ⭐288 · Updated 3 years ago
- Code Generation and Search for Python ⭐53 · Updated 4 years ago
- ⭐203 · Updated last year
- 🔎 Semantic search for developers ⭐544 · Updated 2 years ago
- A repository to run gpt-j-6b on low VRAM machines (4.2 GB minimum VRAM for 2000 token context, 3.5 GB for 1000 token context). Model load… ⭐114 · Updated 3 years ago
- OpenAI API webserver ⭐189 · Updated 3 years ago
- Some quick BLOOM LLM examples ⭐257 · Updated 3 years ago
- OpenAI's DALL-E for large-scale training in mesh-tensorflow. ⭐433 · Updated 3 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ⭐110 · Updated 3 years ago
- VSCode extension for code suggestion ⭐193 · Updated 3 years ago
- A pre-trained GPT model for Python code completion and generation ⭐281 · Updated 2 years ago
- Multi-angle c(q)uestion answering ⭐456 · Updated 3 years ago
- ⭐1,608 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ⭐315 · Updated last year
- Code for Parsel 🐍 - generate complex programs with language models ⭐433 · Updated 2 years ago
- Visual Studio Code extension to quickly generate docstrings for Python functions using AI (NLP) technology. ⭐312 · Updated 4 years ago
- ChatGPT @ Home: Large Language Model (LLM) chatbot application, written by ChatGPT ⭐315 · Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ⭐473 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ⭐436 · Updated 2 years ago
- Pretrained Language Models for Source code ⭐254 · Updated 4 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ⭐169 · Updated last month
- Full description can be found here: https://discuss.huggingface.co/t/pretrain-gpt-neo-for-open-source-github-copilot-model/7678?u=ncoop57 ⭐3,288 · Updated 3 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ⭐298 · Updated last year
- Large-scale pretrained models for goal-directed dialog ⭐884 · Updated last year
- UI interface for experimenting with multimodal (text, image) models (stable diffusion). ⭐368 · Updated 2 years ago
- Tweet Generation with Huggingface ⭐433 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ⭐824 · Updated 2 years ago
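Several entries above (the GPT-J API and OpenAI-API-webserver style projects) share one pattern: a thin HTTP wrapper around a model's generate() call. Below is a hedged sketch of that pattern with FastAPI; the `/generate` route, request schema, and defaults are hypothetical illustrations, not any listed repo's actual API:

```python
# Sketch of the FastAPI-wrapper pattern used by several repos above.
# Route path and request schema are hypothetical, not a specific repo's API.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loaded once at import time; same public checkpoint as the sketch near the top.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
)

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64  # illustrative default

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    inputs = tokenizer(req.prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=req.max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    return {"text": tokenizer.decode(output[0], skip_special_tokens=True)}
```

Saved as server.py, this runs with `uvicorn server:app`. Note that GPT-J 6B still needs on the order of 12 GB of memory in float16, which is why some repos in this list focus on low-VRAM loading tricks.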