OpenAI Codex is a cloud-based software engineering agent that writes code, fixes bugs, and manages pull requests on your behalf. Unlike a standard autocomplete tool, it runs tasks independently inside isolated sandboxes, connected to your GitHub repository. Its latest iteration, powered by the codex-1 model, is available through ChatGPT Pro, Plus, Business, Edu, and Enterprise plans.
What Is OpenAI Codex?
Codex is OpenAI’s dedicated coding agent. The original model, released in 2021, was trained on 159 GB of Python code pulled from 54 million public GitHub repositories. The 2025 version — codex-1 — is built on top of OpenAI’s o3 architecture, retrained with reinforcement learning on real-world software engineering tasks across multiple environments.
The distinction between Codex and a typical AI chat assistant is the degree of autonomy. You assign it a task, and it works through the problem end to end: reading files, running tests, editing code, and opening a pull request when done. Engineers at OpenAI use it daily to offload repetitive work like refactoring, renaming variables, and generating test coverage, letting them stay focused on higher-level decisions.
The shift Codex represents is part of a broader pattern: browser-based AI tools are steadily reshaping how developers work.
How OpenAI Codex Works
The codex-1 Model
codex-1 was optimized specifically for software engineering, not general-purpose tasks. Compared to o3, it produces cleaner patches, adheres more closely to team code style, and follows detailed instructions without requiring long prompts about formatting preferences. It can iteratively run test suites until they pass, rather than producing a single output and stopping.
GPT-5-Codex, the subsequent model released in late 2025, added dynamic thinking time — spending more compute on complex refactors and less on quick single-file edits. During internal testing, the model worked independently for over seven hours on large tasks while continuing to fix failures along the way.
Task Execution Environment
Each Codex task runs inside a separate cloud sandbox preloaded with your repository. Internet access is disabled during execution to limit the agent’s scope to the code you’ve explicitly provided. It can read and edit files, run linters, execute test harnesses, and check types. When it finishes, it shows you terminal logs and test results so you can verify the output before merging anything.
Tasks are isolated from each other, which means you can queue multiple jobs in parallel. If something fails or the agent is uncertain, it flags the issue explicitly rather than proceeding silently. This transparency is by design — manual review before integration remains necessary.
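Since each task runs in its own sandbox, queuing work behaves roughly like launching independent background jobs. The sketch below is purely illustrative: `codex_task` is a hypothetical stand-in for delegating one scoped task, not a real command.

```shell
# Illustration only: codex_task stands in for handing one scoped task to the
# agent (in practice via the ChatGPT UI, CLI, or API). Each call is
# independent, and every result still needs manual review before merging.
codex_task() {
  echo "task '$1' -> needs_review"
}

codex_task "Add type hints to utils/" &
codex_task "Fix the flaky auth test" &
codex_task "Generate docs for the API layer" &
wait    # the three jobs run in parallel, like isolated sandboxes
```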
Key Features of OpenAI Codex
Codex runs across several surfaces: a web interface inside ChatGPT, a command-line tool (Codex CLI), a VS Code extension, and a standalone macOS desktop app introduced in early 2026. All of them connect through your ChatGPT account.
The Codex CLI, available via `npm i -g @openai/codex` or Homebrew, runs locally and lets you apply agent-style edits directly to real repositories. For teams using cloud-based coding environments, the web interface operates similarly, cloning repos into sandboxes and managing branches on your behalf.
The desktop app adds multi-agent management: separate threads per project, diff review, the ability to comment on changes, and an option to open edits in your local editor for manual adjustments. It also includes a Skills library — pre-built workflows for tasks like pulling design context from Figma, generating documentation, and running evaluations.
Automations let Codex pick up routine work without being prompted: issue triage, CI/CD monitoring, alert responses. Code review is built in as well, with the model navigating dependency chains and running your tests before flagging anything to you. On open-source repository benchmarks, GPT-5-Codex’s review comments were rated more correct and more actionable than earlier models.
OpenAI Codex Performance: What the Data Shows
[Chart: AI code tools market data, 2024–2032 projection at 25.62% CAGR]
For a fuller breakdown of Codex numbers by use case and organization, the developer productivity data compiled from public reports covers Cisco, Duolingo, OpenAI, and Stack Overflow survey figures in detail.
On SWE-bench Verified, a standard benchmark that measures automated resolution of real GitHub issues in Python repositories, GPT-5-Codex achieved 74.9% accuracy. GPT-5.3-Codex, released in early 2026, reached state-of-the-art results on SWE-Bench Pro, which spans four programming languages and uses more contamination-resistant evaluation criteria.
OpenAI Codex Pricing and Availability
Codex is included in ChatGPT Plus ($20/month), Pro ($200/month), Business, Enterprise, and Edu plans. There are no separate seat fees for the agent itself. Business and Enterprise customers can purchase additional credits if they hit usage limits. For API access, GPT-5-Codex is available through the Responses API, priced at the same rate as GPT-5.
| Plan | Codex Access | Notes |
|---|---|---|
| ChatGPT Free / Go | Limited (temporary) | Included for limited time at launch of Codex App |
| ChatGPT Plus | Yes | Available from June 2025 |
| ChatGPT Pro | Yes (double rate limits) | Priority access, higher parallel task limits |
| Business / Enterprise | Yes | Additional credits purchasable, full security controls |
| API (Responses API) | Yes (GPT-5-Codex) | Available from September 2025, pay-as-you-go |
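As a sketch of the pay-as-you-go path in the last table row, a raw Responses API call might look like the following. The endpoint and header shapes follow OpenAI's public HTTP API; the prompt is a made-up placeholder, and the actual request is left commented out because it needs a real `OPENAI_API_KEY` and incurs usage charges.

```shell
# Request body for the Responses API; "gpt-5-codex" is the model name
# cited above, and the input prompt is a placeholder.
PAYLOAD='{"model": "gpt-5-codex", "input": "Explain what this regex matches: ^[a-z]+$"}'
echo "$PAYLOAD"

# Uncomment to send for real (billed at the same rate as GPT-5):
# curl https://api.openai.com/v1/responses \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```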
Codex CLI is open source and available on GitHub. Anyone with a ChatGPT subscription or API key can install and run it locally. If you already use ARM-based development tools on ChromeOS, the CLI installs the same way as any other npm package.
Limitations of OpenAI Codex
OpenAI released Codex as a research preview, and the limitations are real. Multi-step refactors that require iterating on an existing pull request are awkward in the default workflow, which opens a new PR for each task. Internet access during execution was disabled at launch, meaning the agent could not install updated packages or reach external APIs unless you explicitly configured a setup script with pre-installed dependencies.
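A setup script of that kind might look like the hypothetical sketch below. It would run during environment setup, while the network is still reachable, so the dependencies are already in place when the offline sandbox starts; the package files named here are stand-ins for whatever your repository actually uses.

```shell
#!/usr/bin/env bash
# Hypothetical Codex environment setup script: executed before the sandboxed
# task, while network access is still available, so nothing needs to be
# downloaded once the agent starts working offline.
set -euo pipefail

pip install -r requirements.txt   # Python deps the test suite imports
npm ci                            # JS deps pinned in package-lock.json
cp .env.example .env              # placeholder config the app expects
```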
Delegating to a remote agent takes longer than interactive editing in a local IDE. For quick, targeted edits, an AI-integrated editor like VS Code with Copilot still reacts faster. Codex performs best on well-scoped tasks with a clear success condition — something that can be tested. Open-ended or poorly defined tasks produce lower-quality results. Image inputs for front-end work were not available at launch, though this has been updated in later releases.
OpenAI Codex vs. GitHub Copilot
The two tools address different points in a developer’s workflow. GitHub Copilot works inline as an autocomplete system — it suggests the next line or block while you type. Codex operates asynchronously: you hand it a complete task, it works in a sandboxed environment, and you review the result. Copilot reported over 20 million users as of July 2025. Codex is newer and has a narrower current install base, but its agentic scope is wider.
Stack Overflow’s 2024 Developer Survey found that 76% of developers use or plan to use AI coding tools. Among those already using AI tools, 82% apply them specifically to writing code. The gap between adoption intent and actual daily use is where both tools compete.
Developers who work in browser-based IDEs can access Codex directly, with no local installation required beyond a ChatGPT login.
How to Get Started with OpenAI Codex
Access via ChatGPT: log into your account on a Plus or Pro plan, open the sidebar, and assign tasks using the “Code” button. For codebase questions, use “Ask” instead. To authorize Codex on your repositories, install the Codex GitHub app for each organization you want connected. The system clones your repos into isolated sandboxes and manages branches on your behalf.
For CLI access: install with `npm i -g @openai/codex`, run `codex`, and sign in with your ChatGPT account. The IDE extension for VS Code is available in the Visual Studio Marketplace. The macOS desktop app is downloadable from openai.com/codex.
FAQs
What is OpenAI Codex used for?
OpenAI Codex writes features, fixes bugs, proposes pull requests, answers codebase questions, and handles refactors. It runs each task inside an isolated cloud sandbox connected to your GitHub repository.
Is OpenAI Codex free?
Codex is included in ChatGPT Plus, Pro, Business, Edu, and Enterprise plans. The CLI is open source. Temporary free access was offered at the Codex App launch in early 2026.
What model powers OpenAI Codex?
The initial 2025 release used codex-1, a version of o3 optimized for software engineering. Later updates introduced GPT-5-Codex and GPT-5.3-Codex, each trained specifically on real-world coding tasks.
How is OpenAI Codex different from GitHub Copilot?
Copilot works as an inline autocomplete tool while you type. Codex operates asynchronously — you assign it a complete task, it executes independently in a sandbox, and you review its pull request before merging.
Does OpenAI Codex have internet access?
By default, internet access is disabled during task execution to limit scope to your repository. OpenAI enabled optional internet access for ChatGPT Plus users starting June 2025, configurable per task.
