OpenAI is making its latest AI model, GPT-4.5, available to a wider audience. Initially, only subscribers to the premium $200-per-month ChatGPT Pro plan had access, but now the rollout is extending to ChatGPT Plus users.
Rapid Deployment with Some Limitations
The deployment process is expected to take between one and three days. However, OpenAI has cautioned that usage limits may fluctuate based on demand. As a result, users might experience adjustments in how often they can interact with GPT-4.5 over time.
GPT-4.5 represents a significant upgrade, built with more extensive training data and greater computing resources than earlier versions. It boasts a wider knowledge base and improved contextual understanding. Despite these advancements, it doesn’t necessarily outperform all competing AI models.
Impressive, But Not Unmatched
While GPT-4.5 is a substantial step forward, it doesn’t claim the top spot in every AI performance test. Rival models from DeepSeek and Anthropic have demonstrated superior reasoning abilities in specific evaluations.
This has sparked debate over whether OpenAI’s approach—focusing on scaling up model size—translates directly into better overall performance. Nonetheless, OpenAI asserts that GPT-4.5 excels in grasping complex nuances and delivering responses with a more refined emotional tone.
The High Cost of Advanced AI
Running GPT-4.5 comes with a steep price tag. OpenAI has acknowledged the significant cost of serving the model, leading to uncertainty about its long-term availability through the company’s API.
To offset expenses, OpenAI is charging $75 per million input tokens (around 750,000 words) and $150 per million output tokens. These rates far exceed those of GPT-4o, with input pricing roughly 30 times higher and output pricing 15 times higher.
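For a sense of scale, here is a minimal back-of-the-envelope calculation in Python based on the per-token prices quoted above. The token counts in the example are hypothetical, chosen only to illustrate how quickly costs add up at these rates.

```python
# Rough cost estimate for a single GPT-4.5 API request, using the prices
# reported above: $75 per million input tokens, $150 per million output tokens.
# The example token counts are illustrative placeholders, not real measurements.

GPT_45_INPUT_PRICE_PER_TOKEN = 75 / 1_000_000    # $75 per 1M input tokens
GPT_45_OUTPUT_PRICE_PER_TOKEN = 150 / 1_000_000  # $150 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return (input_tokens * GPT_45_INPUT_PRICE_PER_TOKEN
            + output_tokens * GPT_45_OUTPUT_PRICE_PER_TOKEN)

# Example: a ~2,000-token prompt (roughly 1,500 words) with a ~1,000-token reply
print(f"${estimate_cost(2_000, 1_000):.4f}")  # -> $0.3000
```

At these prices, a single moderately sized exchange costs around 30 cents, which helps explain OpenAI’s caution about keeping the model in its API long term.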
What Sets GPT-4.5 Apart?
Despite its high costs, GPT-4.5 offers several key improvements. One of the most notable is its reduced tendency to generate inaccurate or misleading information—an issue commonly referred to as “hallucination.” This makes it a more reliable AI for users who require accurate and context-aware responses.