OpenAI debuts its GPT-4.1 flagship AI model
OpenAI has introduced GPT-4.1, a successor to the GPT-4o multimodal AI model the company launched last year. Prior reports suggested that GPT-5 might have been prepared for release in the May timeframe; however, several unforeseen developments have popped up since then. TechRadar noted that OpenAI is likely having to tackle the flood of new users its ChatGPT service has recently acquired: its user base recently jumped from 400 million to 500 million in about an hour, after a design trend prompted by its latest GPT-4o image generation update went viral.
OpenAI plans to release its next flagship artificial intelligence system, GPT-5, in a matter of months. Altman also said that GPT-4.5, internally codenamed “Orion,” would be OpenAI’s “last non-chain-of-thought model,” meaning that future models will have reasoning capabilities. Chain-of-thought reasoning has been shown to improve LLMs’ output quality significantly, particularly when they tackle complex reasoning tasks. Reasoning models are considered more advanced LLMs because they can break prompts down into multi-step tasks, often allowing them to give a more thorough and precise response.
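To make the distinction concrete, here is a minimal sketch of chain-of-thought-style prompting using the OpenAI Python client. The model name and prompt are illustrative assumptions, not details from the article; reasoning models like o1 and o3 perform this kind of step-by-step decomposition internally rather than relying on the prompt.

```python
# Minimal sketch of chain-of-thought-style prompting (illustrative only).
# Assumes the openai Python package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder, not taken from the article.
from openai import OpenAI

client = OpenAI()

direct_prompt = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Asking the model to reason step by step often yields more thorough,
# precise answers on multi-step problems than asking for the answer alone.
cot_prompt = direct_prompt + " Think through the problem step by step, then state the final answer."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```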
In a Reddit AMA thread, Jerry Tworek, a vice president at OpenAI, suggested that there are plans to bring some current models and their capabilities together with the next foundational model.
GPT-4.5 has a more natural feel and an improved personality, and it is better at guiding users through ideas and the steps it takes to reach an answer. It outperforms GPT-4o in almost every category, including everyday queries, professional queries, and creative intelligence. Despite its relative strengths over GPT-4o and o3-mini, GPT-4.5 isn’t a direct replacement for those models.
OpenAI plans to combine multiple models into GPT-5
- One complication is that OpenAI maintains an “o” lineup for reasoning capabilities, while GPT-4o and other models handle multimodality.
- Reports have noted that once the o3 and o4-mini models are available, OpenAI will have confusingly similar products called o4 and 4o within the ChatGPT ecosystem.
- That may be because OpenAI is focusing its efforts in this area on its reasoning LLMs, which are specifically optimized for coding and math tasks.
With unsupervised learning, a machine learning algorithm is given an unlabeled data set and left to its own devices to find patterns and insights. GPT-4.5 doesn’t “think” like the company’s state-of-the-art reasoning models, but in training the new model OpenAI made architectural enhancements and gave it access to more data and compute power. “The result is a model that has broader knowledge and a deeper understanding of the world, leading to reduced hallucinations,” the company says. While GPT-5 has been long anticipated, these incremental updates are expected to set the stage for its eventual rollout.
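As a concrete illustration of unsupervised learning, the sketch below clusters unlabeled points with k-means: the algorithm is handed raw data and finds group structure on its own. The data and cluster count are made-up assumptions for illustration, not details about how OpenAI trains its models.

```python
# Minimal sketch of unsupervised learning: k-means clustering on unlabeled data.
# The points and the choice of two clusters are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled 2-D points: no answers are provided, only raw observations.
X = np.array([
    [1.0, 1.1], [0.9, 1.3], [1.2, 0.8],   # one loose group
    [8.0, 8.2], [7.8, 8.5], [8.3, 7.9],   # another loose group
])

# The algorithm is left to its own devices to find structure in the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]: discovered group membership
print(kmeans.cluster_centers_)  # the patterns (centroids) found without labels
```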
“A top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks,” Altman wrote. OpenAI is making GPT-4.5 available to Pro users starting today, with access coming to Team and Plus users next week. Meanwhile, everyone has been talking about ChatGPT’s new image-generation feature lately, and the excitement isn’t over yet: as always, people have been poking around inside the company’s apps, and this time they’ve found mentions of a watermark feature for generated images.
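One way to read Altman’s “unify” goal is as a routing layer that decides, per request, whether to invoke a slower reasoning model or a faster general-purpose one. The sketch below is a hypothetical illustration of that idea; the heuristic, function name, and model names are assumptions, not OpenAI’s actual design.

```python
# Hypothetical sketch of a model router in the spirit of Altman's "unify" goal:
# decide per request whether to "think for a long time" (reasoning model)
# or answer quickly (general-purpose model). The heuristic and model names
# are illustrative assumptions, not OpenAI's actual architecture.
REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "calculate")

def pick_model(prompt: str) -> str:
    """Route multi-step analytical prompts to a reasoning model,
    everything else to a faster general-purpose model."""
    text = prompt.lower()
    if any(hint in text for hint in REASONING_HINTS) or len(text.split()) > 80:
        return "o3"       # slower, more deliberate reasoning model
    return "gpt-4.5"      # faster, general-purpose model

print(pick_model("Write a friendly birthday message."))               # gpt-4.5
print(pick_model("Prove that the sum of two even numbers is even."))  # o3
```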
What we know about GPT-4.5
However, GPT-4.5 currently lacks support for voice mode, video comprehension, and screen sharing. Although the entire AI boom was triggered by just one ChatGPT model, a lot has changed since 2022: new models have been released, old models have been replaced, and updates roll out and roll back again when they go wrong. The world of LLMs is a busy one. At the moment, we have six OpenAI LLMs to choose from and, as both users and Sam Altman are aware, their names are completely useless. Reasoning models, for their part, are more expensive and energy-consuming, but they can solve more complicated tasks and think through problems logically. These types of models have the potential to be used on complicated problems like drug discovery, coding, and complex scientific reasoning.
There’s experimental voice tech included too, which you can toggle on and off to test. The difference is that, apparently, full-duplex speech technology generates audio directly rather than reading out written responses. Deep research features are considered AI agents that can work independently: you submit a query, let the AI process it for several minutes while it gathers information, and it returns the results when it is finished. They are considered first steps toward the concept of artificial general intelligence (AGI), which some define as a model that can process a query based on novel data it has not been trained on and produce unique content. However, we’re not quite there yet, and the main premise of today’s deep research tools is processing large amounts of data and making it easier to understand. OpenAI is also set to debut the full version of its o3 reasoning model and an o4-mini reasoning model any day now, with references having already been spotted in the latest ChatGPT web release by AI engineer Tibor Blaho.
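Mechanically, a deep-research request behaves like a long-running background job: you submit it, poll for status, and fetch the report when it completes. The sketch below illustrates that submit-and-poll pattern with hypothetical stand-in functions; it is not OpenAI’s actual deep research API.

```python
# Hypothetical sketch of the submit-and-poll pattern behind "deep research"
# style agents: kick off a long-running job, then check back until it finishes.
# The functions below are stand-ins, not a real OpenAI API.
import time

def submit_research_job(query: str) -> str:
    """Stand-in for submitting a query to a long-running research agent."""
    print(f"submitted: {query!r}")
    return "job-123"  # pretend job id

def get_job_status(job_id: str) -> str:
    """Stand-in for a status check; a real agent may run for several minutes."""
    return "done"  # pretend the job finished immediately

def get_job_result(job_id: str) -> str:
    """Stand-in for fetching the finished report."""
    return "…synthesized report…"

job_id = submit_research_job("Summarize recent work on full-duplex speech models")
while get_job_status(job_id) != "done":
    time.sleep(30)  # the agent works independently; we just wait and re-check
print(get_job_result(job_id))
```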
Compared to OpenAI’s reasoning systems, GPT-4.5 is “a more general-purpose, innately smarter model.” Additionally, it’s not natively multimodal like GPT-4o, meaning it doesn’t work with features like Voice Mode, video, or screen sharing. Speaking of reduced hallucinations, OpenAI measured how much better GPT-4.5 performs in that regard. Obviously, the new model doesn’t solve the problem of AI hallucinations altogether, but it is a step in the right direction. Following on from the post on X announcing the delay, Altman added: “We were able to really improve on what we previewed for o3 in many ways; I think people will be happy.”
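For context, hallucination benchmarks of this kind typically grade each factual answer as correct, incorrect, or not attempted, then report the share of answers that are confidently wrong. The sketch below shows one simple way to compute such a rate; the grading scheme and numbers are illustrative assumptions, not OpenAI’s published methodology or GPT-4.5’s actual results.

```python
# One simple way to compute a hallucination rate from graded factual answers.
# Grades and counts are invented for illustration; this is not OpenAI's
# published methodology or GPT-4.5's actual results.
from collections import Counter

# Each model answer is graded "correct", "incorrect", or "not_attempted"
# (an incorrect answer is a confident fabrication, i.e. a hallucination).
grades = ["correct", "incorrect", "correct", "not_attempted", "incorrect", "correct"]

counts = Counter(grades)
hallucination_rate = counts["incorrect"] / len(grades)

print(f"hallucination rate: {hallucination_rate:.1%}")  # 33.3% on this toy data
```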