GPT-5.4 brings powerful reasoning, coding, and agentic workflows into one frontier model, now live in ChatGPT, the API, and Codex. With GPT-5.4 mini (2x faster) and nano (lowest cost), you can build responsive AI systems for coding, subagents, and multimodal tasks. Featuring computer use, web search, and massive context, it’s designed for real-world, high-scale execution.
Codex now supports subagents, allowing you to spawn specialized, parallel AI workers for complex coding tasks. By defining custom TOML agents with isolated roles (like explorers and reviewers), you can execute multi-step workflows without context rot.
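The custom TOML agent definitions mentioned above might look roughly like this. This is a hedged sketch: the file layout and field names (`role`, `tools`, the `[agents.*]` tables) are assumptions for illustration, not the documented Codex schema.

```toml
# Hypothetical subagent definitions -- names and fields are illustrative only.
[agents.explorer]
role = "Scan the repository and summarize the files relevant to the task."
tools = ["read_file", "grep"]

[agents.reviewer]
role = "Review proposed diffs for bugs and style issues before they land."
tools = ["read_file"]
```

The idea is that each subagent gets an isolated role and a narrow toolset, so a long multi-step workflow is split across fresh contexts instead of one ever-growing conversation.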
OpenAI Codex is a machine learning tool that translates natural-language English into code. Codex is designed to speed up the work of professional programmers, as well as help amateurs get started with coding.
Today, I read in the news that OpenAI is considering "expanding its data footprint" and possibly buying Pinterest, as there is a lot of data on it generated purely by users seeking inspiration (which can be key for purchase decisions and for understanding personas; Pinterest has 600M+ users).
I also take into account how Pinterest has started to resent the proliferation of AI content on its platform; its users do not like it much (as far as I know, OpenAI also wants its own social network, and Sora's curation is a bit reminiscent of that).
GPT-5.4 Thinking delivers deeper web research, stronger context retention on long tasks, and 33% fewer factual errors than its predecessor. You can now interrupt the model mid-response and redirect it. No need to start over. Same intelligence. More control. Less token burn by default.
I often hear that LinkedIn is starting to be cringe, becoming a second Facebook, but let's be honest: it's still a career platform. A little cringe, but a career platform nonetheless.
On the other hand, Sam Altman introduced a new ambition: the OpenAI Jobs Platform, an AI-powered hiring platform expected to launch by mid-2026.
An application security agent that helps you secure your codebase by finding vulnerabilities, validating them, and proposing fixes you can review and patch. Now, teams can focus on the vulnerabilities that matter and ship code faster.
TechCrunch shared an excerpt from a roughly 30-minute panel featuring Sam Altman, in which he mentioned that within the next two years they plan to introduce hardware built by the company.
It's supposed to be "screenless" and pocket-sized, offering a calmer experience than smartphones by avoiding constant notifications and attention overload.
Lately, I have been experimenting with how to feed context into GPT models more effectively.
For example, when fine-tuning or working with larger context windows, I have noticed that the real challenge lies in organizing the surrounding information, rather than in the prompt itself. Last week, I learned that this is called Context Engineering.
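To make the idea concrete, here is a minimal sketch of what "organizing the surrounding information" can mean in practice: labeled sections with a character budget, so reference material is trimmed before it crowds out the instructions. All names here (`build_context`, the section headings) are my own illustration, not any official API.

```python
# A minimal context-engineering sketch: instead of concatenating everything
# into one blob, assemble labeled sections and budget the reference material.

def build_context(task: str, documents: list[str], instructions: str,
                  max_chars: int = 2000) -> str:
    """Assemble labeled sections, keeping whole documents until the budget runs out."""
    header = (
        f"## Instructions\n{instructions}\n\n"
        f"## Task\n{task}\n\n"
        f"## Reference material\n"
    )
    budget = max_chars - len(header)
    kept = []
    for doc in documents:
        if len(doc) + 1 > budget:   # +1 for the joining newline
            break                   # drop the rest rather than truncate mid-document
        kept.append(doc)
        budget -= len(doc) + 1
    return header + "\n".join(kept)

prompt = build_context(
    task="Summarize the release notes.",
    documents=["Doc A: subagents added.", "Doc B: context window doubled."],
    instructions="Answer concisely; cite which document you used.",
)
print(prompt.startswith("## Instructions"))  # True
```

The design choice worth noting is dropping whole documents instead of truncating them mid-sentence; a half-document in context is often worse than none at all.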