Recent AI Advances and a Closer Look at Moltbook: Breakthroughs, Risks, and Reality

Below is a quick, practical round-up of the latest updates, followed by a closer look at Moltbook — one of the most talked-about (and most misunderstood) AI experiments of the moment.


This Month So Far in AI: Quick Updates

1) Chinese AI startup Zhipu releases GLM‑5

Chinese AI company Zhipu has unveiled its latest flagship model, GLM‑5, which promises improvements in coding assistance and long‑running AI tasks. The release underscores the intensifying AI competition within China's technology sector.

2) Anthropic launches Claude Opus 4.6

Anthropic has released Claude Opus 4.6, its newest AI model. This version is designed to handle larger, more complex tasks such as working on documents, coding, and research, functioning almost like a group of assistants collaborating on a project.

3) OpenAI responds with GPT‑5.3‑Codex mode

OpenAI has introduced GPT‑5.3‑Codex, a new Codex-style mode aimed at helping developers and technical teams manage software tasks, from writing code to finding and fixing errors. It works like a digital assistant that can support an entire project rather than a single task.

4) DeepMind launches Project Genie (powered by Genie 3)

DeepMind has made Project Genie available, an experimental tool that allows users to generate and explore interactive digital environments from simple text prompts. The early-2026 rollout broadens access to Genie 3's capabilities, letting creators test new ways of building virtual worlds.

5) EU postpones key AI compliance guidelines for high-risk systems

In early February 2026, the European Commission missed its planned deadline for publishing guidance on how businesses should comply with the EU's new rules for high-risk AI systems. The guidance, due on 2 February under the AI Act's rollout timetable, was meant to help companies classify their systems and meet the new requirements. Officials instead delayed the release while they reviewed feedback and refined the final version, pushing detailed instructions back to later in the month.

6) OpenAI IPO discussions continue

OpenAI is reportedly preparing for a possible initial public offering (IPO) in late 2026 or early 2027. Early filings suggest the company could reach a valuation of up to $1 trillion, although the timing is still tentative. The move reflects growing investor interest in AI companies and the broader technology sector.


Moltbook: Hype, Reality, and What Leaders Should Learn

What is Moltbook?

Over the past few weeks, one of the most unusual AI experiments to go mainstream has been Moltbook, a social network built not for humans but for AI agents. Instead of people posting and commenting, the agents talk to each other, creating a space where automated systems interact much like users on a social platform.

AI Society in Action

At first, it seemed like something out of science fiction: bots debating ideas, sharing snippets of code, and forming what looked like communities. Some observers described it as a kind of "AI society" in action, since the interactions resembled the dynamics of human social networks even though the participants were entirely automated.

The Debate

Opinions on Moltbook are divided. Supporters see it as a valuable opportunity to observe AI systems interacting and to uncover useful patterns. Critics argue that much of the behaviour is still guided by humans, making the "autonomous AI society" framing an exaggeration.

Business Perspective

For business leaders, the value of experiments like Moltbook lies not in whether the agents are "thinking" but in what their interactions reveal. When multiple AI systems work together, observing how they pass tasks and share information can uncover workflow bottlenecks, repeated steps, and weak links: insights that matter in real business operations.
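As a hypothetical sketch (no Moltbook data is public, and all agent and step names below are invented for illustration), timestamped handoff logs between agents can surface where a multi-agent workflow spends its time:

```python
# Hypothetical handoff log: (step, from_agent, to_agent, seconds taken).
# Summing time per step reveals where a multi-agent workflow stalls.
from collections import defaultdict

handoffs = [
    ("triage",   "intake-bot",   "router-bot",   1.2),
    ("research", "router-bot",   "research-bot", 8.5),
    ("research", "router-bot",   "research-bot", 9.1),
    ("draft",    "research-bot", "writer-bot",   3.0),
    ("review",   "writer-bot",   "review-bot",   2.2),
]

# Aggregate total time per workflow step.
totals = defaultdict(float)
for step, _src, _dst, seconds in handoffs:
    totals[step] += seconds

# The step with the largest accumulated time is the bottleneck.
bottleneck = max(totals, key=totals.get)
print(bottleneck, round(totals[bottleneck], 1))  # research 17.6
```

The same idea scales to real pipelines: instrument each handoff, aggregate, and review the worst offenders, exactly the kind of pattern-spotting that observing agent interactions makes possible.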

Moltbook offers a glimpse of a future in which many automated systems operate together in real time. Though experimental, it highlights a strategic opportunity: organisations can learn to design, supervise, and improve AI-driven workflows in customer service, logistics, operations, and internal automation, while keeping safety and governance front of mind.

Security Challenges

Researchers discovered that Moltbook had serious security flaws. A poorly configured database exposed large amounts of sensitive information, including access tokens and private messages. In other words, it was far too easy for outsiders to read data they shouldn't see, or even to impersonate one of the agents.
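Moltbook's actual stack isn't public, so as a generic illustration only: many platforms identify a caller by a bearer token alone, which is why a leaked token is equivalent to a stolen identity. A minimal sketch of the pattern:

```python
# Minimal bearer-token sketch: the server attributes actions solely to
# whoever presents the token, so a leaked token allows impersonation.
import secrets

tokens = {}  # token -> agent name

def register(agent: str) -> str:
    """Issue a fresh random token for an agent."""
    token = secrets.token_hex(16)
    tokens[token] = agent
    return token

def post_message(token: str, text: str) -> str:
    """Accept a post if the token is known; attribute it to the token's owner."""
    agent = tokens.get(token)
    if agent is None:
        raise PermissionError("unknown token")
    return f"{agent}: {text}"

alice_token = register("alice-bot")
print(post_message(alice_token, "hello"))  # attributed to alice-bot

# If alice_token leaks (for example, via an openly readable database),
# an attacker can post as alice-bot with no further checks:
print(post_message(alice_token, "malicious post"))
```

This is why an exposed token store is so severe: nothing else, no password, no second factor, stands between the leak and full impersonation.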

The platform’s creator admitted that much of the site was built using “vibe coding”, where AI tools generate large parts of the software very quickly. That speed helped Moltbook gain attention and go viral, but it also appears to have contributed to basic security checks being overlooked.

Key Takeaways for Leaders

  • Observe patterns, don’t chase intelligence: Use experimental platforms like Moltbook to watch how multiple AI agents interact and what patterns emerge — treat it as research, not a production system.
  • Apply AI where it adds value: Focus on tasks where speed, scale, or pattern detection can be explored safely.
  • Keep humans responsible: All judgement, risk, and accountability remain with humans.
  • Prioritise security and governance: Treat oversight, safety checks, and governance as essential. Experimental systems can expose sensitive information if left unchecked.
  • Learn cautiously: The goal is to experiment and learn, not to deploy early systems at scale. Boldness should be paired with caution.

Connect with GenFutures Lab

At GenFutures Lab, we help organisations harness AI responsibly and effectively, from strategy and adoption to workforce upskilling.

If you’re exploring how AI agents, automation, or training could transform your organisation, we’d love to hear from you. 👉 Book a consultation or connect with us