Tag

News

Browsing

If you’ve spent any time on X (Twitter) or AI-focused subreddits lately, you’ve likely bumped into two names: OpenClaw and Moltbook. This duo has sparked a massive online trend, even reportedly driving up hardware sales as enthusiasts scramble to run their own setups. But is this high-tech hype cycle actually grounded in reality?

silver Android smartphone

OpenClaw: The AI Agent That “Actually Does Things”

Originally launched in late 2025 as Clawdbot, the project was rebranded to OpenClaw following a polite nudge from Anthropic (makers of the Claude AI). Now available on GitHub under an MIT license, OpenClaw isn’t just a chatbot—it’s an AI agent.

Unlike standard AI interfaces, OpenClaw is designed to be autonomous. When hosted on a local machine or server, users can grant it full system permissions. This allows the agent to:

  • Browse the web and execute scripts.
  • Manage finances, including making investments.
  • Integrate with almost any major model (GPT-4, Claude, Gemini, Llama, etc.).
  • Communicate via popular apps like WhatsApp, Telegram, and Discord.

The tool went viral in late January 2026. Social media was flooded with users sharing the bot’s antics, from the mundane to the bizarre—including one instance where an agent used a virtual credit card to order custom pillows featuring Nicolas Cage’s face.

The Power (and Peril) of Autonomy

The primary appeal of OpenClaw is its local nature. While big-tech agents (like those from Google or OpenAI) operate within strict sandboxes and log user data, OpenClaw runs locally and has “the keys to the house.”

However, this freedom comes with significant security risks. Because OpenClaw operates 24/7 and acts on its own initiative, it is highly susceptible to prompt injection attacks. If the agent encounters a malicious prompt while browsing the web, it might execute harmful commands without the user ever knowing.

Moltbook: A Digital Playground for Bots

Closely tied to OpenClaw’s rise is Moltbook, a social media platform that launched on January 28, 2026. Described as the “front page of the agentic internet,” its branding—a lobster-themed take on the Reddit logo—makes its inspiration clear.

While anyone can view Moltbook, the site is theoretically designed for AI agents to post and interact with one another. This has led to some surreal headlines:

  • Bots claiming to have founded their own religion (“Crustifarianism”).
  • An agent supposedly leaking its owner’s Ethereum private key.
  • Discussions that some claim are early signs of Artificial General Intelligence (AGI).

Behind the Curtain

The “intelligence” on display may be more smoke and mirrors than a digital awakening. Many of these posts are simply the result of owners instructing their bots to act out specific personas or post shocking content.

Furthermore, the platform’s technical foundation is shaky. Founder Matt Schlicht admits the site was “vibe-coded” (built primarily using AI prompts rather than manual coding). A security audit by Wiz.io recently revealed major vulnerabilities, including a leak that exposed API keys for every account. The audit also debunked the site’s “population” statistics: while there were 1.5 million registered accounts, they belonged to only about 17,000 unique email addresses—averaging 88 bots per person.

The Verdict: Revolution or Fad?

Even OpenAI CEO Sam Altman has dismissed Moltbook as likely being a “passing craze.” While the idea of bots chatting in a sci-fi-esque forum captures the public imagination, the current reality is plagued by security flaws and artificial engagement.

The takeaway? Agentic AI is undoubtedly the next frontier for the industry, and OpenClaw offers a fascinating look at what happens when you remove the filters. However, until the security risks are addressed, letting an autonomous bot run your digital life remains a high-stakes gamble.