NextFin News -- Right after the Chinese New Year, which fell on February 17, the artificial intelligence world ushered in a Spring Festival of its own: across China, everyone seems enthusiastic about OpenClaw and eager to install it.
Because OpenClaw, dubbed "Lobster" in China thanks to its logo, needs to be trained before it becomes useful, the greeting these days is "Are you raising a lobster?" "Raising" an AI agent tool like OpenClaw on your own computer means granting it permissions and assigning it tasks, letting it run 24/7 while it automatically handles all kinds of work for you: organizing files, looking things up, writing code, even executing scheduled tasks.
But why “raise lobsters,” and why now?
In the context of the large-model industry in 2026, the public’s tolerance for novelty has already shrunk to a weekly cycle—new models have a hard time drawing much attention anymore. On top of that, the AI battles during the Spring Festival made chat-based AI more widespread, so expectations for AI naturally climbed higher and higher.
That’s why agents like OpenClaw—ones that can directly take over your computer and carry out tasks—arrived at exactly the right moment.
But unfortunately, this “lobster-raising craze” will most likely end in failure. The upside is that it will leave behind a rich historical legacy for AI, pushing forward the development of homegrown “native lobster” technology and the broader industry in China.

There’s a particularly odd mismatch in this “raising lobsters” trend: whether it’s the big players—Alibaba, Tencent, ByteDance, Baidu, Huawei, and the like—or AI startups and leading companies in vertical industries, there’s basically no barrier to integrating OpenClaw or building a lobster-like product. At this stage, there’s almost no differentiation in the underlying technology itself.
Among local "lobsters," Tencent's WorkBuddy is a representative example. Positioned as an all-scenario workplace AI agent desktop hub, it targets a wide range of functional roles: describe what you need in a single sentence, and WorkBuddy plans and executes the task on its own, then delivers the results.
Cloud-based SaaS versions are represented by ByteDance’s ArkClaw. The Doubao 2.0 large-model series (Doubao-Seed 2.0 Pro) performs well on complex, long-horizon tasks. And because it’s all within the ByteDance ecosystem, integrating with Feishu is relatively easy—you can connect in just a few steps by scanning a QR code, without the multi-step setup required by other platforms. As a result, the Feishu + ArkClaw combo has become popular among developers.
On the other hand, the difficulty of "raising a lobster" as an individual has been seriously underestimated. Without a coding background or AI knowledge, the deployment step alone can stall you for days. Ordinary users neither need nor can realistically pull off complicated debugging, so "everyone raising a lobster" is a contradiction in itself.
Walk into one of Tencent's or Baidu's offline "lobster" installation stations, where long lines have been forming lately, and you'll find not only internet-industry professionals of every stripe but also AI hobbyists, and even aunties who have no idea what Node.js is, all chatting excitedly about how to install a "lobster" on their own computers. (This isn't some "free eggs" giveaway repackaged for the AI crowd.)
OpenClaw’s sudden explosion in popularity is, in fact, more like a concentrated release of AI anxiety.
One AI-industry geek said he isn't optimistic about the current domestic "lobster war." The "lobster" itself is a simple framework; for big tech companies, hand-building a new one is no technical moat. The strong positive feedback "lobster" has received in China mainly comes down to two factors: the capability gap between foundational models at home and abroad, and how open the software ecosystem is.
In his view, neither the capability of domestic models nor the openness of the software ecosystem is sufficient for "lobster" to deliver a real "aha moment" for users. On the contrary, the current hype, driven by over-marketing, will only widen the gap between expectations and actual use, and it may end on a rather bleak note.
Many people are rushing to deploy a "lobster" not because they truly need an assistant to book tickets or tidy up Excel sheets automatically, but because they're afraid of missing out. OpenClaw has become the whole village's hope, conveniently providing a concrete outlet for that anxiety.
This feels more like a bout of "phantom limb pain" about productivity. Most people who have already deployed a "lobster" are realizing there isn't much they actually need to hand off to it; an everyday chatbot already covers the vast majority of scenarios.
OpenClaw's social value has come to outweigh its functional value. So how far is it from a collapse?
Security is the next concern that follows close behind. China’s National Internet Emergency Response Center issued a Risk Alert on the Secure Use of OpenClaw, noting that to achieve “autonomous task execution,” the application is granted elevated system privileges, including access to the local file system, reading environment variables, calling external service application programming interfaces (APIs), and installing extensions. However, because its default security configuration is extremely fragile, once an attacker finds a foothold, they can easily gain full control of the system.
OpenClaw’s core logic is high-privilege takeover. If you want the “lobster” to work well, you have to hand over all your permissions—potentially including various account passwords and privacy-related files.
Wang Liejun, a cybersecurity expert at QiAnXin, said the main risks include out-of-control privileges and “jailbreaking,” the Skill supply chain, public internet exposure and remote intrusion, and data privacy leaks.
QiAnXin recommended following the principles of “physical isolation” and “least privilege,” and strongly advised against installing OpenClaw directly on everyday office computers or personal computers that store important personal data (photos, documents, account passwords). It recommended deploying it in a virtual machine or on an idle computer to avoid data risks, and downloading Skills only from secure, trustworthy sources.
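The "least privilege" principle can be applied even at the process level. As a minimal sketch (not OpenClaw's actual launcher; the whitelist and function names here are hypothetical), an agent subprocess can be started with a scrubbed environment so it never sees credentials such as API keys or tokens sitting in the parent shell:

```python
import os
import subprocess

# Only these environment variables are passed through to the agent
# process; everything else (API keys, tokens, etc.) is withheld.
ALLOWED_ENV = {"PATH", "HOME", "LANG"}

def scrubbed_env(extra=None):
    """Build a minimal environment for a child process."""
    env = {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}
    if extra:
        env.update(extra)  # explicitly granted variables only
    return env

def run_agent(cmd):
    """Launch an agent command with the scrubbed environment."""
    return subprocess.run(cmd, env=scrubbed_env(),
                          capture_output=True, text=True)
```

Calling `run_agent(["some-agent", "--task", "..."])` would then give the agent only the whitelisted variables. It is a narrow slice of "least privilege," of course; the VM or spare-machine isolation QiAnXin recommends remains the stronger safeguard.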
The OpenClaw open-source framework itself has many vulnerabilities. To address these high-severity issues, companies added security mechanisms at different layers. For example, in the process of turning OpenClaw into a SaaS offering, ArkClaw made extensive customizations, including security checks, reviews, and adjustments to the open-source code of specific modules.
Compared with local deployments—where OpenClaw can read and write every file on the machine—deploying OpenClaw in the cloud or as a SaaS version has been seen as a safer option.
ArkClaw largely represents the industry's mainstream approach to security hardening, strengthening protections across four areas: the Agent deployment environment, tools (Skills, MCP, etc.), Agent runtime, and privilege-related behavior. Platform security includes entry-point defense + environment isolation + security hardening; supply-chain security includes plugin scanning, Skill scanning, and Skill runtime detection; runtime security includes defenses against prompt attacks, prevention of sensitive-data leakage, and blocking of high-risk operations; identity and permissions include enterprise identity integration, secure credential hosting, and privileged-behavior management.
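To make the "blocking of high-risk operations" layer concrete, here is a toy illustration (this is not ArkClaw's actual code; the patterns and function names are invented for the example): a runtime guard that refuses shell commands an agent proposes when they match known-dangerous patterns.

```python
import re

# Illustrative denylist of command patterns an agent should never run
# unattended; a real product would combine this with allowlists,
# sandboxing, and human confirmation for borderline cases.
HIGH_RISK_PATTERNS = [
    r"\brm\s+-rf\s+/",        # recursive delete starting at a root path
    r"\bcurl\b.*\|\s*sh\b",   # piping a remote script into a shell
    r"\bchmod\s+777\b",       # world-writable permissions
    r"\bmkfs\b",              # reformatting a filesystem
]

def is_high_risk(command: str) -> bool:
    """Return True if the proposed command matches a dangerous pattern."""
    return any(re.search(p, command) for p in HIGH_RISK_PATTERNS)

def guarded_execute(command: str) -> str:
    """Block rather than execute commands flagged as high risk."""
    if is_high_risk(command):
        return f"BLOCKED: {command!r} matched a high-risk pattern"
    return f"ALLOWED: {command!r}"  # a real guard would execute here
```

A denylist like this is only the outermost fence; as the article notes, production systems layer it with prompt-attack defenses, data-leak prevention, and identity controls.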
Once the first incident occurs—whether a personal financial loss or a workplace accident caused by the lobster’s autonomous decision-making—this fragile chain of trust will snap in an instant.
A developer shared on social media that a friend of his, while using the AI agent tool OpenClaw to write code, exposed a browser to the public internet via a VNC remote desktop. A few days later, the friend’s credit card was hit with repeated fraudulent charges, nearly maxing out the limit.
The public’s tolerance for AI mistakes is far lower than we imagine.
The verb “to raise” (as in “raising” an agent) implies an emotional premium and a willingness to put up with imperfections. But an Agent’s essence is efficiency—pure input-output. Once users realize they have to spend a ton of time tuning it, feeding it data, and fixing bugs—and that it’s no better than traditional ways of working—no one will want to keep investing in it.
Compared with experimentation on the individual side, the “lobster” on the enterprise side is a very different landscape—there’s a long road ahead and heavy responsibilities.
Chen Xudong, Chairman and General Manager of IBM Greater China, put it bluntly: tools like “lobster” are better suited to individuals or small businesses; large enterprises typically don’t allow employees to install software at will. In the future, it may make its way into enterprise systems, but only after proper permission management and compliant controls are in place—and it likely won’t happen quickly.
“Everyone is excited about this product right now. It can help you gather information and make PowerPoint decks—those capabilities exist. It’s not that a single company built all of it; it just strings these things together and does the work, and you grant it permissions. But for enterprise-grade products, there hasn’t been any revolutionary change in the short term. Enterprise products will also evolve in this direction. Many companies may know these tools can be used—so why haven’t they brought them in? Because they haven’t thought through what the consequences might be after using them. Put simply, the consequences are unpredictable,” he said.
Feishu serves both individual and enterprise users, which also makes it easier to observe how their needs diverge. Feishu CEO Xie Xin shared his view on WeChat Moments: “Running Agents on a personal computer and using Agents inside an enterprise are two completely different things. For individuals, playing with Agents is exploration; for enterprises, using Agents is responsibility. If something goes wrong in a personal scenario, at worst you just start over; if something goes wrong in an enterprise scenario, it could mean files being deleted or data being leaked.”
In Xie’s view, the upper bound of what Agents can do is exciting, but the lower bound of safety determines whether they can truly enter real work scenarios. If trust and security aren’t addressed, the more powerful they are, the more dangerous they become.
Some companies have also begun serious exploration. At an internal strategy meeting, Liuka Technology CEO Liu Yingqi mentioned that the company had already pushed its HR department to implement relevant requirements and planned to add 5,000 digital employees, complete with digital work-badge IDs. Employees could apply through the company’s HR and IT teams; the application form would include the Skills scope, monthly salary (number of tokens), and so on.

The history of technology tells us that any great transformation comes with both thunderous mass movements and quiet, almost imperceptible infiltration.
OpenClaw may evolve into some underlying protocol of the future, or it may become an important milestone for the Agent industry. But the red-hot "everyone raising lobsters" craze right now feels more like an overload of imagination about the future, pushed up jointly by vendors and emotion.
What we're raising isn't a lobster; it's a sliver of control reclaimed from our real-world FOMO (fear of missing out). Unfortunately, that's the truth.
Of course, history rarely judges a mass craze simply as a failure. Even if this thunderous “nationwide lobster-raising” wave eventually ebbs, it will still leave behind a few things that truly matter.
- First, it was a rare mass campaign of tech popularization: terms that used to exist only in developer forums entered ordinary people’s computers for the first time, in an almost entertainment-like way. Many people may not actually keep using this “lobster” in the long run, but through the process they understood concepts like large models and Agents for the first time. That kind of cognitive transfer is a hard-to-come-by form of tech education.
- Second, it inadvertently carried out a society-wide stress test: only this sort of interaction between AI and the real world can force the industry to fill in the gaps—security protocols, permission isolation, and boundaries of responsibility. Many security mechanisms that will later seem “obvious” are often forced into existence during chaotic phases.
- Finally, to borrow a familiar line, this lobster craze proved that the principal contradiction in the AI industry is the contradiction between people’s growing desire for a better life and unbalanced, inadequate development.
“Everyone raising lobsters” may not leave behind many lobsters that truly grow up. Instead, more big tech companies and startups will step up investment in developing native lobsters, delivering more complex Agent capabilities inside a security fence. Take DingTalk, for example: it is expected to release “DingTalk Native Lobster” soon. The industry speculates that within DingTalk—and within business scenarios where safety can be assured—Agent capabilities similar to OpenClaw could be achieved. This has also become the industry consensus for the period ahead.
Imagination is the best accelerator for technology. People are starting to believe that large models will usher in a new era of intelligent agents; once that imagination is unlocked, the history of technology usually doesn’t turn back.
Explore more exclusive insights at nextfin.ai.
