    China’s Humanoid Robots Are Done Showing Off – They’re Clocking in for Real Work

    By leewper · May 6, 2026 (Updated: May 8, 2026)
    Image: A UBTECH humanoid robot performs material handling, sorting, and quality inspection tasks on the floor of a new-energy vehicle plant in China.

    Last April, a humanoid robot ran a marathon alongside humans for the first time. This April, a robot crossed the finish line faster than any human runner. But the applause came with a familiar question: what exactly is the point of a sprinting robot if it still can’t do a real job?

    This time, Chinese robotics firms are offering a more pragmatic answer: If you want to work, go get an internship first.

    Agibot recently announced that its new A3 humanoid robot will be deployed at tourist sites through its “BotShare” platform. XSquare Robot, meanwhile, has teamed up with the online service giant 58.com to send robots into real homes, where they will work alongside human cleaners. A wave of commercialization is sweeping through embodied AI, and the narrative is quietly shifting. For the past two years, the ideal stage for a robot was a televised gala or a high-profile race. Today, the real test is whether a robot can clock in at a factory or help out at home — and actually solve a problem.

    The Brain Is Still Under Construction

    About a month ago, XSquare and 58.com launched what they called the world’s first robot housekeeper, a machine designed to cooperate with a human cleaner inside people’s homes. Some early users gave it a try, and the overall verdict was blunt: the robot still isn’t as good as a person.

    Users described clumsy movements. The robot could handle genuinely complex chores like hanging laundry and tidying up, but folding a single piece of clothing took close to ten minutes. Its range was limited, too: steps and door thresholds stopped it cold.

    That weakness isn’t unique to one company; it’s an industry-wide challenge. Wang Qian, founder and CEO of XSquare, put it plainly: “Right now, there isn’t a single robot anywhere in the world that can independently handle the majority of everyday household chores without remote control.” Unitree founder Wang Xingxing has made a similar point, noting that while robots can approach a 100% success rate on pre-scripted tasks, performance crashes once the environment changes or something unexpected appears. He estimates domestic chores are still three to five years away.

    The bottleneck is that robots still don’t understand the physical logic of the real world. The industry has an apt metaphor for this: the robot’s “cerebellum” — its motor control — is highly developed, capable of martial arts and dance. But the “brain” that handles cognition and decision-making is still growing up. And without that brain, there is no foundation for doing real work.

    Currently, three main technical routes are competing to build it.

    The dominant and most mature approach is the end-to-end VLA (Vision-Language-Action) model. It fuses multi-modal perception with language instructions to generate robot movements directly. A user says “I’m hungry,” and the robot fetches some food — provided it has seen a similar object before. The obvious weakness is that as tasks grow more complex and the robot encounters untrained scenarios, it suffers something akin to a logic freeze. On top of that, typical VLA architectures run vision, language, and action modules separately, causing information loss and latency every time data crosses a module boundary. For fine motor work, the brain simply can’t keep up with the body.
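
    To make the single-pass idea concrete, here is a minimal Python sketch of a VLA-style policy: perception features and an instruction embedding fuse in one forward pass that emits an action command directly, with no planner in between. The names, dimensions, and random stand-in encoders are all illustrative assumptions, not any vendor’s actual architecture.

```python
# Toy end-to-end VLA-style policy: vision + language fuse directly into an
# action vector in a single pass. Random projections stand in for real
# vision and language encoders; all dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W_vision = rng.standard_normal((64, 128))  # image features -> shared space
W_text = rng.standard_normal((32, 128))    # instruction emb -> shared space
W_action = rng.standard_normal((128, 7))   # shared space -> 7-DoF command

def vla_policy(image_feats: np.ndarray, instruction_emb: np.ndarray) -> np.ndarray:
    """One fused forward pass from perception + instruction to action."""
    fused = np.tanh(image_feats @ W_vision + instruction_emb @ W_text)
    return fused @ W_action  # e.g. joint velocity targets

# One control tick: the current camera frame plus an "I'm hungry" embedding.
action = vla_policy(rng.standard_normal(64), rng.standard_normal(32))
print("action command:", np.round(action, 3))
```

    The appeal is that nothing crosses a module boundary after the fusion step; in practice, as noted above, the vision, language, and action parts often still run as separate modules, and every handoff costs information and time.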

    A second route, world models, is considered closer to how humans think. The core idea is to understand the operating rules of the physical world well enough to predict what will happen next. A model that understands gravity and motion, for instance, could estimate where a falling cup will land, allowing a robot to catch it or move out of the way. The price tag is steep. Nvidia’s Cosmos world foundation model was trained on 9,000 trillion tokens, and the data collection and compute requirements are enormous.
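
    The falling-cup example can be made concrete with a toy predictor. The sketch below hard-codes the world rule (projectile motion under gravity) in plain Python; a learned world model replaces this analytic step with a trained predictor, which is where the enormous data and compute bills come from.

```python
# Minimal "world model" intuition: given the current state and a known rule
# of physics, predict the future so the robot can act before impact.
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(x0: float, y0: float, vx: float, vy: float) -> tuple[float, float]:
    """Predict where a falling cup hits the floor (y = 0), ignoring drag."""
    # Solve y0 + vy*t - 0.5*G*t^2 = 0 for the positive root t.
    t = (vy + (vy**2 + 2 * G * y0) ** 0.5) / G
    return x0 + vx * t, t

landing_x, t_impact = predict_landing(x0=0.2, y0=1.0, vx=0.5, vy=0.0)
print(f"cup lands at x = {landing_x:.2f} m in {t_impact:.2f} s")
# A robot with this prediction can move its gripper to landing_x in advance,
# or simply step out of the way.
```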

    A third path, more common in China, separates an LLM “brain” for task understanding from a VLA “cerebellum” for fine control. It’s a pragmatic fit for companies that already have strong motor-control foundations and want to reinforce their long suit while playing catch-up on cognition. The downside is that splitting the brain and the cerebellum risks task delays and makes high-precision work harder to achieve; more modules also tend to mean higher costs.
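
    Below is a rough sketch of how the split route hands work across that module boundary, with invented function names standing in for the LLM planner and the VLA controller:

```python
# Sketch of the split "LLM brain + VLA cerebellum" route: a language-level
# planner decomposes the goal, then hands each subtask across a module
# boundary to a low-level motor controller. All names here are invented.
from time import sleep

def llm_brain(task: str) -> list[str]:
    """Stand-in planner: maps a goal to an ordered list of subtasks."""
    plans = {"tidy the table": ["locate cup", "grasp cup", "place cup in sink"]}
    return plans.get(task, [])

def vla_cerebellum(subtask: str) -> bool:
    """Stand-in motor controller: executes one subtask, reports success."""
    sleep(0.01)  # models the per-handoff latency the paragraph describes
    print(f"executing: {subtask}")
    return True

for step in llm_brain("tidy the table"):
    if not vla_cerebellum(step):
        break  # a failure must round-trip back through the planner
```

    Every call between the two functions is a handoff, and each handoff is where the task delays described above accumulate.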

    Learning on the Job

    None of these paths is clearly the winner yet, and pure single-track approaches are already giving way to deeper integration.

    Agibot, for instance, is not betting on one route alone. This year it released an updated world model (GE-Sim 2.0), a new-generation VLA foundation model (Genie Operator-2), and a second-generation unified brain-cerebellum system called GenieReasoner. It also introduced the idea of a “world-action model” that jointly models state, action, and state evolution, rather than just modeling the world’s state in isolation.

    XSquare went a different way with WALL-B, a unified-architecture model that stuffs the brain and the cerebellum into a single framework to eliminate inter-module delays and information loss. A key feature is “learning by doing” — the robot iterates on itself through repeated failure and retrial. Wang Hao, CTO at XSquare, stresses that “a world model isn’t a separate module you can just bolt on. It’s a capability, and you can’t simply slap one behind a VLA and expect it to understand the world.”
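
    As a loose, one-dimensional illustration of that failure-and-retrial loop (not XSquare’s actual training scheme), imagine a single policy parameter being nudged by the environment’s error feedback after each failed attempt:

```python
# Miniature "learning by doing": the policy retries a task, adjusting one
# parameter from the error feedback of each failure until it succeeds.
TARGET_FORCE = 0.8  # grip force the task needs (unknown to the policy)
param = 0.0         # the policy's current guess

for attempt in range(1, 20):
    error = TARGET_FORCE - param  # feedback signal from the failed attempt
    if abs(error) < 0.05:
        print(f"attempt {attempt}: param={param:.2f} -> ok")
        break
    print(f"attempt {attempt}: param={param:.2f} -> fail")
    param += 0.5 * error          # update the policy and try again
```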

    Another player, AI² Robotics, proposes a dual fast-slow system: a “fast system” for full-body real-time control and a “slow system” for logical reasoning, letting the robot react quickly while maintaining a grasp on long-horizon tasks.
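
    In control terms, the idea resembles two loops running at different rates: a high-frequency loop keeps the body stable on every tick, while a low-frequency loop revises the plan. The rates, names, and plan logic below are invented for illustration, not AI² Robotics’ actual design.

```python
# Illustrative fast/slow dual system: whole-body control runs every tick,
# long-horizon reasoning only every SLOW_EVERY ticks, so the body never
# has to wait on the planner.
SLOW_EVERY = 50  # reasoning runs at 1/50th of the control rate

def fast_control(tick: int, current_plan: str) -> None:
    """Per-tick balance/locomotion update toward the current plan."""
    if tick % 25 == 0:  # print only a sample of the high-rate loop
        print(f"tick {tick:3d}: controlling toward '{current_plan}'")

def slow_reasoner(tick: int, current_plan: str) -> str:
    """Occasional long-horizon update; may replace the whole plan."""
    return "pick up box" if tick >= 50 else current_plan

plan = "walk to shelf"
for tick in range(100):                   # pretend this loop runs at 100 Hz
    if tick % SLOW_EVERY == 0:
        plan = slow_reasoner(tick, plan)  # slow system: logical reasoning
    fast_control(tick, plan)              # fast system: real-time control
```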

    No matter the technical road, making the robot’s brain truly functional comes down to two things: understanding the world and enabling thought to keep pace with physical reaction speed. And this is not just a matter of more training. Wang Hao uses an analogy: “Someone who learned to swim in a pool for ten years can still drown when thrown into the ocean.” Laboratory training data is too clean. Inside the ivory tower, it’s hard to develop genuine independent thinking. The better method is to let robots learn inside complex, unpredictable environments.

    Xiao Yanghua, a professor at Fudan University’s School of Computer Science, has said that even a conservative estimate puts the amount of data currently available for training embodied AI models at least two orders of magnitude below what’s needed.

    This hunger for real-world data is why robots are now rushing into practical settings. UBTECH’s humanoid robots have entered factories. Founder Zhou Jian says the company spent two years on proof-of-concept training inside new-energy vehicle plants, starting with tasks like material handling, sorting, loading, and quality inspection. A Galbot robot is already helping run a pharmacy, independently recognizing orders, grabbing medication, scanning codes, and packaging items. MagicLab’s humanoid has become a “car salesperson,” greeting customers in dealerships and explaining vehicle specs.

    Image: GALBOT’s Galbot humanoid robot autonomously recognizes orders, grabs medications, scans barcodes, and packages items at a real pharmacy.

    Different companies, different real-world environments, but the goal is identical: gather genuine data, validate a robot’s ability in the wild, and feed that knowledge back into the foundation model. The hope is to move the brain from single-task, narrow-context operations toward object-level, background-level, and task-level generalization — and get gradually smarter.

    Real-World Scenarios Raise the Ceiling

    Once you grasp that training the brain demands real data, it’s easy to understand why the investment logic for embodied AI has quietly shifted over the past year.

    According to a tally by China Business Network, the domestic embodied AI sector has seen at least 269 funding rounds as of April 10. But the focus of capital is clearly changing. Money is rushing toward data and model algorithms, while valuation expectations for robot hardware are shifting away from pure technology narratives and toward commercial deployment.

    So far this year, several companies that emphasize the “brain” side have closed mega-rounds: XSquare announced a Series B of nearly 2 billion yuan (around $275 million); TARS set a Chinese record for a single embodied AI round with a $455 million Pre-A raise; and Lightwheel AI, a company focused on embodied data and simulation infrastructure, landed a 1 billion yuan (around $138 million) round.

    Hardware, meanwhile, is increasingly commoditized. The winner of this year’s robot marathon wasn’t a traditional robotics firm but Honor, the consumer electronics brand, a telling sign that the hardware barrier is dropping.

    A new consensus is forming in the market: the key variable that decides whether a robot can enter real service is its brain, and behind that brain sit model capability and data assets. In the past, the investment thesis was largely about using hardware shipments to stake out a market position. Today, investors pay more attention to whose brain is smarter and demonstrably better at generalizing. A model that can function in many real-world scenarios — and transfer skills learned in one context to new objects, tasks, or environments — can “learn once and apply broadly,” enabling rapid rollout across many different settings. The stronger the generalization, the wider the moat and the higher the ceiling.

    Wang Qian frames it this way: “Home environments demand the most extreme generalization. If a model can work reliably inside an extremely complex home, it will be total overkill, in a good way, when it enters a traditional industrial site.” A mature model, in other words, could be deployed across multiple industries, making the business model endlessly reusable.

    And instead of building a general capability first and then hunting for a use case, more robotics companies are putting commercial scenarios at the front of product design. GALBOT’s two wheeled-robot models emphasize stability and payload, suiting them to repetitive jobs like moving, picking, and sorting. XPeng’s IRON is explicitly prioritized for deployment in museums, car dealerships, and shopping malls.

    “What can robots actually do?” The embodied AI industry is slowly closing in on an answer.

    Firms like Unitree spent a decade taking robots from zero to one. But giving them a real ability to think independently — to escape a dependence on single, repetitive human commands — is a journey from one to ten, and from ten to something much bigger. Hands and feet can make a robot stand up. To truly survive, it needs a brain.
