Amid a flurry of hyperbole, Xpeng recently demonstrated its newest robot walking with such a realistic gait that some viewers claimed a human must be inside a suit. Others suspected a hidden “driver” manipulating the robot to mirror their movements. The Xpeng crew obligingly cut away the robot’s skin to reveal a sophisticated array of metal, joints, and hydraulics. So how does this robot achieve its smooth gait?
First, the bionics are built to resemble human anatomy. Clever materials make this possible: solid metals are usually too rigid, so more flexible composites are used, allowing the robot’s “muscles” to behave more like our own. Second, the robot’s limbs have what engineers call degrees of freedom. A rod that can only rotate about a single axis has just one degree of freedom. For example, the steering column of a bicycle can only rotate; it cannot shift up, down, or side to side.
Engineers are wary of high degrees of freedom because each added possibility creates an explosion of complexity. Imagine a bicycle whose steering column could also slide vertically. At any moment, where exactly is the wheel? Humans cope with these variables intuitively (we don’t consciously calculate them), but the mathematics is formidable.
Now imagine adding sideways movement to the steering column as well. The bike becomes unrideable because we cannot manage three simultaneous degrees of movement. Each additional degree of freedom makes it harder to determine where a limb is in space.
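This explosion of complexity is easy to quantify with a back-of-the-envelope calculation. If each degree of freedom is discretised into some number of possible positions (the figure of 100 below is an arbitrary illustration), the number of distinct configurations grows exponentially:

```python
# If each degree of freedom can take 100 discrete positions, the number
# of distinct configurations is 100 raised to the number of degrees of
# freedom -- exponential growth.
settings_per_dof = 100
for dof in (1, 2, 3, 6):
    print(f"{dof} DOF -> {settings_per_dof ** dof:,} configurations")
```

One degree of freedom gives 100 possibilities; six give a trillion.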
Consider a cheetah moving at 100 km/h, twisting abruptly in pursuit of prey. With multiple joints, a tail acting as a stabiliser, a flexible backbone, and a highly mobile head and neck, how can a cheetah’s brain possibly coordinate everything? Surely the calculations would exceed any computer we currently possess.
In truth, the brain makes educated guesses. Here is roughly how this works.
During infancy, the roly-poly cub tries to stand. To do this, it may need to combine five movements. Mathematicians represent this as a “vector,” for example:
[10, 3, 4, 9, 2] — where each number is the strength of a movement.
The cub experiments:
[10, 1, 1, 1, 10], [5, 7, 2, 9, 0], [12, 4, 4, 3, 9], and so on.
If a combination doesn’t help it achieve its aim (standing up), it is discarded. Promising attempts are reinforced until, after what seems like an eternity, the cub finally stands. After that, the correct “guess” is reused automatically.
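The cub’s trial-and-error search can be sketched as a simple random search over movement vectors. This is a minimal illustration, not a claim about how brains actually compute: the `stability` scoring function below is an invented stand-in for “how close this combination gets the cub to standing,” and the target vector is arbitrary.

```python
import random

def stability(movement):
    """Hypothetical score for a combination of five movement strengths:
    higher is closer to standing. We pretend the ideal combination
    happens to be [10, 3, 4, 9, 2]."""
    target = [10, 3, 4, 9, 2]
    return -sum((m - t) ** 2 for m, t in zip(movement, target))

random.seed(0)
best = [random.randint(0, 12) for _ in range(5)]  # first wobbly attempt

for _ in range(10_000):  # many attempts over "infancy"
    guess = [random.randint(0, 12) for _ in range(5)]
    if stability(guess) > stability(best):
        best = guess  # promising attempts are reinforced

# The best guess found so far is kept and reused automatically.
print(best)
```

Unhelpful combinations are simply discarded; the best one found so far is retained, exactly as the article describes.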
Evolution has given mammals the ability to nurture their young through a long experimental window in which these “correct guesses” are developed through trial and error. This system works — but has serious downsides. The cub must be fed and protected for months or years, costing enormous energy. And during this experimental period, the cub is extremely vulnerable. If intruding male lions attack, one can occupy the mother while the other kills the cub. This vulnerability is a major evolutionary burden.
The Xpeng robot deals with similar challenges using heavy-duty computing and sophisticated AI. Its gait emerges through the AI making guesses, testing them, and refining them. Engineers simply supply an initial “educated guess” rather than waiting through a nurturing period.
The idea that intricate movement is achieved through gradual refinement of guesses is counterintuitive. For 200 years, from the governor on a steam engine onward, we have understood that feedback control systems are highly effective — but as any control-systems engineer knows, such systems must still be “tweaked” because the exact mathematics of perfect control is often impossible to derive. Nature, in short, resists precise control.
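A feedback loop of the kind the steam governor pioneered can be sketched as a proportional controller. The gain `k` below is exactly the sort of parameter that must be “tweaked” by hand, since its ideal value rarely falls out of the mathematics; the system model here is a deliberately simplified stand-in.

```python
def run_controller(k, setpoint=100.0, steps=50):
    """Drive a simple first-order system toward `setpoint` using
    proportional feedback: each step applies correction = k * error."""
    speed = 0.0
    for _ in range(steps):
        error = setpoint - speed
        speed += k * error  # the feedback correction
    return speed

# A gentle gain converges; too large a gain overshoots and diverges.
print(run_controller(0.2))   # settles very close to 100
print(run_controller(2.5))   # unstable: the error grows every step
```

With `k = 0.2` the error shrinks by a factor of 0.8 each step; with `k = 2.5` it grows by a factor of 1.5, and the system runs away. Finding a gain that behaves well across all conditions is the tuning problem the paragraph describes.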
This makes systems like passenger aircraft — which rely on multiple interlocking control loops — slightly unnerving. Even after decades of autopilot development, humans remain in the cockpit for good reason.
But all is not lost. Brilliant researchers have begun implementing stochastic control systems. “Stochastic” means “involving randomness”: in effect, control by educated guessing. The idea is powerful: instead of testing every possible solution one after another, you test a large sample of possibilities in parallel and select the best outcome.
Here’s how to think about it.
Place 1000 people in front of 1000 basketball hoops and have each shoot simultaneously. Suppose 32 succeed. Now study those 32 successful movements. Perhaps two of them send the ball cleanly through the hoop without touching the rim or backboard. Those two represent the “optimal” solutions.
A single mass trial has effectively reduced 1000 possibilities to 2 — instantly.
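The basketball thought experiment maps directly onto parallel sampling: generate every attempt at once, score them all simultaneously, and keep the best. A sketch with NumPy, in which the “perfect shot” and the scoring thresholds are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# 1000 shooters each release the ball with a random angle (degrees)
# and a random speed (m/s). A hypothetical perfect shot is 52 degrees
# at 7.3 m/s.
angles = rng.uniform(30.0, 70.0, size=1000)
speeds = rng.uniform(5.0, 10.0, size=1000)

# Score every shot at once: distance from the perfect shot
# (speed error scaled so both quantities are comparable).
error = np.hypot(angles - 52.0, (speeds - 7.3) * 10.0)

made_it = error < 5.0      # shots that go in at all
best = np.argmin(error)    # the cleanest "nothing but net" shot

print(made_it.sum(), angles[best], speeds[best])
```

One vectorised pass scores all 1000 attempts and picks the winner; no shot is evaluated sequentially.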
Nature helps us here. The universe is not precise. A wire nominally carrying “0 to 10 volts” may read 4.87 V, 5.13 V, or 6.02 V, but we treat anything above ~5 V as ON and anything below as OFF. This tolerance of imprecision made digital computing possible.
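The digital trick described above is a one-line threshold. A sketch, with the ~5 V cut-off from the example:

```python
def digital_read(voltage, threshold=5.0):
    """Interpret a noisy analog voltage as a clean digital bit:
    anything above the threshold counts as ON, anything below as OFF."""
    return voltage > threshold

# Imprecise real-world readings collapse into clean digital values.
readings = [4.87, 5.13, 6.02]
print([digital_read(v) for v in readings])  # [False, True, True]
```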
Stochastic control flips the idea. Instead of forcing precision, it uses the natural spread of values. When energy is fed into a material, the output may fall anywhere within a range. If we capture all those outputs, we can see which one “shoots the hoop” — the optimal value emerges from the noise. Suppose the best value is 6.365. We adopt it immediately. We did not need infinite trials; the optimum revealed itself the first time.
This approach is called Stochastic Optimal Control (SOC), and it has the potential to revolutionise robotics and engineered systems. An SOC system in an aircraft could evaluate a million control options in a single sweep, in milliseconds, and select the best one. No iteration. No lengthy trial and error. The ball goes through the hoop touching nothing but the net.
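The single-sweep idea can be sketched as a vectorised search: draw a million candidate control values, evaluate a cost for all of them at once, and take the best. The cost function below is invented purely to illustrate the mechanics; a real aircraft controller would score candidates against a physical model.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One million candidate control settings from a plausible range
# (hypothetical units).
candidates = rng.uniform(-1.0, 1.0, size=1_000_000)

def cost(u):
    """Invented stand-in for 'how bad is this control action':
    a bumpy function whose minimum is not obvious in advance."""
    return (u - 0.3) ** 2 + 0.1 * np.sin(15 * u)

# Evaluate every candidate in one vectorised sweep, then pick the best.
best = candidates[np.argmin(cost(candidates))]
print(best)
```

The whole sweep is a single array operation; the optimum emerges from the cloud of random candidates without any sequential refinement.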
Stand by for the real digital revolution — not bigger and better AI, but radically smarter calculations inspired directly by nature. Move over traditional mathematics and make way for engineering built on a new paradigm.