
"A robot truly gains hands only when it can predict 'where and how to grip safely' just by looking."
Hyung-jin Chang, a professor at the University of Birmingham in the UK, described the essence of robotic hand research this way. His team is developing "Force-Aware 3D Contact Modeling" to enable robots to grasp objects more stably. At its core, the research goes beyond visual perception to predict the magnitude and direction of the force each finger must apply when gripping an object.
"Most robots today determine finger positions from an object's shape alone, so a force imbalance often makes the object slip when it is lifted," Chang explained. "There is a gap between 'seeing' and 'grasping.'"
His team bridged this gap using physics principles. The AI first estimates contact points when a human would grip an object, then calculates the force direction needed to counteract gravity. For example, instead of gripping a wine glass from above, the system guides the robot to cup it from below, offsetting gravitational pull.
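The idea of choosing contact forces that cancel gravity can be illustrated with a small sketch. This is not the team's actual model; it is a minimal example, assuming a simplified setup in which each contact can push only along one given unit direction, and solving by least squares for the force magnitudes whose sum offsets the object's weight.

```python
import numpy as np

def contact_forces_for_gravity(contact_dirs, mass, g=9.81):
    """Solve for per-contact force magnitudes whose sum cancels the object's weight.

    contact_dirs: (n, 3) unit vectors along which each finger can push
    (a hypothetical simplification: one push direction per contact).
    Returns (magnitudes, residual); a large residual means the chosen
    contacts cannot balance gravity by pushing.
    """
    weight = np.array([0.0, 0.0, -mass * g])        # gravity acts along -z
    D = np.asarray(contact_dirs, dtype=float).T     # 3 x n direction matrix
    # Find magnitudes m with D @ m ~= -weight (contact forces cancel gravity)
    m, *_ = np.linalg.lstsq(D, -weight, rcond=None)
    residual = D @ m + weight                       # leftover unbalanced force
    return m, residual

# Cupping a 0.2 kg glass from below: two contacts pushing inward and upward.
mags, res = contact_forces_for_gravity(
    [[0.6, 0.0, 0.8], [-0.6, 0.0, 0.8]], mass=0.2
)
```

With both push directions tilted upward, the solver finds two equal, modest forces and a near-zero residual; the same geometry flipped to push downward (gripping from above) would require negative, i.e. pulling, forces, which fingers cannot exert.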
The team trained the model through tens of thousands of virtual experiments using physics simulators. "By designing the system to learn both contact points and force directions simultaneously, we improved 'stable grasp' performance by approximately 30%," Chang said. "While existing vision-based research focused on where to grip, our research calculates how to apply force to prevent dropping, considering physical stability."
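Learning contact points and force directions "simultaneously" typically means a multi-task training objective. The sketch below is a hypothetical form of such a loss, not the paper's: a mean-squared error on predicted contact points plus a cosine term penalizing misaligned force directions, with an assumed weighting `w`.

```python
import numpy as np

def grasp_loss(pred_points, true_points, pred_dirs, true_dirs, w=0.5):
    """Hypothetical joint objective: contact-point MSE plus
    (1 - cosine similarity) on force directions, mixed by weight w."""
    pred_points = np.asarray(pred_points, float)
    true_points = np.asarray(true_points, float)
    point_loss = np.mean(np.sum((pred_points - true_points) ** 2, axis=-1))
    # Normalize directions so only orientation, not magnitude, is compared.
    pn = pred_dirs / np.linalg.norm(pred_dirs, axis=-1, keepdims=True)
    tn = true_dirs / np.linalg.norm(true_dirs, axis=-1, keepdims=True)
    dir_loss = np.mean(1.0 - np.sum(pn * tn, axis=-1))
    return w * point_loss + (1.0 - w) * dir_loss
```

A perfect prediction drives both terms to zero, while a grasp with correct contact points but reversed force directions is still penalized, which is exactly the failure mode a purely positional loss would miss.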
The research was presented at AAAI 2026, a premier AI conference held in Singapore, drawing significant attention from researchers. The technology proved particularly effective when handling complex tools and household appliances.
Once matured, this technology is expected to bring innovation across every field requiring physical AI, from household robots to rescue robots operating in disaster zones and manufacturing processes that demand delicate manipulation.
While academia and industry have focused on "walking" in humanoid robot development, the field is now transitioning to integrating locomotion and manipulation. Hands have emerged as a critical element for humanoid commercialization. However, perfecting robotic hands remains challenging.
"Hands have high degrees of freedom and compact structures, making it difficult to install sufficient motors," Chang explained.
For this reason, no single technology—whether sensors, vision, or physics—can complete a robotic hand alone. "Tactile and pressure sensor technologies are advancing rapidly, but approaches vary and no definitive solution exists yet," he noted. "A multimodal approach is needed where vision predicts force direction and tactile sensors provide fine adjustments."
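The multimodal division of labor Chang describes, vision proposing a force direction and touch correcting it, can be sketched as a simple feedback update. The function below is an illustrative assumption, not an existing system: it nudges the vision-predicted direction against a measured tactile slip direction and re-normalizes.

```python
import numpy as np

def refine_force_direction(vision_dir, tactile_slip, gain=0.3):
    """Blend a vision-predicted force direction with a tactile correction.

    vision_dir: unit vector for the force direction predicted from vision.
    tactile_slip: slip direction measured by tactile sensors; we push
    slightly against it (a hypothetical complementary-filter-style update).
    Returns a re-normalized unit direction.
    """
    corrected = np.asarray(vision_dir, float) - gain * np.asarray(tactile_slip, float)
    return corrected / np.linalg.norm(corrected)
```

With no detected slip the vision prediction passes through unchanged; when the object starts sliding, the commanded force tilts against the slip, which is the "fine adjustment" role tactile sensing plays in Chang's description.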
"Robotic hand development is a comprehensive art form," Chang added. "Completing a single hand requires integrating all technologies—vision, touch, and physics."
