
Humanoid robots walking, running, and climbing stairs are no longer unfamiliar sights. Robots carrying parts in factories and moving boxes in logistics centers have become commonplace. However, experts say much more time is needed before truly complete humanoids emerge. The reason is surprisingly simple: the "hands" of robots that otherwise mimic human movement remain unfinished.
In the real world, objects that humans grasp with their hands are unpredictable. Even cups vary in shape, material, and center of gravity. Eggs break easily if mishandled, plastic bags tear under pressure, and wet glass is slippery. Humans instinctively know where and how firmly to grip such objects the moment they see them. Robots cannot do this. They must learn or calculate every situation. The problem is that learning every object in the world through data is virtually impossible. For this reason, scientists are expanding research beyond "more training" toward developing more sophisticated hands themselves.
Dense Sensors on Robot Hands Replicate Human Touch
The most representative approach is embedding tactile sensors in robot hands. In research on the "F-TAC Hand (Full-Hand Tactile-Embedded Biomimetic Hand)," a joint team from Peking University and the Beijing Institute for General Artificial Intelligence (BIGAI), with doctoral student Li Yuyang as co-first author, covered more than 70% of the hand's surface with high-resolution tactile sensors. With sensors this densely packed, pressure changes can be detected at 0.1 mm intervals, enabling precise detection of where slippage begins. This goes beyond simple "contact" signals to read pressure distribution like a map.
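The idea of reading pressure distribution "like a map" can be made concrete with a toy sketch. The grid size, thresholds, and the rule itself (a sharp local pressure drop at a taxel that was in contact signals incipient slip) are illustrative assumptions, not the F-TAC Hand's actual algorithm:

```python
import numpy as np

def detect_slip(prev_frame: np.ndarray, curr_frame: np.ndarray,
                drop_threshold: float = 0.3) -> list:
    """Flag taxels whose pressure dropped sharply between two frames.

    A sudden local pressure drop at the edge of a contact patch is a
    common early sign of incipient slip. Thresholds are illustrative.
    """
    contact = prev_frame > 0.05                      # taxels that were in contact
    drop = (prev_frame - curr_frame) / np.maximum(prev_frame, 1e-6)
    slipping = contact & (drop > drop_threshold)
    return [(int(i), int(j)) for i, j in np.argwhere(slipping)]

# Simulated 4x4 pressure maps: one edge taxel suddenly loses most of its load.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[0, 3] = 0.1                                     # sharp drop -> incipient slip
print(detect_slip(prev, curr))                       # -> [(0, 3)]
```

A real sensor array would run this comparison hundreds of times per second, letting the controller tighten its grip before the object actually moves.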
The "GelSight" sensor developed by Professor Edward H. Adelson's team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is another representative vision-based tactile technology. This sensor uses a camera to photograph the deformation of a gel surface inside the fingertip and computes the contact shape and force from the image. It converts tactile information into optical signals, producing quantifiable data. These technologies aim to enable robots not just to grasp objects but to feel and adjust like humans.
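How an image can stand in for force is the key trick here. The sketch below uses tracked marker positions on the gel as a proxy for deformation: the mean in-plane displacement approximates shear, and the outward spread of markers approximates indentation. The gain constants and the linear model are made-up assumptions for illustration, not the calibration of the MIT sensor:

```python
import numpy as np

def estimate_forces(markers_rest: np.ndarray, markers_deformed: np.ndarray,
                    k_shear: float = 2.0, k_normal: float = 5.0):
    """Rough vision-based tactile readout from marker displacements.

    k_shear and k_normal are hypothetical calibration gains.
    """
    disp = markers_deformed - markers_rest           # per-marker (dx, dy) in the image
    shear = k_shear * disp.mean(axis=0)              # mean in-plane motion ~ shear force
    # Markers spreading away from their centroid ~ gel indentation (normal force)
    rest_spread = np.linalg.norm(
        markers_rest - markers_rest.mean(axis=0), axis=1).mean()
    def_spread = np.linalg.norm(
        markers_deformed - markers_deformed.mean(axis=0), axis=1).mean()
    normal = k_normal * max(def_spread - rest_spread, 0.0)
    return shear, normal

# Four markers, pressed in (uniform 10% expansion) and pushed sideways.
rest = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
deformed = rest * 1.1 + np.array([0.2, 0.0])
shear, normal = estimate_forces(rest, deformed)
```

Because the output is an ordinary image, standard computer-vision and deep-learning tools can be applied directly to tactile data, which is a large part of the approach's appeal.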

Cameras Enable Stable Grasping That Mimics Human Hands
However, sensor-based approaches have limitations: they only provide information after objects are already grasped. Determining where to grip before grasping is another problem entirely. This led to vision-based approaches, where robots use cameras to view objects, analyze their three-dimensional shapes, and find stable grasping positions.
A representative example is the 2018 paper "More than a Feeling: Learning to Grasp and Regrasp using Vision and Touch," published in IEEE Robotics and Automation Letters. This joint research involving Professor Sergey Levine's team at UC Berkeley and MIT's Professor Adelson presented a model that learns regrasping ability by combining visual and tactile information. Robots view objects with cameras, then adjust hand positions to find more stable postures. Rather than grasping once, they use visual information after grasping to adjust to better positions. Results showed that combining visual and tactile information significantly improved grasping success rates.
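The regrasping loop described above can be sketched as follows. This is not the paper's learned model; it is a hypothetical control loop in the same spirit, where vision ranks candidate grasps and a post-contact tactile score vetoes unstable ones. All names, scores, and the 0.6 threshold are invented for illustration:

```python
def grasp_with_regrasp(candidates, tactile_stability, max_attempts=3, threshold=0.6):
    """Try vision-ranked grasps until touch confirms a stable hold.

    candidates: dicts with a "vision_score" field (higher = better-looking grasp).
    tactile_stability: callable returning a post-contact stability score in [0, 1].
    """
    ranked = sorted(candidates, key=lambda c: c["vision_score"], reverse=True)
    for attempt, cand in enumerate(ranked[:max_attempts], start=1):
        if tactile_stability(cand) >= threshold:     # touch confirms the grasp
            return cand, attempt
    return None, max_attempts                        # give up after max_attempts

# A grasp that looks good to the camera can still feel unstable on contact.
candidates = [
    {"name": "rim",    "vision_score": 0.9, "true_stability": 0.4},
    {"name": "handle", "vision_score": 0.7, "true_stability": 0.8},
]
chosen, attempts = grasp_with_regrasp(candidates, lambda c: c["true_stability"])
```

In this toy run the vision-preferred "rim" grasp fails the tactile check, so the robot regrasps at the "handle" on the second attempt, which mirrors the paper's core finding that the two modalities correct each other.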
Recent research increasingly focuses on recognizing objects three-dimensionally like 3D models rather than as flat images. This enables estimations about unfamiliar objects based on shape, such as "this part is sturdier" or "this point is risky."

"Hands" Will Be Key to Humanoid Robot Industry
Scientists believe that combining three approaches—tactile sensors, vision recognition, and physics-based force prediction—is necessary to approach "complete hands." Even excellent sensors are meaningless without interpretation capability. Superior vision cannot stably grasp unfamiliar objects without actual force calculation. Even sophisticated physics calculations falter if initial contact detection is inaccurate. Ultimately, robot hands are not a single technology but a complex system.
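A toy grip-force calculation shows how the three ingredients interlock. Here a physics model (Coulomb friction on two opposing fingertips) sets the force needed to hold the object, a fragility limit that vision would supply caps it, and tactile contact confirmation gates execution. The two-finger model, the 1.5 safety margin, and all numbers are illustrative assumptions:

```python
def grip_force(mass_kg: float, mu: float, fragile_limit_n: float,
               contact_confirmed: bool, g: float = 9.81,
               margin: float = 1.5) -> float:
    """Toy fusion of touch, vision, and physics for choosing grip force."""
    if not contact_confirmed:
        return 0.0                              # touch gate: don't squeeze yet
    # Physics: two fingertips must generate enough friction to beat gravity.
    needed = margin * mass_kg * g / (2 * mu)
    # Vision-derived fragility bound caps the force (e.g., "this is an egg").
    return min(needed, fragile_limit_n)

# A 60 g egg, friction coefficient 0.5, fragility cap of 5 N.
force = grip_force(0.06, 0.5, 5.0, contact_confirmed=True)
```

Remove any one component and the system degrades: without the tactile gate the hand squeezes thin air, without the physics term it guesses the force, and without the vision-derived cap it crushes fragile objects, which is exactly the "complex system" point above.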
For this reason, robot hand technology is regarded as both a scientific challenge and an industrial opportunity. UK market research firm IDTechEx projected in its report "Humanoid Robots 2025-2035" that the humanoid robot market will reach $30 billion (approximately 43 trillion won) by 2035. The current industrial robot gripper market amounts to only a few billion dollars, but if household and service humanoids spread, the related parts and software markets could expand several-fold. The report stated: "As of 2025, average humanoid selling prices remain high due to expensive components and low production volumes. Production will expand only after overcoming technical challenges including limited battery capacity reducing operating time and developing advanced tactile sensor-based hands for delicate tasks."
