ALIBABA, TSINGHUA TO ADVANCE HUMAN-COMPUTER INTERACTION
    2018-04-03    Alibaba Tech

“As artificial intelligence and data technologies advance, humans’ senses and emotions will be further digitized and become new modes of interaction with machines,” Alibaba Group Chief Technology Officer Jeff Zhang said.

The new venture, a joint laboratory with Tsinghua University, is the first HCI-focused lab unveiled by Alibaba since the company launched its global research institute, DAMO Academy, last October. The $15 billion DAMO initiative for fundamental and disruptive research, in which HCI was listed as a key area, is an important part of the company’s efforts to broaden its technological expertise beyond e-commerce.

The joint lab will bring together experts from both organizations in cognitive science, linguistics, physiology and aesthetics. Its operations and research directions will be led by Xu Yingqing, director of The Future Lab at Tsinghua University, and Paul Fu, Alibaba Group Senior Director of User Experience.

Alibaba Group CTO Jeff Zhang and Professor Bin Yang, vice president of Tsinghua University, attend the launch ceremony in Beijing.

While HCI may seem impenetrable to the layman, with terms such as “affective computing” and “multimodal perception and interaction,” the science offers highly useful real-world applications. These include vehicle cockpits that use touch stimulation for safer driving, since studies show drivers react faster to touch than to visual signals, as well as product design guided by digital analysis of consumers’ reactions to products.
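
As a rough illustration of the product-design use case, the hypothetical Python sketch below aggregates simulated consumer reaction scores for several design variants and ranks them. The variant names, scoring scale and aggregation method are illustrative assumptions, not details from Alibaba’s or Tsinghua’s work.

```python
# Illustrative only: rank hypothetical product design variants by the average
# "reaction score" (e.g., a valence estimate produced by affective-computing tools).
# All data and names here are made up for demonstration.
from statistics import mean

# Simulated per-consumer reaction scores (1 = strongly negative, 5 = strongly positive)
reaction_scores = {
    "design_a": [4.2, 3.8, 4.5, 4.0],
    "design_b": [3.1, 2.9, 3.6, 3.3],
    "design_c": [4.8, 4.4, 4.6, 4.7],
}

# Average each variant's scores and sort from most to least preferred
ranked = sorted(
    ((variant, mean(scores)) for variant, scores in reaction_scores.items()),
    key=lambda item: item[1],
    reverse=True,
)

for variant, avg in ranked:
    print(f"{variant}: average reaction {avg:.2f}")
```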

“By making machines better understand and communicate with humans, HCI is expected to revolutionize various industries and make profound impacts on how we work and live,” said Professor Bin Yang, vice president of Tsinghua University.

Alibaba is no stranger to “combining the senses” to improve the user experience. In December, the company developed voice-recognition ticket kiosks for the Shanghai Metro that can pick up a user’s voice from several meters away, even in noisy environments. The kiosks combine audio signal processing with computer vision to better identify the sound source.
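
To give a flavor of how audio and visual cues can be fused to identify a sound source, here is a minimal toy sketch. It assumes two made-up inputs, an audio direction-of-arrival likelihood and a vision-based detection likelihood over the same candidate angles, and simply combines them; it is not a description of the Shanghai Metro kiosk’s actual pipeline.

```python
# Toy multimodal fusion: combine an audio direction-of-arrival (DOA) likelihood
# with a vision-based detection likelihood over the same candidate angles,
# then pick the most likely source direction. All inputs are simulated.
import numpy as np

rng = np.random.default_rng(0)
angles = np.arange(-90, 91, 5)  # candidate directions in degrees

# Simulated audio DOA likelihood: broad peak near +20 degrees plus noise
audio_score = np.exp(-0.5 * ((angles - 20) / 25.0) ** 2) + 0.2 * rng.random(angles.size)

# Simulated visual likelihood: sharp peak near +15 degrees where a face is detected
visual_score = np.exp(-0.5 * ((angles - 15) / 8.0) ** 2) + 0.05 * rng.random(angles.size)

# Normalize each modality so neither dominates purely by scale
audio_score /= audio_score.sum()
visual_score /= visual_score.sum()

# Fuse by elementwise product (an independence-style assumption) and renormalize
fused = audio_score * visual_score
fused /= fused.sum()

best_angle = angles[np.argmax(fused)]
print(f"Estimated speaker direction: {best_angle} degrees")
```

The sharp visual peak pulls the fused estimate toward the detected person even when the audio estimate alone is noisy, which is the intuition behind combining the two modalities.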