Topic Title: Complex Environment Perception and Adaptive Vision Algorithm

 

Technical Area: Complex Environment Feature Learning

 

Background

With the development of computer vision technology in recent years, the feature representations learned by deep vision networks achieve state-of-the-art performance not only on classification but also on various other visual recognition tasks such as scene recognition and detection. Computer vision has also attracted great interest in the security area and is used in various surveillance applications. For example, if an accident is captured by a surveillance camera, the system can automatically search the videos captured by nearby cameras and locate the wreckage of the accident immediately. We expect computer vision algorithms to be applicable to people, cars, events, things, and other urban entities.

 

However, these application scenarios are easily affected by environmental factors, e.g., weather, seasons, and illumination. Moreover, current algorithms lack city-level information such as road construction recognition, weather and illumination perception, camera technology, and the modeling of installation parameters. In this situation, applying an algorithm to a realistic city perception scene becomes a complicated problem. We believe that adaptive vision algorithms are a key technology in real-world applications.

 

In the coming decade, adaptive vision algorithms will continue to succeed in many industrial challenges that rely on data- and knowledge-driven technologies. Here, the preprocessing of complex scenes is one of the most important issues. Extracting knowledge from data and making full use of video information are two possible solutions that could lead to success in many areas. Besides, we also consider using other information obtained through foreground-background detection, road construction recognition, etc.

 

Here, we invite experts in the related fields to work on new frameworks and to design learning and inference methods, which will eventually help us handle complex real-world situations through adaptive vision algorithms.

 

Target

Based on visual data and the complex environmental information of open urban scenes, we aim to integrate ancillary data to achieve comprehensive analysis and understanding of urban participants such as people, cars, things, and events. Moreover, we also aim to achieve comprehensive analysis, understanding, and intervention in large-scale visual cloud computing scenarios, based on videos and other auxiliary data from urban traffic, security, and urban construction.

 

Complex environment perception algorithms cover the following research areas: foreground and background detection; scene segmentation, e.g., of roads and buildings; active-area detection of effective objects; object detection in abnormal scenes; imaging chromatic aberration estimation; monitoring and estimation of traffic and passenger flows; season perception; weather perception, such as cloudy, sunny, rain, snow, and haze; and illumination perception.

 

The adaptive vision algorithm models environmental factors and forms feature representations across multiple dimensions, such as environmental information, time series, and spatial content. It also addresses the problem that current visual algorithms mainly rely on individual video frames and cannot fully exploit a video's spatial-temporal context. In this way, the adaptive vision algorithm can improve recognition accuracy in specific applications such as pedestrian, vehicle, object, and event recognition, and improve the algorithm's ability to generalize across different scenes.
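
 

To make this concrete, here is a minimal sketch assuming a PyTorch-style design; the class name AdaptiveContextModel, the discrete environment vocabulary, and the GRU aggregator are illustrative choices rather than this topic's prescribed model. Pre-extracted per-frame features are fused with an embedding of an environmental attribute (e.g., weather) and then aggregated over time.

    import torch
    import torch.nn as nn

    class AdaptiveContextModel(nn.Module):
        def __init__(self, frame_dim=512, env_vocab=16, env_dim=32, hidden=256, num_classes=10):
            super().__init__()
            # Embedding for a discrete environmental attribute (e.g., a weather id).
            self.env_embed = nn.Embedding(env_vocab, env_dim)
            # Fuse visual and environmental features per frame.
            self.fuse = nn.Linear(frame_dim + env_dim, hidden)
            # A GRU aggregates the fused features over the temporal dimension.
            self.temporal = nn.GRU(hidden, hidden, batch_first=True)
            self.classifier = nn.Linear(hidden, num_classes)

        def forward(self, frame_feats, env_ids):
            # frame_feats: (batch, time, frame_dim) pre-extracted CNN features
            # env_ids:     (batch,) discrete environment label per clip
            b, t, _ = frame_feats.shape
            env = self.env_embed(env_ids).unsqueeze(1).expand(b, t, -1)
            fused = torch.relu(self.fuse(torch.cat([frame_feats, env], dim=-1)))
            _, last = self.temporal(fused)            # last hidden state: (1, batch, hidden)
            return self.classifier(last.squeeze(0))   # (batch, num_classes)

    # Example usage with random tensors standing in for real data.
    model = AdaptiveContextModel()
    feats = torch.randn(2, 8, 512)    # 2 clips, 8 frames each
    env = torch.tensor([3, 7])        # e.g., "rain" and "haze" as indices
    logits = model(feats, env)        # shape: (2, 10)

The same pattern could extend to continuous attributes, such as illumination level, by replacing the embedding with a small MLP.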

 

Therefore, we intend to integrate multi-source heterogeneous data through a multi-view learning scheme and realize comprehensive, multi-dimensional perception of urban participants, including pedestrians, vehicles, objects, events, etc. We hope to improve the accuracy of pedestrian and vehicle localization and the capability of trajectory mining. We also aim to ensure that dangerous or controlled items are found in time and that unusual events, such as mob fights and fires, are discovered faster and more accurately. Eventually, we can realize comprehensive real-time understanding, mining, intervention, and other tasks applied to urban traffic, security, urban construction, and other large-scale visual cloud computing scenes.
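
 

As one possible reading of the multi-view learning scheme, the sketch below (again PyTorch-based, with illustrative source types and dimensions) gives each heterogeneous source its own encoder head and averages the per-view predictions; late fusion is only one of several reasonable strategies for combining such views.

    import torch
    import torch.nn as nn

    class MultiViewFusion(nn.Module):
        def __init__(self, view_dims, num_classes):
            super().__init__()
            # One small encoder/classifier head per heterogeneous source
            # (e.g., video features, trajectory statistics, weather records).
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, num_classes))
                for d in view_dims
            )

        def forward(self, views):
            # views: a list of tensors, one per source, each of shape (batch, view_dims[i])
            logits = [head(v) for head, v in zip(self.heads, views)]
            return torch.stack(logits).mean(dim=0)  # average the per-view scores

    # Example usage with three placeholder sources.
    model = MultiViewFusion(view_dims=[512, 64, 8], num_classes=5)
    views = [torch.randn(4, 512), torch.randn(4, 64), torch.randn(4, 8)]
    scores = model(views)  # shape: (4, 5)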

 

Related Research Topics

Many recent computer vision algorithms benefit the perception of urban scenes, such as super-resolution, object detection, scene segmentation, person re-identification (Re-ID), and so on.

  1. Super-resolution: Super-resolution is a class of techniques that enhance the resolution of scene images. However, how to enhance images captured in real-world scenarios is still challenging.

 

  2. Object detection: The aim of object detection algorithms is to accurately locate and identify objects in an image, e.g., pedestrians and vehicles. In real scenes, many tiny objects and obstacles appear around the target objects, which poses great challenges for this kind of algorithm.

 

  3. Scene segmentation: Scene segmentation algorithms can be used to understand all kinds of scenes in the city, such as buildings, roads, and landscapes.

 

  4. Person re-identification: The purpose of person Re-ID is to identify the same individual across different cameras. Using this technology, we can recover a pedestrian's trajectory within a certain area. However, changes in seasons, weather, illumination, and other factors dramatically increase the difficulty of implementing such algorithms.
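
 

To illustrate the cross-camera matching step of Re-ID independently of any particular feature extractor, the following NumPy sketch ranks gallery identities against a query embedding by cosine similarity; the embeddings, identity labels, and dimensionality are placeholders rather than a prescribed pipeline.

    import numpy as np

    def match(query_emb, gallery_embs, gallery_ids):
        # L2-normalize so that the dot product equals cosine similarity.
        q = query_emb / np.linalg.norm(query_emb)
        g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
        sims = g @ q
        order = np.argsort(-sims)  # ranked gallery list, best match first
        return [(gallery_ids[i], float(sims[i])) for i in order]

    # Toy example: three gallery persons with 128-d embeddings.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(3, 128))
    query = gallery[1] + 0.05 * rng.normal(size=128)  # a noisy view of person "p2"
    ranking = match(query, gallery, ["p1", "p2", "p3"])
    print(ranking[0])  # "p2" should rank first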