Theme Title: Model Simplification Techniques for Deep Neural Networks


Technical Area: Machine Learning


Background

As neural networks become deeper and more complex, the sizes of such networks and the numbers of operations they require keep growing. This not only makes hardware acceleration harder, but also makes it impossible to map large models onto small devices.


Target

With neural network model simplification techniques, we target not only smaller, compressed models that can be deployed onto terminal devices such as autonomous robots, self-driving cars, and IoT devices, but also condensed data and computation for deep learning workloads in data centers, saving storage, bandwidth, redundant computation, and thus power consumption.


We target not only static model compression techniques applied during training, but also dynamic computation-saving techniques applied during inference.


We aim at model simplification not only in the space domain, but also in the time domain, as well as through tensor dimensionality reduction; a sketch of the latter is given below.
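
As one concrete illustration of tensor dimensionality reduction, the following is a minimal PyTorch sketch that factorizes a linear layer's weight matrix with a rank-r truncated SVD, replacing one large layer with two smaller ones. The layer sizes and the rank here are illustrative assumptions, not figures from any of our models.

# Minimal sketch: low-rank factorization of a Linear layer via truncated SVD.
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    # W has shape (out_features, in_features); W ~= (U_r * S_r) @ V_r
    U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
    U_r = (U[:, :rank] * S[:rank]).contiguous()   # fold singular values into U
    V_r = Vh[:rank, :].contiguous()
    first = nn.Linear(layer.in_features, rank, bias=False)          # in -> rank
    second = nn.Linear(rank, layer.out_features,
                       bias=layer.bias is not None)                 # rank -> out
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

layer = nn.Linear(1024, 1024)                 # 1,048,576 weights
low_rank = factorize_linear(layer, rank=64)   # 2 * 64 * 1024 = 131,072 weights
x = torch.randn(8, 1024)
print(torch.dist(layer(x), low_rank(x)))      # approximation error

The same idea extends to higher-order weight tensors (e.g., convolution kernels) via tensor decompositions such as CP or Tucker.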


We want to see these techniques not only prove efficient in experiments, but also be applied to some of our real application scenarios, demonstrating measurable savings in memory, computation, and/or power.


Related Research Topics

On the static model compression side, scholars such as Dr. Song Han have done great work with techniques like model pruning, parameter quantization, and Huffman coding (combined, for example, in the Deep Compression pipeline), as sketched below.
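
The following is a minimal sketch of such a static compression pipeline, assuming PyTorch's built-in pruning and dynamic quantization utilities; the toy model and the sparsity level are illustrative assumptions.

# Minimal sketch: magnitude pruning followed by 8-bit weight quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# An illustrative toy model (assumption, not one of our production models).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Step 1: magnitude pruning -- zero out the 60% smallest-magnitude weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")        # bake the zeros into the weight

# Step 2: dynamic quantization -- store Linear weights as 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Step 3 in Deep Compression would be Huffman coding of the quantized
# weights for storage; that stage is omitted in this sketch.
x = torch.randn(1, 784)
print(quantized(x).shape)                     # torch.Size([1, 10])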


On the dynamic simplification side, some scholars have proposed ideas such as SkipNet, which skips certain layers at run time based on properties of the input data. These are good examples of the kind of techniques we are interested in.
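
Below is a minimal sketch of this kind of input-dependent layer skipping, assuming a small learned gate that makes one hard skip decision per batch. This is an illustration of the idea, not the exact SkipNet architecture, which gates per sample and trains its gates with reinforcement learning or soft relaxations.

# Minimal sketch: a residual block guarded by a tiny learned skip gate.
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.gate = nn.Linear(dim, 1)   # maps pooled features to a skip score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One hard decision per batch (a simplification of per-sample gating).
        score = torch.sigmoid(self.gate(x.mean(dim=0)))
        if score.item() > 0.5:
            return x + self.body(x)     # execute the block
        return x                        # skip: identity shortcut, compute saved

model = nn.Sequential(*[GatedBlock(64) for _ in range(4)])
x = torch.randn(8, 64)
print(model(x).shape)                   # torch.Size([8, 64])

When a block is skipped, its computation is genuinely avoided at inference time, which is where the dynamic savings in computation and power come from.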