
  • ADVERTISEMENT FEATURE Advertiser retains sole responsibility for the content of this article

Finding order in a swirl of information

Nankai researchers proposed a new image encryption algorithm based on hyper-chaos. Credit: Nankai University

Enhancing information security with hyper-chaotic systems

The need for data security and for authentication of encrypted images stored in the cloud is increasingly pressing. Existing encryption and decryption systems have vulnerabilities that make them inadequate for current and future threats. Gao Tiegang, a professor at the College of Software and an expert in hyper-chaotic systems, has made considerable progress in improving these systems.

Gao has developed four-dimensional hyper-chaotic systems with unique dynamic characteristics, and has investigated their properties and applications through both numerical simulation and electronic-circuit implementation. In his research, hyper-chaotic systems were generated on the basis of the Lorenz chaotic system and then realized in electronic circuits. Gao has also developed new image encryption algorithms that combine double verifiable image encryption with a reversible watermarking algorithm, ensuring the integrity of the decrypted image, a capability rarely available in existing decryption systems. His authentication scheme based on hyper-chaotic systems can guarantee the security of cloud storage and cloud computing, with potential applications in medical imaging and military intelligence processing. These achievements have led to Gao’s inclusion in Elsevier’s list of highly cited Chinese scholars in computer science for five consecutive years.
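To illustrate how a hyper-chaotic orbit can drive image encryption, the sketch below integrates a generic four-dimensional Lorenz-type system and XORs the quantised trajectory with the image bytes. The equations, parameters, and quantisation step are illustrative stand-ins, not Gao’s published algorithm.

```python
import numpy as np

def hyperchaotic_orbit(n, state, dt=0.001, a=10.0, b=8 / 3, c=28.0, r=-1.0):
    """Integrate a generic 4D Lorenz-type hyper-chaotic system (forward Euler).
    The fourth variable w and its coupling are illustrative assumptions."""
    x, y, z, w = state
    out = np.empty((n, 4))
    for i in range(n):
        dx = a * (y - x) + w          # Lorenz terms plus a fourth coupling
        dy = c * x - y - x * z
        dz = x * y - b * z
        dw = -y * z + r * w
        x, y, z, w = x + dt * dx, y + dt * dy, z + dt * dz, w + dt * dw
        out[i] = (x, y, z, w)
    return out

def xor_cipher(image, key_state):
    """XOR image bytes with a keystream quantised from the chaotic orbit.
    Because XOR is symmetric, the same call decrypts."""
    orbit = hyperchaotic_orbit(image.size, key_state)
    keystream = ((np.abs(orbit[:, 0]) * 1e6).astype(np.uint64) % 256).astype(np.uint8)
    return (image.flatten() ^ keystream).reshape(image.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher = xor_cipher(img, (1.0, 2.0, 3.0, 4.0))    # encrypt
plain = xor_cipher(cipher, (1.0, 2.0, 3.0, 4.0))  # decrypt with the same key
```

The key is the initial state of the system; hyper-chaotic sensitivity to initial conditions means even a slightly different key produces a completely different keystream.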

Machine learning for high impact biomedical problems

The number of biomedical research papers involving machine learning has grown exponentially over the past two decades. Machine learning is used to describe biomedical data numerically, model biological processes, and predict outcomes. Most importantly, it can deepen understanding of the underlying principles of biological systems by revealing relationships between their components and variables. As machine learning has matured and incorporated more nonlinear effects, it has been applied with growing success in biology. For example, techniques based on neural networks have performed very well in neural decoding, in which a subject’s intention is predicted from brain activity. Machine learning can be invaluable where human capacity to understand models of complex nonlinear biological systems fails.

Associate professor Han Zhi has focused on developing advanced pattern recognition, bioinformatics, and statistical methods for solving high-impact problems in bioscience. One of his important findings, recently published in Nature Neuroscience, concerns the structural and functional development of the mouse thalamus. The thalamus links the cortex to other regions of the brain and supports sensory, motor, and cognitive functions through its many nuclei, but the mechanisms of its structural development and functional organization remain poorly understood. Han and coworkers applied machine learning and pattern recognition techniques, especially clustering and segmentation, to analyze the complex developmental logic of the mouse thalamic structure, revealing that lineage relationships are a key regulator of the development of the non-laminated thalamus.
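To make the clustering step concrete, here is a minimal k-means sketch of the kind of unsupervised grouping that can partition samples with similar feature profiles into putative structures. The data and features are synthetic stand-ins; the actual analysis pipeline is assumed to be far more elaborate.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic initialisation (evenly spaced samples)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic, well-separated "feature profile" clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
labels, centers = kmeans(X, 2)
```

Each recovered cluster would correspond, in the real analysis, to a group of spatially or transcriptionally similar cells delineating a candidate nucleus.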

Another example of machine learning in biomedicine is its use in revealing unknown phenotype-gene associations, which is essential for disease-gene discovery. By combining phenotypes, genotypes, and their associations into one heterogeneous network, Nankai researchers proposed an algorithm that effectively uses phenotype network information to predict unknown phenotype-gene associations. Some of the disease-causing genes they predicted have since been confirmed in the literature.
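One common way to exploit such a heterogeneous network is a random walk with restart from the query phenotype, ranking genes by how much probability flows to them. The toy network and scoring below are an illustrative sketch of that general technique, not the Nankai algorithm itself.

```python
import numpy as np

def random_walk_with_restart(A, seed, restart=0.3, tol=1e-8, max_iter=1000):
    """Score every node by a random walk that restarts at the seed node(s).

    A is the adjacency matrix of the combined network (phenotype-phenotype,
    gene-gene, and phenotype-gene edges in one matrix; no isolated nodes).
    """
    W = A / A.sum(axis=0, keepdims=True)  # column-stochastic transition matrix
    p = p0 = seed / seed.sum()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy heterogeneous network: nodes 0-1 are phenotypes, nodes 2-4 are genes.
A = np.array([
    [0, 1, 1, 0, 0],  # phenotype 0: similar to phenotype 1, known gene 2
    [1, 0, 0, 1, 0],  # phenotype 1: known gene 3
    [1, 0, 0, 1, 1],  # gene 2 interacts with genes 3 and 4
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

scores = random_walk_with_restart(A, seed=np.array([1.0, 0, 0, 0, 0]))
# Rank candidate genes by proximity to the query phenotype.
ranked_genes = sorted([2, 3, 4], key=lambda g: -scores[g])
```

Genes with no direct link to the query phenotype can still receive high scores via gene-gene and phenotype-phenotype edges, which is what lets such methods propose new associations.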

Intelligent operation and maintenance based on machine learning

Machine learning techniques are essential for intelligent network operation and maintenance. To detect and localize software faults, Zhang Shenglin, a young Nankai researcher, analyzes key performance indicators of the software to quickly and accurately detect any ‘concept drift’ that can lead to performance degradation. His research led to a robust and rapid concept drift detection and adaptation method that keeps such software running normally. With this capability, software engineers no longer need to manually set the parameters and thresholds of the fault detection algorithm. The results were evaluated and validated experimentally on large-scale operation and maintenance data from industry. Compared with existing methods, the new approach greatly increases detection accuracy, shortens detection delay, and improves computing speed.
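As a minimal illustration of KPI-based drift detection, the sketch below flags time points where a recent window’s mean departs from a reference window by several standard deviations. Note that this simple mean-shift detector still needs a hand-set window size and threshold, which is precisely the manual tuning Zhang’s method is described as removing.

```python
import statistics

def detect_drift(kpi, window=50, threshold=3.0):
    """Return indices where the recent window mean departs from the
    preceding reference window by more than `threshold` reference stdevs."""
    alerts = []
    for t in range(2 * window, len(kpi) + 1):
        ref = kpi[t - 2 * window: t - window]  # older behaviour
        cur = kpi[t - window: t]               # recent behaviour
        mu, sigma = statistics.mean(ref), statistics.stdev(ref)
        if sigma > 0 and abs(statistics.mean(cur) - mu) > threshold * sigma:
            alerts.append(t)
    return alerts

# Synthetic KPI: deterministic noise around 10.0, abrupt shift to 14.0 at t=200.
noise = [0.1 * ((i * 37) % 11 - 5) for i in range(400)]
kpi = [10.0 + n for n in noise[:200]] + [14.0 + n for n in noise[200:]]
alerts = detect_drift(kpi)
```

The detector stays silent before the shift and raises its first alert shortly after t = 200, once enough post-shift samples enter the recent window; an adaptive method would additionally learn the window and threshold from the data.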

Zhang and coworkers’ research was reported at the 2018 IEEE International Symposium on Software Reliability Engineering, where it received a best research paper award. The results matter for the operation of large, complex software systems, with major implications for intelligent software in telecommunications, power grids, aviation, medical information systems, and the Internet of Things.

Nankai researchers have also worked on computer graphics and image processing, audio signal processing, robot simultaneous localization and mapping, data mining, distributed computing, and pervasive computing, leading to many publications and patents. Software packages developed in collaboration with their commercial partners have been sold in more than 100 countries worldwide.
