The challenge of practically integrating ethical and social considerations into the development and deployment of AI urgently needs to be addressed, to help restore public trust in technology.
Volume 2 Issue 9, September 2020
Books & Arts
News & Views
The proper response to an ever-changing environment depends on the ability to quantify elapsed time, remember short intervals and anticipate when an upcoming experience may occur. A recent study uses computational modelling to describe the principles by which these three types of time are encoded.
Disentangling automatic and semi-automatic approaches to the optimization-based design of control software for robot swarms
Developing swarm robots for a specific application is a time-consuming process that can be alleviated by automated optimization of their behaviour. Birattari and colleagues argue that there are two fundamentally different design approaches: a semi-automatic one, which allows situation-specific tuning by human engineers, and a fully automatic one that requires no human intervention.
Recent developments in machine learning have seen the merging of ensemble and deep learning techniques. The authors review advances in ensemble deep learning methods and their applications in bioinformatics, and discuss the challenges and opportunities going forward.
Reinforcement learning has become a popular method in various domains for problems where an agent must learn which actions to take to reach a particular goal. An interesting application is simulated annealing in condensed matter physics, where a procedure is sought for slowly cooling a complex system to its ground state. A reinforcement learning approach has been developed that can learn a temperature scheduling protocol to find the ground state of spin glasses, disordered magnetic systems with competing spin–spin interactions between neighbouring atoms.
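The idea of learning a cooling schedule can be sketched in miniature. The toy below (not the authors' method; every name and parameter is an illustrative assumption) runs Metropolis dynamics on a one-dimensional chain of spins with random ±1 couplings — a stand-in for a real spin glass, chosen because an open chain's ground-state energy is known exactly — while a tabular agent picks a cooling factor at each anneal stage and is rewarded by the negative final energy.

```python
import math
import random

random.seed(0)

# Toy quenched disorder: a 1-D open chain of N spins with random +/-1 couplings.
# (A real spin glass is higher-dimensional and frustrated; an open chain keeps
# the sketch checkable, since its ground state can satisfy every bond.)
N = 12
J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]

def energy(s):
    # E = -sum_i J_i * s_i * s_{i+1}
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

def metropolis_sweep(s, T):
    # One sweep of single-spin-flip Metropolis updates at temperature T.
    for _ in range(N):
        i = random.randrange(N)
        dE = 0.0
        if i > 0:
            dE += 2 * J[i - 1] * s[i - 1] * s[i]
        if i < N - 1:
            dE += 2 * J[i] * s[i] * s[i + 1]
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i] = -s[i]

# Tabular RL over anneal stages: state = stage index, action = cooling factor.
COOL = [0.5, 0.8, 0.95]   # candidate multiplicative cooling factors (assumed)
STEPS = 12
Q = [[0.0] * len(COOL) for _ in range(STEPS)]
ALPHA, EPS = 0.1, 0.2     # learning rate and exploration rate (assumed)

def episode(greedy=False):
    s = [random.choice([-1, 1]) for _ in range(N)]
    T = 2.0
    visited = []
    for t in range(STEPS):
        if greedy or random.random() > EPS:
            a = max(range(len(COOL)), key=lambda k: Q[t][k])
        else:
            a = random.randrange(len(COOL))
        T = max(T * COOL[a], 1e-3)
        for _ in range(5):
            metropolis_sweep(s, T)
        visited.append((t, a))
    ret = -energy(s)  # lower final energy -> higher return
    if not greedy:
        for t, a in visited:  # every-visit Monte Carlo value update
            Q[t][a] += ALPHA * (ret - Q[t][a])
    return energy(s)

for _ in range(300):
    episode()

final_E = episode(greedy=True)
ground_E = -sum(abs(j) for j in J)  # every bond satisfiable on an open chain
print(final_E, ground_E)
```

A real system of interest would replace the chain with a frustrated lattice and the table with a function approximator; the sketch only shows the scheduling-as-policy framing.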
Recent accidents involving autonomous test vehicles have eroded trust in self-driving cars. A shift in approach is required to ensure that autonomous vehicles can never be the cause of an accident. An online verification technique is presented that guarantees provably safe motions for any intended trajectory calculated by the underlying motion planner, including fallback solutions in safety-critical situations.
When a company provides automated decisions without disclosing the full model, users and lawmakers may demand a ‘right to an explanation’. Le Merrer and Trédan show that malicious manipulation of these explanations is hard to detect, even under simple strategies for obscuring the model’s decisions.
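A hypothetical toy (not the authors' construction) shows why a single explanation is hard to audit: a hidden model that secretly conditions on a sensitive attribute can be paired with a plausible-sounding rationale that mentions only an innocuous feature, and each individual user sees nothing inconsistent.

```python
# Illustrative only: names, thresholds and features are invented.

def hidden_model(applicant):
    # The decision secretly hinges on the sensitive attribute 'group'.
    return applicant["income"] > 30000 and applicant["group"] == "A"

def reported_explanation(applicant):
    # The rationale shown to the user cites only the innocuous feature,
    # hiding the dependence on 'group'.
    if hidden_model(applicant):
        return "approved: income above 30000"
    return "rejected: income below threshold"

a = {"income": 50000, "group": "A"}
b = {"income": 50000, "group": "B"}
print(reported_explanation(a))
print(reported_explanation(b))
```

Applicants `a` and `b` have identical income, yet only `b` is rejected, with a stated reason that is false; detecting this requires comparing explanations across users, which no single user can do.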
Protein function prediction is improved by creating synthetic feature samples with generative adversarial networks
Training machine learning models to predict the function of proteins is limited by the scarcity of labelled training data. Training can be improved by employing generative adversarial networks to generate additional synthetic protein feature samples.
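The augmentation idea can be sketched in miniature (a hypothetical illustration, not the paper's architecture): a one-parameter-pair generator and discriminator are trained adversarially on scalar "features" drawn from a toy distribution, and the trained generator's samples are then mixed into the real set. A real application would use deep networks over high-dimensional protein feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" feature samples, standing in for labelled protein features.
real = rng.normal(4.0, 0.5, size=(200, 1))

wg, bg = 1.0, 0.0   # generator: x = wg * z + bg, with z ~ N(0, 1)
wd, bd = 0.1, 0.0   # discriminator logit: wd * x + bd
lr = 0.02           # learning rate (assumed)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(3000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    xr = real[rng.integers(0, len(real), 16)]
    z = rng.normal(size=(16, 1))
    xf = wg * z + bg
    pr = sigmoid(wd * xr + bd)
    pf = sigmoid(wd * xf + bd)
    wd += lr * (np.mean((1 - pr) * xr) - np.mean(pf * xf))
    bd += lr * (np.mean(1 - pr) - np.mean(pf))
    # Generator update: ascend log D(fake) through the discriminator.
    z = rng.normal(size=(16, 1))
    xf = wg * z + bg
    pf = sigmoid(wd * xf + bd)
    wg += lr * np.mean((1 - pf) * wd * z)
    bg += lr * np.mean((1 - pf) * wd)

# Augment: mix real samples with synthetic ones from the trained generator.
synthetic = wg * rng.normal(size=(100, 1)) + bg
augmented = np.vstack([real, synthetic])
print(augmented.shape)
```

The augmented set would then feed a downstream function-prediction classifier in place of the scarce labelled data alone.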