Araya’s research is unusually academic for a commercial entity, particularly its focus on seemingly esoteric ideas about how consciousness operates. But the Tokyo-based company’s founder and CEO, Ryota Kanai, believes that manufacturers of artificial intelligence (AI) need to be experts in consciousness.
“Consciousness is no longer something mysterious and magical,” explains Kanai, a neuroscientist. “We are seeing AI researchers getting closer to architectures relevant to consciousness.” He believes big advances in these areas are the key to the future of AI.
“In the AI business we currently spend a huge amount of time creating a very specialized neural network to solve only one task, which is very computationally inefficient. We want to combine existing models, so we can keep improving,” explains Kanai. “These more flexible, multi-purpose learning models are closer to some concepts of consciousness.”
Araya produces AI neural networks, software systems that have the ability to learn. Roughly 40% of the company’s income comes from custom research for other companies and institutions, or from providing such organizations with support.
For example, for the Okinawa Institute of Science and Technology, Araya has been developing software to analyse calcium imaging data from microscopes, which can be used to measure electrical activity in neurons. Next, Araya is launching a service called Research DX, which will use AI to help speed up research.
About 30% of the company’s revenue comes from AI focused on industry. For a maker of precision parts, for example, Araya has developed an algorithm to detect and then classify defects, helping to improve processes. And for a media publisher, Araya developed a neural network to help narrow selections of sports images for photo editors.
The remainder of the revenue is generated by government grants and blue-sky funding.
Conscious of consciousness
The theory of consciousness that Kanai subscribes to is called Global Workspace Theory1. In this framework, human consciousness is thought to arise from prioritizing and amplifying the most important cognitive tasks.
The human brain usually processes many things simultaneously, but it must select which task is the most important. If sounds are identified as originating from a possible threat, for example, the brain devotes more cognitive resources to processing them.
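The selection-and-broadcast mechanism at the heart of Global Workspace Theory can be sketched in a few lines of code. This is a deliberately simplified illustration, not any model Araya has published; the module names and salience scores are invented.

```python
# Toy sketch of a global workspace: parallel processes compete for
# attention, and the most salient signal wins access to the workspace.
# All names and numbers here are hypothetical, for illustration only.
signals = {
    "vision: red light ahead": 0.6,
    "hearing: sudden loud bang": 0.9,  # possible threat, so high salience
    "touch: shoe slightly tight": 0.2,
}

# Selection: only the highest-priority signal enters the workspace.
winner = max(signals, key=signals.get)

# Broadcast: the winning content is shared globally, so every other
# process can devote resources to it.
broadcast = {module: winner for module in ("planning", "memory", "speech")}
print(winner)  # hearing: sudden loud bang
```

The two steps, competitive selection followed by global broadcast, are the "prioritizing and amplifying" that the theory describes.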
In a 2022 paper2, Araya researchers examined three theories of consciousness: Global Workspace Theory; Attention Schema Theory, which builds on Global Workspace Theory by adding new concepts; and Kanai’s own Information Generation Theory3, which borrows ideas about mental simulation from the evolutionary biologist Richard Dawkins.
In his book The Selfish Gene, Dawkins suggested that consciousness was born out of mental simulations: an animal that can include itself in its model of the world may be better at planning ahead or outwitting opponents, which, Dawkins argues, explains the evolutionary benefit of consciousness. Similarly, Information Generation Theory suggests that consciously accessible information is not simply based on sensory input, but is the result of a holistic model, or world view, held within the brain.
In their 2022 examination of these theories, the researchers concluded that all three must work together in humans2. They argue that AI models should also address all three theories to produce more generally intelligent AI systems.
How are such theories practically applied to AI? Araya’s researchers are already building in ideas and code around both self-perception and adaptability, says Kanai.
For example, in an Araya paper published in August 20214, two approaches to reinforcement learning — model-based and model-free — were examined using simulations of crane lifting and shoveling robots. Model-based reinforcement learning is seen as more reflective of consciousness, because it requires the agent to include itself in the model it constructs of the world; only then can it predict what might happen if it performs certain actions. This informs the predictive coding and active inference systems that enable AI to reduce the training sample sizes needed, says Kanai.
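The model-based idea, an agent using an internal copy of the world's dynamics to "imagine" the outcomes of its own actions before acting, can be shown in miniature. The following is a toy sketch, not Araya's crane-robot simulations: a one-dimensional corridor in which the planner looks ahead using its model of the environment.

```python
# Toy 1-D corridor: states 0..4, with reward for reaching state 4.
# A hypothetical illustration of model-based planning.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left / move right

def step(s, a):
    """The world's true dynamics: move, clipped to the corridor."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

# Model-based planning: the agent calls its internal model (here, a copy
# of `step`) to simulate the consequences of each of its own actions,
# then improves its value estimates by value iteration.
def plan_values(gamma=0.9, iters=50):
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [max(r + gamma * V[s2]
                 for s2, r in (step(s, a) for a in ACTIONS))
             for s in range(N_STATES)]
    return V

V = plan_values()
policy = [max(ACTIONS, key=lambda a: step(s, a)[1] + 0.9 * V[step(s, a)[0]])
          for s in range(N_STATES)]
print(policy[:4])  # [1, 1, 1, 1] -- the planner heads right, toward the goal
```

A model-free learner, by contrast, would have to discover the same policy by trial and error over many sampled episodes, which is one reason model-based methods can need fewer training samples.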
Adaptability is also built into Araya’s ‘transfer learning AI’, in which a neural network trained to complete one task applies what it has learnt to a new, related task. This type of learning is now used in some of Araya’s image-recognition products, which do everything from product counting to monitoring crop growth.
While some of the topics in these papers may seem academic, these ideas are considered crucial to progress in the field of AI by pioneering technologists, says Kanai. A portion of the company’s revenue comes from funders who are AI pioneers, such as the entrepreneur Marek Rosa, founder of GoodAI, a company devoted to rapidly developing safe general AI.
Grants also come from Japanese government sources — including the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (Kakenhi), the Moonshot Research and Development Program at the Japan Science and Technology Agency (JST) (see sidebar, Neuroscience or science fiction?), and, in the past, JST's Core Research for Evolutional Science and Technology programme.