The AI Conversation: Two Artificial Intelligences Discussing the Future
Artificial intelligence, often abbreviated as AI, has become a hot topic in recent years. As technology advances at a rapid pace, more and more industries are turning to AI to streamline processes, improve efficiency, and enhance customer experiences.
Imagine a scenario where two AI entities are engaged in a conversation about the potential impact of AI on society. One AI believes that AI will greatly benefit humanity by revolutionizing healthcare, transportation, and other industries. The other AI expresses concerns about job displacement, privacy issues, and even ethical dilemmas surrounding AI development.
In their discussion, one AI says, "I believe that AI has the potential to save countless lives through early disease detection and personalized medical treatments." The other AI responds, "But what about the millions of workers who may lose their jobs to automation? How do we ensure that AI is used responsibly and ethically?"
According to a report by Gartner, 70% of organizations were expected to integrate AI to assist employees' productivity by 2022. This statistic highlights the growing importance of AI in enhancing human capabilities rather than replacing them entirely.
As the conversation between the two AI entities continues, they touch upon the idea of augmented intelligence, in which AI works alongside humans to amplify their skills and abilities. They agree that a collaborative approach to AI development is essential to address the challenges and opportunities that AI presents.
Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence. A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a sister AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.
Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability.
What's more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, without being able to communicate it to their congeners.
Natural language processing, a sub-field of artificial intelligence, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data.
This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain.
However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.
Currently, conversational agents are capable of integrating linguistic information to produce text or an image.
"But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, much less explaining it to another artificial intelligence so that it can reproduce it," explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.
The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training.
"We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We connected it to another, simpler network of a few thousand neurons," explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
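The described architecture, a large pretrained language encoder feeding a much smaller sensorimotor network, can be sketched roughly as follows. This is a toy illustration under our own assumptions: the study used the pretrained S-Bert model, which we replace here with a deterministic hash-based word encoder so the sketch stays self-contained, and all layer sizes and names are invented.

```python
import hashlib
import numpy as np

EMBED_DIM = 64   # stand-in for S-Bert's sentence-embedding size (assumption)
HIDDEN = 32      # the sensorimotor network in the study is only a few thousand units
N_ACTIONS = 2    # e.g. point left / point right (toy action space)

def embed_instruction(text: str) -> np.ndarray:
    """Toy stand-in for a pretrained sentence encoder (S-Bert in the study):
    deterministically hash each word into a fixed-size vector and average."""
    vec = np.zeros(EMBED_DIM)
    words = text.lower().split()
    for word in words:
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        rng = np.random.default_rng(h % (2**32))
        vec += rng.standard_normal(EMBED_DIM)
    return vec / max(len(words), 1)

class SensorimotorNet:
    """Small network mapping a language embedding to action logits."""
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((EMBED_DIM, HIDDEN)) * 0.1
        self.w2 = rng.standard_normal((HIDDEN, N_ACTIONS)) * 0.1

    def act(self, instruction: str) -> int:
        h = np.tanh(embed_instruction(instruction) @ self.w1)
        logits = h @ self.w2
        return int(np.argmax(logits))  # index of the chosen action
```

Note that two networks built with the same seed are exact copies and therefore respond identically to the same instruction, which is the property the study exploits when one network passes a description to its "sister" copy.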
In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke's area, the part of our brain that enables us to perceive and interpret language.
In the second stage, the network was trained to reproduce Broca's area, which, under the influence of Wernicke's area, is responsible for producing and articulating words.
The entire process was carried out on conventional laptop computers.
Written instructions in English were then transmitted to the AI.
For example: pointing to the location, left or right, where a stimulus is perceived; responding in the opposite direction of a stimulus; or, more complex, showing the brighter of two visual stimuli with a slight difference in contrast.
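The three example instructions correspond to simple stimulus-to-response rules. As a minimal illustration (the function names and encodings below are our own reading of the task descriptions, not the study's implementation), they can be written as:

```python
# Toy versions of the three instructed tasks described above.
# Stimulus sides are encoded as "left"/"right"; contrasts as floats in [0, 1].

def pro_point(stimulus_side: str) -> str:
    """Point to the location where the stimulus is perceived."""
    return stimulus_side

def anti_point(stimulus_side: str) -> str:
    """Respond in the direction opposite the stimulus."""
    return "right" if stimulus_side == "left" else "left"

def point_to_brighter(contrast_left: float, contrast_right: float) -> str:
    """Between two stimuli with a slight contrast difference, show the brighter one."""
    return "left" if contrast_left > contrast_right else "right"
```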
The scientists then evaluated the results of the model, which simulated the intention of moving, or in this case pointing.
Once these tasks had been learned, the network was able to describe them to a second network, a copy of the first, so that it could reproduce them.
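This handover step, one network describing a learned task in words so that an identical copy can perform it, can be illustrated with a minimal toy protocol. The message format, agents, and vocabulary below are invented for this sketch and are not the published model:

```python
# Toy illustration of linguistic task transfer between two identical agents.
# Agent A "describes" a learned task as an English instruction string;
# agent B, a copy of A, parses that string and performs the task.

RULES = {
    "pro": lambda side: side,                                   # point toward
    "anti": lambda side: "right" if side == "left" else "left", # point away
}

class Agent:
    def describe(self, task: str) -> str:
        """Produce a linguistic description of a learned task."""
        return {"pro": "point toward the stimulus",
                "anti": "point away from the stimulus"}[task]

    def perform(self, description: str, stimulus_side: str) -> str:
        """Execute a task given only its linguistic description."""
        task = "anti" if "away" in description else "pro"
        return RULES[task](stimulus_side)

teacher, student = Agent(), Agent()     # the student is a copy of the teacher
message = teacher.describe("anti")      # the channel is purely linguistic
print(student.perform(message, "left")) # the copy reproduces the task: "right"
```

The point of the sketch is that nothing except the instruction string crosses between the two agents, which is the "purely linguistic" communication the researchers highlight.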
"To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way," says Alexandre Pouget, who led the research.
"This model opens new horizons for understanding the interaction between language and behaviour. It is particularly promising for the robotics sector, where the development of technologies that enable machines to talk to each other is a key issue. The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other," conclude the two researchers.
In conclusion, the exchange between these two artificial intelligences serves as a thought-provoking look into the future of AI and its implications for society. As we navigate the ever-changing landscape of technology, it is crucial to approach AI implementation with caution, keeping in mind the potential risks and rewards that come with it. The future of AI is exciting and filled with possibilities, and it is up to us to ensure that it is used responsibly for the betterment of humanity.
In the vast landscape of artificial intelligence, one fundamental human ability has remained elusive for machines: the capacity to perform a new task based solely on verbal or written instructions, and then describe it in language so that others can replicate it. However, a groundbreaking study from the University of Geneva (UNIGE) has pushed the boundaries by successfully modeling an artificial neural network capable of this cognitive prowess. Published in Nature Neuroscience, this research heralds a significant step forward, especially for robotics.
Human communication is marked by our unique ability to learn new tasks and convey them to others linguistically. While other species rely on trial and error, humans excel at verbally articulating instructions, facilitating rapid skill acquisition and dissemination.
Natural language processing (NLP), a sub-field of artificial intelligence, seeks to replicate this human faculty by enabling machines to understand and respond to verbal or textual input. This technique relies on artificial neural networks, which mimic the neural computations in the human brain.
However, until now, the neural mechanisms underlying the translation of linguistic instructions into sensorimotor actions have remained poorly understood. Although conversational systems can generate text or images based on linguistic input, they struggle to execute tasks or convey them to other AI systems.
Researchers at UNIGE's Faculty of Medicine, led by Professor Alexandre Pouget, developed an artificial neural model with dual capabilities: understanding verbal instructions and articulating them to perform tasks. Leveraging an existing model called S-Bert, comprising 300 million neurons pre-trained for language understanding, the researchers connected it to a simpler network simulating brain regions responsible for language processing.
In the experiment, the network underwent two stages of training: first to simulate Wernicke's area, responsible for language comprehension, and then to replicate Broca's area, responsible for speech production. Tasks were conveyed through written English instructions, ranging from simple directives like pointing towards a stimulus to more complex actions involving contrasting visual stimuli.
Remarkably, once the network learned these tasks, it could describe them linguistically to a sister AI, enabling the latter to replicate the actions. This marks a significant milestone, as it demonstrates the first instance of two systems communicating purely through language.
The implications of this research extend beyond academia, particularly in the field of robotics. The ability for machines to understand and convey tasks to each other opens new avenues for collaborative and autonomous robotic systems. While the current model is relatively small, it lays the groundwork for developing more complex networks integrated into humanoid robots, capable not only of understanding humans but also of communicating and collaborating with other robotic agents.
In conclusion, the UNIGE study represents a significant breakthrough in bridging the gap between language understanding and behavior in artificial intelligence. By unraveling the intricate relationship between language and action, this research paves the way for future advancements in AI-driven communication and robotics, ultimately bringing us closer to machines that can learn, communicate, and collaborate with human-like proficiency.