
AI and What We Can Learn From How They Collaborate

Updated: Feb 22



Exploring research findings on federated inference and how it is shaping our understanding of collaboration.


Federated inference can be described as a cooperative approach to problem solving with artificial intelligence. Instead of working separately, different AI systems, each with their own data and experiences, come together, share their insights, and combine their knowledge to reduce uncertainty and confusion. This approach is reshaping industries from healthcare to finance, and reinventing how we interact with technology.


 

Introducing the heart of this transformation: the concepts of federated inference and belief sharing.


At the heart of it is a simple idea: agents work together, combining their knowledge and insights to make smarter, more informed decisions. An agent can be an animal, a human, or an artificial system that observes and interacts with its environment and can infer information about its surroundings. Federated inference comes into play when these agents share their beliefs and perspectives, enabling them to solve problems together.

As you continue reading, we will deconstruct the following concepts:

  • Teamwork in Decision-Making

  • Simplifying Complexity Through Minimizing Free Energy

  • Resolving Uncertainty With Communication

  • The Development and Evolution of Language

Ready to dive into these highlights? Let’s begin!

 

Teamwork in Decision-Making

Imagine a world where machines collaborate like air traffic controllers. Recent research delves into how a group of agents share information about their environment in order to make smarter decisions. Each agent contributes a piece of the puzzle; together they form a comprehensive picture that enhances their collective decision-making. This is akin to air traffic controllers, each playing their own part to manage the flow of aircraft into and out of the airport's airspace. Collaboration is the core driver of distributed intelligence and federated inference: multiple agents share beliefs about a common world, much like a group of animals keeping a lookout for predators, whose collective understanding and response depend on their ability to communicate what they observe.

All animals, from fruit flies to humans, learn about the world by observing others. (Frith, 2010; Kilner et al., 2007; Manrique and Walker, 2023; Rieucau and Giraldeau, 2011)
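To make the lookout example concrete, here is a minimal sketch of federated inference as Bayesian evidence pooling. The scenario, the three agents, and all of the probabilities are illustrative assumptions, not numbers from the study.

```python
import numpy as np

# Hypothetical lookout scenario: a shared hidden state
# ("predator" vs "no predator") that each agent observes noisily.
states = ["predator", "no predator"]
prior = np.array([0.5, 0.5])

# Each agent's likelihood p(observation | state), one entry per state.
# All numbers are illustrative assumptions.
likelihoods = [
    np.array([0.8, 0.3]),  # agent 1: movement in the grass
    np.array([0.6, 0.4]),  # agent 2: a faint rustle
    np.array([0.9, 0.2]),  # agent 3: a half-seen silhouette
]

# Federated inference as Bayesian fusion: multiply the shared prior
# by every agent's likelihood, then renormalise. Each agent alone is
# only mildly confident; together their evidence is decisive.
posterior = prior.copy()
for lik in likelihoods:
    posterior = posterior * lik
posterior = posterior / posterior.sum()

print(dict(zip(states, posterior.round(3))))
# {'predator': 0.947, 'no predator': 0.053}
```

No single agent's evidence is conclusive here, yet the pooled posterior is, which is the sense in which 'many pairs of eyes' reduce uncertainty.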
 

Simplifying Complexity Through Minimizing Free Energy

Ever watched a coach orchestrate the performances of individual players into a unified, effective team effort? A recent study suggests that effective AI decision-making relies on a similar harmony. Minimizing free energy is fundamental to this process: it is how agents refine their understanding to be as accurate and as simple as possible, making the best guess they can, with the least uncertainty, from all the information available to them. By minimizing free energy, agents streamline their decision-making, enabling them to solve complex problems in a more organized and efficient way.
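To give a feel for the quantity being minimized, here is a toy sketch of variational free energy for a single discrete hidden state. This is a standard textbook formulation rather than code from the study, and the numbers are made up: free energy scores a belief as complexity (divergence from the prior) minus accuracy (how well it explains the observation), and the exact Bayesian posterior is the belief that minimizes it.

```python
import numpy as np

def free_energy(q, prior, likelihood_obs):
    """Variational free energy for a discrete hidden state.

    q              -- the agent's current beliefs q(s)
    prior          -- the generative model's prior p(s)
    likelihood_obs -- p(o | s) for the observation actually seen

    F = E_q[ln q(s) - ln p(o, s)]
      = KL[q(s) || p(s)] - E_q[ln p(o | s)]
    i.e. complexity minus accuracy.
    """
    q = np.asarray(q, dtype=float)
    joint = np.asarray(prior) * np.asarray(likelihood_obs)  # p(o, s)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

prior = [0.5, 0.5]
likelihood_obs = [0.8, 0.3]        # p(o | s) for the seen observation

vague = [0.5, 0.5]                 # uncommitted beliefs
exact = np.array(prior) * np.array(likelihood_obs)
exact = exact / exact.sum()        # exact Bayesian posterior

print(free_energy(vague, prior, likelihood_obs))  # ~0.714, higher
print(free_energy(exact, prior, likelihood_obs))  # ~0.598, the minimum
```

The "best guess with the least uncertainty" is simply the belief that drives this number as low as possible given the evidence in hand.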


The components needed for federated inference and belief sharing emerge from minimizing free energy. The concept becomes clear in computer simulations of how language is created, learned, and used by artificial agents. These simulations show that when computer models adjust their beliefs based on new information, they get better at making decisions: they learn faster and pick the best actions to take. This whole process is called "structure learning".


 

Resolving Uncertainty with Communication

Communication enables agents to update their beliefs based on what they see, effectively gathering evidence through 'many pairs of eyes'. The agents share and revise their beliefs using a method called Bayesian updating. This kind of teamwork is built on updating shared beliefs about unobserved situations, reducing uncertainty in a world that is only partially observed. A shared generative model allows agents not only to express their own beliefs but also to understand and integrate the beliefs of others, refining collective intelligence and decision-making. When agents work together in this way, the result is a dynamic of shared insight, smarter decisions, and better understanding, all while minimizing uncertainty.
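As a simplified picture of belief sharing (the paper's agents use richer generative models; the two agents and their likelihoods below are assumptions), each agent first updates on its own private evidence, then treats the other's posterior as additional evidence about the same hidden state. Dividing out the common prior keeps it from being counted twice.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """Standard Bayesian update of a discrete belief."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def entropy(p):
    """Shannon entropy in nats; lower means less uncertainty."""
    return float(-(p * np.log(p)).sum())

# Two illustrative agents with different private evidence about the
# same hidden state (the likelihoods are assumptions, not the paper's).
prior = np.array([0.5, 0.5])
agent_a = bayes_update(prior, np.array([0.7, 0.4]))  # weakly favours state 0
agent_b = bayes_update(prior, np.array([0.6, 0.4]))  # also favours state 0

# Communication round: each agent treats the other's posterior as
# extra evidence about the shared state ('many pairs of eyes').
# Dividing out the common prior avoids counting it twice.
shared = agent_a * agent_b / prior
shared = shared / shared.sum()

print(entropy(agent_a), entropy(agent_b), entropy(shared))
# the pooled belief is sharper (lower entropy) than either agent's alone
```

Neither agent's private evidence is strong, but once beliefs are exchanged, the shared posterior is more confident than either agent could be on its own.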


 

The Development and Evolution of Language

Like a child learning a new language, AI agents can develop their own. The research uses simulations to study how agents develop and learn a language suited to their needs. It is a process of evolution, mirroring how human communication develops: through repeated interactions, the language becomes more refined, enhancing the agents' ability to collaborate and make decisions. Just as a community's native language evolves over time, so does language in the AI world, only far more rapidly. In a group of agents, language emerges as they interact with and learn from their environment. This is no simple code; it is a complex language that evolves, allowing the agents to share detailed information about the world around them. The finding has big implications for cognitive computing and AI systems, suggesting these technologies could learn to communicate and interact as intuitively as humans do. Imagine a world where technology doesn't just assist us, but adapts, understands, and collaborates with us, opening doors to new possibilities.
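The study's simulations rest on active inference and are far richer than anything shown here, but a classic 'naming game' (a stand-in technique; every name and number below is illustrative) captures the basic dynamic the section describes: agents that reinforce successful exchanges converge on a shared mapping from situations to signals.

```python
import random

random.seed(1)

STATES = ["food", "danger"]
SIGNALS = ["A", "B"]

def new_agent():
    # Each agent keeps a score for every (state, signal) association.
    return {(s, sig): 0.0 for s in STATES for sig in SIGNALS}

def speak(agent, state):
    # Pick the highest-scoring signal, with a little noise for exploration.
    return max(SIGNALS, key=lambda sig: agent[(state, sig)] + random.random() * 0.1)

def interpret(agent, signal):
    return max(STATES, key=lambda s: agent[(s, signal)] + random.random() * 0.1)

agents = [new_agent(), new_agent()]

for _ in range(500):
    speaker, listener = random.sample(agents, 2)
    state = random.choice(STATES)
    signal = speak(speaker, state)
    guess = interpret(listener, signal)
    if guess == state:
        # Success reinforces the convention both agents just used.
        speaker[(state, signal)] += 1.0
        listener[(state, signal)] += 1.0
    else:
        # Failure discourages that signal and that reading.
        speaker[(state, signal)] -= 1.0
        listener[(guess, signal)] -= 1.0

for state in STATES:
    print(state, "->", speak(agents[0], state), speak(agents[1], state))
# after enough rounds, both agents typically map each state to the same signal
```

No convention is built in at the start; a shared vocabulary emerges purely from the history of successful and failed exchanges, which is the spirit of the emergence described above.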


 

All in all, the "Federated Inference and Belief Sharing" study offers a groundbreaking peek into the future of AI. It reveals how a collaborative approach to intelligence can lead to more advanced and efficient decision-making, reshaping our understanding of what AI is capable of.


Source:

Karl J. Friston, Thomas Parr, Conor Heins, Axel Constant, Daniel Friedman, Takuya Isomura, Chris Fields, Tim Verbelen, Maxwell Ramstead, John Clippinger, Christopher D. Frith (2024). Federated inference and belief sharing.


 

Ready to discover more of the latest breakthroughs?
