
Introspective AI Says Diversity Improves Performance

(Original art by MidJourney)

A new study has shown that an AI given the ability to look inward and fine-tune its own neural network performs better when it chooses diversity over uniformity. The finding suggests that varied, heterogeneous neural networks handle complex tasks more effectively than uniform ones.

William Ditto, the physicist behind the study, works at North Carolina State University, where he directs the Nonlinear Artificial Intelligence Laboratory (NAIL). According to Ditto, the key was giving the AI the ability to look inward, understand how it learns, and adjust itself accordingly. His team designed a controlled exercise to test whether the AI would choose diversity on its own and whether that choice would improve its performance.

At the core of the work are neural networks, an advanced form of AI loosely modeled on how the human brain works.

The neurons in our brains communicate via electrical impulses, with the strength of each signal determined by the connections between them. Similarly, artificial neural networks learn by adjusting numerical weights and biases during training. For example, a neural network trained to recognize dogs makes a guess about each photo's content, checks how far off it was, and then adjusts its weights and biases to reduce that error, gradually improving its accuracy.
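To make that concrete, here is a minimal sketch (a toy illustration in NumPy, not the researchers' code) of a network adjusting its weights and bias based on its prediction error:

```python
# Minimal sketch (not the authors' code): how a network adjusts weights and
# a bias from its error, shown with a tiny one-layer classifier in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "dog vs. not-dog": 200 samples with 4 made-up features.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

# Trainable parameters: weights and a bias, initialized randomly.
w = rng.normal(scale=0.1, size=4)
b = 0.0
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Forward pass: the network's "educated guess" for each sample.
    p = sigmoid(X @ w + b)
    # Margin of error (gradient of the cross-entropy loss w.r.t. the logits).
    err = p - y
    # Adjust weights and bias to reduce that error.
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

p = sigmoid(X @ w + b)
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after 100 epochs: {accuracy:.2f}")
```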

Conventional neural networks are built from identical artificial neurons whose connections change as the network learns; once training is complete, those neurons settle into a fixed configuration. Ditto's team diverged from that norm, giving their AI control over the building blocks of its own network, including the number, shape, and strength of its connections, as it learned. The result was a network containing sub-networks with different neuron types and connection strengths.
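For a rough sense of what heterogeneous neurons can look like in code, here is a hedged sketch. The layer sizes and the particular mix of activation functions below are assumptions chosen for illustration, not the team's actual model, which also controlled the number and strength of connections:

```python
# Hedged sketch (assumed architecture, not the paper's): a layer whose output
# neurons are split across several activation functions, so the network mixes
# neuron "types" instead of using one uniform nonlinearity everywhere.
import torch
import torch.nn as nn

class DiverseLayer(nn.Module):
    """Linear layer whose output units are divided among several activations."""
    def __init__(self, in_features, out_features, activations=None):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Candidate neuron types; this mix is a hand-picked assumption.
        self.activations = activations or [torch.tanh, torch.relu, torch.sin]

    def forward(self, x):
        z = self.linear(x)
        # Give each group of output neurons a different activation function.
        chunks = torch.chunk(z, len(self.activations), dim=-1)
        out = [act(c) for act, c in zip(self.activations, chunks)]
        return torch.cat(out, dim=-1)

# A small heterogeneous network: diverse hidden neurons, plain linear readout.
model = nn.Sequential(
    DiverseLayer(4, 12),
    DiverseLayer(12, 12),
    nn.Linear(12, 1),
)
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 1])
```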

Ditto says, “The human brain showcases a mosaic of neuron types. So, we granted our AI the ability to introspect and recalibrate its neural network composition. In essence, we handed it the reins to its cognitive anatomy. It could solve a problem, evaluate the outcome, and tinker with the artificial neurons’ attributes until it unearthed the optimal configuration. Think of it as AI indulging in meta-learning.”

The AI was not limited to changing its composition; it could also decide how much diversity to use. Given the choice between uniform and varied neurons, it consistently chose diversity, and that choice improved its performance.

The test was a numerical classification task in which accuracy was measured against the number of neurons and their diversity. A standard, uniform network reached 57 percent accuracy; the introspective, diversity-choosing AI reached 70 percent.
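For a flavor of how such a comparison might be set up, here is an illustrative, entirely made-up experiment pitting a uniform (all-tanh) network against a mixed-activation one on a small digit-classification task. The dataset, architecture, and training settings are assumptions, and the resulting numbers will not match the study's 57 and 70 percent:

```python
# Illustrative comparison only (made-up setup, not the paper's experiment):
# train a uniform classifier and one with mixed activations on a small
# digit-classification task and compare test accuracy.
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.3, random_state=0
)
X_tr, X_te = map(lambda a: torch.tensor(a, dtype=torch.float32), (X_tr, X_te))
y_tr, y_te = map(torch.tensor, (y_tr, y_te))

class MixedActivation(nn.Module):
    """Splits hidden units across several nonlinearities, a stand-in for
    'diverse neurons'; the particular mix is an assumption."""
    def forward(self, z):
        a, b, c = torch.chunk(z, 3, dim=-1)
        return torch.cat([torch.tanh(a), torch.relu(b), torch.sin(c)], dim=-1)

def make_net(diverse):
    act = MixedActivation() if diverse else nn.Tanh()
    return nn.Sequential(nn.Linear(64, 48), act, nn.Linear(48, 10))

def accuracy(net):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X_tr), y_tr).backward()
        opt.step()
    return (net(X_te).argmax(dim=1) == y_te).float().mean().item()

torch.manual_seed(0)
print("uniform neurons :", round(accuracy(make_net(False)), 3))
print("diverse neurons :", round(accuracy(make_net(True)), 3))
```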

Ditto believes the bigger payoff comes on harder, more chaotic problems, such as predicting the swing of a pendulum or the motion of galaxies. On those tasks, the diversity-driven AI was up to ten times more accurate than conventional AI.
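For context, this is the kind of chaotic time series such a forecaster would be asked to predict, a driven, damped pendulum, sketched here with SciPy (the parameters are arbitrary and chosen only for illustration):

```python
# Context sketch (assumed setup): a driven, damped pendulum, the sort of
# chaotic system the article says diversity-driven AI predicts more accurately.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, state, damping=0.05, drive=1.2, drive_freq=0.667):
    theta, omega = state
    return [omega,
            -damping * omega - np.sin(theta) + drive * np.cos(drive_freq * t)]

t = np.linspace(0, 100, 2000)
sol = solve_ivp(pendulum, (t[0], t[-1]), [0.1, 0.0], t_eval=t)
angle = sol.y[0]  # the time series a forecasting model would try to predict
print(angle[:5])
```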

Ditto emphasizes, “Our study illustrates that when AI is equipped to peek into its own learning mechanisms, it reconfigures its internal landscape – its artificial neurons – to champion diversity. This translates to superior efficiency and precision in problem-solving. Moreover, as the challenges grow more intricate and chaotic, the AI’s performance surges exponentially, eclipsing its uniformity-bound counterparts.”

Cool. Freaky. Scary. Boats and bridges burning, a new security dial, and brand-new rules of engagement.

The research appears in Scientific Reports and was supported by the Office of Naval Research and United Therapeutics. Ditto's co-authors are John Lindner, a physicist affiliated with the College of Wooster and NAIL; Anshul Choudhary, the lead author and a former graduate student at NC State; Anil Radhakrishnan, an NC State graduate student; and Sudeshna Sinha, a physicist at the Indian Institute of Science Education and Research Mohali.

Much more to come and much too soon.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
