Almost everyone will experience some degree of hearing loss in their lifetime, whether due to natural aging or exposure to loud noises. That makes understanding the biology of sound recognition, and finding ways to enhance it, crucial. But the appeal of vocal communication goes beyond its practical importance: the intricate way our brains let us exchange ideas through sound is genuinely awe-inspiring.

In a recent paper in Communications Biology, scientists from the University of Pittsburgh describe a machine-learning model that sheds light on how brains perceive and decipher communication sounds. The study's algorithm simulates how social animals such as marmoset monkeys and guinea pigs categorize sounds and act on them, depending on whether a sound signals danger, food, mating, and so on.

The work is an important step in unraveling the neuronal processing that underlies sound recognition, and the insights it provides could improve speech recognition therapies, hearing aids, and other treatments for conditions that affect communication abilities.

"More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of sound recognition and finding ways to improve it is important," says senior author Srivatsun Sadagopan. "But the process of vocal communication is fascinating in and of itself. The ways our brains interact with one another and can take ideas and convey them through sound is nothing short of magical."

In a world filled with diverse sounds, humans and animals alike navigate the surrounding symphony of noises with remarkable ease, transcending differences in pitch, accent, and environment. Whether amid the uproar of the jungle or the bustle of a restaurant, we communicate and comprehend one another effortlessly. Consider the word "hello": its meaning remains constant across accents, genders, and surroundings.

That observation led the researchers to draw a parallel between the brain's recognition of communication sounds and its ability to pick out faces among a sea of other objects. Faces are highly diverse, yet they share common characteristics. Rather than matching each face it encounters against a perfect "template" face, the brain detects and analyzes useful features such as the eyes, nose, and mouth, along with their relative positions, and from these builds a mental map of the distinctive characteristics that define a face.

In a series of experiments, the researchers demonstrated that communication sounds may likewise be composed of such distinctive features. They built a machine-learning model of sound processing that classifies the various calls produced by social animals by detecting those features.
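To make the feature-detection idea concrete, here is a minimal sketch of how a classifier of this general kind might work: it scans a call's spectrogram for a small bank of spectro-temporal feature templates and lets their detections vote for a call category. Everything here, including the detect_feature and classify_call helpers, the templates, the thresholds, and the category names, is an invented illustration of the general approach, not the model published in the paper.

```python
# Hypothetical sketch of feature-based call classification: detect a few
# informative spectro-temporal "features" in a spectrogram and let their
# joint presence vote for a category. All templates and thresholds below
# are random placeholders, not learned features from the study.
import numpy as np
from scipy.signal import fftconvolve

def detect_feature(spectrogram: np.ndarray, template: np.ndarray, threshold: float) -> bool:
    """Return True if the template's best normalized match exceeds threshold."""
    # Cross-correlate by convolving with the flipped template.
    corr = fftconvolve(spectrogram, template[::-1, ::-1], mode="valid")
    # Simplified normalization: template norm times local spectrogram energy.
    local_energy = fftconvolve(spectrogram**2, np.ones_like(template), mode="valid")
    norm = np.linalg.norm(template) * np.sqrt(local_energy.clip(min=1e-12))
    return bool((corr / norm).max() >= threshold)

def classify_call(spectrogram: np.ndarray, feature_bank: dict) -> str:
    """feature_bank maps category -> list of (template, threshold) pairs.
    The category whose features are detected most often wins."""
    votes = {
        cat: sum(detect_feature(spectrogram, t, thr) for t, thr in feats)
        for cat, feats in feature_bank.items()
    }
    return max(votes, key=votes.get)

# Toy usage with random data standing in for real guinea-pig call spectrograms.
rng = np.random.default_rng(0)
spec = rng.random((64, 200))  # frequency bins x time frames
bank = {
    "squeak": [(rng.random((8, 10)), 0.9) for _ in range(5)],
    "grunt":  [(rng.random((8, 10)), 0.9) for _ in range(5)],
}
print(classify_call(spec, bank))
```

The key design choice mirrors the face analogy: the classifier never compares a call to a whole-call template, only to small fragments whose joint presence signals the category.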
The team then recorded brain activity from guinea pigs exposed to their kin's communication sounds to check whether the model correlated with real brain responses. Neurons in sound-processing brain regions were highly active whenever the animals heard a sound containing the features of a specific call type, just as the machine-learning model predicted.

To test the model's performance against the real-life behavior of animals, the researchers devised a series of experiments with guinea pigs. The animals were placed in an enclosure and exposed to sound signals from different categories, such as squeaks and grunts, and were trained to walk to different corners of the enclosure to receive fruit rewards depending on which category was played.

To challenge the animals further, and to mimic the way humans recognize spoken words across different accents, the researchers ran the guinea pig calls through sound-altering software that changed their pitch and speed and added noise and echoes. Remarkably, the guinea pigs performed the task just as consistently despite the added complexity, and the machine-learning model described both their behavior and the activation of their sound-processing neurons with remarkable accuracy.
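For readers curious what such perturbations look like in practice, below is a hedged sketch of a comparable sound-altering pipeline using the librosa audio library. The parameter ranges, the 50 ms echo, and the bundled trumpet recording used as stand-in audio are all illustrative choices, not the settings or software used in the study.

```python
# Illustrative audio perturbations of the kind described above: pitch shift,
# speed change, added noise, and a simple echo. Parameter values are invented
# for demonstration, not taken from the study.
import numpy as np
import librosa

def perturb_call(y: np.ndarray, sr: int, rng: np.random.Generator) -> np.ndarray:
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=rng.uniform(-2, 2))  # pitch
    y = librosa.effects.time_stretch(y, rate=rng.uniform(0.8, 1.25))       # speed
    y = y + rng.normal(0.0, 0.01, size=y.shape)                            # noise
    delay = int(0.05 * sr)                                                 # 50 ms echo
    echo = np.zeros_like(y)
    echo[delay:] = 0.4 * y[:-delay]
    return y + echo

# Stand-in audio: librosa's bundled example clip, not a guinea-pig call.
rng = np.random.default_rng(1)
y, sr = librosa.load(librosa.ex("trumpet"))
print(perturb_call(y, sr, rng).shape)
```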
As a next step, the researchers plan to carry the model's accuracy over from animal calls to human speech, opening new avenues for understanding speech recognition in humans.

"From an engineering viewpoint," explains lead author Satyabrata Parida, "there are much better speech recognition models out there. What's unique about our model is that we have a close correspondence with behavior and brain activity, giving us more insight into biology."

In the future, these insights could help individuals with neurodevelopmental conditions and contribute to the development of better hearing aids.

"A lot of people struggle with conditions that make it hard for them to recognize speech," adds Manaswini Kar, a student in the Sadagopan lab. "Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle."

Source: 10.1038/s42003-023-04816-z

Source: www.revyuh.com