AI is often seen as the big-bad-boogeyman that’s going to ruin the world — but sometimes it can also be quite good.
Google’s quite bullish on AI right now; it has to be, if only to keep up with the rest of Silicon Valley. Today, Google Australia announced that it’s funding and working with Australian tech firms including Cochlear, Macquarie University Hearing, National Acoustic Laboratories (NAL), NextSense and The Shepherd Centre to use AI and machine learning to enhance the development of hearing aids.
Which, on the surface of it, doesn’t feel like a natural fit. Right now, much of the reporting on AI focuses on services like ChatGPT or DALL-E 2, and it might seem like the last thing someone with hearing loss needs is a picture of a duck on rollerskates.
But it’s about the ways that AI can (or could) be used to enhance hearing and recognition of sounds that matters here. Kind of like Autocomplete, but for your ears.
It’s a pretty big problem too; Google’s own release notes that more than 1.5 billion people globally live with hearing loss. That’s a whole lot of need that could be addressed here.
Google says that the first project will “seek to personalise hearing models to better address individual listening needs to enhance hearing aids and other listening devices.”
Alex’s Take: Technology can be cool, but there’s little cooler than technology that solves a real human problem, and this is absolutely one of those.
I do wonder about the ways an AI model might misidentify a sound (or “correct” one into the wrong pattern of speech or similar), but I presume the very clever folks that Google and the other research partners have working on this won’t ignore that risk.
I’d also recommend this TNW story, which covers the use of GPT-4’s visual identification capabilities to supercharge an existing app that helps people with visual impairments identify objects; it’s good reading if you’re keen.