Apple's digital assistant Siri was a turning point in the smartphone world when Apple introduced it two years ago. And while Siri-like virtual assistants are the future (see Google Now and Microsoft's Cortana), the technology isn't perfect yet. If you don't speak clearly enough, Siri doesn't always understand you.
That is probably going to change sooner than you think.
Apple seems to be getting close to adopting a technology that could drastically improve Siri's speech recognition. The technology, called neural networks, is nothing new. Neural networks have been around for 30 years, but they really took off in 2009 after a speech by deep learning expert Geoff Hinton. Microsoft then brought Hinton in to run experiments, and the results were off the charts.
Hinton’s idea was that machine learning models could work a lot like neurons in the human brain. He wanted to build “neural networks” that could gradually assemble an understanding of spoken words as more and more of them arrived. Neural networks were hot in the 1980s, but by 2009, they hadn’t lived up to their potential.
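The "neuron" analogy can be sketched in a few lines of Python. The snippet below is purely illustrative, not anything Apple or Hinton published: each artificial neuron takes a weighted sum of its inputs, adds a bias, and squashes the result through an activation function. The feature values and weights here are made up for the example.

```python
import math

def sigmoid(x):
    # Squash any number into (0, 1), loosely like a neuron's "firing" strength.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Hypothetical example: three made-up acoustic features feed one neuron.
features = [0.9, 0.1, 0.4]
weights = [0.7, -0.2, 0.5]   # in a real system these would be learned
print(neuron(features, weights, bias=0.1))  # a value between 0 and 1
```

A real speech recognizer stacks thousands of these neurons in layers and learns the weights from audio data; that layered structure is what lets the network build up an interpretation of a sentence word by word.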
Apple usually prides itself on being ahead of the technology curve, but it is lagging behind in the speech recognition department. Google, IBM and Microsoft all use neural networks to improve their products. Microsoft even uses them for Skype Translator, which translates speech in real time!
As usual, Apple isn't commenting on any of this speculation, but the company is hiring quite a few speech technology specialists and researchers for its Siri team. So the signs are pointing to a new and improved Siri in the near future.
It's about time.