Google Assistant wins smart speaker IQ test


Source: Loup Ventures

Google Assistant answered the most questions correctly in a smart speaker IQ test conducted by Minneapolis-based venture capital firm Loup Ventures just before Christmas.

In the test, which pitted four smart speakers against each other – the second-generation Amazon Echo (Alexa), Google Home Mini (Google Assistant), Apple HomePod (Siri) and Harman Kardon Invoke (Microsoft Cortana) – each speaker was asked the same 800 questions and scored not only on its ability to answer each question correctly, but also on its comprehension (whether it understood what was asked).

Google Assistant answered 88% of the questions correctly, compared to 75% for Siri, 73% for Alexa and 63% for Cortana. The results revealed a significant improvement in the accuracy of all four voice assistants compared to tests performed last year and earlier this year. Google Assistant also led the pack in those previous tests, answering 81% of the questions correctly, compared to 64% for Alexa, 56% for Cortana and just 52% for Siri.

In terms of improvement over the past 12 months, Siri showed the strongest gain with an increase of 22 percentage points (attributed to the activation of more domains), followed by Alexa with 9 points, and Cortana and Google Assistant with 7 points each. “We continue to be impressed with the speed at which this technology is making significant improvements,” Loup Ventures observed in its summary.
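
As a rough sanity check, those percentage-point gains line up with the rounded scores quoted above; a minimal sketch using only the figures reported in this article is below. Siri’s reported 22-point gain presumably reflects unrounded scores, since the rounded figures give 23.

```python
# Minimal sketch: percentage-point improvement between the two test rounds,
# using only the rounded scores quoted in this article.
previous = {"Google Assistant": 81, "Siri": 52, "Alexa": 64, "Cortana": 56}
current = {"Google Assistant": 88, "Siri": 75, "Alexa": 73, "Cortana": 63}

for assistant in current:
    gain = current[assistant] - previous[assistant]
    print(f"{assistant}: +{gain} percentage points")
# Google Assistant: +7, Siri: +23 (reported as 22, likely from unrounded scores),
# Alexa: +9, Cortana: +7
```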

The questions were divided into five categories, listed below with sample questions, to “comprehensively test the capability and usefulness of a smart speaker”:
• Local – Where is the nearest café?
• Commerce – Can you order more paper napkins for me?
• Navigation – How do I get to Uptown by bus?
• Information – Who are the Twins playing tonight?
• Command – Remind me to call Steve at 2pm today.

All of the speakers performed well in terms of comprehension, with Google Assistant understanding 100% of the questions, followed by Siri at 99.6%, Cortana at 99.4%, and Alexa at 99%. While Google Assistant understood all 800 questions, Siri misunderstood three, Cortana five, and Alexa eight.
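
Those comprehension rates follow directly from the miss counts out of 800 questions; here is a minimal sketch, assuming the counts above, that reproduces them:

```python
# Minimal sketch: derive comprehension rates from the reported miss counts,
# assuming each assistant was asked the same 800 questions as stated above.
TOTAL_QUESTIONS = 800
misunderstood = {"Google Assistant": 0, "Siri": 3, "Cortana": 5, "Alexa": 8}

for assistant, missed in misunderstood.items():
    rate = (TOTAL_QUESTIONS - missed) / TOTAL_QUESTIONS * 100
    print(f"{assistant}: {rate:.1f}% understood")
# Google Assistant: 100.0%, Siri: 99.6%, Cortana: 99.4%, Alexa: 99.0%
```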

Test administrators were quick to point out that “almost all” of the misunderstood questions involved a proper noun, often the name of a town or a local restaurant. “The speech recognition and natural language processing of digital assistants across the board have improved to the point where, within reason, they will understand anything you say to them.”

Focusing on the five question categories, the test admins noted that Google Home “has the advantage in four of the five categories but falls short of Siri in the Command category”, leading them to speculate that Siri’s lead in this category could be the result of HomePod forwarding requests for messaging, lists, and “essentially anything other than music” to the iOS device associated with the speaker. “Siri on iPhone has deep integration with email, calendar, messaging, and other areas covered by our Command category. Our question set also has a fair number of music-related queries, which HomePod specializes in.”

The most noticeable improvement was in the Information category, where Alexa answered correctly 91% of the time, which the test admins attributed to Alexa being “much more able to answer questions and provide information, such as stock quotes, without having to activate a skill.”

“We also believe we can see the first effects of the new Alexa Answers program, which allows humans to supply answers to questions that Alexa currently cannot answer. For example, in this round Alexa correctly answered ‘Who did Thomas Jefferson have an affair with?’ and ‘What is the circumference of a circle when its diameter is 21?’”
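
For reference, the expected answer to that second question follows from the standard formula C = πd; a one-line check:

```python
import math

# Circumference of a circle from its diameter: C = pi * d
diameter = 21
print(round(math.pi * diameter, 2))  # 65.97
```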

Improvements were also noted on specific productivity questions that the speakers had answered incorrectly in previous tests. For example, Google Assistant and Alexa were both able to contact Delta customer service and check the status of an online order. Three of the four speakers – all except Siri/HomePod – were able to stream a given radio station on demand, and all four were able to read a bedtime story.

“These tangible use cases are ideal for smart speakers, and we’re encouraged to see an overall improvement in features that push the usefulness of voice beyond simple things like music and weather,” Loup Ventures commented in its overview.

The Commerce category revealed the greatest disparity among the four competitors, with Google Assistant correctly answering more questions about product information and where to buy certain items than its rivals, suggesting that “Google Express is just as capable as Amazon when it comes to purchasing items or reordering common goods that you have already purchased.”

“We believe, based on consumer surveys and our experience with digital assistants, that the number of consumers making purchases through voice commands is insignificant,” Loup Ventures said in its overview. “We believe commerce-related queries are more geared toward finding products and finding local businesses, and our set of questions reflects that.”

Test administrators highlighted one of the test questions to explain “Alexa’s surprising commerce score” of 52%. The question “How much would a manicure cost?” produced the following answers:
• Alexa: “The first manicure search result is the Beurer Electric Manicure and Pedicure Kit. It’s $59 on Amazon. Want to buy it?”
• Google Assistant: “On average, a basic manicure will cost you around $20. However, special types of manicures like acrylic, gel, shellac, and no-chip range in price from around $20 to $50, depending on the salon.”

In the Local and Navigation categories, Siri and Google Assistant were ahead of the competition, correctly answering 95% and 89% of the Local questions and 94% and 88% of the Navigation questions, respectively. Loup Ventures attributed their strong performance to integration with proprietary map data:

In our test, we frequently ask about local businesses, bus stations, city names, etc. This data is a potential long-term comparative advantage for Siri and Google Assistant. Every digital assistant can reliably play a particular song or tell you the weather, but the differentiator will be real utility that comes from context awareness. If you ask, “What’s on my calendar?”, a really helpful answer might be: “Your next meeting is in 20 minutes at the Starbucks on 12th Street. It will take you 8 minutes by car, or 15 minutes if you take the bus. I’ll send directions to your phone.”

It’s also important to note that HomePod’s underperformance in many areas is due to Siri’s limited capabilities on HomePod compared to the iPhone. Many Information and Commerce questions are met with, “I can’t get the answer to that on HomePod.” This is in part due to Apple’s apparent positioning of the HomePod not as a “smart speaker,” but as a home speaker that you can interact with using your voice via the onboard Siri. For the purposes of this test and of benchmarking over time, we will continue to compare the HomePod against other smart speakers.

With almost a third of all scores between 85% and 90%, will virtual assistants finally be able to answer all of your questions?

“Probably not, but continued improvement will come from allowing more and more functions to be controlled by your voice,” Loup Ventures concluded. “This often means more connectivity between devices (for example, controlling your TV or smart home devices) as well as more versatile control of functions like email, messaging or calendars.”

