Perhaps the biggest reveal at WWDC last week was the announcement of the HomePod, Apple’s long-awaited entry into the digital assistant market currently dominated by the Amazon Echo and Google Home. After all, Siri was essentially the first successful implementation of the idea. But while the Amazon Echo and Google Home are attempting to be one-stop shops for all your voice command needs, the HomePod has a very specific focus: revolutionizing the way you listen to music at home.
Above all else, the HomePod has been designed to sound great. With seven beamforming tweeters and a 4-inch high-excursion woofer, the requisite hardware is certainly in place to do just that. But what really sets the HomePod apart is its ability to customize audio output based on the physical space the device occupies. “HomePod,” Apple claims, “uses an advanced algorithm that continuously analyzes the music and dynamically tunes the low frequencies for smooth, distortion-free sound” (“HomePod”). Essentially, it’s able to scan the size and shape of the room it’s in and adjust the EQ and directionality of its output accordingly.
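To get a feel for what “beamforming” means here, consider the textbook delay-and-sum approach: feed every driver the same signal, but delay each one slightly so the combined wavefront is steered toward a chosen direction. The sketch below is a deliberately simplified model, a straight line of drivers rather than the HomePod’s ring of tweeters, and Apple hasn’t published its actual algorithm, so treat the function name and geometry as illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def steering_delays(n_drivers, spacing_m, angle_deg):
    """Per-driver delays (in seconds) that steer the main lobe of a
    linear speaker array toward angle_deg (0 = straight ahead).

    Driver i sits at position i * spacing_m along the array; delaying
    it by i * spacing_m * sin(angle) / c makes all the wavefronts add
    up in phase in the target direction.
    """
    theta = np.deg2rad(angle_deg)
    idx = np.arange(n_drivers)
    delays = idx * spacing_m * np.sin(theta) / SPEED_OF_SOUND
    return delays - delays.min()  # shift so no delay is negative

# Seven drivers spaced 3 cm apart, steered 30 degrees off-axis:
print(steering_delays(7, 0.03, 30))
```

In practice a product like this would combine many such steered beams (one per tweeter group) and update them as the room analysis changes, but the delay arithmetic above is the core of the technique.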
If I had to guess, I’d wager that this algorithm is based on an old audio engineering trick. Every room sounds different, with certain frequencies boosted or dampened depending on the materials of the walls and the placement of objects in the room. But in a recording studio, it’s important to get as neutral an output as possible, so that the recording doesn’t sound bad in the thousands of possible rooms it might be played in after your session is complete. So what is an engineer to do? Simple: play a piece of audio out of a speaker inside the studio, a piece the engineer knows like the back of their hand and that covers the full range of audible frequencies. By analyzing a recording of how that audio sounds in that particular room, the engineer can see precisely which frequencies are being affected by the shape of the room and by how much, and adjust the equipment accordingly. The HomePod is, in all likelihood, doing something similar all of the time: listening to how the music it’s playing actually sounds in the room and adjusting on the fly to make sure it’s heard precisely how it was meant to be heard.
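That measure-and-correct trick can be sketched in a few lines: compare the spectrum of a known reference signal with a recording of that signal played back in the room, and the ratio of the two is the room’s frequency response; inverting it gives per-frequency EQ gains. This is a toy simulation under my own assumptions (a synthetic “room” filter, magnitude-only correction, made-up function names), not Apple’s algorithm:

```python
import numpy as np

def measure_room_response(reference, recording):
    """Estimate the room's magnitude frequency response by comparing
    the spectrum of a known test signal with a recording of that
    signal as it sounded in the room."""
    ref = np.fft.rfft(reference)
    rec = np.fft.rfft(recording)
    return np.abs(rec) / np.maximum(np.abs(ref), 1e-12)

def eq_correction(response, limit_db=12.0):
    """Invert the measured response to get per-frequency EQ gains,
    clamped so the correction never exceeds +/- limit_db."""
    gain = 1.0 / np.maximum(response, 1e-12)
    lim = 10 ** (limit_db / 20)
    return np.clip(gain, 1 / lim, lim)

# --- demo: simulate a room that boosts bass and cuts treble ---
rng = np.random.default_rng(0)
reference = rng.standard_normal(4096)       # broadband test signal
spectrum = np.fft.rfft(reference)
room = np.linspace(2.0, 0.5, spectrum.size)  # +6 dB bass ... -6 dB treble
recording = np.fft.irfft(spectrum * room, n=len(reference))

measured = measure_room_response(reference, recording)
flat = measured * eq_correction(measured)
# applying the correction flattens the combined response toward unity
```

The studio engineer does this once with a familiar test track; the clever part of Apple’s claim is doing it continuously, using the music itself as the reference signal.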
The HomePod is controlled by a version of Siri – but, oddly, something of a stripped-down version of the one on your iPhone. Again, the main focus here is on the music, rather than on being a true digital assistant. It’s not going to have the deep web searching functionality of Siri on an iPhone, but it will be able to control Apple Music, suggest new songs based on your input, and even offer you trivia and information about what it is that you’re hearing. “Hey Siri, who’s the drummer in this song?” and “Hey Siri, play an album that was released exactly 20 years ago today” are both examples Apple has given for this functionality (“HomePod”).
So in an odd but interesting move, Apple hasn’t given us exactly the digital assistant that we were expecting from them. The HomePod is something else entirely. We’ll need to wait until its December release, however, to see whether it’s something the public actually wants.
“HomePod.” Apple, https://www.apple.com/homepod/.