Technology has advanced far enough that many of us have artificial intelligence in our homes at all times. For now these smart speakers only have microphones, but soon they may use "lidar" technology. Lidar sends out pulses and builds a picture of what's around it based on how long each pulse takes to bounce back. If something is really close to a speaker, the device can tell because the pulse it sends out returns almost instantly, in a matter of nanoseconds rather than seconds.
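The time-of-flight idea behind lidar comes down to simple arithmetic: the pulse travels out and back, so the one-way distance is half of speed times round-trip time. Here is a minimal sketch of that calculation; it is an illustration of the principle, not any vendor's actual API, and assumes the pulse travels at the speed of light.

```python
# Time-of-flight ranging: a pulse goes out and comes back,
# so distance = (speed * round_trip_time) / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Return the one-way distance in meters for a lidar pulse."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A pulse that returns after roughly 13.3 nanoseconds came from
# an object only about 2 meters away -- nowhere near "a few seconds".
print(round(distance_from_round_trip(13.34e-9), 2))
```

This is why the return times involved are nanoseconds: even an object across the room produces a round trip far shorter than anything a human could perceive.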
The main question for the past few years has been what the speakers, and the companies behind them, are going to do with this data. When you ask Alexa or Siri whether it is recording everything it hears, it always has a pat answer such as, "I only record when you use the wake word."
I think these companies are recording what they hear to better market their products. That would explain why, after you talk about a product with family or friends, you sometimes see ads for that very product later.
According to the Economist, "They listen out for wake words, and then send what follows to the cloud as an audio clip; when an answer arrives, in the form of another audio clip, they play it back. Putting all the smarts in the cloud means these speakers can be very cheap and acquire new skills as their cloud-based brains are continually upgraded." In some ways this is a good thing: making devices smaller, cheaper, and more capable is what brought us such advanced technology in the first place. But the main caveat is that putting everything in the cloud means your audio has to be sent to Amazon, Google, or Apple for the speaker to work at all, and those companies could be using that data to get business from you.

I recently created my own personal assistant, named after Iron Man's Jarvis, and every response was programmed locally on the bot: no need to contact a server. That said, I do understand why commercial assistants live in the cloud. For every minor change to my bot, I would have to update the files on every device it runs on. A big tech conglomerate would have to push an update to every device in the world and hope that people actually downloaded it, whereas updating the cloud brain happens once, for everyone.
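The author's Jarvis code isn't shown, but the fully local approach described above can be sketched in a few lines: every response lives on the device, so nothing is ever sent to a cloud service. The intent keywords and replies here are hypothetical stand-ins, not the author's actual bot.

```python
# A minimal local "assistant": all responses are hard-coded on the
# device itself, so no audio or text ever leaves the machine.
# Keywords and replies are illustrative examples only.
RESPONSES = {
    "hello": "Hello! How can I help?",
    "time": "Sorry, I only know canned answers in this sketch.",
    "weather": "I can't check the weather without a network connection.",
}

def respond(utterance: str) -> str:
    """Match the utterance against locally stored keywords."""
    for keyword, reply in RESPONSES.items():
        if keyword in utterance.lower():
            return reply
    return "I don't understand that yet."

print(respond("Hello, Jarvis"))
```

The trade-off the paragraph describes is visible here: adding a new skill means editing `RESPONSES` and redeploying the file to every device, whereas a cloud assistant gains the skill everywhere the moment its server is updated.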