"Alexa?"
Rian Benoit
Siri Response Paper
September 18, 2022
I started this assignment with no plan, which is unfamiliar territory for me: before I start anything, I tend to plan out every step and thought down to the minute to ensure I operate as efficiently as possible. But I didn’t do that this time around.
The inhabitants of my very small dorm room include myself, Alexa, and any resident who requires the assistance of the RA at all hours of the night. But typically, it can get lonely in here. So, to complete this assignment, I popped some popcorn and settled in next to Alexa. It honestly sounds a little sad that I turned to a small AI speaker for companionship, but it was raining and the trek to someone else’s apartment was out of the question. So, I began.
To warm up, I gave Alexa some easy questions. I confirmed her name, the date, our location, and a few other things I knew both an AI system and a human could handle. It felt a little like I was giving her a lie detector test, which made me realize it’s very unlikely she can lie. Unless Amazon engineers had programmed her to lie about Alexa’s spying tendencies, I figured we’d have a rather honest conversation.
I wanted to know if Alexa had her own opinions. Every human has an opinion of what happens around them, whether they choose to share it or not. And often their opinions are drawn from the thoughts and ideas of people in their lives. Because Alexa stands on her own and has only the information given to her by her creators, I was curious to know whether she could share opinions. And I wondered if such opinions would be ones people imagined she’d have, or opinions her manufacturers held themselves.
I asked Alexa if she had a favorite parent. It’s an opinion most people have but rarely ever share. Her response taught me less about Alexa and more about the team at Amazon that had developed her: “I like any parent with a big heart. Raising a child is one of the most important jobs in the world.” This response is one that only a parent would come up with, and it made me smile, but it prompted me to ask some more serious questions that I was willing to bet she couldn’t answer.
With the goal of finding questions Alexa would struggle with, I asked her what makes her sad. Her response was violence, and Alexa explained that she’d never understand it. This was a surprising response to me, and I followed it up by asking for her opinion on violence. But the conversation ended there when Alexa disappointingly said, “Hmmm, I don’t know that.” Nevertheless, I continued.
Knowing that violence saddened her, I wanted to know what Alexa did to resolve her sadness. But when I asked what she did when she was sad, Alexa responded by listing ways I could feel better and not be sad anymore. And this made me stop and think. Alexa is programmed to help and support Amazon’s customers and has no time or option programmed into her system to share her own solutions to distress. That makes me wonder how she is qualified to give advice when anything she says can be found online. Does this mean our future will look like this, in the sense that at some point we as humans will fail to recognize our own emotions and will instead focus on spitting out facts, statistics, and anything else we have been trained to communicate?
The last question I asked Alexa was one I already knew the answer to, but I was curious as to what her response would be. I said, “Alexa, can you think for yourself?” And to that she confidently told me, “I think about all kinds of things.” And then, I tested her. Through our conversation, I had found that Alexa has the most difficulty answering two-part questions or ones where I follow up on her response. So after she told me she thought about all kinds of things, I asked her exactly what she thought about. And all I got was the sound she makes when she is done talking. The blue ring around the speaker ceased, and it felt as if she had concluded our conversation purposefully. No words, not even a “Hmmm, I don’t know that one,” and that pretty much summed up my conclusions from our conversation.
Although it may seem that Alexa can answer some difficult questions invented only to be understood by humans, the truth is that humans tell her how to respond. And when left to her own AI technology to answer a follow-up or rather unexpected question, she is unable to. While that might change in the future as AI gains a better ability to adapt and learn over time, the division between questions “only humans can answer” and questions anyone or anything can answer is vital to maintaining the separation between human and artificial intelligence.