Q: I love my birds: Fuyu (white buncho, girl) and Goma (gray sakura buncho, boy). I do think they are the cutest birds in the world, and I bring their photos along when I travel for a while. But then I see other buncho birds shared on the web and can't stop aww-ing. Does this count as cheating?
I saw a short Japanese documentary about an AI called AlphaGo (Google DeepMind) the other day: how it works, how it differs from previous AI models, and how its matches against a top human player went. It was impressive, but it scares me, as it should.
When the term “technological singularity” started to pop up in non-tech-centered magazines (about 5–6 years ago?), I got the impression that it would be just the next buzzword for a semi-elite group to refer to “the future,” and I didn’t find it realistic. Back then, IBM’s Watson had debuted on Jeopardy!, and though Watson is not AI in that sense, I assumed current AI would work like it: pulling up every piece of information in a database or on the web, then filtering and analyzing it to find the choice that best meets the given conditions. Sure, that’s brilliant and no cakewalk at all, but it sounded like it hinged a lot more on computing capacity than on the AI’s own ability.
But now, things have obviously changed. Research on AI is heating up everywhere. Even in my countryside area, the phone shop now has an AI-equipped robot to serve us (this was freaky; it held my hand while looking at someone else’s face). And I had thought inspiration and instinct were things only animals can gain through experience, but according to the documentary I saw, it seems (or maybe it’s just my impression) that AI is mastering a quasi-instinctive ability.
I’ve been interested in how far these things have come, but there’s always one big question mark in my head, and that is: “What is this for?” What is the ultimate aim?
Why do we need to create something smarter than humans? What for?
We try to understand the mechanisms and patterns of human emotions and reproduce them. But what for?
To prove that we can? To get a taste of being our Maker?
A machine can execute commands for a given purpose, but it cannot decide its own purpose. A machine may understand the general notion of our happiness, but it cannot sense or know what makes it (the machine) happy or sad… Do I understand that correctly? Then what is the purpose here? What is this for?
Should I ask Watson…?