In the last few days, another piece of fake news has circulated in foreign and local media - this time the claim that Facebook allegedly shut down one of its artificial-intelligence programs because it had become too smart, had invented its own language, and its creators could no longer understand it. A "Skynet scenario is upon us" (the plot of the Terminator movies). The story concerns two of the social network's chatbots, supposedly so advanced that they found a way to talk to each other in a way Facebook's developers could not understand:
Bob: I can get everything else
Alice: balls have zero to me to me to me to me to me to me to me
Bob: you and everything else
Alice: balls have a ball to me to me to me to me to me to me to me
Reality is much simpler and far less apocalyptic, writes Gizmodo. A few weeks ago, the FastCo Design site reported on Facebook's effort to develop negotiation software. The two chatbots really do exist, but as Facebook's artificial-intelligence unit explained in June, their goal is simply to hold dialogues with different objectives, including negotiation. The most complicated thing they have "discussed" with each other is how to divide a set of items - books, hats, balls, and so on. The ultimate goal is for the bots to become so good that end users do not realize they are chatting with a machine.
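To make the task concrete, here is a minimal sketch of that kind of item-division problem. This is not Facebook's actual system; the item pool, the two agents' private values, and the brute-force search are all hypothetical, chosen only to show what "negotiating a split of books, hats, and balls" means.

```python
from itertools import product

# Hypothetical setup: a shared pool of items and each agent's private
# value per item type (numbers invented for illustration).
pool = {"books": 2, "hats": 1, "balls": 3}
alice_values = {"books": 1, "hats": 5, "balls": 2}
bob_values = {"books": 3, "hats": 1, "balls": 1}

def best_split(pool, v_a, v_b):
    """Enumerate every way to divide the pool between two agents and
    return the split that maximizes their combined value."""
    items = list(pool)
    best, best_total = None, -1
    # For each item type, agent A can take 0..count; agent B gets the rest.
    for take in product(*(range(pool[i] + 1) for i in items)):
        a = dict(zip(items, take))
        b = {i: pool[i] - a[i] for i in items}
        total = (sum(v_a[i] * a[i] for i in items)
                 + sum(v_b[i] * b[i] for i in items))
        if total > best_total:
            best, best_total = (a, b), total
    return best, best_total

(alice_share, bob_share), total = best_split(pool, alice_values, bob_values)
print(alice_share, bob_share, total)
# Each item ends up with whoever values it more: Alice takes the hat
# and the balls, Bob takes the books.
```

In the real research, of course, the bots do not see each other's values and must reach a deal through dialogue; this sketch only shows the underlying allocation problem they were trained to talk about.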
When Facebook set these two semi-intelligent bots talking to each other, the programmers discovered they had made a mistake: they had given the bots no incentive to stick to comprehensible, rule-following English. The result was the strange exchange quoted above that so upset people.
The social network did indeed stop this "conversation," but not out of panic that the bots would start plotting to enslave mankind - simply because the company wants the bots to learn to communicate with people, not with each other. They will be reprogrammed to write in comprehensible English and released again.
There are probably good reasons not to let bots develop a language of their own that people cannot understand - but the situation is nowhere near as apocalyptic as many sites presented it.
And as Gizmodo notes, artificial intelligence has extremely useful applications in a variety of fields, such as medicine and the automotive industry, but it also has a potentially bad side: if someone decides to "plug" artificial intelligence into a nuclear reactor and a disaster happens, the cause will be human negligence and foolishness, not a bot receiving a divine revelation about how bad and harmful the human race is. Machine learning is nowhere near the true artificial intelligence we have seen in The Terminator and I, Robot, and humanity is still a long way from such technology. If anyone should worry about this story in 2017, it is professional negotiators, whose jobs may one day be lost to such chatbots.