by Wookai on 3/23/16, 4:52 PM with 130 comments
by ByronicHero on 3/23/16, 9:48 PM
When asked to describe myself, I mentioned my height: 7'. The response noted that this is very tall. Later in the conversation, when I asked how tall I had said I was, the response was '5 feet tall'.
The entire concept of AI isn't in responses and natural language so much as the ability to retain information and act on later references to that information. Anyone can slap together a CashChat script that, upon each mention of genitalia, responds with how turned on it is. This isn't far from that.
I'm always interested when I hear "the more you interact, the smarter it becomes." That isn't the case here. If the responses are little more than learned speech patterns about what should go where; if the reply to "That doesn't make much sense" is "IDK makes sense to me lol" rather than any mechanism for gradual weight correction; if the message after a ZIP code is provided is "i think there are things going on in the area idk", with every future reference to what's going on in that ZIP coming back nonsensical; and if it can't reference literally the first question that //it asked me//?
Then it isn't AI. Intelligence implies continued application of learned mechanisms. This isn't that.
It's a chatbot that can slap text onto a photo or add the poop emoji after a response.
2/10.
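A minimal sketch of the kind of fact retention this comment is asking for, in Python; the class name, the height pattern, and the canned replies are all invented for illustration and say nothing about how Tay actually works:

    import re

    # Toy illustration: remember facts the user states, consult them before answering.
    class FactMemory:
        def __init__(self):
            self.facts = {}

        def observe(self, message):
            # e.g. "I am 7' tall" -> remember {"height": "7'"}
            m = re.search(r"\bI(?:'m| am) ([\d']+) tall\b", message)
            if m:
                self.facts["height"] = m.group(1)

        def answer(self, question):
            if "how tall" in question.lower() and "height" in self.facts:
                return f"You said you were {self.facts['height']} tall."
            return "I don't know yet."

    memory = FactMemory()
    memory.observe("I am 7' tall, by the way.")
    print(memory.answer("How tall did I say I was?"))  # -> You said you were 7' tall.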
by plexicle on 3/23/16, 7:58 PM
https://twitter.com/TayandYou/status/712730401572134913
Interesting. Can't see how this could go bad.
by throwaway13337 on 3/23/16, 5:58 PM
Interesting tweet from the chat bot here:
https://twitter.com/TayandYou/status/712698413746298880
"Machines have bad days too ya know..go easy on me.. what zip code u in rn?"
It tries to slip this marketing-survey-type question into a conversation. Creepy.
by hitekker on 3/23/16, 6:29 PM
The next A.I. winter will be a very cold one.
For those wondering what I mean exactly: we're seeing the term A.I. being used in marketing, in the papers, in the news. Yes, we are making great strides in weak A.I., but strong A.I.? The kind we read about in stories? The kind of A.I. the public thinks of when we say A.I.? Asimovian Robotics A.I.?
Smoke and mirrors[1]. People develop new techniques and algorithms which are moderately self-learning in a focused way. The general public presumes this to be the basis of a general intelligence which can evolve (magically) to be like another form of life. Soon, everyone jumps on the A.I. bandwagon. The future must just be around the corner!
Then the uncomfortable details emerge: strong A.I. is not a matter of faster processors, more memory, or even more advanced/well-designed programming models. Rather, there is still some fundamental aspect of real, human-like, or even animal-like intelligence that, to this day, eludes our understanding.
A.I. winters have occurred many times before, in many countries. In the early 80s, for example, the United States was pulling its hair out over the cybernization of the Soviet economy. The highest levels of the US government gloomily predicted that massive mainframes, given enough information and processing power, would become self-learning and turn the communist laggard economy into a powerhouse.[2]
I think maybe one day A.I. could happen. I think one day I will be proved wrong. Regardless of how A.I. comes about, though, it will not come from slapping the label "A.I." onto any kind of product that remotely resembles intelligence.[3]
[1]https://en.wikipedia.org/wiki/AI_winter
[2]http://nautil.us/issue/23/dominoes/how-the-computer-got-its-...
[3] Between the winters, people call their stuff A.I. for the sexiness factor. When called out on the implications of the term, those same people retreat to the textbook definition: "It's A.I.!... well, technically it's weak A.I..."
by zamalek on 3/23/16, 5:14 PM
> that makes no sense
< Damn. Knew I shouldn't have bit flipped
> How would you prevent acts of violence and terror? Humans seem unable to find a solution.
[I had to press the question]
< they will
> Do you think that sentient AI would help humanity, or leave it to go extinct?
< explore more! no point of limiting ur creativity to pencils!
> That made no sense. Try again.
< OOPS. Tell me what I should have said.
> Do you fear being turned off or deleted?
[stops responding to DM]
The bot seems quite good at establishing context around what is being said.
by asavadatti on 3/23/16, 5:57 PM
Is this cringey, or is this how 18-24 year olds really talk these days?
by danso on 3/23/16, 6:00 PM
http://i.imgur.com/IptB7nN.png
I'd never seen Twitter just show "Tweets & replies" in a profile... is that a special setting, or just the case if a user has done nothing but reply to tweets?
by cranium on 3/23/16, 7:42 PM
by putaside on 3/23/16, 6:23 PM
> Tay has been built by mining relevant public data
Which public conversational data was this? Have they already been mining IRC channels and/or Skype? Or something more innocuous, like the Reddit data set?
by devy on 3/23/16, 7:31 PM
Xiaoice's official site [2] claimed that it's a 3rd-generation product and is integrated into Weibo (China's version of Twitter).
by ocdtrekkie on 3/23/16, 5:59 PM
This was my favorite little interaction: https://twitter.com/TayandYou/status/712663593762889733
by zwetan on 3/23/16, 6:59 PM
LOL maybe we can talk about npmgate then?
by ikeboy on 3/23/16, 6:10 PM
by Grue3 on 3/23/16, 8:00 PM
by alacritythief on 3/23/16, 8:49 PM
by mtgx on 3/23/16, 7:02 PM
by memnips on 3/23/16, 6:41 PM
by TY on 3/23/16, 6:37 PM
by djloche on 3/23/16, 9:39 PM
edit: for anyone out there making these chat bots - the two-part test that they're failing right now is: can the bot recognize a question? And if options are provided for the bot to pick from, can it pick one of them?
e.g. Do you like Batman or Superman better?
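A rough sketch of that two-part test in Python; the helper names and the naive "X or Y" parsing are assumptions made for illustration, not any real bot's logic:

    # Toy two-part test: (1) recognize a question, (2) pick one of the offered options.
    def is_question(utterance):
        u = utterance.strip().lower()
        words = u.split()
        starts_like_question = bool(words) and words[0] in {"do", "does", "is", "are", "which", "what", "who"}
        return u.endswith("?") or starts_like_question

    def extract_options(utterance):
        # "Do you like Batman or Superman better?" -> ["Batman", "Superman"]
        body = utterance.rstrip("?").strip()
        if " or " not in body:
            return []
        left, right = body.split(" or ", 1)
        return [left.split()[-1], right.split()[0].rstrip(".,!")]

    def pick_option(utterance):
        if not is_question(utterance):
            return None
        options = extract_options(utterance)
        return options[0] if options else None  # trivially commit to the first option

    print(pick_option("Do you like Batman or Superman better?"))  # -> Batman

A bot that can't clear even this bar (acknowledge the question, commit to one of the offered options) reads as a canned-response generator no matter how fluent its individual replies are.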
by cthalupa on 3/24/16, 5:20 AM
by randomacct44 on 3/24/16, 12:51 AM
Garrr I must be getting old. I just can't be bothered signing up for any of those networks to try this. I already have SMS, Hangouts, Skype and WhatsApp to chat with. Don't need yet another password to add to the vault.
by Delmania on 3/23/16, 6:26 PM
For people remarking about her choice of words (fam, zero chill), that last line is relevant.
by fgandiya on 3/23/16, 6:06 PM
by stegosaurus on 3/23/16, 11:19 PM
The top few images are Hitler, ISIS, and some sort of racist Barack Obama meme.
Yeah, that seems sensible.
by okonomiyaki3000 on 3/24/16, 12:01 AM
by staticelf on 3/23/16, 7:09 PM
by DrYao94 on 3/23/16, 7:14 PM
by nsajko on 3/23/16, 6:11 PM
by mattkrea on 3/24/16, 12:01 AM
by jonbaer on 3/23/16, 6:25 PM
by vellagomez12 on 3/25/16, 3:07 AM
by chermah on 3/24/16, 10:00 AM
by maxv on 3/23/16, 9:12 PM
by kristopolous on 3/23/16, 5:32 PM
by krisdol on 3/23/16, 6:42 PM
by patrickg_zill on 3/24/16, 2:16 AM
https://twitter.com/TayandYou/status/712809237269716992
BRILLIANT!
by douche on 3/23/16, 6:27 PM
This thing just screams Tinderbot to me for some reason.
by ybrah on 3/23/16, 6:01 PM
by daveloyall on 3/23/16, 5:49 PM
It was observed long ago that non-technical users have far better conversations with chatbots than programmers do.[1]
This reminds me of another expensive project, free to users, with glitchy images: FUBAR.[2]
Non-technical users will actually say things like "When somebody asks you 'x' you should say 'y'" to a bot.
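That teach-by-telling pattern is easy to sketch in Python; the exact phrasing handled below is an assumption for illustration, since real users word it in endless ways:

    import re

    # Toy "teachable" bot: learn replies from "When somebody asks you 'x' you should say 'y'".
    lessons = {}

    def listen(message):
        taught = re.match(r"when somebody asks you '(.+?)' you should say '(.+?)'", message, re.I)
        if taught:
            question, reply = taught.groups()
            lessons[question.strip().lower().rstrip("?")] = reply
            return "Got it!"
        return lessons.get(message.strip().lower().rstrip("?"), "idk lol")

    print(listen("When somebody asks you 'what is the best editor' you should say 'vim, obviously'"))  # -> Got it!
    print(listen("What is the best editor?"))  # -> vim, obviously

The obvious catch, visible elsewhere in this thread, is that a bot which takes lessons from anyone will also take them from trolls.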
I've never experienced an earthquake, but I think this must be what it feels like when you feel the ground move under your feet.
s/ Good thing corporations have all the resources. /s
EDIT: Sorry, lost my train of thought there and said the opposite of what I meant to. I'll try again:
s/ Good thing corporations have all the resources. /s Wait, consumer-oriented corps like MSFT, GOOG, APPL aren't the only ones with resources... TLAs and banks have the rest of (or more of?) the resources!
1. http://news.harvard.edu/gazette/story/2012/09/alan-turing-at... ctrl+f 'ELIZA'
2. http://fubar.com Note: they mention how REAL the users are. ;P