What is the metric actually measuring? Where did 70% come from? The article says that only 30% of requests were automated but I find it hard to believe that the API aims for 100% automated responses for all requests. Heck, if the bot API managed to reduce human resourcing by 30%, I'd say that's more success than not.
Is this an article or an opinion piece? It reads very weirdly but I can't pinpoint it.
This article is pretty much vapor. theregister has been hitting the news pretty hard and is becoming one of the more popular non-mainstream sources, so they probably need some filler content?
30% in a year is solid, especially with the proliferation of tools for building bots making it more and more accessible to tinkerers and smaller orgs — i.e., use cases where complete automation may not be the primary goal.
30% in a year is "solid"? If Apple released something that failed 70% of the time on launch day they would never hear the end of it. The original Newton's handwriting recognition was way better than that.
Wow, both of those points are entirely out of context.
First, their bot API, which is the product in question, works 100% of the time. The failure rate mentioned has to do with conversations that require humans as opposed to being entirely AI driven. A better Apple analogy would be Siri: if Siri failed to even acknowledge a request 70% of the time, that would indeed be catastrophic. But Siri does often fail at providing relevant results, and while frustrating, that's entirely expected.
We could argue more, but as you and I have both noted, the main substance of the interview is not provided.
FB's third party access to the messenger platform (aka what you and the article are calling the bot API) is supposedly failing 70% of the time.
That number, if I were to guess where it came from, likely came from the fact that a chat bot app has to be submitted to Facebook for approval. That means FB has a running tally of how many chat bot accounts exist within their ecosystem.
Maybe that number is 100k.
Likewise, because messages are passed to FB to then be passed to the user, FB has a running tally of the sentiment analysis or even the blocking analysis (users have the option to block bot accounts).
From there they can ascertain that 70,000 bot accounts result in negative interactions, discontinued use, or the user blocking the bot.
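To make the guess above concrete, the arithmetic would look something like this. All the numbers are hypothetical, per the comment — nothing in the article confirms the 100k figure:

```python
# Sketch of the guessed origin of the 70% figure. Both counts are
# hypothetical illustrations, not numbers from Facebook.
total_bots = 100_000      # bots approved into the ecosystem ("maybe 100k")
negative_bots = 70_000    # bots with negative sentiment, drop-off, or blocks

failure_rate = negative_bots / total_bots
print(f"{failure_rate:.0%}")  # 70%
```

That would make the headline number a measure of how users receive the bots, not of whether the API itself works.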
This is the equivalent of Apple opening up third party access to the Siri platform and seeing that developers and users don't like the 70% of the ways to interact with Siri. Or Amazon saying that users don't like 70% of the Alexa Skills available.
Newton's handwriting recognition worked "100%" of the time by that measure. I assume we are comparing quality of outcomes not whether the API accepts the arguments it says it does and produces data in the format it promises.
I may be missing some context, but I think OP's point was that if you have a task that requires 100% human intervention, and you automate 30% of that, it's potentially a very big positive. What % is a good number depends completely on the context — 10% may be best-in-class and drive you to profitability, or it may be 100% or bust. Personally, I was unable to discern the context from the article.
I'm surprised there are so many theregister articles on the front page these days; it's not exactly the most reliable source of information and most of its stuff is clickbait.
I seem to remember certain websites automatically get a lower weight on HN when they get upvoted a lot - shouldn't this be applied to theregister too?
I'm pleasantly surprised that Facebook openly admits this project is a complete failure. Almost daily, I hear whitewashed NLP results from other companies.
What would a smashing success look like to Facebook?
Tinfoil hat on: a smashing success for them is controlling everyone's online life.
See: the Whatsapp acquisition - burning a double digit number of billions on a profitable platform just to remove their one source of income and USP so as to get even more juicy metadata and eyeballs.
(Yes, I was a huge fan of Whatsapp. Yes, so much that I expected them to manage to stay true to their ideals after the acquisition.
Yes, I tried to believe Facebook actually just wanted a part in what Whatsapp was about to become.)
NLP is a distraction. Computers won't be able to do NLP until they understand the real world. Parsing a sentence about snow without knowing what snow is will not be useful for a computer or a human.
Natural Language Processing is a distraction? That's quite the statement, considering the recent advances that have been made. I mean, you don't see the value in a computer being able to perceive meaning when you say: "Siri, please dim the lights 40%"?
A computer doesn't need to know how you feel about snow, to tell you that it's snowing outside, and you'll probably want to remember snow chains.
You sound like you think you're disagreeing with rand_r, but all your text is in support of rand_r's point. You use one example that is basically solvable with regexes and in your second paragraph you appear to strongly agree that NLP isn't important at all.
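To illustrate the "solvable with regexes" point: the lights example above doesn't need any understanding of language, just pattern matching. This is a toy sketch, not a real intent parser — the command phrasing and function name are made up for illustration:

```python
import re

# Hypothetical intent parser for commands like "dim the lights 40%".
# A plain regex captures the percentage; no NLP involved.
PATTERN = re.compile(
    r"dim the lights(?:\s+(?:to|by))?\s+(?P<pct>\d{1,3})\s*%",
    re.IGNORECASE,
)

def parse_dim_command(utterance: str):
    """Return the requested brightness percentage, or None if no match."""
    m = PATTERN.search(utterance)
    if not m:
        return None
    pct = int(m.group("pct"))
    return pct if 0 <= pct <= 100 else None

print(parse_dim_command("Siri, please dim the lights 40%"))  # 40
print(parse_dim_command("what's the weather like?"))         # None
```

Anything this shallow works only for a fixed command grammar, which is arguably the parent's point: the interesting cases are the ones a regex can't handle.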
I'm not saying vector spaces are a rich enough structure, but as an example...
When we place the word "snow" into a vector space of words (along with the other words), we're saying something about what that word means relative to the other words, with those relationships stemming from the experience of people in the real world. "Snow" is similar to "ice" along these dimensions and is similar to "powder" along those dimensions. Ideally, we want something like "snow = ice - solid + powder". All the possible equations for snow in the embedding give you, essentially, the different ways in which people understand snow.
The utility of these structures for NLP is fundamentally that they embed something of the understanding people have for the real world in their embedding of the words. So NLP structures must capture something of "understanding" if they're genuinely useful for NLP.
So I agree with the person I was replying to in that you can't have NLP without understanding, because NLP is fundamentally about understanding, but I think they're implying a higher level of understanding than is actually required for most uses of NLP and, moreover, that it has to be an independent understanding rather than a secondary, derived one.
This is unsupervised learning of word embeddings that shows the "topics" related to the word "snow" and how closely they are related. It shows pretty reasonable understanding of the world - certainly enough to build useful things with it.
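The "snow = ice - solid + powder" idea above is just vector arithmetic. Here's a minimal sketch with invented 4-dimensional toy vectors — real embeddings (word2vec, GloVe) are learned from corpora and have hundreds of dimensions, so both the vocabulary and the numbers below are illustrative assumptions:

```python
import numpy as np

# Toy embeddings invented for illustration. Dimensions loosely stand for:
# [cold, solid, granular, wet]
vocab = {
    "snow":   np.array([0.9, 0.3, 0.8, 0.6]),
    "ice":    np.array([0.9, 0.9, 0.1, 0.5]),
    "solid":  np.array([0.1, 0.9, 0.0, 0.0]),
    "powder": np.array([0.1, 0.2, 0.9, 0.1]),
    "lava":   np.array([0.0, 0.2, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The analogy "snow ≈ ice - solid + powder" as vector arithmetic:
target = vocab["ice"] - vocab["solid"] + vocab["powder"]

# Nearest word to the result, excluding the operands themselves:
candidates = {w: cosine(target, v) for w, v in vocab.items()
              if w not in ("ice", "solid", "powder")}
best = max(candidates, key=candidates.get)
print(best)  # snow
```

The interesting claim is not that this toy works, but that vectors learned purely from text co-occurrence end up supporting the same arithmetic.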
WHAT? It's exactly the opposite: any Google service that accumulates enough data uses machine learning these days. For god's sake, they even use deep learning to cut the electricity bill for their data centers! Search, recommendations, knowledge graph, ads, photos, translate — even YouTube thumbnails have some parts powered by ML.
As late as 2008, Google said they didn't use any machine learning in Search. They didn't trust it. Instead, they had an army of people manually writing search heuristics.
Interestingly, the general perception I've had for a while now is that Google's search results have been getting slowly, gradually worse over the years. It seems like it works harder now to funnel you into some predefined concept of what it thinks you're looking for, rather than returning the data and letting you filter it yourself.
That's great for very common searches, but makes it much harder to find anything that Google doesn't expect (or want) you to look for.
Granted, I've got no data to back that up. I'm probably in the minority of users here, but I'm pretty sure I'm not the only one.
On the other hand, I've found that for 98% of searches Google gives me exactly what I want as the first result, and for the other 2% Google makes some assumptions about my search and gives me garbage. Overall I'm very happy with it, and it's definitely better for me than it used to be. It's interesting to see how different users can have very different perceptions of the same product.
Eight years ago, deep learning wasn't even a thing as a concept! If someone had told me we would push image classification and speech recognition to human level or beyond, and make machine translation so much more usable, I would have called them nuts.
In the ML landscape, DL has fundamentally changed the expectation of what algorithms can achieve.
I'm just impressed they were able to get so much without ML. And the fact they really distrusted ML for giving weird bugs and edge cases. Even back then, everyone just assumed they were using ML, and in fact it was just the opposite. I assume most of the core systems of Search are the same today, even if they may have added some ML to it now.
I tested FB messenger bots from different companies... They are very intrusive. They need to limit the messages somehow, and also make a clear distinction between messages from bots and from friends, like Gmail does with promotional messages.
Definitely an incentive problem there. The bot must do "something" to justify its existence and attract attention to the fact it is useful, while simultaneously not being so annoying that it is, well, annoying. I'm not sure even a human could hit that window consistently, especially as it's probably a window of negative width (i.e., outright impossible) for a lot of users.
If someone from Facebook is listening, allowing bots in Group Conversations will enable other companies to innovate in this space. For example, you could replicate Slack for a small team with Messenger and Group Chat bots. Right now, there is very little you could do with their Messenger API.
Totally agree with this. I'm not sure why Facebook hasn't implemented group chat for bots yet. It must be on their roadmap. Maybe they are choosing to focus on 1-1 interaction right now, because 1-N interaction will require much different UX and code. Some 1-1 features have no obvious analogue in 1-N interaction. For example, when a messenger bot sends you a "menu" with options, should the bot send the same menu to everyone in the group? Should only the person who triggered the bot in the group be able to interact with the menu?
These are answerable questions, but there are a lot of them and I can see why Facebook would choose to focus on 1-1 interactions first (especially since those are more likely to become paying interactions).
Personally I would love group bots. They can enable a lot of fun interactions, and it's certainly more fun to "play with" a novel bot with your friends than by yourself.
As an end-user, how would I even come in contact with the AI on a daily basis? I have messenger for talking to real-world people I already know. What's the incentive to talk to a bot?