It asked me to draw a tree. I drew a palm tree. It showed "palm tree" at the bottom, but then said I had failed.
I drew the palm tree because I've studied AI and that's a classic AI mistake.
If you go to Hawaii and ask students to draw a tree, almost all of them will draw a palm tree. Ask them to draw a bird and it looks like a parrot (instead of the robin you see typically in the "lower 48").
It's interesting that this seems to suffer from the same selection bias.
It seems biased. When it asked me to draw the moon, I drew a circle with a smaller, crater-shaped circle inside it, and it immediately guessed "the moon". Later, when it asked me to draw a cookie, I drew the exact same shape (a circle with a smaller circle inside) and it immediately guessed "cookie", not "moon". What's going on?
The classifier returns a set of confidence scores, so the shape you drew probably scored high for both "cookie" and "moon". (In fact, if you draw it and then click on it in the endgame screen, you can see some details of what it saw in the image.)
We don't hint to the classifier what your task is; that way we avoid biasing it toward the prompt.
Jonas (developer behind quick draw)
So it's not really "guessing", it's just seeing if my drawing matches a predetermined bank of answers? Makes sense, since one time it asked me to draw a "police car" and before I had even finished drawing the chassis of a normal car it had already guessed "police car" and moved on.
It's probably assigning a confidence rating to a number of possibilities, and as soon as the rating for the thing it asked you to draw exceeds some threshold, it announces it.
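A rough sketch of what that logic might look like (the names and the threshold value here are my guesses, not the actual Quick Draw code):

```javascript
// Hypothetical threshold logic: declare success as soon as the target
// category's confidence clears a cutoff, even mid-drawing.
const THRESHOLD = 0.8; // made-up value

// `scores` is a map of category name -> classifier confidence.
function recognize(scores, target) {
  // Rank all categories by confidence, best first.
  const ranked = Object.entries(scores).sort((a, b) => b[1] - a[1]);
  // If the prompted category clears the threshold, call it a success,
  // even when another category (moon vs. cookie) scores nearly as high.
  if (scores[target] >= THRESHOLD) {
    return { guess: target, success: true };
  }
  // Otherwise report the best-scoring category as the "guess".
  return { guess: ranked[0][0], success: false };
}

// A circle with a smaller circle inside could plausibly score high for both:
console.log(recognize({ moon: 0.86, cookie: 0.91 }, "cookie"));
// → { guess: 'cookie', success: true }
```

This would also explain the "police car" behavior: as soon as the half-finished chassis pushed "police car" over the cutoff, it moved on.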
What is also impressive is that it doesn't feel like a complete black box. It takes you to a page after you're done drawing and tells you it saw something else in your drawings, with illustrations of why it thought of other objects. It also gives you a list of drawings by other people that it used to learn about the object.
Interesting. Quick Draw is similar to a game I made called Drawception (https://drawception.com), which is basically the telephone game meets Pictionary, with a 10-minute drawing limit, played with random players.
I've often wondered at what point an AI would be able to play the game in a convincing way. Looks like things are getting closer!
I am shameless and evil. Every time it asks me to "draw" something, I just "draw" the letters of the words it uses to describe the thing it wants me to draw. The poor network is always very confused by that.
It's so interesting to see how these random Google websites differ. For instance, this website was made with Bootstrap and jQuery, which is already a weird choice given the internal tools they have at their disposal.
Weirder still, the grid of videos doesn't use the Bootstrap grid at all. The elements are set to display: inline-block and then their width (and height, which we'll get to) is adjusted every time the window resizes using JavaScript.
This is presumably to maintain square blocks, because that's the design they've opted for, and grid systems don't give you much control over the height of grid cells.
But using JavaScript to try and ensure squareness of grid cells is totally unnecessary. You just need CSS, as I shall demonstrate:
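Something like this (a minimal sketch using the padding-top trick; the class names are mine, not the site's):

```css
/* Square grid cells with no JavaScript. Percentage padding is resolved
   against the containing block's WIDTH, so padding-top: 100% makes each
   cell exactly as tall as it is wide. */
.grid {
  font-size: 0; /* suppress inline-block whitespace gaps */
}
.cell {
  display: inline-block;
  position: relative;
  width: 25%; /* four cells per row */
}
.cell::before {
  content: "";
  display: block;
  padding-top: 100%; /* height == width, hence a square */
}
.cell > .content {
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
}
```

To change how many cells fit per row at different viewport sizes, you just adjust `width` inside media queries; no resize handler is needed.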
Sometimes JavaScript is the best tool, especially in terms of accessibility; in this case it adds nothing but an expensive event handler. The resize event is really an awful way of achieving responsive web design. Media queries are the better option in 99% of cases.
* * *
Returning to the original thought, Google seems to have very different teams working here and there on their various marketing websites.
If you look at gv.com, their site also uses jQuery (with Slick and Velocity plugins).
If you look at duo.google.com and allo.google.com, they're Angular sites, which is what you'd expect from Google. A lot of their websites are based on Angular; it's a framework they're invested in (along with Polymer).
More recently, some of their marketing sites are being made with MDL. Usually small, less significant ones, not for apps but for random initiatives and projects that few people are going to look at. Which seems rather telling.
Polymer is in use, but seems to be reserved for applications like YouTube Gaming or Play Music. I think the Google PDF Reader is Polymer-based as well. That makes sense: Polymer is barely supported in browsers other than Chrome without a hefty bunch of polyfills.
There's also the Closure JavaScript libraries/tools, which Google used to use a lot for things like Gmail (blog.google is the most recent instance, I think).
For some reason, I find it odd that they don't have a unified internal toolkit for this sort of work. Not that I'm criticizing; I'm no critic of pragmatism. I'm just surprised.
I wonder though: does this indicate that these sites were outsourced to an agency?
We created our own dataset for this experiment based on internal data collection. It's currently a rather small dataset (some categories only have a handful of samples), but it works anyway. Jonas (developer behind quickdraw)
Why didn't you use one of the existing sketch datasets? Was it because of license issues?
Do you think that the accuracy would be the same if you had a bigger dataset?