Well, that question only makes sense for connected graphs, and we don't really know whether this one is connected. So a version of the question that does make sense in general is: among all the connected components of the Wikipedia graph, which has the largest diameter?
This is an interesting question that I'd like to answer now that I have all the data. I am curious to see how long it will take to find a solution as I believe even the most efficient algorithms for this have a high runtime complexity.
And yes, the graph is not connected (there are both nodes with no outgoing links and with no incoming links), but over 99% of the pages are connected, so the answer would still be interesting and worthwhile.
Another interesting question would be what the largest graphs are that are disconnected from the "main" one. Are there any larger than 1 or 2 nodes? Is there some set of pages on some obscure topic that only link within themselves?
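Enumerating those stray components is straightforward in principle: run a graph traversal from every unvisited node and group what you reach. A minimal sketch (toy adjacency list, links treated as undirected; not Wikipedia-specific):

```python
from collections import deque

def components(adj):
    """Enumerate connected components of an undirected graph
    given as {node: set(neighbors)}, largest first."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return sorted(comps, key=len, reverse=True)

# Toy graph: one 3-node component plus an isolated pair
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4}}
print([len(c) for c in components(adj)])  # → [3, 2]
```

Anything after the giant component in that sorted list would be exactly the "obscure islands" being asked about.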
Floyd-Warshall solves all-pairs shortest paths in O(V^3) time. Alternatively, running a BFS rooted at every node finds all-pairs shortest paths in an unweighted graph in O(V*(V + E)).
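The BFS approach amounts to computing each node's eccentricity (its longest shortest path) and taking the maximum. A minimal sketch on a toy graph:

```python
from collections import deque

def eccentricity(adj, src):
    """Longest shortest-path distance from src (BFS, unweighted)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(adj):
    """Exact diameter of a connected graph: one BFS per node, O(V*(V+E))."""
    return max(eccentricity(adj, v) for v in adj)

# Path graph 0-1-2-3: diameter is 3
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(diameter(path))  # → 3
```

The per-node BFS calls are independent, which is what makes the parallelization mentioned downthread trivial.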
Hence, since there's a whole lot of Vs in the wikipedia graph, he's probably going to be satisfied with an approximate solution, unless he has a lot of CPU hours to spare.
There are 5,579,252 articles in English Wikipedia right now, which means the largest component has 5 million and change vertices. Each individual BFS should take a few CPU-seconds at most, and you can trivially parallelize the per-node BFS runs across as many CPUs as you have.
I'm probably missing something obvious here, maybe someone can explain the following to me.
- Their approach is a composition of 2 steps, what they call "stylization" and "smoothing".
- Top left of 2nd page they claim: "Both of the steps have closed-form solutions"
- Equations 5 is the closed form solution for the "smoothing" step.
My question: Where's the closed-form solution for the stylization step that they're claiming?
Are they calling equation 3 a closed-form expression? In that case the title and the claim in the introduction are rather misleading, because computing 3 requires you to train autoencoders.
You don't train it for every image; in this sense, a neural network often is a "closed-form solution": it gives you an equation, admittedly a very convoluted one, that can be used to obtain its output (admittedly usually an approximation) in a finite amount of time. The normal solution to this problem (according to the paper) is an iterative technique "to solve an optimization problem of matching the Gram matrices of deep features extracted from the content and style photos", whereas this one is simply two passes: stylization and smoothing.
Previous stylisation was slow because it needed SGD optimisation for each image to be stylised. This uses a NN trained once. When you've trained a NN it is precisely a closed-form solution, in the style y = max(0, 3x + 4). However, they are normally a little longer to write down :P
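To make that concrete with a deliberately tiny, hypothetical example: once training has frozen the weights, evaluating the "network" is just plugging into a fixed expression, exactly in the spirit of y = max(0, 3x + 4):

```python
def relu(z):
    # ReLU nonlinearity: max(0, z)
    return max(0.0, z)

def trained_net(x, w=3.0, b=4.0):
    # Weights w, b are fixed after training; evaluation is a single
    # forward pass, not an iterative optimization per input.
    return relu(w * x + b)

print(trained_net(2.0))   # → 10.0
print(trained_net(-5.0))  # → 0.0
```

A real stylization network is a much deeper composition of such expressions, but the point stands: per-image inference is one evaluation, not an optimization loop.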
Ah okay right this is the answer. Previous approaches [1] are deep generative models that you have to optimize for each input, whereas here you run just a forward evaluation on a model that you've trained beforehand.
I would still argue the term closed-form is misleading here, because:
- Even during training, at any given time you can read off a "closed-form expression" of a neural network of this type, so closed-form in this broad sense really doesn't mean much. Furthermore, the result of any numerical computation ever is also a closed-form solution by this standard, on the grounds that it results from a computation that completed in a finite number of steps. So really, whenever you ask a grad student to run some numerical simulation, expect them to come back saying "Hey, I found a closed-form expression!"
- The reason the above is absurd is that these trained NNs aren't really solutions to the optimization problem, but approximations. So this is really saying: I have a problem, I don't know how to solve it, but I can produce an infinite sequence of approximations. Now I'm going to truncate this sequence of approximations and call the result a closed-form solution.
The high-school-math analogy would be summing an infinite series that doesn't converge, deciding to just add the terms up to some large N instead, and calling that a closed-form solution.
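For instance (my own illustration, not from the paper): the harmonic series diverges, yet any truncation at finite N yields a perfectly concrete number, which is nonetheless not "the sum":

```python
def partial_sum(N):
    # Truncated harmonic series: sum of 1/n for n = 1..N.
    # Diverges like log(N) as N grows, so no finite truncation
    # is "the answer" -- it's just where you chose to stop.
    return sum(1.0 / n for n in range(1, N + 1))

print(partial_sum(1_000))      # ~7.49
print(partial_sum(1_000_000))  # ~14.39, still growing
```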
Actually, I agree with you. Initially you seemed to object to the term "closed form"; this now highlights the more pertinent point - these models are 100% closed form, but 0% "solution" in the formal sense.
Someone correct me if I'm wrong, but I believe this refers to the fact that it can be expressed in terms of certain simple mathematical operations like addition, subtraction, multiplication, powers, roots etc.—and as a consequence, the execution is very efficient. My understanding is that 'closed form' solution is essentially something that resembles a polynomial (again, accepting corrections!).
Closed form just means you can do it in a finite number of operations. So just "run X" rather than the previous versions of this kind of thing which are "repeat X until measure Y is lower than the limit I care about". (my basic understanding)
I checked the Wikipedia article, and the sorts of operations involved do appear to be a part of the definition: https://en.wikipedia.org/wiki/Closed-form_expression —though it sounds like it's a somewhat loosely defined term.
I really don't think that it's fair to call neural networks closed-form solutions. The term immediately makes me assume that it enabled you to bypass the training stage altogether.
The thing is, no one ever uses one of these palettes as-is. You start off with one, maybe as a base, and then adjust everything to get stuff just right. The Coolors UI seems to be good at that.
I love Coolors. Just hit the space bar to cycle. Once I see a color I like, I lock that color and continue hitting space bar and locking colors till I have a full palette :-)
This was very neat. What I'm about to say should not detract from that – this is probably one of the better palette tools I have used, if not the best.
But why on Earth would you want a user account and cloud storage for 15 bytes of colour values?
Presumably the colors have been purposely picked to have really bad contrast, look identical to colorblind users, or create some other sort of usability difficulty.
I like colourlovers.com as well. Feels more organic, less engineered. And those palettes are easier to use, since they are reduced down to a theme, rather than having a 20 color palette with no overarching theme.
That is just me speaking as a non-designer. It's much easier for me to, say, go in and find a theme for Cinco de Mayo on Color Lovers than to try to infer one from these flat palettes.
I think it's obvious that it doesn't offer anything different other than being another option besides those sites. CL, for example, has palettes that aren't as easily copied and that for the most part are used a lot inside their own system.
Adding to the other reply pointing out that this is quite fundamental work: the point of these articles isn't really to announce new breakthroughs that happened last week, but rather to sketch major research themes that have been going on for the past couple of decades but received little to no media coverage because they're highly technical.
I like this because it describes fully and precisely the structure of an LSTM cell, in a way which mostly avoids ML/stats jargon.
From this article I can correctly and easily read off the model architecture it describes, as a composition of smooth maps on modules over the real numbers, which is more than what one can say about a lot of papers.
I don't want to doubt the potential usefulness of the linked article to certain audiences, but I wanted to point out the following:
It's important to acknowledge that intuitions from rotations etc. are crutches, not substitutes for deeply understanding linear-algebraic constructions and their formal properties.
Linear algebra is not really about the arithmetic of multiplying arrays of numbers - it's about the nice algebraic things that happen when you're working with objects that come with "linear" operations. The linear-algebraic things that permeate all of mathematics aren't "rotation matrices" and such, but rather "universal constructions" like kernels and cokernels, products and tensor products, etc. We even have abstractions that precisely formalize these nice properties of the category of modules/vector spaces, such as abelian categories and linear functors. Any introductory reference on these will provide you with an abundance of examples where things with these formal properties naturally arise.
A related remark: there is another (rather silly) way to "geometrize linear algebra". It goes something like whenever you see a ring R, think of it as the sheaf of functions (the structure sheaf) on some space. Whenever you see a module over R, think of it as the space of sections in some vector bundle over that space. Then anything you say about modules through this admits a sheaf-theoretic version, which is often easier to picture if you're geometrically inclined.
In the case where the base ring R is a field (all of typical linear algebra), then the space associated to it is a point. So a vector bundle over this point is just a vector space, so we didn't really gain any extra geometry here - if you want a way of picturing linear algebra geometrically, you still need some other idea.
Would you suggest a set of intermediary-level books on the topic?
For example: what would be beneficial to read if I understand that matrices are systems of linear equations, but I do not clearly understand why a cross product exists only for 3 dimensions?
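(For concreteness, here's the 3-D cross product written out by components, as a pure-Python sketch; the special thing about dimension 3 is that this bilinear, anticommutative operation producing a vector orthogonal to both inputs exists there, and among higher dimensions only in dimension 7.)

```python
def cross(a, b):
    # Component formula for the 3-D cross product.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

u, v = (1, 2, 3), (4, 5, 6)
w = cross(u, v)
print(w)                      # → (-3, 6, -3)
print(dot(w, u), dot(w, v))   # → 0 0  (orthogonal to both inputs)
```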
It runs 24/7 alongside the "OK Google" detection, and as soon as it detects a song, it'll show the name on the lockscreen. So normally, all you have to do is look at your phone to get the song name [0]. This is also done 100% locally: there's an offline database of 10k+ songs which is only ~50 MB [1], and it gets updated regularly.
Siri has been able to identify music (I believe using Shazam?) for 3 - 4 years. I suspect they were shopping themselves around, and Apple wanted to own the technology enough to give them their asking price.
The technology Google is using here is very different. They also have active detection in the Assistant, but this one is passive 24/7 detection. There's a local database of the ~10k most popular songs, and it's only ~50 MB [0]. It's all done locally, and when it works (which is very often), you just have to look down at your phone and the song name is already there on the lockscreen without you touching anything.
As mentioned above, this is already running to do OK Google detection. It's also not scanning 100% of the audio; as far as I understand, it analyzes a sample roughly once a minute. I'm sure they would not ship this as a feature if it had a big impact. From my own experience, turning this feature on or off made no difference in battery life.
I haven't been able to determine if this is a deliberate plot or just incompetence, but Siri's language parsing for the Shazam music identification became maddeningly limited in some recent iOS update. The only phrasing I can trigger it with now is "What's this song?". "Name this song", "What song is this", etc. all give me some Marx-brothers-esque response like "Sorry, I couldn't find the song What in your library."
Oh, and the only thing you can do with the result is tap it to be bumped to Apple Music, where (if you have a subscription and the song is available), the song starts playing. Hey Apple, when in the history of ever has someone wanted to start playing the song they are already listening to? At least let me copy the damn name. Siri is a dumpster fire.
Google Assistant has also been able to do that for ages.
The Pixel does something very different: it automatically tries to identify what is playing right now and writes the name of the song on the lockscreen.
It might seem to be gimmicky but I have found myself really liking this feature.
Gracenote (the people who bought and closed CDDB) offers music identification to anyone who wants to license it. I remember Sony Ericsson feature phones all had this feature powered by Gracenote. I would assume that's what Apple is using.
Is it the one that "google now" uses, or "google assistant?" Are they the same? I don't even know with google anymore man I just can't keep track of this shit.
Google assistant has a music identification feature now (up until recently it did not).
Now is still there, but the search feature that used to be on the launcher is just a web search now (can't do anything fancy with it anymore, that's reserved for Assistant).
Pixel 2 has a music identifier that is always listening and will place what song is playing on your lock screen. It does this with a local database and does not send audio data to Google.
> It does this with a local database and does not send audio data to Google.
Do you have details on this? I'm honestly quite surprised; I'd imagine the fingerprints to enough songs for that feature to be usable would take up quite a bit of space.
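Back-of-envelope using the numbers quoted upthread (~10k songs in a ~50 MB database), the budget per song is actually quite roomy for a fingerprint:

```python
# ~50 MB database / ~10k songs -> per-song fingerprint budget.
db_bytes = 50 * 1024 * 1024
songs = 10_000
per_song_kb = db_bytes / songs / 1024
print(f"~{per_song_kb:.1f} KB per song fingerprint")  # → ~5.1 KB per song fingerprint
```

A few KB per track is plausible for a compact audio fingerprint, which would explain how the catalog stays small enough to ship on-device.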
The one on Pixel 2 is done in the background on device, but it has a small catalog of titles. You can explicitly ask the Assistant for all the other songs.
The following is of course a comment on the quote, not on your comment, but no, Kochen-Specker is not a generalization of EPR to 3 particles. It's easy to come up with a generalization of EPR to 3 particles: just exhibit a 3-party entangled state! (e.g. the GHZ state). The K-S theorem exhibits something a lot more subtle than entanglement, known as contextuality. There are examples of systems which have the K-S property but don't have any entanglement.
Those problems are normally Pandoc parsing errors. Considering it's open source, perhaps we should print the error message so people can actually help fix it...
The MathJax failures are either things that MathJax doesn't support, or use of \DeclareMathOperator which we haven't added support for yet.
This is great. It's essentially asking you to work in a 2-category[1], which, once you get used to it, is an amazingly intuitive way to make sense of constructions in ordinary 1-categories.