
There was a study recently that (in a mouse model) linked residual soap from baby wipes to food allergy development. https://news.northwestern.edu/stories/2018/april/food-allerg...

For a large fraction of probability theory, you only need two main facts from linear algebra.

First, linear transforms map spheres to ellipsoids. The axes of the ellipsoid lie along the eigenvectors (for symmetric transforms; in general, the left singular vectors), and the axis lengths are the corresponding singular values.

Second, linear transforms map (hyper)cubes to parallelepipeds. If you start with a unit cube, the volume of the parallelepiped is the (absolute value of the) determinant of the transform.

That more or less covers covariances, PCA, and change of variables. Whenever I try to understand or re-derive a fact in probability, I almost always end up back at one or the other fact.

They're also useful in multivariate calculus, which is really just stitched-together linear algebra.
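Both facts are easy to check numerically. The sketch below (my own illustration, not from the comment) verifies that a 2x2 map sends the unit circle to an ellipse whose radii are bounded by the singular values, and that the unit square's image has area |det A|, which also equals the product of the singular values:

```python
import numpy as np

# An arbitrary (non-symmetric) linear transform for illustration.
A = np.array([[2.0, 1.0],
              [0.0, 1.5]])

# Fact 1: unit circle -> ellipse. The SVD gives the ellipse's axis
# directions (columns of U) and axis lengths (singular values s).
U, s, Vt = np.linalg.svd(A)
theta = np.linspace(0, 2 * np.pi, 400)
circle = np.stack([np.cos(theta), np.sin(theta)])   # points on unit circle
ellipse = A @ circle
radii = np.linalg.norm(ellipse, axis=0)
# Every image point lies between the smallest and largest singular value.
assert s.min() - 1e-9 <= radii.min()
assert radii.max() <= s.max() + 1e-9

# Fact 2: unit square -> parallelogram with area |det A|,
# which equals the product of the singular values.
print(abs(np.linalg.det(A)), s.prod())  # both 3.0
```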


You might enjoy the podcast 'How Did This Get Made'. Three comedians rip on the best of the worst. They have great chemistry, and Jason Mantzoukas is probably the funniest person on the planet.

These employees are people who are pulled straight from college, given salaries higher than most of their contemporaries ever dreamed of earning, and are told they are special and are changing the world. Why wouldn't they be blindly loyal to the sociopathic machine they helped create?

thanks! cronometer founder here :-) my indie hackers interview for anyone that cares: https://www.indiehackers.com/interview/03874047f2

I'd love to see that datasheet.

"To use the PLL (Planck-Locked Loop) as an oscillator source, use the 1-billion year galaxy tick as a source and set the PLLMUL register to divide by its maximum scaling factor of 1.855 * 10^52."


https://www.coursera.org/specializations/statistics

Skip all the programming exercises in R - just watch the videos and solve the multiple-choice problems. Supplement with the decent open-source textbook it links to.

Each "week" is likely only 1-2 hours of work, with ~5 weeks per course. You only really need the first two courses:

* Introduction to Probability and Data

* Inferential Statistics

Here is a nice game about concurrency. By showing you how it works, it should answer your question about why it's feared: https://deadlockempire.github.io/

To my understanding, it is fearless in Rust because the problems that might come up are solved at the language level, or the compiler warns you about them. (I have never coded Rust; I just deduced this from the comments.)


To be honest, this isn't the best list; it's a bit too blog-heavy. I've only started reading up on ML recently, but here are my recommendations. Note that I haven't gone through all of them in their entirety, but they all seem useful. Many of them overlap to a large degree, so this list is more of a "choose your own adventure" than "you have to read all of these".

Reqs:

* Metacademy (http://metacademy.org). If you just want to check out what ML is about, this is the best site.

* Better Explained (https://betterexplained.com/) if you need to brush up on some of the math

* Introduction to Probability (https://smile.amazon.com/Introduction-Probability-Chapman-St...)

* Stanford EE263: Introduction to Linear Dynamical Systems (http://ee263.stanford.edu/)

Beginner:

* Andrew Ng's class (http://cs229.stanford.edu)

* Python Machine Learning (https://smile.amazon.com/Python-Machine-Learning-Sebastian-R...)

* An Introduction to Statistical Learning (https://smile.amazon.com/Introduction-Statistical-Learning-A...)

Intermediate:

* Pattern Recognition and Machine Learning (https://smile.amazon.com/Pattern-Recognition-Learning-Inform...)

* Machine Learning: A Probabilistic Perspective (https://smile.amazon.com/Machine-Learning-Probabilistic-Pers...)

* All of Statistics: A Concise Course in Statistical Inference (https://smile.amazon.com/gp/product/0387402721/)

* Elements of Statistical Learning: Data Mining, Inference, and Prediction (https://smile.amazon.com/gp/product/0387848576)

* Stanford CS131 Computer vision (http://vision.stanford.edu/teaching/cs131_fall1617/)

* Stanford CS231n Convolutional Neural Networks for Visual Recognition (http://cs231n.github.io/)

* Convex Optimization (https://smile.amazon.com/Convex-Optimization-Stephen-Boyd/dp...)

* Deep Learning (http://www.deeplearningbook.org/ or https://smile.amazon.com/Deep-Learning-Adaptive-Computation-...)

* Neural Networks and Deep Learning (http://neuralnetworksanddeeplearning.com/)

Advanced:

* Probabilistic Graphical Models: Principles and Techniques (https://smile.amazon.com/Probabilistic-Graphical-Models-Prin...)

I have also found that looking into probabilistic programming is helpful too. These resources are pretty good:

* The Design and Implementation of Probabilistic Programming Languages (http://dippl.org)

* Practical Probabilistic Programming (https://smile.amazon.com/Practical-Probabilistic-Programming...)

The currently most popular ML frameworks are scikit-learn, TensorFlow, Theano, and Keras.


I don't know about the deep learning side of things, but on the mining side of things, over the past month, my calculations are that ~50,000 GPUs are added to mine the major GPU-minable coins every day -- from roughly one million GPUs on April 1 to about 3.5 million GPUs now. At an average of $250 revenue per card, that'd work out to ~$13M revenue per day in GPU sales for cryptomining, split between AMD cards and NVidia cards. This, however, is up about ten-fold from March, and in all likelihood will return to those levels as mining returns inevitably decline.

Source of this data is my own calculations using historical data on mining difficulty for the various coins, plus benchmarks for typical/optimal-ish GPUs used to mine each coin.
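The back-of-envelope arithmetic in that estimate can be reproduced directly. All the figures below are the commenter's own estimates (the ~50-day window is my inference from "April 1" to "now"), not measured data:

```python
# Commenter's estimates, not measured data.
gpus_april_1 = 1_000_000     # GPUs mining major GPU-minable coins on April 1
gpus_now     = 3_500_000     # GPUs mining now
days         = 50            # approx. April 1 to time of writing (assumed)
revenue_per_gpu = 250        # average $ revenue per card sold

gpus_per_day = (gpus_now - gpus_april_1) / days
daily_sales  = gpus_per_day * revenue_per_gpu
print(f"{gpus_per_day:,.0f} GPUs/day -> ${daily_sales / 1e6:.1f}M/day")
```

This yields 50,000 GPUs/day and $12.5M/day, which the comment rounds to ~$13M.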


"A meritocracy is a system in which the people who are the luckiest in their health and genetic endowment; luckiest in terms of family support, encouragement, and, probably, income; luckiest in their educational and career opportunities; and luckiest in so many other ways difficult to enumerate — these are the folks who reap the largest rewards. The only way for even a putative meritocracy to hope to pass ethical muster, to be considered fair, is if those who are the luckiest in all of those respects also have the greatest responsibility to work hard, to contribute to the betterment of the world, and to share their luck with others."

-- Ben Bernanke, Princeton University Commencement 2013


The Kalman Filter was used in the Apollo 11 Guidance Computer [0] (discussed in the past on HN [1]).

As someone linked previously, here is a historical perspective [2], and a link to the actual state vector update computations [3].

The AGC maintained the state vectors for the KF. Ground control would run batch-mode least-squares solutions and pass them on to the LM, where the updates to the state vector would be applied by hand. The state vector held position and velocity in X, Y, and Z, with a 6x6 covariance matrix, or 9x9 when including radar/landmark bias.

I have great admiration for Mr. Kalman. Controls engineering has greatly benefited from his work.
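For readers unfamiliar with the filter itself, here is a toy 1-D predict/update cycle (purely illustrative; this is not the AGC's implementation, and Q and R are made-up noise values). A noisy sensor repeatedly measures a constant value near 1.0, and the estimate converges while its uncertainty shrinks:

```python
def kalman_step(x, P, z, Q=1e-3, R=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    x: state estimate, P: estimate variance,
    z: new measurement, Q: process noise, R: measurement noise.
    """
    # Predict: a static model, so the state carries over and uncertainty grows.
    P = P + Q
    # Update: blend prediction and measurement by the Kalman gain K.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                      # poor initial guess, high uncertainty
for z in [1.1, 0.9, 1.05, 1.0, 0.95]:
    x, P = kalman_step(x, P, z)
print(x, P)                          # estimate near 1.0, variance well below 1
```

The same predict/update structure generalizes to the multidimensional case (matrix P, vector x) that the AGC maintained.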

[0] http://en.wikipedia.org/wiki/Apollo_Guidance_Computer

[1] https://news.ycombinator.com/item?id=8063192

[2] http://www.ieeecss.org/CSM/library/2010/june10/11-Historical...

[3] http://www.ibiblio.org/apollo/listings/Comanche055/MEASUREME...

