
I recently had the experience of taking my first graduate-level probability course. It assumed quite a strong familiarity with real/complex analysis, and I suffered quite heavily. Something of note was that once I finally managed to "peel back" the analysis, the underlying intuition made a lot of sense for the simplest cases in probability (e.g. hypothesis testing between two normal distributions is a matter of figuring out whose mean you are "closer" to).
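That "closer mean" intuition can be made concrete with a minimal sketch (my own illustration, not from the course): assuming equal variances and equal priors, the log-likelihood ratio between two normals reduces to picking whichever mean the observation is nearer to.

```python
import math

def choose_hypothesis(x, mu0, mu1, sigma=1.0):
    """Decide between N(mu0, sigma^2) and N(mu1, sigma^2) for one sample x.

    With equal variances and equal priors, the log-likelihood ratio
    log p1(x)/p0(x) simplifies to comparing squared distances to the
    two means: pick whichever mean x is closer to.
    """
    log_lr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    return 1 if log_lr > 0 else 0

# x = 3 is closer to mu1 = 4 than to mu0 = 0, so hypothesis 1 wins:
print(choose_hypothesis(3.0, 0.0, 4.0))  # 1
print(choose_hypothesis(1.0, 0.0, 4.0))  # 0
```

Unequal variances or priors shift the decision boundary away from the midpoint, but the "distance to the mean" picture survives as the leading term.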

I am of the opinion that notation is a very powerful tool for thought, but the terseness of mathematical notation often hides the intuition, which is more effectively captured through good visualizations. I would really like to take a self-driven "swing" at signal processing, this time approaching it through the lens of solving problems on time-series data, since as a programmer I believe that would be quite useful and relevant.



In my opinion, the issue here is notation and a bit more. I did about eight years of college in math, changed paths, changed careers, changed careers again to ML/DL research, and now will finish a CS undergrad degree this month.

I put that in context because it's not quite a direct comparison: I have been in greatly different situations and at different ages between studying math and CS. But putting that aside, I have to say I have enjoyed the computer science way of teaching much more than math's, doubly so when it comes to self-learning. Concepts in math are generally taught entwined with the means of proving those ideas. That's important if you're a grad student looking to become a math researcher, but (IMO) it is not so great if you're a newer student, or learning on your own, and trying to grasp the concept and the big picture. A proof of a theorem can be (and too often is) a mass of detail that doesn't really help you grasp the concept the theorem provides or is used towards, often because it involves other ideas and techniques from higher levels or simply different branches of math, both of which are out of scope for the student learning the topic.

Worse yet, it is standard for a proof to be written almost backwards from how it was thought out. Anyone from a math background has had the homework experience of solving a problem, then rewriting it almost entirely in reverse to put it in the proper form to submit. So not only is the proof of a theorem not useful for conceptual understanding, reading it doesn't show you chronologically how you would discover it yourself. That is a lot of overhead to break through to get to real understanding, real learning. And as you mention, notation is yet another thing you need to break through.

I have found computer science and related classes to be taught more constructively. The concept is given first, and then your job as a student is to construct it. Coming from the ML field, I love comparing math and CS treatments of the same topics. Explanations of backpropagation from CS people, for example, are always visual, and books/courses will have you construct a class and methods to do the calculations. Someone with a bit of programming knowledge can follow along in their language of choice. Math explanations get into a ton of notation from Calc 3+, and it's going to take a lot of playing around and frustration to get a working system out of them. Even the derivation section on Wikipedia is not something most people will understand and be able to turn into useful output.
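To illustrate what I mean by "construct it": here is a minimal sketch (names and structure are my own, not from any particular course) of one dense layer with a sigmoid, where backward() applies the chain rule by hand and a finite difference confirms the gradient.

```python
import math

class Dense:
    """One scalar fully connected layer with a sigmoid, gradients by hand."""

    def __init__(self, w, b):
        self.w, self.b = w, b

    def forward(self, x):
        self.x = x
        z = self.w * x + self.b
        self.out = 1.0 / (1.0 + math.exp(-z))
        return self.out

    def backward(self, grad_out):
        # chain rule: d(sigmoid)/dz = out * (1 - out)
        dz = grad_out * self.out * (1.0 - self.out)
        self.dw = dz * self.x     # gradient w.r.t. the weight
        self.db = dz              # gradient w.r.t. the bias
        return dz * self.w        # gradient w.r.t. the input x

layer = Dense(w=0.5, b=-0.2)
layer.forward(1.5)
layer.backward(1.0)  # upstream gradient of 1

# sanity check: dw should match a central finite difference
eps = 1e-6
y_plus = Dense(0.5 + eps, -0.2).forward(1.5)
y_minus = Dense(0.5 - eps, -0.2).forward(1.5)
print(abs(layer.dw - (y_plus - y_minus) / (2 * eps)) < 1e-6)  # True
```

That numerical check at the end is exactly the kind of feedback loop the math-first presentation never gives you: you find out immediately whether your chain rule is right.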

The more I see other ways concepts are taught, the more I wish math had been taught differently. There is a lot to break through to reach real understanding, simply because of how the subject is formulated and taught.



