
The tour of the White House was pretty interesting. If you type in "oval offi" it takes you to some gold room that I don't recognize, and if you step outside, there's a big portrait of Hillary Clinton. Do the first ladies all have their portraits?


Ahem, you mean the Secretary of State, arguably the most important cabinet position in the White House?


Wait, your resume says you're from North Carolina and you went to Case Western. You should really know this.


I didn't read the comments first; I spent about ten minutes on it. From a cursory glance, I think it comes down to the following lines:

In modal.register.js:100, 128, 337

$("#veriImg").attr("src", "/present/captchaImage.action?t=" + new Date().getTime());

It doesn't seem like the expected value of the captcha ever actually gets registered on the server side.

My one idea, stopping before the Ajax request and setting the verification code to an empty string to try to match a null/undefined on the server side, didn't work.

It's possible that, through some really obvious error, the server makes its own request based on the current timestamp to get the captcha value to match against and ends up using a later timestamp, but the time difference due to latency means that I can't test that without guessing a bunch of times.
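
For reference, the probe looked roughly like this (a sketch only; the endpoint and the veriCode field name are guesses on my part, pieced together from the minified source):

    // Hypothetical re-submission with an empty captcha value, to test
    // whether the server-side comparison matches against null/undefined.
    $.ajax({
        url: "/present/register.action",  // assumed endpoint
        type: "POST",
        data: { veriCode: "" },           // empty string in place of the captcha answer
        success: function (resp) {
            console.log(resp);            // inspect whether validation was skipped
        }
    });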


From that I would assume that they think they have enough coders, and don't need any more. After all, if you've got over 400,000 subjects, the top 10% are going to be pretty good, especially if they keep competing for the same prizes.


The t=... query parameter is there to avoid browser caching.
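
It's the standard cache-busting trick. Something like this (a minimal sketch) makes the browser treat every request as a distinct URL, so it can never serve the captcha image from cache:

    // A unique query-string value defeats the browser cache:
    // each timestamp yields a URL the browser hasn't seen before.
    var img = document.getElementById("veriImg");
    img.src = "/present/captchaImage.action?t=" + new Date().getTime();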


"raising the question of whether this latest incident is a sign of design flaws, of possible risks associated with plug-in vehicles generally or simply a result of the abuses wrought by extremely rare weather conditions"

That's lousy journalism - not mentioning what the specific weather conditions were, and making it sound like the seawater was some sort of simple throwaway answer instead of the scientifically obvious answer that it actually is.

"Why Did 17 Plug-In Cars Burn?" "What caused more than a million dollars-worth of plug-in hybrid vehicles, including 16 Fisker Karma luxury sedans, to catch fire Monday night at Port Newark?"

It really should say what the other car was. It's impossible to guess at any sort of trend, because that one car could break it and we aren't told what it was. Was it another Fisker, indicating a consistent problem with the company in this instance? Were there similarities in the design? All we know is that it was a plug-in vehicle.

Wikipedia, by the way, says it was a Prius: http://en.wikipedia.org/wiki/Fisker_Karma#Fire_incidents


I've had a car catch fire on me, while I was driving it down the Interstate. It was a good old-fashioned American V6, too: a Pontiac Bonneville. Was it a sign of a design flaw? Of possible risks with gas vehicles generally? Or was driving down the Interstate just more than it could handle?

"Based on photographs of the scene obtained by the blog Jalopnik, Fisker’s cars were parked fairly close together, so whatever the initial cause, a fire in one car could quickly spread to others."

So it's possible one car sparked all of this, though we'll have to wait for a real investigation to maybe know anything.

But it's weird to point to electric cars and hybrids as a scary fire hazard. Have they forgotten the 15-20 or more gallons of super flammable liquid normal cars carry with them everywhere? These cars _work_ by containing _fire_. Hell, sometimes something in the 12V circuit gets hot and a car goes up in flames, no gas or fancy lithium-ion batteries needed.

I seem to remember a recall or two over faulty ignition wiring leading to fires in gas-powered cars in perfectly normal weather, but I can't find the one I'm thinking of in the sea of all the others Google turns up.


The article says it was a Prius. In fact, a couple of other hybrid Priuses (Prii?) got hot and were smoldering too.


It never says that the last car in this incident was a Prius, just that "In a separate incident during the storm, three Toyota Prius hybrids at Port Newark also were damaged by fire"


I think there's a disturbing trend in our society towards trying to get something that's competitive by "hacking" it or cheating on it somehow. Either that, or working insanely hard at the expense of your personal life and health. It ultimately raises the standard so that only those people can really get what they want. "Hacking" the Y Combinator application is an example of that. Suppose some people figure out how to hack it so that they can give the reviewers exactly what they want. Well, then the people who aren't hacking it that way are at a disadvantage. Then they have to hack too or be left in the dust. And it becomes a sort of arms race in which the original point gets lost.

The same is true of taking Adderall to study for tests. In high school I didn't take Adderall, but I knew a lot of kids who did, and they would remember information a lot better and score above what they otherwise would have. So if some kid who was objectively a worse student than me was getting the same grades as me, we both looked the same. If everybody started doing it, I'd be left behind or pushed into a lower bracket by virtue of being the only person who didn't do it - who wasn't willing to risk my health (I think; I don't know too much about it) for a higher grade. I don't think this will happen, but there are a lot of other examples:

-Doping in bicycling

-Working an insane amount of hours on your startup/job

-Spending months/years studying for the GMAT to get into business school

-Corruption or lying in politics

-Autotuning in the music industry

-High school kids taking all these leadership roles and such things that they don't really care about to try to show passion and get into Harvard. Same for grad school applications.

etc. etc.


Are there any good examples of someone "hacking" the Y Combinator application? The only one that comes to mind is when the founder of Instacart demonstrated that his service worked by ordering a beer for Garry. And that's not even close to anything on your list.


Are you sure this is a trend? Was it different some decades ago?

Also,

> -Autotuning in the music industry

Oh, come on. That's like not giving Tron an Oscar because they cheated their special effects scenes by using computers.


That's different - special effects in movies make the impossible possible. A better actor couldn't conjure up the Grid.

A better musician COULD sing on tune. It is well within human capability. Autotuning is a quick hack to make up for not being able to sing.


Funny, I actually thought once that studying for the test was cheating.

I mean, either you learned it or you didn't; memorizing some things right before the test won't make you better at the thing.


I was watching some video the other day where Jon Stewart made a random appearance. All the people around him had their phones out taking pictures of him instead of actually experiencing him being there. It was like proving to other people that they'd had an awesome experience was more important than actually having that experience. Or like they were going through life as tourists, making sure they checked every box and shared it all with other people to make them jealous. "Been within five feet of a major celebrity - check".


Isn't it kind of cynical to assume the reason they're taking a picture is that they want to brag about it to their friends? I have, I'm guessing, 10-20k pictures that I've taken in the last 12 years. I've taken probably > 1k pictures with my phone this year. I've shared less than 2% of them.

I happen to enjoy looking back at all these photos. They bring back memories of various events in my life - memories I'd likely not think of without something (like a photo) to trigger them.

Take more pictures, not fewer.

Also I don't see a problem with sharing. It's not a substitute for face-to-face time, but it's connecting with people MORE, not less. Go back 25 years and you could rarely connect with anyone unless you were with them. Now you can connect all the time by sharing your experiences. Some people do it to brag; most do it to share and connect. At least in my experience.


I don't disagree - most people probably aren't doing it to brag - but perhaps, as someone else mentioned, the captured memory is not as fulfilling or vivid or "memorable" as it would have been if the phone weren't such a distraction.

Going back 25 years, I think you'll find that people rarely connected across as wide a network, but the circle of friends they did connect with was engaged on a much deeper and more fulfilling level, and they got significantly more out of it than from a shallow network of thousands of "friends".

I too feel that it's easier than ever to connect with friends on Facebook, but somehow the convenience has also made the friendships feel more superficial. Whereas before I had to make an effort to write a birthday card or give someone a call, today I simply click and write "have a great birthday". Engaging with Facebook has almost become like tending a farm in Farmville: I see alerts, I do routine actions that keep up the appearance of being "social", and that's it. It's sad that when I think back now, I don't even remember how my friends sound, because we interact in person and on the phone so much less.


> Also I don't see a problem with sharing. It's not a substitute for face-to-face time, but it's connecting with people MORE, not less. Go back 25 years and you could rarely connect with anyone unless you were with them. Now you can connect all the time by sharing your experiences. Some people do it to brag; most do it to share and connect. At least in my experience.

The problem could be exactly that you don't see any problem.

"Connecting all the time" isn't a positive thing; just a few days ago there it's been posted the "Culture of distraction" article.

Also, why assume that "sharing experiences" is positive? Assuming that "the Facebook way" is the standard in today's experience sharing, most of the "experiences" are banal, almost-everyday happenings.

The problem is cultural and it's subtle, although I think it's way more complex than a simple dualism of living something vs. being an audience to it.


I think I can top that. I saw a photo in a newspaper a few years ago of the Dalai Lama walking through a crowd. People were reaching out to shake hands with him, and he was shaking hands with one girl in the crowd who, rather than looking at him, was looking at the screen of the phone she was using to take a photo of him shaking her hand.


This is especially annoying at live events like concerts when the person in front of you is holding up a smartphone to record the concert and they're watching it on the tiny screen instead of focusing on the person standing in front of them. I'm completely distracted by their recording and yet at the same time I feel immense pity for them.


The compulsion to be a tourist is ever-present in our lives; social networks just make the reward of validation for tourism near-instant. One of my favorite books has a significant plotline that explores the (sometimes ridiculous) role of being a tourist.

"Twoflower stared raptly at the display overhead. He probably had the best view of anyone on the Disc. Then a terrible thought occurred to him. 'Where's the picture box?' he asked urgently. 'What?' said Rincewind, eyes fixed on the sky. 'The picture box,' said Twoflower. 'I must get a picture of this!'" (Color of Magic, 1983)


I don't think this is new to cellphones / mobile culture. I've known plenty of people who would spend all their time at parties and social events taking endless pictures, then spend hours showing them to everyone they knew afterwards. Like they were more focused on getting proof that they were having fun and socializing than just enjoying themselves.


Your post reminded me of a Louis CK sketch: http://www.youtube.com/watch?v=xSSDeesUUsU


The first time I tried it, nothing happened. I clicked on the globe because I thought maybe you had to drag it, and couldn't un-drag it, so it kept jiggling around without doing anything. No red dots came up.

I glanced at the comments to see if anyone had said "It's broken", and nobody had, so I went back and tried hitting the play button: same result, nothing happened.

Then I read the first comment on here and went back one more time, and it worked; however, this time, when it spun and I clicked on a red dot, nothing happened and I got ' error parsing d="" ' in the console, with the line given as /quakes/:1, which is just the <html> tag.


Hmm, sorry you're having trouble with it. The 'error parsing d=""' is an issue with the SVG parser within your browser, and shouldn't prevent it from rendering properly. I believe upgrading to the next version of d3.js will likely fix this.

It's possible that it's not properly loading the data for you, either due to latency or some browser restrictions. What are you running?

I'd like to debug it if possible. I believe D3.js only fully works in Chrome, Safari, and Firefox (+ maybe Opera?).
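
In the meantime, a guard on the path data should at least silence the warning. A minimal sketch, where projection, svg, and features stand in for the globe's actual setup:

    // Avoid ever writing d="": if the projected path comes back empty,
    // return null so d3 removes the attribute instead of setting it.
    var path = d3.geo.path().projection(projection);
    svg.selectAll("path")
        .data(features)
      .enter().append("path")
        .attr("d", function (d) { return path(d) || null; });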


I'm actually building a website with D3.js myself ( http://uscfstats.com/matches ), and the SVG and D3 work fine there, and I am using the latest version of Chrome.


Some D3.js code I wrote recently worked in Opera, so I think it should be fine there. I never did figure out how to make D3.js work in IE.


The error keeps repeating as long as the globe is moving.


The line at the top of the page, for me, says "Undefined". Firefox 10.0.7, Windows 7.

I debugged it for you - the outerHTML property isn't supported in Firefox < 11. Here's the solution from StackOverflow:

    function outerHTML(node) {
        // Use the native property when available (Firefox 11+);
        // otherwise serialize a clone of the node via a detached <div>.
        return node.outerHTML || (function (n) {
            var div = document.createElement('div'), h;
            div.appendChild(n.cloneNode(true));
            h = div.innerHTML;
            div = null;
            return h;
        })(node);
    }
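
Assuming the page reads the node's outerHTML directly, swapping in the wrapper would look like this (the selector is just a guess at what the page uses):

    var markup = outerHTML(document.querySelector("svg"));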


Even more handy would be to submit a pull request! It's open source! https://github.com/jdm/asknot


Here's mine:

Level 0: Saw the post title and didn't think he needed to look at it because he was so confident in his abilities already.

Level 1: Read the post, noticed a lot that he didn't know but didn't intend to do anything about it.

Level 2: Read the post and went out and learned everything in it that he didn't know.

Level 3: Read the post, figured out which things from it he didn't know but might need to or want to, and learned those.

Oh, and a special level -1: Read the post, noticed a lot of things that he didn't know, and decided the post was stupid because clearly he's an amazing programmer and the post didn't agree with that self-assessment.


Nice.

Is there a level for "Skimmed the first part of the post and then went back to working on his/her startup"?


No, but there's one for commenting on the post in a way that makes oneself seem clever. I have hereby attained it.


I'm in the middle of the machine learning Coursera course, and I registered for this one as well out of interest in the material.

My one complaint is that the programming assignments weren't interesting at all. The results were interesting, but the setups were mostly given to us, and we just had to code an algorithm that was in our notes. For someone who understands the basics of linear algebra and programming, it was just a syntax challenge, and that got irritating after a bit so I stopped doing them.

I won't get the certificate for completing the course, but I have a few extra hours of free time each week to add this second course, so I'm happy. I doubt that the actual homework that Stanford students taking this course get is so easy and repetitive, though, and I'm positive they wouldn't complain about not getting to retake quizzes after getting poor grades.

Not to knock the course. I've learned a lot and the professor (Andrew Ng) does a good job.


I've taken both, and the code is in fact not that much simpler than it was in the original class. There are, however, two huge differences: the algorithm is spoon-fed to you, and there is no math.

Firstly, think about how much more difficult the assignments would be if, for example, the steps weren't broken out and we didn't get any advice on how to vectorize. Of course, it would still be short work for anyone who (a) knows Matlab/Octave and/or (b) understands the material well, but it would also be an order of magnitude harder.

Secondly - and this is by far the larger point - the original CS 229 was really about math; the programming assignments were more of an afterthought. The lectures and homework mainly focused on the theoretical derivations and corollaries of the math that led to the algorithms. Once you'd done your bit on the math and cried to your classmates and the TA about it, you could go and implement the beautiful and extremely succinct result in Matlab.

As for my perspective on the difference, I believe it is a deliberate choice made with full knowledge of the difficulty drop. For starters, there are (with regards to homework help) no TAs in this course, so the absolute difficulty would have to decline to create an equivalent experience. More significantly, the enrollment has increased by a factor of about 700. If Stanford students had trouble with the original, you can bet that the median student in the course doesn't find it as easy as either of us does. If the goal is to generate the greatest benefit for the most people, and delivering the algorithms with a good intuition on their proper use will do so, then this course has succeeded marvelously. Of course, the smartest and most dedicated students will want more, which remains available through textbooks as well as the original course handouts (http://cs229.stanford.edu/materials.html). However, I would argue that the goal of most MOOCs (massive open online courses) should be to kindle interest and foster basic understanding, both of which the Coursera version achieves.


(slightly old) lecture videos for CS 229: http://www.youtube.com/course?list=ECA89DCFA6ADACE599


Hi,

I am also taking the course by Andrew Ng and understand your complaint that the programming assignments aren't as interesting (from your perspective). Being quite comfortable with linear algebra, I was able to complete the assignments easily.

But when I go through the course forums, I find that for many people taking the course, the intuition behind the use of linear algebra in ML doesn't come as easily as it does for us. I think when Andrew Ng designed this online course, he must have had those people in mind as well. I think he mentions at the start of the course that it's more about understanding the concepts, and that the implementation details should come later. The programming exercises are designed with that in mind, I think.

I tried to make the programming exercises interesting for myself by first thoroughly understanding the code that they had provided and tweaking it here and there. Once you have done that, you can apply what you've learnt to real-world datasets from sources like Kaggle and see how you fare :)


> My one complaint is that the programming assignments weren't interesting at all. The results were interesting, but the setups were mostly given to us, and we just had to code an algorithm that was in our notes. For someone who understands the basics of linear algebra and programming, it was just a syntax challenge, and that got irritating after a bit so I stopped doing them.

I agree with this. The programming assignments I've done so far in the Machine Learning class are usually 5-7 matlab functions, many of which are about 2 lines of code (the longer ones might be ~10 lines). If you've ever done matlab/octave programming, the assignments will take about 20-30 minutes and be completely unenlightening, as you're literally just translating mathematical notation into matlab (which is, by design, already a lot like mathematical notation anyway). They provide entirely too much skeleton code to learn anything from unless you're actively trying to learn. If I weren't already familiar with most of the material presented in the class, I imagine I would never retain knowledge of how the machine learning "pipeline" works or have any high-level understanding of the algorithms, because the assignments just require you to implement the mathematical pieces of each step, without ever asking you to, for example, actually call any optimization routines, or put the pipeline together.

The problem, I think, is that it would just be too difficult to do automatic grading in a way that is reasonably possible to pass if they didn't turn most of the work into skeleton code. Since the automatic grading needs nearly exactly matching results, one minor difference in a perfectly good implementation of the algorithm itself (e.g., picking a single parameter differently, choosing different optimization termination conditions, or using a different train/dev split) would make the entire solution come out completely wrong.


I'm doing the Computational Finance class via Coursera at the moment, and I've done a number of other courses previously.

I agree the programming assignments in the Finance class tend to be too simple. Most of the code is literally handed to you; you just have to understand it well enough to change it. I also understand that even that can be a major challenge if you don't have the background for it.

But I'm choosing to see the class itself as a starting point. It's a framework for my own explorations into the topics. I can do the minimum and get the minimum out of it. Or I can use what's provided as a base and go further.

The Coursera Algorithms class, for example. Writing code that got the answer was relatively easy, so once that step was done it became about optimizing the code for my own learning benefit.

It's like any educational process: you get out what you put in.


Right, you can get more out of the assignments if you try, but to me the purpose of assignments (versus passive learning - lectures, reading, etc.) is to force your brain to synthesize rather than just comprehend. The ideal assignment, then, is one that forces you to synthesize as many of the concepts it intends to teach as possible.

Just like you could go back and implement for yourself the skeleton code they handed you, you could also go out and implement everything in the lectures without any assignments at all. It's just that, like you said, the assignments provide a useful starting point. And I'm only saying they could be even more useful by requiring you to implement more of the complete pipeline.

The fact that an incredibly self-motivated person could learn everything there is to know about machine learning with the course as a starting point doesn't mean that it's bad to make the course more useful for a somewhat lazier or less interested person.


I've noticed this to be the case with other courses too. So for this one I've decided to implement everything in Scala (I'm currently taking the functional programming course as well). This should work well, since this machine learning course requires no code submission, just questions about the results.


I thought about doing it in Scala too, but I thought there might be issues with grading. Do you know if there's an auto-grader for this course?


Try the Learning From Data course: http://work.caltech.edu/telecourse.html A fall run has just started (on the 2nd of October).

It's the same version as the course given at Caltech and is more in-depth than Andrew Ng's. There is no skeleton code for the programming assignments; answers are submitted through quizzes. I took the summer session and learned a lot from it.


Great. Do you get a completion certificate at the end of the course?


I took the course in the spring and found it interesting, and the programming assignments fairly easy. This summer I took the ML course that Caltech offered, which was significantly more challenging (the homework assignments were multiple choice, but they often required writing substantial code, without any starter code). The Caltech course is now available on iTunes U...


I took CS229 here at Stanford and I was also one of the TAs for the online version last year (I was one of 2.5 people involved with making the programming assignments).

First, the Stanford CS229 version is definitely much more difficult than what you guys had online. The focus in the actual class was on the math, derivations and proofs. The homeworks sometimes got quite tricky and usually took a group of us PhD students about 2 days to complete. There was some programming in the class, but it was not auto-graded, so usually we produced plots, printed them out, attached the code and had it all graded by TAs for correctness. The code we wrote was largely written without starter code, and I do believe you learn more this way.

An online version of the class comes with several challenges. First, you have to largely resort to quizzes to test students (instead of marking proofs, derivations, math). There is also no trivial way to autograde resulting plots, so everything has to be more controlled, standardized and therefore include more skeleton code. But even having said all that, Andrew was tightly involved with the entire course design and he had a specific level of difficulty in mind. He wanted us to babysit the students a little and he explicitly approved every assignment before we pushed it out. In short, the intent was to reach as many people as possible (after all, scaling up education is the goal here) while giving a good flavor of applied Machine Learning.

I guess what I mean is that you have more experience than the target audience that the class was intended for and I hope they can put up more advanced classes once some basics are covered (Daphne Koller's PGM class is a step in this direction). But there are still challenges with the online classes model. Do you have ideas on how one can go beyond quizzes, or how one can scale down on the skeleton code while retaining (or indeed, increasing) the scale at which the course is taught?


I think peer-graded assignments might do the job. I am taking the Gamification course on Coursera right now, and I liked the peer-graded assignments a lot.

If there were peer-graded assignments in the machine learning course, I would definitely try them out.


> The results were interesting, but the setups were mostly given to us, and we just had to code an algorithm that was in our notes.

Right; I agree. I'm not sure how they would go about making it more challenging, though. They can't expect us to go out and collect data ourselves, after all. I suppose they could give us the data, then expect us to code the setup and algorithms ourselves, but that, too, would become repetitive after a few assignments.

> Not to knock the course. I've learned a lot and the professor (Andrew Ng) does a good job.

Agreed once again. I knew nothing about machine learning before starting; now I know about neural networks, SVMs, and PCM. It's really cool how much I've learned already, for free, too!

I've also signed up for this course, but the quizzes really aren't up to par. As an example: the first quiz question was about training a neural network with too much data, and about whether or not said network would be able to generalize to new test cases. Overfitting neural networks wasn't even mentioned in the lectures; I had to rely on material from Andrew's class to answer the question correctly. This chasm between the lectures and the quizzes is likely because Geoffrey is the one creating the video lectures, but he's not the one creating the quiz questions; he is having TAs do it [1].

Nevertheless, it looks like they're responding to feedback, so hopefully it'll get better with time.

1. https://class.coursera.org/neuralnets-2012-001/wiki/view?pag...


(PCM) -> Do you mean PCA (Principal Component Analysis)?


> I'm positive they wouldn't complain about not getting to retake quizzes after getting poor grades.

My experience is that students everywhere complain about grading. I've never been to Stanford, but I've attended and worked at several other top tier universities.


I didn't know it had a point at all besides surviving for as long as possible. I made my little 4-pixel bubble survive for almost ten minutes purely out of curiosity, to see just how massive this one bubble would get. Then I accidentally hit a blue one and, surprise, it made me bigger.

