Running Git on a SSD Speed Comparison (benjamin-meyer.blogspot.com)
30 points by icefox on Dec 13, 2010 | 14 comments


It would be nice if the axes were labeled, to give some sort of idea of the scale of things.


Obligatory: http://xkcd.com/833/ (today's xkcd)


Wow, epic timing.


I used Google Charts to generate them, and while it probably lets you, I couldn't figure out how to display the actual y values, only 0-100, which is also useless. That was part of the reason I included the averaged times at the end of the article.

This was a fun little article. No doubt someone else can come along and create a full-blown, in-depth comparison of every git command on different OSes, showing just how much faster everything is. I knew git should be faster on an SSD (since it does lots of I/O); I just wanted some basic data to back that up rather than saying it "should be" faster, and thought this might be useful for others.


http://code.google.com/apis/chart/docs/chart_params.html

"axis"/"axes" are at the bottom of the Parameters table. Or, have you tried the wizard?


Just dumped the data in the wizard


You should try Hohli [1] :)

[1]: http://charts.hohli.com/


Agreed, though there are measurements for (nearly?) all of the graphs.

1: "The git add took around ten minutes to run on the hard drive compared to the 40 seconds for the ssd. Committing was also 1/3 of the time taking only 20 seconds v.s. over a minute."

2: "the seven seconds saved wasn't that much compared to the seven minutes the server (a slow arm based box) was preparing the repo."

etc. And further down there's a chunk of the actual values.

They're weird graphs in general though, with three "status" measurements and text that says "status" runs 2x faster, which is only backed up by one of those status pairs.


When dealing with a large repo, an SSD really helps, no doubt about it. Working with the Chromium repo on OS X, the difference is night and day, especially when grepping and traversing history (blame, log -S). It also helps a lot when compiling large projects, since the rest of the system stays usable.

That said, git isn't terrible with large repos on an HDD, but you'll want to make sure your repo is well packed, and you'll also notice a big difference between the cold-cache and warm-cache cases.
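In case it's useful, a minimal sketch of what "well packed" means in practice (standard git commands, nothing specific to this benchmark):

  git count-objects -v   # "count" = loose objects, "in-pack" = packed objects
  git repack -a -d       # repack everything into one packfile, drop redundant packs
  git gc                 # or let gc handle repacking plus pruning in one go

A high loose-object count is usually the sign that a repack is worth it.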

BTW, when benchmarking git, it's helpful to have the output of "git count-objects -v" to understand how the repo is packed. It's also good to take the best of 3 runs to eliminate the cold-cache case.
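Something like this captures both (a rough sketch; git status is just a stand-in for whichever command you're timing):

  git count-objects -v                        # record how the repo is packed
  for i in 1 2 3; do time git status >/dev/null; done
  # run 1 is roughly the cold-cache case; the best of the 3 is ~warm-cache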


I have the same experience. SSDs are really great when you have to access many small files. The absence of seek times speeds some things up enormously, and if you're a developer, you'll see a boost in compiling and version-control jobs.


I never noticed myself "waiting" for Git to finish any kind of local operation until after I switched from an SSD to an HDD. (My SSD ran out of space...)

Good article.


I'd love to see something like this for compile times, especially comparing multiple SSDs... has anyone seen something like that?


IIRC, Joel Spolsky tried this and found no noticeable speedup.

It's not really surprising that the bottleneck in compilation is the CPU rather than IO.


I'm surprised :)

My disk seems to get hammered during compiles...



