Merge the portions that you know are correct and that will have no effect on anyone else now; that makes future work easier, as you do not have to keep those "working" commits up to date.
We do this all the time with kernel development, and it is one reason why breaking changes up into tiny pieces is so powerful. We can take the pieces that make sense now and allow the developer to redo the portions that are not ready yet, instead of having to reject the whole thing, as we would if it were done in one single "chunk."
Also note that the TTY/serial portions of this hardware support were already merged through the serial tree, because they were independent and didn't affect anyone else.
The big "downside" is that it takes more work on the patch submitter's side. But the benefits in the end are almost always more than worth it: an easier time for reviewers, an easier time tracking down problems, a better development cycle because feedback can be more specific, easier evolution of changes, and so on.
I wrote a whole chapter in the book "Beautiful Code" about how this development model can help create an end result that is almost always better than the initial "huge" submission model. Check it out if you are interested, it should be free online somewhere...
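As a rough illustration of the workflow described above, here is a hedged sketch of splitting one piece of hardware support into independent commits, so the self-contained part (the TTY/serial bit) can be merged on its own while the rest is reworked. The file names and commit messages are invented for the example:

```shell
#!/bin/sh
# Sketch only: one large change split into small, independently
# mergeable commits. File names and messages are hypothetical.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"

# Imagine the working tree holds two unrelated modifications.
printf 'tty support\n'  > tty.c
printf 'driver core\n'  > driver.c

# Stage and commit only the TTY portion first -- it is independent,
# affects no one else, and could go in through the serial tree now.
git add tty.c
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "serial: add TTY support for new hardware"

# The remaining portion becomes its own reviewable commit, which can
# be redone later without touching the already-merged piece.
git add driver.c
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "driver: add core support for new hardware"

git rev-list --count HEAD   # -> 3
```

For changes that are tangled within the same files rather than split by file, `git add -p` lets you stage individual hunks to achieve the same one-logical-change-per-commit result.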
My instinct would have been that it's easier for the submitter (as they have less to polish and test) and more irksome for the reviewer as they have to go through multiple rounds of submissions, but naturally I'll take your word for it!
This kind of discussion is always of interest to me, I'll check out the book, thank you.
Reviewing three changelists, which individually do only a single thing each, is in my experience much easier than reviewing a single changelist bundling the changes from all three.
This is true even if the same lines are changed multiple times. It's something you'll learn with experience, but it's also not even close. Break your patches up as much as possible, and everyone will be happier.
Not GP, but... in the context of a recent event [0], where not reviewing some tiny patches thoroughly enough had a major come-back-to-bite fallout, I can't help but wonder:
How, exactly, are you expecting an increase in average patch size to help?
I did read through this debacle when it came out, actually, and I'm thoroughly on team Greg. I suppose my question was separate from malicious patches. I was interested in knowing whether this incremental "merge tiny patches as and when they're ready" mode of development has ever caused issues, with half-baked solutions affecting other parts of the kernel where they wouldn't have done so if the release had been given more time for polishing and testing.