
That pretty much is how CSS works! At the most basic level, flow layout is about widths down, heights up. But this basic model doesn't let you do a lot of things some people want to do, like distributing left-over space in a container equally among children (imagine a table). So CSS added more stuff, like Flexbox, which also fundamentally works this way, though it adds a second pass.
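
A minimal sketch of the widths-down, heights-up idea (hypothetical Python, nothing like real engine code): the parent hands its width to each block child, lays the child out at that width, and then sums the children's heights back up.

    class Block:
        def __init__(self, children=None, lines=0):
            self.children = children or []
            self.lines = lines  # stand-in for text content

        def layout(self, width):
            self.width = width           # widths flow down from the parent
            y = 0
            for child in self.children:
                child.layout(width)      # each block child gets the full width
                child.y = y
                y += child.height        # heights flow back up
            # a leaf's height comes from its content, a parent's from its children
            self.height = y if self.children else 20 * self.lines

    root = Block([Block(lines=2), Block([Block(lines=1)])])
    root.layout(800)
    print(root.height)  # 60, computed bottom-up from the leaves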

Author here: it's from Tufte CSS. I have a blog post [1] about how floats work. It's a nice example of how CSS often has both unintuitive and more intuitive ways to achieve the same thing. These days I believe CSS Anchor Positioning provides a simpler way to do this, but I haven't used it yet.

[1]: https://pavpanchekha.com/blog/css-floats.html


Author here. You're right that a lot of CSS's edge cases and implicit rules stem from other choices and implicit rules that maybe need to be reconsidered. But take this logic a step further. The way text with mixed font sizes is laid out is kinda weird; should we just get rid of that? Mixed Chinese-Latin text is weird (search "ideographic baseline"); should we get rid of that? In fact, variable-size characters are weird; maybe just stick to all-Chinese? I'm joking, of course, but my point isn't that a simpler system is inconceivable, just that it would be inconvenient.

Author here. I suppose it depends on what "rely on" means, but... have you ever used CSS to center text? Did you think much at all about what happens if the zoom level is high enough and the screen size small enough that the text doesn't fit? I assume not (I don't think I'd ever thought about that before I read that part of the standard), so in that sense you were relying on this behavior. I do think that in most cases where it activates, the quirk implemented by CSS probably improves the layout.
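
To illustrate the quirk (a toy sketch, not how any engine literally implements it): naive centering goes negative once the content is wider than its container, which would clip the start of the text, while the behavior described above keeps the start edge reachable, so you can at least scroll to read the rest.

    def naive_center(container, content):
        # Negative when the content is too wide: the start of the text is clipped.
        return (container - content) / 2

    def css_like_center(container, content):
        # Clamp at the start edge; overflow spills past the end edge instead.
        return max(0.0, (container - content) / 2)

    print(naive_center(320, 500))     # -90.0
    print(css_like_center(320, 500))  #   0.0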

Author here. This specific quirk of CSS is minor, and if CSS didn't have it, that would probably be fine. But I'd guess that at least once in your life you've browsed a website on your phone that used a really long word (or a really long line of code!) in centered text (maybe a heading), and you've scrolled right to read the whole thing. Are you sure your website doesn't have such a thing, if you have centered text somewhere?

So, yes, CSS could have fewer edge cases and workarounds (what I refer to in the post as less implicit knowledge), and then it would be simpler. But the resulting layouts would probably be worse. And a radical simplification like a constraint system would probably be even simpler, and the results (I assert) would be even worse. It's fine to want a better life for browser developers, but I don't think it's unreasonable for CSS to create new edge cases and sometimes-surprising behavior if it also results in, typically, better outcomes.


Fixed

Author here. The problem isn't the technical challenge of writing a constraint solver. It's making sure that the resulting layout looks good, despite contradictory guidance from the designer.

Yes, a constraint solver can figure out which constraints it's violating. And for under-specification, it can produce a layout that satisfies the constraints. But the layout the constraint solver chooses might be really bad: if all the text is placed at (0, 0), the result is unreadable. And over-specified constraints might only show up for some user on some weird device after deployment, when there's no developer around to respond to errors.

Determining whether a set of constraints could be over- or under-specified for some set of parameters is computationally very challenging (this is what SAT and SMT solvers do, basically). But beyond the computational challenge, I think it is practically very challenging (drawing on my experience doing this for four years) to write non-conflicting constraints for real-world designs. How would you write constraints for text wrapping around a figure? For mixed-font text lining up nicely?
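
To make the over/under-specification point concrete, here's a toy sketch using Z3's Python bindings (my choice of solver; any would do). The over-specified case is unsatisfiable, and the solver can't tell you which constraint the designer would rather give up; the under-specified case is satisfiable, but the placement it picks is arbitrary.

    from z3 import Solver, Int, sat

    x, w = Int("x"), Int("w")

    # Over-specified: a box at least 400px wide can't fit on a 320px screen.
    s = Solver()
    s.add(x >= 0, x + w <= 320, w >= 400)
    print(s.check())  # unsat -- but which constraint should give way?

    # Under-specified: satisfiable, but the placement is the solver's whim.
    s = Solver()
    s.add(x >= 0, x + w <= 320, w >= 100)
    if s.check() == sat:
        print(s.model())  # e.g. [w = 100, x = 0], everything piled at the origin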


fwiw:

I do feel like I can speak on this with some authority.

predictability again suffers.

And recall that everything is parameterized.


Author here. I agree that this would have a similar effect; we'd probably still end up with 36-bit or 48-bit IP addresses (though 30-bit would have been possible and bad). We'd probably end up with a transition from 24-bit to 48-bit addresses. 18-bit Unicode still seems likely. Not sure how big timestamps would end up being; 30-bit is possible and bad, but 48-bit seems more likely.
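
For concreteness, the timestamp arithmetic (plain Python, my own back-of-envelope):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for bits in (30, 31, 48):
        print(bits, round(2**bits / SECONDS_PER_YEAR, 1))
    # 30 -> ~34 years past a 1970 epoch, i.e. trouble around 2004
    # 31 -> ~68 years, the actual 2038 problem
    # 48 -> ~8.9 million years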


Author here, copied from another comment above.

Actually, I doubt we'd have picked 27-bit addresses. That's about 134M addresses, which is less than the US population (it's about the number of US households today?), and Europe was also relevant when IPv4 was being designed. In any case, if we had chosen 27-bit addresses, we'd have hit exhaustion just a bit before the big telecom boom, a lucky coincidence, meaning the consumer internet would largely have required another transition anyway. Transitioning from 27-bit to, I don't know, 45-bit or 99-bit or whatever we'd have chosen next wouldn't have been as hard as the IPv6 transition is today.
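
The arithmetic behind that (Python; the population and household figures are my rough numbers):

    print(2**27)  # 134,217,728 -- about 134M addresses
    print(2**32)  # 4,294,967,296 -- what 32-bit IPv4 actually has
    # ~134M is well under the ~330M US population and close to the
    # ~130M US households, so 27-bit exhaustion would have come early.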


Author here. The argument was that by numerological coincidence, a couple of very important numbers (world population, written characters, seconds in an epoch, and plausible process memory usage) just happen to lie right near 2^16 / 2^32. I couldn't think of equally important numbers (for a computer) near ~260k or ~64B. We just got unlucky with the choice of 8-bit bytes.
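
The powers of two in question, for reference (plain Python):

    print(2**16)  #         65,536 -- right near the count of characters worth encoding
    print(2**31)  #  2,147,483,648 -- seconds: ~68 years past 1970, hence 2038
    print(2**32)  #  4,294,967,296 -- same order as world population and a big
                  #                   process's bytes of memory
    print(2**18)  #        262,144 -- the ~260k above: nothing important nearby
    print(2**36)  # 68,719,476,736 -- the ~64B above: nothing important nearby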

