
macOS assumes you won't full screen every app, because Macs ship with displays large enough, and of high enough resolution, that full screening a single app is a waste of valuable space. Unlike on cheap laptops with 1080p screens.

I suppose you could splurge for a Mac desktop and then get the cheapest, smallest screen possible, but I hope it’s rare.


> full screening a single app is a waste of valuable space

Any space not used for the task I'm focused on is wasted. For me the actual problem is that switching apps/windows is too slow because of UI animations.


I run a 27" 4K and a 34" ultrawide monitor on my desktops, and my main laptop is a P16s with a 16" 3840x2400 OLED, typically docked to one of those screens when not on the go. I almost never use windows that are not snapped to fullscreen, or at the very least to halves or quarters. "Large enough" scarcely applies to a MacBook Air or Pro with a 13" display, and I bet a TON of those get docked to cheap 21, 24, and 27" 1080p screens.

I'd like to be able to snap things to the middle third, especially on the ultrawides.

Only little calculator widgets, property panels, and modal dialogs that get immediately closed after use don't get maximized or at least docked to fill some region. I hate the cluttered, layered feeling of having a bunch of non-full-screen windows overlapping, I want to have a dozen apps open and making optimal use of the available display area.


writing this reply on a 13 inch macbook air...

Yes, and highlighting a failing in email that cannot be fixed, but which is addressed in other services where confidentiality is desired.


Email is not remotely comparable to Signal.

Email is a free, open source, strictly defined and consensual decentralised protocol.

Signal is a source available app and server that is not decentralised and represents a walled garden. Signal is centrally controlled by OWS.


In the context of sending something.

No need to get pedantic


OIDC is usually limited to a small selection of providers.


Well, the problem is simply user base. There is no point in being a provider if you have 100 users. On the other hand, despite OIDC being standardised, there are way too many ways of implementing it. It is essentially impossible to have "wildcard" support for OIDC providers. How do I know? I just implemented one myself. For example, providers usually support only one or very few authorisation flows, so in reality you would likely end up with a lot of failed attempts to sign up with some "3rd world" provider.

PS: just take PKCE, where the provider has no way of communicating whether it is supported, or required, at all.


I have just added OIDC support for bring-your-own-SSO to our application, and it wasn’t as bad as you make it sound: As long as the identity provider exposes a well-known OpenID configuration endpoint, you can figure it out (including whether PKCE is required or supported, by the way!)

The only relevant flow is authorisation code with PKCE now (plus client credentials if you do server-to-server), and I haven’t found an identity provider yet that wouldn’t support that. Yes, that protocol has way too many knobs providers can fiddle with. But it’s absolutely doable.
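For concreteness, the PKCE side of that flow is tiny; a minimal sketch in Python of the RFC 7636 S256 verifier/challenge pair (the client keeps the verifier until the token request):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes -> 43-char URL-safe verifier (within the spec's 43-128 range)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge goes in the authorization request; the verifier is sent later
# with the token request so the server can verify sha256(verifier) matches.
```

The server never needs to store the verifier, which is what makes the mechanism work for public clients without a client secret.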


I didn't say it was impossible, just impractical, and that is why the majority of services that use SSO only support Google, Apple, Twitter, or Facebook. You write the code specific to these few providers once and are done with it for good. There is little reason to invest time and money in adding generic support for other providers. It's just the way it is. If the OIDC protocol were streamlined a bit, we could easily have universal support. But then again, these big providers would likely stay stuck on the current version and not bother adjusting to a new, simpler version, if it ever came to be.


PKCE is trivially easy to announce support for; you put it in the issuer metadata:

code_challenge_methods_supported

https://datatracker.ietf.org/doc/html/rfc8414#section-2
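As a sketch of what a relying party can do with that metadata (the well-known path is standard per RFC 8414 / OIDC Discovery; the issuer URL in the comment is just an example):

```python
import json
from urllib.request import urlopen

def fetch_metadata(issuer: str) -> dict:
    """Fetch the issuer's metadata document from the standard well-known path."""
    url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    with urlopen(url) as resp:
        return json.load(resp)

def supports_pkce_s256(metadata: dict) -> bool:
    """True if the issuer advertises the S256 code challenge method.

    An absent code_challenge_methods_supported key means the server
    announces no PKCE support at all.
    """
    return "S256" in metadata.get("code_challenge_methods_supported", [])

# e.g. supports_pkce_s256(fetch_metadata("https://accounts.google.com"))
```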


With a metadata endpoint, things become much easier, that is true.

Though how would you implement it? Like, user comes to your website and wants to sign in with some foo.bar provider, do you force the user to paste in the domain where you go look for the metadata? What about facebook or google, do you give them special treatment with prepared buttons or do you force user to still put in their domains? What about people using your flow to "ddos" some random domain...?


FedCM offers some hope here: the browser gets some capability to announce the federation domains to the RP. It's not straightforward though, of course. In this case, though, it's inverted - you are providing the URL of the MCP server, and the MCP server is providing the URL of an authz server it supports. The client uses the metadata lookup to know whether it should include the PKCE bits or not.


With DCR (dynamic client registration) you can have an unlimited number of providers. Basically, just query the well-known endpoint and then use regular OAuth with a random secret.

There's also a proposal to add stateless ephemeral clients.
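For reference, DCR (RFC 7591) really is just a POST of client metadata to the issuer's advertised registration_endpoint; a hedged sketch, with the client name and redirect URI as placeholder values:

```python
import json
from urllib.request import Request, urlopen

def registration_payload(name: str, redirect_uri: str) -> dict:
    """Client metadata for an RFC 7591 dynamic registration request."""
    return {
        "client_name": name,
        "redirect_uris": [redirect_uri],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none",  # public client: PKCE instead of a secret
    }

def register_client(registration_endpoint: str, payload: dict) -> dict:
    """POST the metadata to the registration endpoint (found in the issuer's
    well-known metadata); the response contains the newly issued client_id."""
    req = Request(registration_endpoint,
                  data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```

The client_id comes back in the registration response, so no human ever has to click through a developer console to mint one.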


DCR is cool, but I haven't seen anyone roll it out. I know it has to be enabled per-tenant in Okta and Azure (which nobody does), and I don't think Google Workspace supports it at all yet. It's a shame that OIDC spent so long and got so much market-share tied to OAuth client secrets, especially since classic OpenID had no such configuration step.


DCR is now being pushed by AI companies, using the MCP protocol that basically requires DCR.

So it might get some traction, and finally break the monopoly of "Login With Google" buttons.


This is because the MCP folks focus almost entirely on the client developer experience at the expense of implementability. CIMD is a bit better and will supplant DCR, but overall there's yet to be a good flow here that supports management in enterprise use cases.


This isn’t fundamental to its design, though. It’s a result of providers wanting to gate access to identities for various reasons. The protocol presented here does nothing to address this gating.


Any properly grounded device will only do that with incorrect electrical wiring and/or a shoddy charger. Did this happen with a properly wired outlet and an undamaged Apple charger?

I have doubts that it did, as that would warrant a safety recall.


Can confirm it does happen. UK, both on my ThinkPad and a friend's MacBook when plugged in. It's a somewhat unavoidable side effect of the switching AC adapter designs - the output is isolated from the mains, but there is a tiny leakage current that can sometimes be felt as a "vibration". This is completely safe (far below the currents needed to cause harm) and no recall is needed.
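A back-of-the-envelope calculation supports the "completely safe" claim; assuming an illustrative ~1 nF Y-class capacitor across the isolation barrier on 230 V / 50 Hz mains (component values vary by adapter):

```python
import math

# Illustrative values: 230 V RMS, 50 Hz mains, ~1 nF Y-class capacitor
v_rms, freq, y_cap = 230.0, 50.0, 1e-9

# Capacitive reactance, then leakage current from I = V / Xc
xc = 1.0 / (2.0 * math.pi * freq * y_cap)   # ~3.2 megohms
leakage_ua = v_rms / xc * 1e6               # current in microamps

print(f"leakage ≈ {leakage_ua:.0f} µA")     # prints: leakage ≈ 72 µA
# Perception threshold is around 500 µA and the "let-go" threshold around
# 10 mA, so this tingle is orders of magnitude below dangerous levels.
```

The tingle gets more noticeable with a two-prong (ungrounded) plug because the chassis floats at roughly half the mains voltage through that capacitive divider, even though the available current stays tiny.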


Thank you. I always felt this vibration and wondered what it was.


If you replace the two prong plug on the AC adapter for a three prong cable, your MacBook case will be properly grounded and you won’t feel any vibration.


Cast aside your doubts, I've been to different parts of Europe a few times with different, healthy MBPs (I buy a new one every 4-5 years) with healthy adapters.

Plugging into the wide EU outlet with the Apple-manufactured plug from the "World Travel Adapter Kit" can lead to an uncomfortable "vibration" that you feel when you touch the top case, depending on the hotel/Airbnb. Whenever I visit, I expect to have to charge while I'm not using the device.


In researching why it was happening to me, I found sufficient forum posts complaining about it that it seems to be commonplace.

I have my doubts that Apple would admit enough to perform a safety recall given the issues they've had with their garbage chargers in the past. Other companies have no problems with building hardware that lasts. Apple seem to prefer their customers pay for replacements.


  Another example for all you computer folks out there: ultimately, all software
  engineering is just moving electrons around. But imagine how hard your job would
  be if you could only talk about electrons moving around. No arrays, stacks,
  nodes, graphs, algorithms—just those lil negatively charged bois and their 
  comings and goings.
I think this too easily skips over the fact that the abstractions are based on a knowledge of how things actually work - known with certainty. Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works. When people don't know how the computer actually works, their code is wrong - they get bugs and vulnerabilities they don't understand and can't explain.


Computer science has nothing to do with physical computing devices. Or rather, it has as much to do with computers as astronomy has to do with telescopes. You can do it all on paper. The computing device affords you nothing new but scale and speed over doing the mechanical work on paper. Electrons are irrelevant. They are as relevant to computer science as the species of tree from which the wood in your pencil comes is relevant to math.

Obviously, being able to use a computer is useful, just as using a telescope is useful or being able to use a pencil is useful, but it's not what CS or software engineering are about. Software is not a phenomenon of the physical device. The device merely simulates the software.

This "centering" of the computing device is a disease that plagues many people.


> When people don't know how the computer actually works, their code is wrong - they get bugs and vulnerabilities they don't understand and can't explain.

While this is true, we're usually targeting a platform, either x86 or arm64, which are incredibly complex pieces of engineering. Unless you are in IoT, or your application requires you to optimize at the hardware level, we're so distant from the hardware when programming in Python, for instance, that the level of awareness required about the hardware isn't that much more complicated than a basic Turing machine.


Physical latencies in distributed systems design. Calibration for input devices. Block storage failures and RAID in general. Monitor refresh rates. Almost everything about audio. Rowhammer.


No, red is an abstraction that is not based on knowledge of how colors work.


"How colors work" is dubious.

In physics, color has been redefined as a surface reflectance property with an experiential artefact as a mental correlate. But this understanding is the result of the assumptions made by Cartesian dualism. That is, Cartesian dualism doesn't prove that color as we commonly understand it doesn't exist in the world, only in the mind. No, it defines it to be the case. Res extensa is defined as colorless; the res cogitans then functions like a rug under which we can sweep the inexplicable phenomenon of color as we commonly understand it. We have a res cogitans of the gaps!

Of course, materialists deny the existence of spooky res cogitans, admitting the existence of only res extensa. This puts them in a rather embarrassing situation, more awkward than the Cartesian dualist's, because now they cannot explain how the color they've defined as an artefact of consciousness can exist in a universe of pure res extensa. It's not supposed to be there! This is an example of the problem of qualia.

So you are faced with either revising your view of matter to allow for it to possess properties like color as we commonly understand them, or insanity. The eliminativists have chosen the latter.


There's no definition for "color" in physics. Physics does quantum electrodynamics. Chemistry then uses that to provide an abstracted mechanism for understanding molecular absorption spectra. Biology then points out that those "pigments" are present in eyes, and that they can drive nerve signals to brains.

Only once you're at the eye level does anyone start talking about "color". And yes, they define it by going back to physics and deciding on some representative spectra for "primary" colors (c.f. CIE 1931).

Point being: everything is an abstraction. Everything builds on everything else. There are no simple ideas at the top of the stack.


> There's no definition for "color" in physics.

This is unnecessarily pedantic. Your explanation demonstrates that.

> There are no simple ideas at the top of the stack.

I don't know what a "simple idea" is here, or what an abstraction is in this context. The latter has a technical meaning in computer science which is related to formalism, but in the context of physical phenomena, I don't know. It smells of reductionism, which is incoherent [0].

[0] https://firstthings.com/aristotle-call-your-office/


> To untutored common sense, the natural world is filled with irreducibly different kinds of objects and qualities: people; dogs and cats; trees and flowers; rocks, dirt, and water; colors, odors, sounds; heat and cold; meanings and purposes.

It's too early to declare that there are irreducible things in the universe. All of those things mentioned are created in the brain and we don't know how the brain works, or consciousness. We can't declare victory on a topic we don't fully understand. It's also a dubious notion to say things are irreducible when it's quite clear all of those things come from a single place (the brain), of which we don't have a clear understanding.

We know that things like the brain and the nervous system operate at a certain macro level in the universe, and so all the brain observes are ensembles of macro states; it doesn't observe the universe at the micro level. It's then quite natural that all the knowledge and theories it develops are on this macroscopic/ensemble level, imo. The mystery of this is still unsolved.

Also, regarding the physics itself: we know that due to the laws of physics, the universe tends to cluster physical matter together into bigger objects, like planets, birds, whatever. But those objects could be described as repeating patterns in the physical matter, and this repeating nature causes them to behave as if they do have a purpose. The purpose is in the repetition. This is totally in line with reductionism.


> It's too early to declare that there are irreducible things in the universe. [...] We can't declare victory on a topic we don't fully understand.

This isn't a matter of discovering contingent facts that may or may not be the case. This is a matter of what must be true lest you fall into paradox and incoherence and undermine the possibility of science and reason themselves. For instance, doubting rationality in principle is incoherent, because it is presumably reason that you are using to make the argument, albeit poorly. Similar things can be said about arguments about the reliability of the senses. The only reason you can possibly identify when they err is because you can identify when they don't. Otherwise, how could you make the distinction?

These may seem like obviously amateurish errors to make, but they surface in various forms all over the place. Scientists untutored in philosophical analysis say things like this all the time. You'll hear absurd remarks like "The human brain evolved to survive in the universe, not to understand it" with a confidence of understanding that would make Dunning and Kruger chuckle. Who is this guy? Some kind of god exempt from the evolutionary processes that formed the brains of others? There are positions and claims that are simply nonstarters because they undermine the very basis for being able to theorize in the first place. If you take the brain to be the seat of reason, and then render its basic perceptions suspect, then where does that leave science?

We're not talking about the products of scientific processes strictly, but philosophical presuppositions that affect the interpretation of scientific results. If you assume that physical reality is devoid of qualitative properties, and possesses only quantifiable properties, then you will be led to conclusions latent in those premises. It's question begging. Science no more demonstrates this is what matter is like than the proverbial drunk looking for his keys in the dark demonstrates that his keys don't exist because they can't be found in the well-lit area around a lamp post. What's more, you have now gotten yourself into quite the pickle: if the physical universe lacks qualities, and the brain is physical, then what the heck are all those qualities doing inside of it! Consciousness has simply been playing the role of an "X-of-the-gaps" to explain away anything that doesn't fit into the aforementioned presuppositions.

You will not find an explanation of consciousness as long as you assume a res extensa kind of matter. The most defining feature of consciousness is intentionality, and intentionality is a species of telos, so if you begin with an account of matter that excludes telos, you will never be able to explain consciousness.


But the problem is we don't know how it works. It's not about assuming consciousness is outside of physical reality or something like this, it's simply the fact that we don't have an understanding of it.

For example, if we could see and trace all intentional thoughts/acts before they occurred (in matter), intentionality would cease to be a property; it would be an illusion.

All things that we know of in the universe function as physical matter, and we know the brain is a physical thing with 80 billion neurons and trillions of connections. What's the simplest explanation?

1) This is an incredibly complicated physical thing that we don't understand yet (and quite naturally so, with it having an incredible number of "moving parts")

or 2) there are qualitative elements in the universe that we don't have the scientific tools to measure or analyze, even in principle

I go with #1 because that's what every fiber is telling me (although I admit I don't know, of course). And with #1 also comes reductionism. It is a physical system; we just don't have the mental models to understand it.

I also want to say there could be another aspect that affects consciousness - namely the appearance of a "present now" that we experience in consciousness. This present moment is not really explained in physics, but it could have something to do with how consciousness works. How, I don't know, but it all relates to how we model physics itself mentally.


> I don't know what a "simple idea" is here

To be blunt: it's whatever was in your head when you decided to handwave-away science in your upthread comment in favor of whatever nonsense you wanted to say about "Cartesian dualism".

No, that doesn't work. If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics. Color is a theory, and it's real, and fairly complicated, and Descartes frankly brought nothing to the table.


> it's whatever was in your head

That doesn't make anything "simple". Analysis operates on existing concepts, which means they're divisible. It's clear words are being thrown around without any real comprehension of them. This is a stubborn refusal to examine coarse and half-baked notions.

> If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics.

Except this isn't a matter of science. These are metaphysical presuppositions that are being assumed and read into the interpretation of scientific results. So, if anything, this is a half-assed, unwitting dabbling in metaphysics and a failure to meet metaphysics on its own turf.

> whatever nonsense you wanted to say about "Cartesian dualism" [...] Descartes frankly brought nothing to the table

That's nice. But I haven't "handwaved-away" science. It is you who have handwaved-away any desire to understand the subject beyond a recitation of an intellectually superficial grasp of what's at stake. To say Descartes has nothing to do with any of this betrays serious ignorance.

See above [0].

[0] https://news.ycombinator.com/item?id=44014069


Love your perspective. It reminds me of this argument I’m working on about Turing machine qualia. Maybe my argument is just Searle in disguise? https://x.com/jchris/status/1815379571736551923?s=46&t=8A60w...


it is an abstraction based on how our biological eyes work (this implies "knowledge" of physics)

so it is indirectly based on knowledge of how color works; it's simply not physics as we understand it, but "physics" as the biology of the eye "understands" it.

red is an abstraction whose connection to how colors work is itself another abstraction, but of a much deeper complexity than 'red', which is about as direct as an abstraction can get nowadays


There is absolutely no knowledge needed for someone to point to something that is red and say "this is red" and then for you to associate things that roughly resemble that color to be red.

Understanding the underlying concepts is irrelevant.


Except I could think they mean the name of the thing, the size of the thing, or a million other things. Especially if I have no knowledge of the underlying concept of colors.


For me, a computer is at best semi-transparent.

I can rely on a TCP socket guaranteeing delivery, but I am not very well versed in the algorithms that guarantee that, and I would be completely out of my depth if I had to explain the inner workings of the silicon underneath.


Plenty of programmers know nothing about electrons. Think kids.

Most programmers never think once about electrons. They know how things work at a much higher level than that.


That only works because some EE has ensured the abstractions we care about work. You don't need to know everything, you just need to ensure that everything is known well enough by someone all the way down.


Yeah? So what. They're still using abstractions that were created by people who know about electrons.


"Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works."

That is literally how we approach transformers.


Who is "we"? Lots of people (including me) know how transformers work. Just because we can't do all the math in our heads quickly enough to train a model or run inference mentally doesn't mean we don't know mechanically how they work.


We know how they are trained. We just don't know how the trained model works, since the program is emergent.


Lol, we also know how inference works. The fact that LLMs turned out to be surprisingly effective doesn't mean we don't know how they work. There are many fields where we know the underlying physics and it's just difficult to predict real-world results because there are so many calculations. What's next, are you going to tell an aerospace engineer that flight is "emergent" because we need to run simulations or experiments?
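For what it's worth, the core computation of that inference step fits in a few lines; a toy single-head, unmasked scaled dot-product attention in NumPy (dimensions chosen arbitrarily):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # pairwise similarity between tokens
    return softmax(scores) @ v     # weighted mix of value vectors

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (4, 8): one mixed vector per input token
```

The "we don't know how it works" debate is about why the billions of learned weights produce useful behavior, not about this mechanism, which is fully specified.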


The actual program is a black box. We have been able to dissect some details but the whole system is hard to understand. The program is grown more than developed. Understanding the concept of inference doesn't help you much.


"The program" is some silly abstraction you've made up. If you don't understand the underlying mathematical operations that's fine, but many of us do. And they aren't that complicated in the grand scheme.

Every complex system is hard to understand due to the number of variables vs. human working memory.


This is like saying that understanding water phase changes makes you competent at ice skating. You know what I'm talking about.


The desperation to create a simplified model that fits in your head, using abstractions and analogies, is leading you astray.

There is no ice skating.


Tell that to Richard Zedník's throat.


It doesn't skip over it. First, this is an example and not the primary thing the article is talking about. But secondly, just above, the article states that some lower-level knowledge is necessary in the transit example. If you map those things, as written by someone who, as they say, isn't that knowledgeable about programming, then they make sense without diving into the specifics.


> I think this too easily skips over the fact that the abstractions are based on a knowledge of how things actually work - known with certainty.

models ≠ knowledge, and a high degree of certainty is not certainty. This is tiring.


This seems like a misreading of the comment. The models and knowledge of arrays, classes, etc, are known with "arbitrarily high" certainty because they were designed by humans, using native instruction sets which were also designed by humans. Even if this knowledge is specialized, it is readily available. OTOH nobody has a clue how neurons actually work, nobody has a working model of the simplest animal brains, and any supposed model of the human mind is at best unfalsifiable. There's a categorical epistemic difference.


But doesn't this argument defeat itself? We cannot, a priori, know very much at all about the world. There is very, very little we can "know" with certainty -- that's the whole reason Descartes resorted to the cogito argument in the first place. You and GP just choose different lines to draw.


Yes, I agree completely. I think the apriori/aposteriori distinction is always worth making though.

This really does matter a lot more when floating signifiers get involved; I'm not actually contesting that our models of electrical engineering model reality quite well.


> Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works.

Haven't you heard about vibe coding?


I really miss the feature of CSR devices that allowed keyboard and mouse use before OS boot, and wish any modern Bluetooth receiver was capable of it. Is it a patent issue?


Probably just a "it's hard for little pay-off" issue.

To use a bluetooth keyboard from the stage of "Press F10 to Enter Setup" we need the firmware (whether BT host, mainboard, or something else) to have a full bluetooth stack, some way to manage pairing/unpairing devices, and a bunch of other stuff.

If we do this outside the BT host, we probably need changes to the operating systems at least to handle how we're going to hand-off the state of the bluetooth stack when the OS takes control. Unless we want to _separately_ manage pairing/unpairing in the firmware, we would probably want some way to expose that to the operating system to be able to push its paired devices down.

And then it's probably still not super useful unless we substantially lengthen the prompt time, because by the time you turn the keyboard on, coax it into connecting, and hit the button, the OS will probably have booted already.

If you want this today just don't use bluetooth. Get one of the devices that uses "2.4GHz" or uses "Bluetooth + 2.4GHz" and shove a dongle in there. The keyboard/mouse will appear as a normal USB-connected device and you can use them how you want.


> If we do this outside the BT host, we probably need changes to the operating systems at least to handle how we're going to hand-off the state of the bluetooth stack when the OS takes control.

During the CSR hid2hci era, the adapter would just remember the last N pairings, since a sufficiently smart adapter can technically just store the keys used by the host when it tries to pair, then "impersonate" the host during the HCI part.


I don't think the other comments in this thread are at all correct. This is not a hard problem to solve and these comments vastly overcomplicate it.

You need two things: 1) a processor which can present HID devices OR a Bluetooth adapter, depending on the presence of 2) a driver which can inform the adapter when it should be in BT mode instead of the default HID mode, and which can tell the firmware which devices to auto-connect to in HID mode.

The first is easy and effectively free. The USB stack is (usually) implemented in firmware, which makes it trivial to present as different device classes.

My guess is the problem comes down to drivers. It is difficult and quite expensive to have a custom driver upstreamed to Windows Update. You can't do this without a custom driver or userland software. On the other hand, if you simply present as a generic BT adapter, windows has a generic driver that will (usually) always work and is always installed.

The benefit of this feature is miniscule and there probably is not enough demand to make it worthwhile for CSR to sell their soul to Microsoft to have their driver blessed.

In this day and age, almost nobody ships a custom driver for anything. You just use the generic drivers Windows already has for all standard device types.


In fact, quite recently there was a Show HN showing a "appears-as-USB-HID" Bluetooth adapter: https://news.ycombinator.com/item?id=42125863 . The only thing missing was the hid2hci part.

While I was posting in that thread I also noticed there are a gazillion attempts in github to do something like this, so complicated it is not.


These do essentially exist in dongle form, though you'd need some kind of driver support to get rid of the physical pairing button: https://handheldsci.com/kb/


Mostly it's a cost/ease thing. For the device to work correctly with no OS, the hardware inside has to be powerful enough to run all of the logic itself and it has to be coded up to do that.

If you wait until the OS is up, the device itself can offload a good amount of logic and processing to the device driver.

My bet would be that the main reason is that it's easier to find programmers who can write complex device drivers than it is to find ones who can write complex embedded firmware, and it's quicker/easier in general to write device drivers than firmware.

That and just 99% of people will never notice that it doesn't work outside of the OS, and of the rest, 99% will only be vaguely annoyed but not change brands over that.


FWIW, EFI BIOSes absolutely can support Bluetooth, which is unsurprising since EFI is a full-fledged OS in its own right.

You still need to check if your motherboard supports Bluetooth at boot but many do.


I have not found any outside Apple's. Do you know some models?


My mxkeys works flawlessly during the grub menu on both a MSI and a HP laptop I have.


This cannot be true. How is the pairing done? Are you sure you're not using Logitech's own receiver?


Am I misunderstanding this? Isn’t this just the view, rating, and comment data required to offer the service?

Building an interface to search that data is exactly how you’d evaluate a recommendation engine.


While I’m all for physical controls, especially ones that self-adjust to reflect the state of the remote device at all times, I wonder if the author just doesn’t know you can finely adjust volume in iOS control center by force/long pressing and then dragging.


This is a great point! When I'm using AirPlay, that feature is really useful. I'm more often using Spotify Connect though, where I'm limited to either using the physical volume buttons on my phone, the small slider in the desktop app, or the slider that's many clicks in to the Spotify mobile app.

In reality though, this project is more about the fun of it than about it being a really pressing need.


It also works when using Spotify Connect on your iOS device. If you can use your volume buttons to control it you can also adjust it with the slider in the control centre.


You're correct, TIL!

That's really helpful to know. At this point though, I'm excited enough to build a volume knob that I'll probably still do it.

edit: After trying this out it a bit, it's definitely an improvement over the small sliders and a huge improvement over the stepped volume changes from the buttons, but I'm still left wishing I could make use of more than ~10% of the slider's full range.


Spotify has so many user-hostile practices that I am completely mystified why the majority of the population seems to prefer them in a world where YouTube Music exists.


The only competitor that I've given a fair shot is Apple Music. I'm not thrilled with either. Between those two, Spotify wins solely for Spotify Connect. I much prefer the way it works to AirPlay.

I haven't really tried YouTube Music, but I'll give it a go. I've been meaning to try out Tidal too but haven't yet.


If you've a computer to run it on, I highly recommend trying Roon out as a superior alternative to Spotify Connect.


What do you prefer about Roon?


They're not increasing my subscription price to give me stuff I never asked for.

On the technical side, I can stream to multiple devices concurrently, the interface is cleaner, and it supports a local music library.


With YouTube Premium you get no ads on YouTube plus YouTube Music. It's a great deal.


wget begs to differ.

Kidding aside, where exactly does it end? How do you consider when you’ve hit “too much” and how many pieces must be split out when you do? Should every product in the Office suite be offered only individually?


Indeed, a lot of people don't remember, but back when spell checking was a new thing, there was genuine concern over whether bundling it with word processors was anticompetitive.

Or if Word and WordPerfect should be sold without spell checkers, and they'd need to interoperate with third party ones.


I find these kinds of rhetorical "where exactly does it end" comments really limp. Life is full of choices where there are grey areas. Lay out a bunch of desirable criteria - like not allowing a single entity to monopolise a market - then pick a starting point and iterate until you get a decent balance between the criteria. Sure it'll be a bit messy, but better than doing exactly nothing after throwing your hands up into the air and whinging about the fact that it's complicated.


I understand your frustration, but it's genuinely not that easy.

You're right that there are a lot of gray areas in the law, where the two sides are clear but there's a blurry line. One famous example: should Pringles be taxed as potato chips or as a generic savoury snack? Because they're not fried slices of potato; they're a fried and shaped mixture of dried potatoes, corn flour, and rice flour. People think of them as potato chips, but they're not really. But it's still relatively straightforward to just draw a line somewhere.

The problem with antitrust is that we don't really know how to define it at all. It's not just a single dimension like "is it a potato chip?" where there's just a single line. It's more like a blob with lots of dimensions, where different reasonable people completely disagree on what the most basic, most important elements even are.

> Sure it'll be a bit messy, but better than doing exactly nothing

That's where you're wrong. Badly applied antitrust law can actually be much worse than doing nothing.

I'm not saying to give up. I'm just saying, it's not nearly as easy as you're making it sound. There are really smart people who research antitrust and try to come up with recommendations, and they have profound disagreements with each other. The problem is actually a lot harder than you seem to think it is.


What if we just make them bundle their competitors' products if they want to bundle their own?

That would mean Firefox, Nextcloud, and Bitwarden are installed by default on Windows and Macs.


What about my startup nextercloud? Why am I being discriminated against!?


To be sure, the goal isn't to get every alternative in there, just enough to stop the unfair advantage of bundling and to make a healthier market.


Well, I feel like Nextercloud™ is the key to a healthier market and to stopping the unfair bundling of Nextcloud with major OS releases.


Well pass that info along to the regulator who can actually make binding decisions regarding this matter.


This seems to cover many common pain points, but I've written my fair share of .NET serializers, and for anything I build now I'd just use Protocol Buffers. Robust support, handles versioning pretty well, and works cross-platform.

I’d like to know their reasons for making yet another serializer vs just using pb or thrift.
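
For what it's worth, the versioning tolerance mentioned above comes from protobuf matching fields by tag number rather than by name or position: unknown fields are skipped and missing fields get defaults. A rough sketch of what that looks like in a schema (the message and field names here are made up for illustration):

```proto
syntax = "proto3";

// v1 of a hypothetical save-game message.
message PlayerState {
  string name = 1;
  int32 level = 2;
}

// A later revision: fields are matched by tag number, so adding a
// new field (3) is safe for old readers, and a retired tag is marked
// "reserved" so it is never accidentally reused with a new meaning.
message PlayerStateV2 {
  string name = 1;
  reserved 2;            // 'level' was removed
  repeated string items = 3;
}
```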


This is a good point. I don't think anyone wakes up wanting to make a new serializer. At this point, I was already pretty deep into making and releasing tools for my game projects so doing this didn't seem like such a stretch (although it actually ended up being one of the hardest things I've ever done).

A lot of small to mid-size games (which are the focus of the tools I provide) want to save data as JSON, whether to be mod-friendly or just somewhat human-readable to the developer while working on the game. I'm not familiar with Thrift, but PB is obviously a binary format with a focus on compactness and performance, which isn't high on my list of priorities for a serialization system. My primary concern is refactor-friendliness: I want to be able to rework type hierarchies without breaking existing save files, or get as close to that as possible.

I suppose you could say I'm only really introducing "half" of a serialization system: the heavy lifting is being split between the introspection generator (for writing metadata at compile time via source generation) and System.Text.Json (which handles a lot of the runtime logic for serializing/deserializing things).
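
Not the library's actual mechanism, but as a rough illustration of the refactor-friendliness goal in Python rather than C# (all names here are made up): a lenient JSON load that drops keys the current type no longer has and defaults the ones old save files don't have yet, so adding or removing fields doesn't break existing saves.

```python
import json
from dataclasses import dataclass, field, fields


@dataclass
class PlayerState:
    name: str = "unknown"
    level: int = 1
    # A field added after old save files were already written.
    items: list = field(default_factory=list)


def load_player(raw: str) -> PlayerState:
    """Deserialize leniently: ignore unknown keys, default missing ones."""
    data = json.loads(raw)
    known = {f.name for f in fields(PlayerState)}
    return PlayerState(**{k: v for k, v in data.items() if k in known})


# An old save file: no 'items' yet, plus a 'mana' field that was since removed.
old_save = '{"name": "Ada", "level": 7, "mana": 30}'
player = load_player(old_save)
```

The same idea carries over to any serializer: the lenient boundary lives in one load function, and the rest of the game only ever sees the current shape of the type.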

