I'm not aware of viewing distance recommendations differing between monitors and TVs; it's the same 30°-40° of horizontal field of view for both, with 32° being a notable notch along the range.
This is then usually combined with the 60 PPD visual acuity quasi-myth, and so you get 1800px, 1920px, and 2400px horizontal resolutions as the bar, mapping to FHD and ~WQHD resolutions independently of diagonal size. From these, one could conclude even UHD is already overkill. Note for example how an FHD monitor of exactly standard density (96 PPI, so ~23") at a 32° hfov results in precisely 60 PPD. That is exactly the math working out as intended, afaik.
At the same time, Mac users will routinely bring up the Pro Display XDR and how they think it is the bare minimum and everything else is rubbish (*), with it coming in at a staggering ~200 PPD, 188 PPD, and ~150 PPD at 30°, 32°, and 40° hfov respectively. Whether the integer result at 32° is just the work of the winds, who knows. It is nonetheless a solid 3x the density that was touted as so fine you would "not be able to see the individual pixels". But if that was a lie back then...
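For transparency, here's a quick sketch of that arithmetic (treating PPD as horizontal pixels spread evenly over the hfov, which is the same simplification these rules of thumb rely on anyway):

```python
import math

def ppd(horizontal_px: int, hfov_deg: float) -> float:
    # Flat-average approximation: horizontal pixels spread evenly over the hfov.
    return horizontal_px / hfov_deg

# FHD at standard density: 1920 px / 96 PPI = 20" wide, ~22.9" diagonal at 16:9.
print(math.hypot(1920, 1080) / 96)   # ~22.9" diagonal
print(ppd(1920, 32))                 # exactly 60 PPD at a 32° hfov

# Pro Display XDR (6016 x 3384 px) at the rule-of-thumb fields of view:
for fov in (30, 32, 40):
    print(fov, round(ppd(6016, fov), 1))   # ~200.5, 188.0, ~150.4 PPD
```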
The pixel density (PPI, PPD), viewing distance, and screen real estate discussion is not one with a satisfying end to it I'm afraid. Just a whole lot of numerology, some of which I sadly cannot help but contribute to myself.
(*) not a reliable narration of these sentiments necessarily
> Not aware of viewing distance recommendations differing between monitors and TV
Uh.. what?
You usually sit 60cm from your monitor, but 3-6m from your TV. It completely changes the math, which lets you "cheat" with ordinary TV sizes (50"-65") because you will barely notice the difference between 1080p and 2160p, if at all.
The other way around, your monitor won't really get the 'retina' effect of not discerning pixels until you hit ~220 PPI.
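For what it's worth, a rough sketch of why those TV distances wash out the difference, picking a hypothetical 55" set at 3 m as a midpoint of both ranges:

```python
import math

def hfov_deg(width_m: float, distance_m: float) -> float:
    # Horizontal field of view subtended by a flat screen of the given width.
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Hypothetical 55" 16:9 TV: ~1.22 m wide, viewed from 3 m away.
fov = hfov_deg(1.22, 3.0)            # ~23 degrees
print(1920 / fov, 3840 / fov)        # ~84 PPD at 1080p, ~167 PPD at 2160p
# Both already clear the 60 PPD acuity rule of thumb, which is the "cheat":
# the 1080p -> 2160p jump lands past what that rule says you can resolve.
```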
> 3-6m from your TV (...) lets you "cheat" with ordinary TV sizes (50"-65") because you will not or barely notice the difference between 1080p and 2160p.
Yes, at those sizes and distances, all the usual rules of thumb will report that the difference will be indistinguishable between FHD and UHD. We're in agreement there.
It's just that if "cinematic immersion" is among the goals at all, I really don't think e.g. viewing a 50" TV from 6 meters away can provide it. That's more like "something is making noise while I'm having a pop and scrolling social media on my phone" at best. A lot of TV watching happens like that, but then resolution is rarely a concern during those anyways.
> You usually sit 60cm from your monitor
> the 'retina' effect
So we're selecting for 60 PPD ("retina") at 60 cm of viewing distance, let's see:
21" -> 42.352° hfov, 2544 × 1431, ~139 PPI. At UHD, it's ~210 PPI. WQHD would be enough though, and that'd be ~140 PPI.
All of this is to say, I have no idea where you're pulling that ~220 PPI figure from, especially when it comes to larger sizes.
You can also see the required PPI slowly descending from some maximum value (eventually approaching 0) as the diagonal grows, since we're talking about a flat panel, and so the math slowly gives out. If you assume a curved display instead and make the viewing distance the curvature radius (600R) to compensate, you can calculate that maximum value to be ~145.5 PPI; in that case it would remain fixed no matter the diagonal size. Still not anywhere close to 220 PPI, however.
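And the ~145.5 figure, derived as a short sketch: on a 600R panel viewed from the centre of curvature, every degree of view sweeps the same arc of screen, so 60 PPD pins the linear density.

```python
import math

# 600R curved panel viewed from its centre of curvature (600 mm away):
# each degree of view covers the same arc length of screen surface.
arc_mm_per_deg = 2 * math.pi * 600 / 360      # ~10.47 mm of screen per degree
ppi = 60 / (arc_mm_per_deg / 25.4)            # 60 px per degree -> pixels per inch
print(ppi)                                    # ~145.5 PPI, independent of diagonal
```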
That said, I definitely sit further than 60 cm too.
> That said, I definitely sit further than 60 cm too.
I've tried sitting closer than 1 metre (~3 ft) from a flat 32" 4K display, and I find it kind of overwhelming. At 60cm viewing distance I basically end up keeping one window down the middle of the display that is in focus, and put utility windows off to the sides - and then I physically translate my head to look at them, because the corners of the screen are distorted if I just pivot.
I used to have a 40" 4K display, back when such things were available on eBay from SK, and I had to wall mount it some ways behind the back of my desk to use it comfortably.
I'm sure a curved panel would help with the corner distortion up close, but it kind of seems like a solution in search of a problem.
> But the article points out that the students here don't even watch movies themselves -- "students have struggled to name any film" they recently watched. Why are these people even studying film? The inattention is clearly caused by disinterest.
There's a saying around here that roughly goes: few things are as successful in killing one's interest in something as pursuing a formal education about it.
Being innately interested in something is one thing, but then being in an environment where that is now a hard expectation is another.
It's like the difference between wanting to draw something and being forced to draw something. Entirely different playing fields.
Everything is theater if one is cynical enough. One can very obviously find value in blocking the camera, even if other sensors remain active.
That said, I do see merit in flagging these. Related surprises usually include e.g. speakers being usable as microphones, and accelerometer data being usable for location tracking in lieu of GPS / any kind of radio, just not remotely & live of course.
What value is there in blocking a camera if the microphone isn't blocked? No one can glean anything from the camera in my pocket or face down (or up) on a table - they can glean a lot from the microphone.
And why would I have my phone, when I am nude, in a position where anyone could see anything but my face? While I'm in decent shape, if I lost my job I wouldn't be opening an OnlyFans account.
I have looked into it. This appears to be a Firefox bug when HDR is enabled on wayland and the website is using webgl. Firefox looks to be leaking wl_buffer objects which are causing a VRAM leak in the wayland compositor which then causes performance issues in the AMDGPU TTM buffer object management.
Nice dig. Could you share more about how you narrowed it down in the end? Is it a known issue and you just had to confirm it applies, or did you identify all of this yourself?
`perf` to go from "it's stuttering" to "it's spending a very long time in the gpu driver". GDB and printf debugging to get to "the sort in the driver is taking a long time because there is an excessively large number of TTM buffer objects, not because we are calling it too much". I could have made that leap faster, and I will the next time, but this time that step took me a couple of hours. From there it was a question of who is making those buffer objects, and so it was back to GDB to find nothing in sway/wlroots.
That was where I sort of ran out of good ideas. I have never worked with Wayland before. I figured it's a "protocol" so it must have a way to inspect it, and it does. `WAYLAND_DEBUG=1` allows you to dump the wayland messages, which I then manually inspected to find a discrepancy between allocations and deallocations. That's a client (aka firefox) bug, so I looked through their issue tracker where I found a somewhat similar bug[1]. I reported my findings there.
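In case it helps anyone doing the same thing, a throwaway script (hypothetical, and assuming the usual `WAYLAND_DEBUG=1` message formatting) along the lines of what I eyeballed by hand, counting wl_buffer creations against destroys in a saved log:

```python
import re
import sys

# Counts "new id wl_buffer@N" messages vs "wl_buffer@N.destroy()" messages in a
# WAYLAND_DEBUG=1 log fed via stdin. Wayland recycles object IDs, so this only
# tracks the live balance over the capture, not the identity of leaked buffers.
created = destroyed = 0
for line in sys.stdin:
    if re.search(r"new id wl_buffer@\d+", line):
        created += 1
    elif re.search(r"wl_buffer@\d+\.destroy\(", line):
        destroyed += 1
print(f"created={created} destroyed={destroyed} live={created - destroyed}")
```

Run it against a log captured with something like `WAYLAND_DEBUG=1 firefox 2> wayland.log`; a steadily growing `live` count corresponds to the discrepancy I was looking for by hand.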
Since then I've checked out the firefox code (which I've also never worked with before). Back in GDB and the logs, and I think I know what's going wrong. You can read the bugzilla for that though.
Unexpected side quests are so much more enticing when equipped with the skill set to dig deep, but also much more time-consuming. Thanks for sharing! : )
IU was correctly used everywhere else in the article except that one place with the mistake, so the LLM didn't hallucinate a correction, it correctly summarized what the bulk of the article actually said.
That's a bit of a non-sequitur, isn't it? The debated point is how oral intake as a delivery method can pan out specifically (and its limits), not the dosage limits of Vitamin D in general. Think consuming a drug vs injecting it.
I don't get why + addresses always come up in this. They're machine-undoable by design.
Using randomized relay addresses instead gives you an immensely higher confidence that when a given contact address starts getting spam, it is misuse stemming from a specific entity. Especially if you rotate it at a fixed time interval, because then you can even establish a starting timeframe.
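As a sketch of what I mean (domain, filenames, and storage are all made up), the user-side part of the scheme is barely any code:

```python
import json
import secrets
import time

RELAY_DOMAIN = "relay.example.net"   # hypothetical relay service

def mint_alias(contact: str, registry_path: str = "aliases.json") -> str:
    # Random, non-derivable alias: unlike "me+shop@example.com", there is
    # nothing a spammer can strip off to recover the real address.
    alias = f"{secrets.token_hex(8)}@{RELAY_DOMAIN}"
    try:
        with open(registry_path) as f:
            registry = json.load(f)
    except FileNotFoundError:
        registry = {}
    # Remember who got the alias and when, so spam on it both identifies
    # the leaking party and bounds when the leak started.
    registry[alias] = {"contact": contact, "created": int(time.time())}
    with open(registry_path, "w") as f:
        json.dump(registry, f, indent=2)
    return alias
```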
Still not perfect, but it can never really be, and not even through any fault of email's. As long as DNS and IP addressing rule the world, there's only so much one can do. Once identity is private by default, it becomes a secret-handling problem at its core, a capability these schemes were never designed to provide.
One big reason I can think of that would make one want a permanent data purge feature is that the data is not on their premises but on the service provider's. I think GDPR might even require such a feature under a similar rationale.
So maybe a better formulation would be to force the user to transfer out a copy of their data before allowing deletion? That way, the service provider could well and truly wash their hands of this issue.
Forcing an export is an interesting idea. But, like, from the article it sounds like almost anything would be a better flow. It didn't even warn that any data would be deleted at all.
One further refinement I can think of is bundling in a deletion code with the export archive, e.g. a UUID. They could then require the user to enter that code into a confirmation box, thereby "guaranteeing" the user did indeed download the whole thing and that the service provider is free to nuke it.
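Roughly this kind of flow, as a minimal sketch (all names hypothetical):

```python
import uuid
import zipfile

def build_export(archive_path: str, files: dict[str, bytes]) -> str:
    # Bundle a one-time deletion code into the export archive itself.
    deletion_code = str(uuid.uuid4())
    with zipfile.ZipFile(archive_path, "w") as zf:
        for name, data in files.items():
            zf.writestr(name, data)
        zf.writestr("DELETION_CODE.txt", deletion_code)
    return deletion_code          # the server stores this next to the account

def confirm_purge(submitted_code: str, stored_code: str) -> bool:
    # The code only ever travels inside the export, so a matching submission
    # is decent (if not airtight) evidence the user actually downloaded it.
    return submitted_code == stored_code
```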
Wouldn't really be a guarantee in technical actuality, but one would really need to go out of their way to violate it. I guess this does make me a baddie insofar as this is probably how "theaters" are born: rituals that do not / cannot actually carry the certainty they bolster in their effect, just an empirical one if that.
I don't think such an idea is consistent with the existence of trash-bin features, or the not-insignificant use of data recovery tools on normally operating devices.
I can definitely see the perspective in clarifying that ChatGPT didn't lose anything, the person did, but that's about it.
It's always interesting to see how hostile and disparaging people can start to act when given the license. Hate AI yourself, or internalize its social standing as hated even just a little, and this article becomes a grand autoexposé, a source for immense shame and schadenfreude.
The shame is not that he was such an imbecile as to not have appropriate backups, it is that he is basically defrauding his students, his colleagues, and the academic community by nonchalantly admitting that a big portion of his work was ai-based. Did his students consent to have their homework and exams fed to ai? Are his colleagues happy to know that most of the data in their co-authored studies was probably spat out by ai? Do you people understand the situation?
It's not that I don't see or even agree with concerns around the misuse and defrauding angle to this, it's that it's blatantly clear to me that's not why the many snarky comments are so snarky. It's also not as if I was magically immune to such behavioral reflexes either, it's really just regrettable.
Though I will say, it's also pretty clear to me that many taking issue with the misuse angle do not seem to think that any amount or manner of AI use can be responsible or acceptable, rendering all use of it misuse - that is not something I agree with.
It seems you are desperately trying to make a strawman without any sensible argument. I don't personally think it is "snarky" to call things as they are, plain and simple: you, as a supposed expert and professional academic, post a blog on Nature crying that "ai stole my homework", and it's only natural you get the ridicule you deserve; it's the bare minimum, and he should be investigated by the institution he works for.
A reasonable amount of AI use is certainly acceptable, where "reasonable" depends on the situation; for any academic-related job this amount should be close to zero, and no material produced by any student/grad/researcher/professor should be fed to third-party LLM models without explicit consent. Otherwise what even is the point? Regurgitating slop is not academic work.
Sorry to hear that's how my comments seem to you; I can assure you I put plenty of sense into them, although I cannot find that sense on your behalf.
If you think that considering others desperate, senseless, and reasoning in error without any good reason improves your understanding of them, and that snarky commentary magically ceases to be snark, or becomes all okay, because it describes something you consider a great truth, that's on you. Gonna have to agree to disagree on that one.