
As a scientist, I'm interested in how computer programmers work with data.

* They drew beautiful graphs!

* They used ChatGPT to automate their analysis super-fast!

* ChatGPT punched out a reasonably sensible t test!

But:

* They had variation across memory and chip type, but they never thought of using a linear regression. (A sketch of what that could look like follows below.)

* They drew histograms, which are hard to compare. They could have supplemented them with simple means and error bars. (Or used cumulative distribution functions, where you can see if they overlap or one is shifted.)
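For illustration, here's roughly what that could look like in Python, with made-up numbers and column names standing in for whatever the benchmark actually measured (a sketch, not their analysis):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical benchmark results: one row per run.
    df = pd.DataFrame({
        "latency_ms": [12.1, 11.8, 14.0, 13.5, 12.6, 14.2, 12.3, 13.9],
        "memory_gb":  [8, 8, 16, 16, 8, 16, 8, 16],
        "chip":       ["A", "B", "A", "B", "A", "B", "A", "B"],
    })

    # One model captures both sources of variation at once: chip type as a
    # categorical factor, memory as a covariate. Standard errors come for free.
    model = smf.ols("latency_ms ~ C(chip) + memory_gb", data=df).fit()
    print(model.summary())

    # And the simple alternative to histograms: group means with standard errors.
    print(df.groupby("chip")["latency_ms"].agg(["mean", "sem"]))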


> In the next 10 years, AI/robots will generate wealth at an unprecedented scale. Food, clothing and shelter will be plentiful.

Anyone who believes in the possibility of a post-scarcity society must be either naive or trolling. Something cannot be made from nothing; therefore scarcity cannot be overcome, even assuming that all planning and execution is performed by superhuman synthetic minds.

Assuming that it's theoretically possible to utilize existing resources in a very efficient manner (e.g. a motor running on a grain of rice for a century), and that we just need AI to help us figure it out, is a _gigantic_ leap of faith, and I would not bet a cent on it.

Let me paint a more realistic possibility for you (with a broader time horizon): most of the value created by automating knowledge work will be captured by private capital, and the middle class will all but disappear. Education beyond basic reading and writing will become unattainable (and, frankly, unnecessary), and most of the population will be reduced to a state of semi-literate serfdom, dependent on the newly minted lords for survival. The lords won't have to worry about feeding their subjects for long, though, as mass death brought about by climate change will take care of that problem.

Under that scenario, there will be no new enlightenment age to come and save us. The only reason we get to enjoy whatever freedoms we have today is that a (semi-)intellectual population is absolutely necessary to keep the complex modern economy running. Even then, those above you will do absolutely everything to limit your agency - by withholding information, lying, or just outright taking freedoms away. Do you know what happens once our participation in propping up the economic machine becomes unnecessary? Demotion to the default state of a human throughout history - a groveling, suffering serf who has no idea what's going on.

"If you want a picture of the future, imagine a boot stamping on a human face – for ever."


It was like the JavaScript of UX and 2D games for a while. It was trash as a technology, but it was a technology that millions of UI designers and indie game developers cut their teeth on and already knew deeply, so it stuck around much longer than it should have. Like grown-ups trying to ride tricycles to work.

I remember someone at EA talked to Adobe about getting access to the Flash (.fla) file format so that our UI tools could read it directly and export into the in-game format, instead of reading the already compiled and compressed .swf format. And they were like, "Uh... you really don't want to do that. It's like a memory dump of the entire application state. It's a giant mess. We're really sorry."

By all accounts, it seemed like the Flash codebase was a dumpster fire. But so many amazing little games and UIs were built using it.


Here's a research paper that might provide more information: "Vision-Based Navigation for the NASA Mars Helicopter" https://sci-hub.se/10.2514/6.2019-1411

> This paper provides an overview of the Mars Helicopter navigation system, architecture, sensors, vision processing and state estimation algorithms.


Here's an anecdote about what happens when you design a product in the opposite way.

I own https://www.executeprogram.com, which has interactive in-browser courses on various software development technologies. Currently they all cover languages, more or less: TypeScript, SQL, various JS topics, regexes. (Disclosure: it costs money after you finish your 16th lesson.)

Almost all (maybe literally all) of our competitors are amenable to bingeing. It's true of books, video learning platforms, and most/all other interactive learning platforms.

Execute Program is very intentionally non-bingeable. When you start a course, you get 5 lessons on the first day, then it stops you and tells you to come back tomorrow. On the next day, you get some brief reviews of yesterday's lessons, then a few new lessons, then it stops you again until the next day. That cadence repeats until you finish the course. You can't binge/cram even if you want to.

(A bit more technical detail: it's a spaced repetition system with exponential review intervals, similar to those used for language learning in e.g. WaniKani and Anki. But it also has a lot of fine-grained knowledge of its own course structure, so it can use reviews to intelligently unlock different lessons depending on how the user performed on their reviews.)
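The underlying interval math is simple even though the real scheduler has more moving parts. A minimal sketch of exponential review scheduling, with illustrative constants rather than Execute Program's actual parameters:

    from datetime import date, timedelta

    def next_interval(previous_days: int, passed: bool) -> int:
        """Exponential spaced repetition: each successful review roughly
        doubles the gap; a miss sends the item back to tomorrow."""
        return max(1, previous_days) * 2 if passed else 1

    # An item first reviewed after 1 day comes back after 2, 4, 8, 16... days.
    interval, due = 1, date.today() + timedelta(days=1)
    for _ in range(4):
        interval = next_interval(interval, passed=True)
        due += timedelta(days=interval)
        print(f"next review in {interval} days, on {due}")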

Occasionally, we get support email from new users who don't like this. They want to cram a whole course in a day. But cramming is a very time-inefficient way to learn, so this is self-defeating! Since launch, we've had good success adjusting the app's behavior and internal explanations to reduce these complaints.

On the other hand, we also get emails from long-term users who appreciate the time limits. Generally these fall into two categories:

1. Users like that an enforced break before the reviews provides tangible evidence that "yes, I genuinely understood yesterday's lessons". If we allowed cramming, that reassurance wouldn't exist; it's too easy to succeed at a review when you just finished the lesson 30 seconds ago.

2. Users like that the usage limits remove a source of anxiety and worry. You do your reviews and lessons, you finish, and then you wait until tomorrow. There's no temptation to think "I really should've done 10 lessons today instead of 5; I'm so lazy".

It's still possible for a very dedicated user to do all of our courses in parallel within their first monthly billing cycle. (Median course start-to-finish time is 8-18 days, depending on the course.) So this scheme doesn't make users pay us more than they would otherwise. And they're spending the same amount of wall-clock time that they'd spend if they crammed all of the lessons in one day. That makes it a pure win: they memorize the topics more deeply, they worry less, and they get those benefits for no extra time expenditure. The only exception I can think of would be people who think "I must get exposure to all TypeScript syntax and semantics before tomorrow morning, even if that significantly reduces my ability to remember what I learned."

Obviously I'm very biased here, and the goals that we're optimizing for don't even exist in most other product spaces. But I thought it would be nice to have a counterexample to "engagement at all costs".


I have a lot of experience with this, having developed a personal finance app that is local-first: https://actualbudget.com/

Having all of the data local is a superpower. It changes how you build the app, and you just never have to worry about the network. Everything is fast _by default_. It's great.

The app uses CRDTs to sync, which has unlocked a lot of powerful stuff like a robust undo system. We also offer end-to-end encryption.

On the negative side, over time I've stepped back a little from being truly local-first. The mobile app used to connect to the desktop by pointing the phone's camera at a QR code on desktop, and it would literally sync peer-to-peer. Maintaining that was a nightmare and users had endless networking problems. Our network infrastructure unfortunately is not built for truly peer-to-peer apps. Maybe IPv6 will help, I don't know. My app now has a "centralized" server, but it's really just a backup that gives some convenience - the app is still totally local but syncs through a single server. You can work on it for weeks offline and then sync up. In the future, when p2p is ready, I'll be ready for it.
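For anyone curious how sync-through-anything can work at all, the essential property is that CRDT merges are order-independent. A toy last-writer-wins register (far simpler than what the app actually uses) shows the idea:

    import time

    class LWWRegister:
        """Toy last-writer-wins register. Each device stamps its writes;
        merge keeps the newest one, so replicas converge whether sync goes
        peer-to-peer or through a dumb relay server."""

        def __init__(self, device_id):
            self.device_id = device_id
            self.state = (0.0, device_id, None)  # (timestamp, device, value)

        def set(self, value):
            self.state = (time.time(), self.device_id, value)

        def merge(self, other_state):
            # Tuple comparison: later timestamp wins; device id breaks ties.
            self.state = max(self.state, other_state)

    phone, laptop = LWWRegister("phone"), LWWRegister("laptop")
    phone.set("Groceries: $80")
    laptop.set("Groceries: $75")
    laptop.merge(phone.state)
    phone.merge(laptop.state)
    assert phone.state == laptop.state  # converged, regardless of sync path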


Wouldn’t that mean you’re holding lots of non-reusable connections?

I have recently been using row-level security with transaction middleware where I set the tenant ID.

Nice article regarding it - https://aws.amazon.com/blogs/database/multi-tenant-data-isol...
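The detail that keeps pooled connections reusable is scoping the setting to the transaction: set_config(..., true) (or SET LOCAL) reverts on commit or rollback. A rough sketch with psycopg2, assuming RLS policies keyed on current_setting('app.tenant_id'):

    import psycopg2

    # Assumed policy on each tenant-scoped table, e.g.:
    #   ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
    #   CREATE POLICY tenant_isolation ON invoices
    #     USING (tenant_id::text = current_setting('app.tenant_id'));

    def query_as_tenant(conn, tenant_id, sql, params=()):
        """Run one query in a transaction scoped to a tenant. set_config's
        third argument (is_local) limits the setting to this transaction,
        so the connection goes back to the pool clean."""
        with conn:  # one transaction, commits on success
            with conn.cursor() as cur:
                cur.execute("SELECT set_config('app.tenant_id', %s, true)",
                            (str(tenant_id),))
                cur.execute(sql, params)
                return cur.fetchall()

    conn = psycopg2.connect("dbname=app")
    rows = query_as_tenant(conn, 42, "SELECT id, total FROM invoices")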


With Fortnite, it's that my nephews are really into Fortnite and Fortnite YouTube. They're very aware, even at their young age, of the "default" slur; they call each other names related to it jokingly. Peer pressure and probably influential YouTubers have them convinced they _must_ have particular skins at certain times.

They're so into the YouTube culture they recently requested (and received) RGB keyboards and accessories to adorn their desks even though they're gaming on consoles.


Data mining is an old gamedev worry. Nobody who's been doing it for a while particularly cares. If the game is remotely popular, data mining will happen. Disassembly to find the decrypt function is a talent possessed by many, many people who have the reverse-engineering bug. If you then obfuscate the code, you have made things interesting and then increasingly talented people will try to have a go at it. If you combine a changing obfuscating technique with updates, you can slow down community possession of the game with respect to modding etc., and Minecraft worked that way in its beta phases. But it's really all a question of your purpose at that point.

If you haven't already, I suggest cruising through TCRF: https://tcrf.net/The_Cutting_Room_Floor


A heretical thought I have had about Star Trek: the Federation has no need for Starfleet. They're fantastically wealthy and cannot meaningfully gain from trade in physical items. They're not just singularity-esque wealthy relative to the present-day US, they're equally more secure. Nobody kills mass numbers of Federation citizens. That occasionally happens on poor planets elsewhere. Sucks, but hey, poverty sucks.

So why have a Starfleet? Because Jean-Luc Picard is a Federation citizen, and he wouldn't be happy as anything other than a starship captain. It's a galaxy-spanning Potemkin village to make him happy. Why would they do that? You're thinking like a poor person. Think like an unfathomably rich person. They do it because they can afford to. He might have had a cheaper hobby, like, say, watching classic TV shows, but the Federation is so wealthy that Starfleet and a TV set both round to zero.

This makes Starfleet officers into in-universe Trekkies: a peculiar subculture of the Federation who are tolerated because, despite their quirky hobbies and dress, they're mostly harmless. Of course, if you're immersed in the subculture, Picard looks like something of a big shot. We get that impression only because the camera is in the subculture, not in the wider Federation, which cares about the Final Frontier in the same way that the United States cares about the monarch butterfly: "We probably have somebody working on that, right? Bright postdoc somewhere? Good, good."


Most LEDs will quote a CRI value. There are 15 color buckets that can be tested, but generally the quoted CRI is just the average of performance on the first 8 buckets. You can see the colors of the buckets at [1] or [2].

Higher quality LEDs will quote their R9 performance. LEDs are most efficient at creating bluish light, and use a phosphor to even out the light. Many LEDs lack red light output, which is what the R9 bucket measures. However, ideally you want great performance across the full spectrum.

The LEDs I'm using are the Bridgelux Thrive. If you look at page 7/9 (printed page number / PDF page number) of their datasheet [3], they have actual graphs of the light spectra. Two pages above that they have a table with R1-R15 measurements.

On the lower graph of page 7/9, the dotted line is a black body radiator at 4000 K. Basically, hot stuff glows, and the spectrum shifts depending on the temperature. Incandescent bulbs are around 2700 K and make a shitton of infrared plus a bit of red/yellow. The sun is at 6000-6500 K; while it still makes a lot of infrared, the spectrum now covers the entire visible range and also some UV and shorter wavelengths.

Against that they plot spectra from what would qualify as 80/90/98 CRI. You can see the 80 CRI has a large falloff on the red (right) side of the spectrum. Bridgelux invents a new number, Average Spectral Deviation, which just computes how different the light is from the black body radiator. As you can see, the Thrive is almost exactly the same except for the very deep reds and a small blue bump / green dip (which is characteristic of most LEDs; you can see the others have a much bigger deviation).

I was a bit surprised to see that the 98 CRI had such a huge deviation in the blue (seems it trades a bigger blue bump for a smaller green dip). I don't know how trustworthy the comparisons are, but at least the Thrive is damn near perfect.

However, it's not very efficient, outputting 100-120 lumens per watt. Less-accurate emitters can get close to 200 lm/W. Also, there's not a very big difference between the prices of different LEDs, and a different module would likely be a drop-in replacement. So you could nearly double the output of the system I designed if you didn't care too much about color rendering, which would make it somewhat competitive with the market in terms of $ per lumen. There's still a bit of soldering and assembly involved, though.
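To make the bucket arithmetic concrete, here's the quoted-CRI-versus-R9 relationship and the efficiency trade-off, with made-up R-values (the Thrive's real table is in the datasheet [3]):

    # Made-up fidelity scores (0-100) for the 15 test colors of some emitter.
    r = {1: 95, 2: 94, 3: 93, 4: 92, 5: 94, 6: 91, 7: 93, 8: 90,
         9: 55, 10: 86, 11: 89, 12: 80, 13: 93, 14: 96, 15: 91}

    # The "CRI" on a datasheet is usually Ra: the mean of R1-R8 only...
    ra = sum(r[i] for i in range(1, 9)) / 8
    # ...so an emitter can quote a respectable CRI while deep red (R9) is poor.
    print(f"Ra = {ra:.0f}, but R9 = {r[9]}")

    # The efficiency trade-off at the figures above: same wattage, ~2x the light.
    watts = 30
    print(f"high-CRI module: ~{watts * 110} lm; standard module: ~{watts * 190} lm")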

[1]: https://en.wikipedia.org/wiki/Color_rendering_index#Test_col... (only has 14...)

[2]: https://www.waveformlighting.com/high-cri-led

[3]: https://www.bridgelux.com/sites/default/files/resource_media...


Another good linear algebra book is "Linear Algebra Done Right", which Springer is giving away for free right now.

Link: https://link.springer.com/book/10.1007/978-3-319-11080-6


That of course is not the argument at all. The argument is that the cost of working with static types exceeds the benefits. Finding type errors is obviously a benefit, while not finding them (or finding them only at runtime) is a cost. The question is what other costs and benefits there are, and how to compare them. Nobody agrees on that, nor ever will.

This argument will never be resolved because it's dominated by psychological factors. Once you've adapted to environment A, A's costs (e.g. hoops the compiler makes you jump through / time spent tracking down type errors) become habitual and you don't notice them anymore. Meanwhile the costs of unfamiliar environment B are extremely highlighted in your attention. Similarly, if you like and identify with A, its benefits will be top-of-mind for you, while if you don't like B (and nobody likes B), you'll discount its benefits.

It gets worse. Sometimes benefits are costs until you make it through a learning curve. An example is parentheses in Lisp. They stick out like a sore thumb until one day they don't. Later you realize that they were a liberating force all along, and you now have anti-gravity powers you never dreamt of. (<-- This is an example of how people talk when they are adapted to an environment whose costs they no longer count and whose benefits are top-of-mind for them.) We can't even agree on what the costs and benefits are, let alone how to measure them.

It's no wonder that people feel so strongly about this dispute. When you don't count the costs of your preferred environment, and only count the benefits, it gets obvious pretty quickly that other people are idiots. For some reason the idiots feel the opposite and perversely insist on it.

It gets worse. Imagine that you're a decent, fairminded sort and so, magnanimously as is your wont, you decide to give the idiots a fair shake—you know, just to verify that you're being fair and whatnot. You fire up whatever it is (Clojure? OCaml? needless to say, you make a charitable choice) and start programming for a while. What happens? All the costs of unfamiliarity hit you in the face. Not one of the benefits that has lived at the top of your mind for years is anywhere to be seen. This thing doesn't even catch trivial type errors at compile time! / I can't even execute this bit of code in a REPL! This is a painful experience. It can even be an enraging one, for example if you are forced for some extraneous reason to work on a system you didn't create using tools you don't like. Most people who go through that experience once have their views solidified for life. Meanwhile someone else had the opposite experience.

Given how passionately people feel about this and the certainty in forum threads about it, it's curious (or maybe to be expected) that the question of published evidence comes up so rarely. HN user luu did the definitive survey on this: https://danluu.com/empirical-pl/. The short version is that such studies as there are don't address the core question and/or find insignificant effects and/or their designs tip obviously to one side or the other (and I do mean obviously, like comparing Peter Norvig's code to that of random undergrads).

Bias dominates every other factor, put together, times at least 10. As long as that's true, we can't say anything rational without genuinely accounting for bias. Do we actually know how to do that? The studies don't seem able to. These debates certainly don't, nor do they try. Maybe the interesting phenomenon here is actually the bias itself. Maybe it's not that we like an environment because we're more productive in it, but that we're more productive because we like it. Maybe we should get better at liking things.

The debate about this tends to remain stuck at a low level because once you've been around the block enough times you realize that it hurts to bang your head against a wall so you stop. Thus the vocal population undergoes a perpetual exodus of the experienced, but makes up for it with a fresh supply of enthusiasts, reminding one of the adage, "I arrive at the office late, but make up for it by leaving early."


Ugh, this is just going to feed the pessimists.

Look, I get it. All extremes are bad.

Myself - I trend more negative than positive. But I have a couple of friends that are positive bundles of joy and optimism. I find them invigorating and inspiring to be around and they push me to have a more pleasant outlook.

They are not posiopaths.

But my partner (who has long-term struggles with clinical depression) is positively nauseated by them. If she read this article, she'd probably start naming these people as posiopaths, even though they don't reflect the antipatterns described.

A part of me wants to send this article to her to show what TRUE positive-only attitudes look like, but I suspect she'll just map it directly to those whose positivity makes her feel worse about her own negativity, and consider the matter closed.


AMD's latest CPUs are one NUMA node per socket.

Here's the 64-core Threadripper's core to core communication latency: https://pbs.twimg.com/media/EQXru3WU8AAV3JC?format=png&name=...

Communication within a CCX is quicker, but everything else goes through the central IO die that has all the DRAM controllers.


I haven’t found near infrared radiation referenced in the article - I’m using near infrared as a brain hack, by shining a cheap 850nm LED light on my forehead. This has, over the last 2 years, enabled me to code for weeks on end, for 12+ hours a day, with only minor cognitive decline. It’s not something I really want to do, but sometimes it’s useful.

Before I started the near infrared routine (~5 minutes every other day), 5-6 hours of coding per day was all I could do - e.g. after coding for 8h, I noticed serious cognitive and emotional decline, and might need to do less the following day. Not anymore - nowadays I can be productive whenever I’m awake, with few side effects. Near infrared radiation is safe (thousands of studies have demonstrated only very mild side effects), and is even used to treat Alzheimer’s. I have no idea why its beneficial effects are not more widely known - for some people, it’s life changing.

Sidenote: 850nm light works way better for me than 830nm.


Fantastic project!

For someone looking to learn more about signals and audio processing (DFTs, FFTs, etc.), is there any good material people can recommend? I've completely forgotten my uni CompEng signals course and want to revisit it.
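In the meantime, one thing that demystifies the topic quickly is seeing that the DFT is just its definition written out, and the FFT is merely a fast algorithm for the same transform. A quick numpy sanity check:

    import numpy as np

    def naive_dft(x):
        """O(n^2) DFT straight from the definition:
        X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)"""
        n = len(x)
        k = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x

    # A 440 Hz tone sampled at 8 kHz: both transforms agree; the FFT is just fast.
    t = np.arange(1024) / 8000
    x = np.sin(2 * np.pi * 440 * t)
    assert np.allclose(naive_dft(x), np.fft.fft(x))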


The YOLOv3 paper (the model that these tools are designed to work with) is extremely funny and worth reading for anyone who hasn't.

https://pjreddie.com/media/files/papers/YOLOv3.pdf


I just finished "Rewired: The Post-Cyberpunk Anthology", which has some commentary mixed in about this issue. The editors' view seems to be, in part, that Gibson's work was influential precisely because of the reaction it got.

Do we WANT to live in a post-government, mega-corporate future where everyone has mirror shades? The folks who got to thinking about that when Gibson was telling us about that future came up with some great ideas.

Gibson's modern work still feels good to me but it feels more like fiction than science fiction because we've crept into his future.


Desires themselves are imbued with negative affect. As Naval Ravikant tweeted, "Desire is a contract that you make with yourself to be unhappy until you get what you want." More fundamentally, I think the very mechanism of how desires work is through negative hedonic tone. Contemplating and actually fulfilling the desire is associated with positive hedonic tone. The gradient between the negative hedonic tone of the desire and the positive hedonic tone of its fulfillment is how motivation emerges.

Long-term meditation is often associated with a lessening of the influence of desire, and greater happiness.


Does Rust have its own linear algebra, image processing, and computer vision libraries, in pure Rust?

I would love to see how such libraries are built from scratch in a low level language. I feel like I would learn a lot as well.


It's solely focused on data analysis (none of this "dashboard" stuff), but I've found Kst[0] to be the lowest-friction way of going from plaintext time-series data to fancy, zoomable, draggable plots in whatever layout you want. It handles realtime data very handily as well, happily scaling with as much backlog as I could throw at it.

Bokeh is a nice kit as far as it goes, but I hit scaling problems quite early with it - as I understood it, it was supposed to send incremental updates to the client, but in fact it resent everything every time - and therefore fell over after half an hour when the time to update exceeded the update rate. Maybe I was holding it wrong.

[0] https://kst-plot.kde.org/
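For the record, Bokeh's intended incremental path is ColumnDataSource.stream(), which appends new points rather than resending the whole dataset; whether it actually behaved that way in the version I used, I can't say. A minimal server-app sketch:

    import random, time

    from bokeh.models import ColumnDataSource
    from bokeh.plotting import curdoc, figure

    source = ColumnDataSource(data=dict(t=[], y=[]))
    fig = figure(title="live data")
    fig.line(x="t", y="y", source=source)

    def update():
        # stream() sends only the new points; rollover caps client-side backlog.
        source.stream(dict(t=[time.time()], y=[random.random()]), rollover=10_000)

    curdoc().add_root(fig)
    curdoc().add_periodic_callback(update, 100)  # ms; run with `bokeh serve app.py`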


Since there's not much info on this, and since I missed the relevant thread last week, I'll just note here for the record that Mojang's other recent release, a browser-based retro game called Minecraft Classic, was built on my voxel game engine.

[game] https://classic.minecraft.net/

[engine] https://github.com/andyhall/noa/

if anyone's interested. It's pretty weird waking up one morning to find out that Mojang built a game on your tiny solo project...


I'm glad they were able to sort it out so quickly.

I don't think it's so much that doctors don't care as that most cases spontaneously resolve faster than any person can systematically figure them out.

There's a psychological benefit to the parents -- they're often pretty stressed out and feeling like they're the problem. Putting on a white coat and a serious face while diagnosing the problem as intrinsic to the infant relieves the parents of the burden of thinking they're bad at their new permanent job, and probably helps bolster their waning ability to handle trying to comfort a crying infant without snapping and harming the baby or themselves. I don't think the idea is to leave the baby to wail as much as it is to get the parents to stop taking the wailing personally.

Like most of the paternalistic little white lies doctors deploy, it's not usually the appropriate treatment for patients who know better. Sadly some doctors never learn to distinguish these little psychological tricks (which are an important part of the medical art) from real knowledge about human biology. If they frustrate rather than comfort you, and your doctor can't or won't be straight with you, it's time to find a new one.


Love this comment left on the post by Peter Shor (of Shor's algorithm, the algorithm that kicked off the quantum computing frenzy). I assume it's him and not an impostor.

"It's not just that scientists don't want to move their butts, although that's undoubtedly part of it. It's also that they can't. In today's university funding system, you need grants (well, maybe you don't truly need them once you have tenure, but they're very nice to have).

So who decides which people get the grants? It's their peers, who are all working on exactly the same things that everybody is working on. And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.

But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress. (Assuming they really did have promising crazy ideas, and weren't abusing the system. Of course, what would actually happen is that the new system would be abused and we wouldn't be any better off than we are now.)

So the only advice I have is that more physicists need to not worry about grants, and go hide in their attics and work on new and crazy theories, the way Andrew Wiles worked on Fermat's Last Theorem."

(new comment right below)

"Let me make an addendum to my previous comment, that I was too modest to put into it. This is roughly how I discovered the quantum factoring algorithm. I didn't tell anybody I was working on it until I had figured it out. And although it didn't take years of solitary toil in my attic (the way that Fermat's Last Theorem did), I thought about it on and off for maybe a year, and worked on it moderately hard for a month or two when I saw that it actually might work.

So, people, go hide in your attics!"


5 years ago (mostly for fun) I tried out the - at the time - "state-of-the-art" NLTK sentiment analyzer to correlate stock market changes with a variety of news/info sources.

I put it on the shelf because the sentiment analysis just wasn't up to snuff (i.e. the bias differentiation was too weak).

Might be time to try again!
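If anyone wants to reproduce the experiment, the lowest-friction baseline in NLTK these days is the bundled VADER analyzer (which may or may not be what I used back then):

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time lexicon fetch
    sia = SentimentIntensityAnalyzer()

    headlines = [
        "Acme Corp beats earnings estimates and raises guidance",
        "Acme Corp shares plunge as CEO resigns amid probe",
    ]
    for h in headlines:
        # 'compound' runs from -1 (most negative) to +1 (most positive)
        print(f"{sia.polarity_scores(h)['compound']:+.2f}  {h}")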


I'm by no means an expert - just a hobbyist tinkering with some RF magic - but I made a simple POC of using SDR to reverse-engineer and perform jam-and-replay attacks on cars, essentially giving you permanent access (for cars with button keyfobs). I have a short write-up here: https://github.com/trishmapow/rf-jam-replay


I always found it disheartening to read their changelogs and see how every release (that I've looked at) fixes some "N+1 SELECTs" problem [1].

This will be an unpopular opinion here, but I feel we've (collectively) failed. Between ORMs and the dynamic language du jour, we have become so far removed from how computers work that most software development is a waste of electricity. We're using tens of servers and complex architectures for simple sites, just because we think that's the only way to scale them [2]. And our computers are slower than they ever were [3] [4].

I only gave a couple of examples, but look around, you can find them everywhere. Everything is awful.
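For readers who haven't been bitten by it, the "N+1 SELECTs" pattern looks like this (sqlite3 standing in for whatever the ORM talks to):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    """)

    # N+1: one query for the list, then one more query per row. Exactly what an
    # ORM's lazy loading generates when nobody is watching the query log.
    for author_id, name in db.execute("SELECT id, name FROM authors"):
        titles = db.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        ).fetchall()

    # The same data in a single round trip.
    rows = db.execute("""
        SELECT a.name, p.title
        FROM authors a LEFT JOIN posts p ON p.author_id = a.id
    """).fetchall()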

[1] The issue tracker is now down, but you can try it when it comes back: https://about.gitlab.com/releases/.

[2] https://www.usenix.org/system/files/conference/hotos15/hotos...

[3] On older computers GNOME can't even keep up with updating the mouse cursor.

[4] I'm honestly not sure that Windows 10 boots faster from an SSD than XP from an HDD.


We run the user-supplied code through an instrumentation pass (using Google Closure Compiler) where we modify it at the AST level to call a resource check/interrupt check after every instruction. It also modifies any loop constructs. The compiler saves a sourcemap so you can pretend you’re not modifying user code.

We disable all extensions. We do some instrumentation on string ops as well. We use the MBean API to get the memory and CPU usage of the current thread. We have a monitor thread that checks that for all worker threads. I’ve probably forgotten a bunch of things, but it was a huge pain. I’m going to tell our guys about your solution and see if it’s something we can look into.
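To illustrate the general technique (sketched here with Python's ast module rather than Closure Compiler, and checking only loop headers instead of after every instruction):

    import ast

    class BudgetInjector(ast.NodeTransformer):
        """Insert a __check_budget__() call at the top of every loop body,
        so runaway user code can be interrupted."""

        def _inject(self, node):
            self.generic_visit(node)
            check = ast.Expr(ast.Call(ast.Name("__check_budget__", ast.Load()), [], []))
            node.body.insert(0, check)
            return node

        visit_For = visit_While = _inject

    STEPS, LIMIT = 0, 1_000_000

    def __check_budget__():
        global STEPS
        STEPS += 1
        if STEPS > LIMIT:
            raise TimeoutError("instruction budget exceeded")

    user_src = "total = 0\nfor i in range(10):\n    total += i\n"
    tree = ast.fix_missing_locations(BudgetInjector().visit(ast.parse(user_src)))
    exec(compile(tree, "<sandboxed>", "exec"))  # sees __check_budget__ via globals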

