nerdponx's comments | Hacker News

Prediction: in 2026 the Trump administration will attempt to ban all other forms of voting and will claim it's in the interest of election security: the Democrats can't be trusted to count votes (remember, the 2020 election was "stolen"?), so all votes must be counted electronically, using some sketchy electronic voting system that a company very politically friendly to Trump just so happens to be ready to provide. It will get immediately shot down in several courts, but it will take months to resolve all the lawsuits, and SCOTUS won't hear the case. This will cause the election to be held in some places but not others, and will delay the final vote tally by several months. Some kind of data breach will occur, but details will not be reported. Neither party will trust the election results, but neither will go so far as to cry fraud, lest public trust in the system completely unravel.

Yes - thank you for reminding me that a voting machine company was also bought recently (too long a process to get in? Just pay more).

https://abcnews.go.com/amp/US/dominion-voting-systems-sold-c...


Efficiency for the voter and efficiency for the vote counting process are totally different things.

One thing that often gets forgotten in discussions about whether to soft delete, and how to do it, is: what about analysis of your data? Even if you don't have a data science team, or even a dedicated business analyst, there's a good chance that somebody at some point will want to analyze something in the data. And there's a good chance that the analysis will be either explicitly "intertemporal," in that it looks at and compares data from various points in time, or implicitly so, in that the data spans a long time range and you need to know the state of various entities "as of" a particular time in history. If you didn't keep snapshots and you don't have soft edits/deletes, you're kinda SoL. Don't forget the data people down the line... which might include you, trying to make a product decision or diagnose a slippery production bug.
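As a minimal sketch of what that looks like in practice: a soft delete is just a deleted_at timestamp, and the "as of" question becomes a filter. Table and column names here are hypothetical, and sqlite3 is used only because it's in the standard library.

    # Hypothetical schema: soft delete via a deleted_at timestamp, so an
    # analyst can still ask "which subscriptions existed as of date X?"
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE subscriptions (
            id         INTEGER PRIMARY KEY,
            customer   TEXT NOT NULL,
            created_at TEXT NOT NULL,   -- ISO-8601 dates
            deleted_at TEXT             -- NULL means "still live"
        );
        INSERT INTO subscriptions VALUES
            (1, 'acme',    '2023-01-05', NULL),
            (2, 'globex',  '2023-02-10', '2023-09-01'),  -- soft-deleted later
            (3, 'initech', '2023-08-15', NULL);
    """)

    # "As of" query: rows created on or before the date and not yet deleted then.
    as_of = "2023-07-01"
    rows = conn.execute(
        """
        SELECT id, customer FROM subscriptions
        WHERE created_at <= :as_of
          AND (deleted_at IS NULL OR deleted_at > :as_of)
        """,
        {"as_of": as_of},
    ).fetchall()
    print(rows)  # acme and globex existed as of 2023-07-01; initech didn't yet

With hard deletes, the globex row would simply be gone and the 2023-07-01 picture would be wrong.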

The difference is that Word2Vec "learned" these relationships auto-magically from the patterns in the surrounding words in the context in which they appear in written text. Don't forget that this was a revolutionary result at the time, and the actual techniques involved were novel. Word2Vec is the foundation of modern LLMs in many ways.

I can't edit my own post but there are two other big differences between the Prolog example and the Word2Vec example.

1. The W2V example is approximate. Not "fuzzy" in the sense of fuzzy logic; I mean that Man, Woman, Queen, and King are all essentially just arrows pointing in different directions (in a high-dimensional space). Summing vectors is like averaging their angles. So subtracting "King - Man" is a kind of anti-average, and "King - Man + Woman" then averages that intermediate thing with "Woman," which just so happens to yield a direction very close to that of "Queen" (toy sketch at the end of this comment). This is, again, entirely emergent from the algorithm and the training data. It's also probably a non-representative, cherry-picked example, but other commenters have gone into detail about that and it's not the point I'm trying to make.

2. In addition to requiring hand-crafted rules, any old-school logic programming system has to run some kind of unification or backtracking algorithm to obtain a solution. Meanwhile, here we have vector arithmetic, which is probably one of the fastest things you can do on modern computing hardware, not to mention linear in time and space. Not a big deal in this example; it could be quite a big deal in bigger applications.

And yes, you could have some kind of ML/AI thing emit a Prolog program or equivalent, but again, that's a totally different topic.
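To make point 1 concrete, here is a toy numpy sketch of the geometry. The vectors below are made up by hand purely to show the arithmetic; the remarkable part of Word2Vec is that it learns directions like these from raw text rather than having them specified.

    # Toy vectors: dimension 0 is a crude "gender" axis, dimension 1 "royalty",
    # dimension 2 is just padding. Hand-made for illustration, NOT learned.
    import numpy as np

    vecs = {
        "man":   np.array([ 1.0, 0.0, 1.0]),
        "woman": np.array([-1.0, 0.0, 1.0]),
        "king":  np.array([ 1.0, 1.0, 1.0]),
        "queen": np.array([-1.0, 1.0, 1.0]),
        "apple": np.array([ 0.0, 0.0, 1.0]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    target = vecs["king"] - vecs["man"] + vecs["woman"]

    # Rank the remaining words by direction (the three input words are
    # excluded, which is also what gensim's most_similar does).
    candidates = {w: cosine(target, v) for w, v in vecs.items()
                  if w not in {"king", "man", "woman"}}
    for word, sim in sorted(candidates.items(), key=lambda kv: -kv[1]):
        print(f"{word:6s} cosine similarity {sim:.3f}")

Here "queen" comes out on top by construction; with real trained vectors the same query lands near, but not exactly on, "queen", which is the "approximate" part.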


Several major world powers right now are at the endgame of a decades-long campaign to return to a new Gilded Age and prevent it from ending any time soon. Destroying the public's belief in objective truth and fact is part of the plan. A side effect is that fraud in general becomes normalized. "We are cooked" as the kids say.

In the 2020s, fraud is just marketing.

I'm not a fan of this either, but I fail to see how it's much different from the happy-path tech demos of old.

The happy path was functional.

Mmm, as someone who, right out of school, was forced to write a lot of last-minute demos for a startup that ended up raising ~$100MM, I can tell you there's a fair bit of wiggle room in "functional".

Not that I would excuse Cursor if they're fudging this either. My opinion is that a large part of the growing skepticism and general disillusionment among engineers in the industry (e.g. the jokes about exiting tech to be a farmer or carpenter, or things like https://imgur.com/6wbgy2L) comes from seeing firsthand that being misleading, abusive, or an outright liar is often rewarded quite well, and that it's not a particularly new phenomenon.


But this isn't wiggle room; it flat out doesn't compile or run.

Yes. Very naive to assume the demos do.

The worst of them are literal mockups of a feature in the same vein as Figma... a screenshot with a hotzone that, when clicked, shows another screenshot implying a thing was done, when no such thing was done.


It's certainly ironic if an article about slop leads with a tired old glob of pseudoscience slop and the author doesn't realize it.


I can't tell if your comment is being ironic or not.


Ironically enough, the comment is pretty straightforward to interpret.

Moreover, you can manipulate your results through disingenuous prior choices, and the smaller your sample, the stronger this effect is. I am not sold on the FDA's ability to objectively and carefully review Bayesian research designs, especially given the current administration's wanton disregard for the public good.


I would think there is less opportunity to manipulate your results with Bayesian methods than with frequentist ones, because frequentist methods don't just require an alternative hypothesis: they depend on the exact set of outcomes possible under your experimental design. You can modify your experimental design after the fact and invisibly make your p-value whatever you want.
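A minimal sketch of one flavor of this (optional stopping), assuming normal data with known variance; the numbers are invented for illustration, not from any real trial. Peeking at the data and stopping as soon as p < 0.05 is a change to the experimental design that never shows up in the reported p-value, but it inflates the false positive rate well above the nominal 5%.

    # Simulate repeated "peeking" under the null hypothesis (true mean = 0).
    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def z_test_p(sample):
        # Two-sided p-value for mean = 0, assuming known sigma = 1.
        z = sample.mean() * math.sqrt(len(sample))
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    n_sims, max_n, peek_every = 2000, 200, 10
    false_positives = 0
    for _ in range(n_sims):
        data = rng.normal(0.0, 1.0, max_n)  # the null is true here
        for n in range(peek_every, max_n + 1, peek_every):
            if z_test_p(data[:n]) < 0.05:   # stop at the first "significant" peek
                false_positives += 1
                break

    print(f"nominal alpha = 0.05, realized rate ~ {false_positives / n_sims:.3f}")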


The application of Bayesian probabilistic reasoning in general (as described in this video) is not the same thing as "Bayesian statistics" specifically, which usually refers to modeling and posterior inference using both a likelihood model and a prior model. It's a very different approach to statistical inference, both in theory and in practice. The creator himself is either ignorant of this distinction or is trying to mislead his viewers in order to dunk on the FDA. It's obvious from the video comments that many people have indeed been misled as to what Bayesian statistics is and what its implications might be in the context of clinical trials.
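For anyone wondering what "a likelihood model and a prior model" means mechanically, here is a minimal sketch of the simplest conjugate case: a Beta prior on a trial's response rate with a binomial likelihood. The counts and the prior are hypothetical, purely for illustration.

    # Hypothetical trial: 42 responders out of 100 patients, with a weakly
    # informative Beta(2, 2) prior on the response rate.
    import numpy as np

    a0, b0 = 2, 2            # prior: Beta(a0, b0)
    successes, n = 42, 100   # data: binomial likelihood

    # Conjugacy: the posterior is Beta(a0 + successes, b0 + failures).
    a_post, b_post = a0 + successes, b0 + (n - successes)

    draws = np.random.default_rng(0).beta(a_post, b_post, 100_000)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"posterior mean ~ {a_post / (a_post + b_post):.3f}, "
          f"95% credible interval ~ ({lo:.3f}, {hi:.3f})")

In real trial designs the prior and likelihood are far more elaborate (hence MCMC tools), but the inference target is the same: a posterior distribution over the quantity of interest, not a p-value.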


Indeed, even more broadly, online "Bayesian" seems to have taken on the meaning of "I know Bayes' rule and think about base rates" as opposed to "Do you prefer JAGS or Stan for MCMC?"


You can make your job in general a passion/hobby/craft, but that doesn't mean you have to work more than your fair share for your employer in order to be a competent craftsperson.


> that doesn't mean you have to work more than your fair share for your employer

I would never argue for that. My meaning was more about having a passion/hobby in the field that you are working in.

