
NetGuard has per-app blocking with logging/alerts; it's open source and on F-Droid and the Play Store.

People aren't going to like what they see. One that always annoyed me was a Google services app pinging a geoip URL every time I connected to Wi-Fi.


web.archive.org is fine.

That .ph link is tracking-infected garbage, and it requires JS.


It is wrong to conflate economic success with intelligence.

The only reason I'm posting on HN right now is because I dared to right-click a folder on a network share. MyFolderName (not responding), whelp, guess I'll go fuck off for 60 seconds while every item in that shortcut menu opens a network connection, navigates to...or whatever the hell it's doing. Yeah, I know, use the CLI. OTOH, I argue: fix your broken UI.

I worked at MSFT for eight years as a full-time, Flavorade-guzzling employee, and fifteen years later rarely a day goes by that I don't question how Microsoft manages to get anyone to buy that steaming pile they call an operating system.


That ship has sailed. “Tech” is simply a positioning or branding term.

Meta and Amazon are businesses that depend on developing technology (so are true tech companies as much as NVIDIA or Apple) but nowadays “tech” merely means at best “online”. Most so-called “tech” companies don’t actually develop any technology at all, often being less technical than a non-“tech” company like, say, State Farm.

The term has become so denatured that a new term, “deep tech” has been coined to mean what “technology company” used to mean.


Somewhat, yes. And not by accident.

For the last couple of decades, Google has been in an odd situation:

- They make a ridiculous amount of money.

- They already have a ridiculous amount of money, and also engineering talent and compute capacity.

- Almost all that money comes from one single revenue stream.

- That one revenue stream is very vulnerable. (If everyone in the world started using ad blockers today, Google would start going out of business tomorrow.)

So they were (and still are) extremely interested in trying to diversify that revenue. And, reasonably, tried to use those strengths to shore up those weaknesses.

That meant taking a lot of gambles on fairly wacky ideas in the hopes that at least one of them would pan out in a big way. It's honestly not very different from how a VC works, except that instead of investing naked dollars they were investing engineering time and infrastructure.

So far, none of these projects has turned into anything big enough to meaningfully diversify their business model; they are still almost exclusively an ad company. But it's probably still wise for them to keep trying.

(Source: I worked for Google for a long time.)


We are paid so well because our work can scale to millions of people, that's all there is to it.

Over the course of my career I've felt progressively more ashamed of earning so much money for the comparatively little societal benefit I generate. Sure, I work on products that hundreds of millions of people use daily, and they even enjoy them, but I don't think the beneficial impact of my work on society is anywhere close to what teachers, nurses, doctors, and so on provide. At some points the impact of my work on society was probably a net negative; I just generated cash for the company to the detriment of society's needs.

I can just create millions and millions of US$ for a company through my labour, and for that we are well paid. I know, I've just described capitalism, but some folks probably need to be more aware of it.


djb is a great lesson in how "being an asshole" can be counterproductive.

It's easy and sensible to see why you might be against this. The simple reality is that the BIOS did not prevent you from running anything you wanted on your machine. Recent developments usually serve to restrict the user, so any change would very likely be bad. Extrapolated experience does not have to hold true, but it very well can.

UEFI secure boot is an example. Yes, it can increase security (although I think the threat it addresses is specific or outdated), but it can just as well be used to limit the user in practice. And if such a mechanism is established, there will be a class system of trusted and untrusted devices. This is not a development that is hard to predict. So UEFI has already failed to a large degree, at least regarding the openness of systems. It is no accident that some companies push these developments enthusiastically. It is not for user security; it is simply for market dominance.


LuaJIT + nginx event loop is probably the fastest network runtime in existence. Is there a faster one? In terms of writing high-level code that executes at the speed of the machine.

> Honestly I don't see any purely technical solution to this.

The technical solution is pretty simple: do not use Wi-Fi. I use wired connections for all of the devices in my household. The only non-technical aspect of the solution was an interior design-based one about unobtrusive cable wiring around the house.


As a data scientist, I think most data is useless, but there is an addictive, video game-like quality to throwing lifeless spreadsheets into a machine and having colorful visualizations come out. It's kind of like a very boring video game for adults that makes them feel like they're working, when they're actually just enjoying colorful abstract shapes and colors. To be honest, this is probably a sizable piece of why I'm in this line of work.

Since I was at Google for 12 years, this doesn't surprise me at all. It's not that there are 1,400 flags in ONE module; it's that the many systems it includes all have their own. No one looks over that list and says, "Hmm, which of these 1,400 do I need today?"

There's absolutely no reward for scouring the system and getting rid of the 90% of the flags that serve no purpose anymore. All you would do is annoy people and risk breaking something. So no one does it.

In fact, as I recall, you could either "list the flags" for a system, or "list all the flags AND all the included systems' flags." Hardly anyone wants to do the latter.
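To make those two listing modes concrete, here is a minimal Python sketch (hypothetical names, not Google's actual flag system) of a module that owns one flag but transitively includes systems that own their own:

    # Minimal sketch of a flag registry: each module declares its own
    # flags and includes other modules (illustrative, hypothetical names).
    class Module:
        def __init__(self, name, flags, deps=()):
            self.name = name
            self.flags = list(flags)
            self.deps = list(deps)

        def own_flags(self):
            # "list the flags" for just this system
            return list(self.flags)

        def all_flags(self):
            # "list all the flags AND all the included systems' flags"
            seen, out, stack = set(), [], [self]
            while stack:
                m = stack.pop()
                if m.name in seen:
                    continue
                seen.add(m.name)
                out.extend(m.flags)
                stack.extend(m.deps)
            return out

    logging = Module("logging", ["--log_level", "--log_dir"])
    rpc = Module("rpc", ["--rpc_timeout_ms"], deps=[logging])
    server = Module("server", ["--port"], deps=[rpc, logging])

    print(len(server.own_flags()))  # 1 -- looks manageable
    print(len(server.all_flags()))  # 4 -- the transitive list is what balloons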


>Google Cloud is not the real Google. Culture-wise, you are closer to Oracle.

Incorrect. Google's culture has shifted. Culture-wise, Google is now closer to Oracle than to what people think Google is.

20% time hasn't been a thing at Google for a long time. It exists only on paper.

Source: I worked at Google when we all got the memo that all information access within the company is now on a need-to-know basis, and that accessing information that you don't need to know for your job is a fireable offense.

The default sharing level for technical documentation is fully private, so tech docs are not searchable in the company search engine (read this again).

Good luck re-using the code without design docs. Also good luck starting a project if your team doesn't own the data.

Hot take: Google doesn't have a strategy, period. Neither company nor product.


I quit FAANG to do startups this year. I quit because I was completely bored out of my mind and hyper-underutilized; there were so many people around me doing the same work as me that there were zero interesting growth options.

Now that I am in a (very good) startup, I am incredibly happy, learning and innovating nonstop, meeting new people, in a hyper growth market, building a completely new skillset.

I could never go back to these big companies. Not unless I was reporting directly to a CEO.

I am not getting rich but I am filled with joy.

At my previous job, I was so frustrated and the work was so pointless that I was literally throwing things in my house. It made me so angry how stupid and useless it was, what a waste of time. We were working on a product which was entirely fake, and everyone on the team knew it, but it was generally agreed we would all fake it together.

I would rather suck on a gas pipe than go back. It was damaging my mental health.

Say what you want about desperation: You only have time in your life. They were wasting my only resource, my time.


For systems programming, C89 is definitely a "sweet spot". It's relatively easy to write a compiler for and was dominant when system variety was at its highest, so it is best supported across the widest variety of platforms. Later C standards are harder to write compilers for and have questionable features like VLAs and more complicated macro syntax. C++ is a hideous mess. Rust is promising, and would probably be my choice personally, but it's also still fairly new and will limit your deployment platform options.

C89 is still a reasonable choice today. I don't think it's depressing. It's a good language, and hitting the sweet spot of both language design and implementation is really hard, so you'd expect to have very few options.


I think the article is pretty good (I have a strong disagreement with one of the 50 terms, but otherwise I think they did a good job).

I'm relatively new here at HN, but as a psychologist (who also dabbles in hacking), some of the reactions do, sadly, support the stereotype that computer/engineer types suffer from anosognosia (the lack of knowledge of what you do not know).

Some engineers do not understand that the world of human behavior is far more complex than the world of atoms, molecules, components, chips, software, etc.

Behavioral and social scientists are, you may be surprised to learn, aware of this fact, and actually lead in the scientific investigation of this difficulty, specifically: dealing with constructs and figuring out how to define and measure them.

"Soft sciences" actually are much harder than "hard sciences" in many ways.


All of the rooms/corridors in my house except my bathrooms are covered by cameras. My initial motivation for installing them was to keep an eye on what my pets were doing when I'm not around, but I find in recent years that if I misplace something, I end up tracing back my history on the cameras and finding where I left it.

It seems obvious that at some point, AI will be able to do that for me and I'll just be able to say "Alexa, where did I leave my glasses?", "Hey Google, where did I put my box of spare fuses?".


Historically ICANN had authority via two mechanisms. The first is that it was appointed as the Internet Assigned Numbers Authority by the Internet Architecture Board (part of the IETF). At a hard minimum that gives it the authority to run the IETF IANA functions, and historically would have been what gave it authority to issue IP addresses and AS Numbers. In performing the IETF IANA functions it has zero regulatory role; it is just a registry. I believe it has slightly more say over policy in issuing AS numbers and IP addresses to the regional internet registries.

The other (now lapsed) source of authority was the contract from the US Department of Commerce. This is no longer applicable, as the US government decided it did not need to be involved, especially because it got a lot of criticism for being involved; in any case, the contract offered the government no real control over ICANN.

The place where ICANN has the most say is domain names. Here ICANN acts as a full-blown policy maker, in addition to running the naming IANA functions (like creation of the root zone file). To the extent that ICANN "regulates 'the Internet'", it is restricted to its policy setting over the DNS.

Without the Commerce contract it is harder to point to a current source of authority for this. But it really comes down to everybody accepting ICANN as the provider of the root zone file, and the fact that ICANN would reassign the TLD delegations to a different registry if a currently assigned registry did not want to follow its rules.

-----

The regional internet registries nominally get their authority to issue IP address ranges from a delegation from the IANA (in its numbering function). As mentioned before, one can trace ICANN's authority to run the numbering portion of the IANA back to being designated as the IANA by the Internet Architecture Board (part of the IETF), but this could be a little misleading, as the "numbering community" now exists, and ICANN claims that it is that community that would be allowed to appoint a new organization to run the IANA numbering functions.

(Also, ICANN has created a standalone nonprofit to actually run the IANA functions. This is a membership-based nonprofit with ICANN as the sole member, making it basically a subsidiary without legally being one, because ICANN wanted to make it very clear that the operation of the IANA functions is separate from the policy-making part of the organization once the government contract ended.)

------

Now perhaps you want to know how the IETF has authority? Quite simply, they are an outgrowth of, and assumed the functions of, the Network Working Group, a group of early ARPANET researchers. As the group that defines protocols like TCP, IP, HTTP, etc., they derive their authority from just that: they develop the standards that underlie the internet.


I have done something a tad different.

I am running my custom-made proxy (DNS blocking like Pi-hole is a joke; you can circumvent it as simply as https://2899908462 ) with interesting features like ASN ( https://en.wikipedia.org/wiki/Autonomous_system_(Internet) ) blocking, and for fun I have blocked all Google ASNs.
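(A quick aside on why that bare number beats DNS blocking: it is just an IPv4 address written as a single decimal integer, so no DNS lookup ever happens and hostname-based blocklists never see it. A one-liner in Python shows what that example URL actually points at:)

    # A URL like https://2899908462 skips DNS entirely: the integer is
    # an IPv4 address in decimal form, invisible to hostname blocklists.
    import ipaddress
    print(ipaddress.ip_address(2899908462))  # 172.217.23.110 (a Google IP)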

Half of the internet stopped working.

Then I expanded this to Google, Microsoft, Amazon, and Facebook ASNs.

The whole internet stopped working. Of all the search engines I am aware of, only yandex.ru was still operational.

What you are seeing with the beeping is just the tip of the iceberg. Google is getting far more data from its cloud.

I don't believe most advanced users are aware of how deep the rabbit hole of internet centralization goes.

And until people figure out how important self-hosting actually is, it is only going to get worse (yeah, I understand how convenient the cloud is, blah, blah...).


They don't; it's a page that's intended to look like Cloudflare for ... unknown reasons?

They do a lot of other very suspicious things to users.


Note that if you search [purpose of life], it does not say it has 2 billion results anywhere on the first page. My team removed the blue bar containing that text way back in 2010. You have to hit "Next" or otherwise visit page 2 to get it.

And I'd bet the reason why it's still there (I left Search in 2014) is that < 0.1% of users ever hit the next page. Everybody else just refines their query into a different search. It's a holdover from when search engines were bad (i.e. around 1998) and you had to go through 10 pages of results to get the one you were looking for. As a result, Google expends approximately zero engineering effort on pages 2-20 of the results - I know that in the 4 visual redesigns I worked on, we didn't touch them once. It wouldn't surprise me if the response to flak on this is to just get rid of all pages other than the first one - it avoids the issue entirely and wouldn't affect 99.9% of users.

The technical reason for this behavior, as others have remarked below, is pagination. Ranking across the full result set is a very complex calculation, and it can depend on some factors that are basically random (e.g. timeouts and failures in backend servers). It'd make pagination basically useless if results you had already gone through showed up again on a later page because the ranking came out different. This requires that the full result set be cached. You can cache 400-1000 results for each of the queries that the 0.1% of users who actually hit "Next" care about, but you'd have a big issue caching 2 billion results for each of those queries.
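A minimal sketch of that pattern in Python (made-up names, not Google's code): rank once, cache a bounded ordered list per query, and serve every page as a slice of that cached list so later pages stay consistent:

    # Illustrative: stable pagination via a bounded per-query result cache.
    RESULTS_PER_PAGE = 10
    MAX_CACHED = 1000  # cache ~400-1000 results, never all 2 billion

    _cache = {}

    def rank(query):
        # Stand-in for the expensive, slightly nondeterministic ranking
        # (backend timeouts etc. can reorder results between runs).
        return [f"{query} result {i}" for i in range(MAX_CACHED)]

    def get_page(query, page):
        if query not in _cache:
            _cache[query] = rank(query)  # rank once, cache the order
        start = page * RESULTS_PER_PAGE
        return _cache[query][start:start + RESULTS_PER_PAGE]

    # Page 2 is the next slice of the same cached ranking, so results
    # from page 1 can never reappear shuffled onto a later page.
    print(get_page("purpose of life", 1))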


Well, given that I am typing this about 15' from an IBM POWER S914 running the i operating system, I'm not sure it's lost. Our accountants hate GUI stuff and love the green screen. It's amazing to have what is essentially a low-maintenance machine that calls IBM when something isn't correct. We had the last iSeries (a pre-POWER model) last for over a decade, and I do expect this one to make it the same amount of time. It is a bit obtuse, but I dearly wish some other OSes would examine themselves for self-administration to the level of the IBM iSeries.

TikTok and Facebook’s antics (existence?) have me seriously considering quitting tech. I feel like no matter what I do, I am feeding one of these societal cancers by implementing tracking for whoever is paying me currently, lifting dark patterns, using the API of Google, etc.… I tried to avoid doing these things for a while, but nobody pays me to be ethical.

My only marketable skill is software development and I pretty much hate what it has become. 16 years down the drain.


I am a bit older and have worked in IT now for almost 2 decades. (44 years old)

I honestly think the general premise here holds true, and it perhaps applies to many of us here in computer-related occupations.

It seems many of us focused on STEM STEM STEM or COMPUTERS COMPUTERS COMPUTERS in education but perhaps are emotionally infantile to an extent.

How many of us write programs, support infrastructure, design gadgets… for what? What is important? Our job… buying gadgets… traveling…

I don’t have any answer but it’s worth pondering. How many of us work countless hours for companies whose missions we don’t perhaps even agree with… how many of us ponder what is important to us and why we act the way we do?

Just my two cents.


In my experience, Rust compiles orders of magnitude faster than similar C++ code for initial compiles. For incremental compiles, Rust is infinitely faster. C++, even if you use Modules, needs to re-compile, re-instantiate, and re-check all the templates all the time. I speculate that this alone is probably responsible for the largest difference in compile times.

Comparing Rust with C here is hard. As others have mentioned, with C, you typically only use libraries that are installed in your system. Rust does not have a global system cache for libraries, and I don't think there is any package manager installing pre-compiled Rust libraries, so at least for the initial compiles, Rust needs to do much more work than C, for Rust dependencies at least. If your Rust project only depends on C libraries, then there is no work to be done of course.

For incremental compiles with C, it is hit or miss. If your C project is properly "modularized" (split into a lot of small enough TUs), then when you change a TU, C only needs to recompile that one TU. In our benchmarks, Rust is faster for that situation, and speculating, this is because the Rust compiler only recompiles the parts of a TU that actually changed.

In practice, however, Rust is much slower than C, because people do not write small Rust TUs but absurdly huge ones, at least by C standards. A Rust TU (crate) is usually mapped 1:1 to a C library, which often contains dozens or hundreds of TUs. Just the cost of testing what changed in a Rust crate is often larger than the cost of blindly recompiling the C TU that has a newer timestamp.

If people split Rust code the way they do C, this wouldn't happen. But Rust people are too optimistic that one day the compiler will somehow auto-magically split a crate into multiple TUs as well as, or better than, what people do for C manually, and make that determination plus the incremental compile faster than it takes a C compiler to blindly recompile a tiny amount of code.


I popped in for a few minutes to verify something. I'm ... over this whole annual what's new presentation stuff. I just don't care anymore. And honestly the three presenters I saw didn't even seem like they rehearsed at all. They just read from the prompter and tried to put excitement in their voice at the same time and it just fell flat.

I doubt this is a leak; it very much sounds like Apple is using QUIC to connect home and make the API work.

Not respecting the system firewall does seem like a flaw, but Apple has had a history of bypassing attempts at filtering network traffic. Firewalls have been blocked from working and Apple services have been made unblockable in later APIs. I'm not surprised in the slightest that Apple also bypasses your VPN to call home.

I don't know if this is a problem, though. If you buy Apple, you let Apple make the decisions for you, that's how the entire ecosystem is designed. You must trust Apple unconditionally and accept traffic sent home to adhere to their privacy settings, or you should not run macOS at all. Try to run Windows or Linux on it if you've bought your computer for the hardware quality, though the M1 makes that nearly impossible without sacrificing user experience.


Advertising shits in your head.

They could also just do a reverse DNS lookup on the IP (and then a forward lookup to confirm it).

This would be less effective for sites run through CDNs (e.g. Cloudflare), though.
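A rough sketch of that check in Python (forward-confirmed reverse DNS; the example IP is Google's public resolver):

    import socket

    def forward_confirmed_rdns(ip):
        # Reverse lookup: IP -> hostname (PTR record)
        host, _aliases, _addrs = socket.gethostbyaddr(ip)
        # Forward lookup: hostname -> IPs, to confirm the PTR isn't spoofed
        _name, _aliases, addrs = socket.gethostbyname_ex(host)
        return host if ip in addrs else None

    # Behind a CDN this tends to name the CDN edge, not the origin site.
    print(forward_confirmed_rdns("8.8.8.8"))  # dns.google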

