
That would make the locked time shorter, but it would still contend on the global lock, right?
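The usual way around this is lock striping: shard the state across several locks so threads only contend when they touch the same shard. A minimal sketch, assuming a counter-style workload; `ShardedCounter` and its methods are hypothetical names, not anything from the thread:

```python
import threading

class ShardedCounter:
    """Stripe state across N locks so concurrent writers rarely hit the
    same lock, instead of all serializing on one global lock."""

    def __init__(self, nshards=16):
        self.locks = [threading.Lock() for _ in range(nshards)]
        self.counts = [0] * nshards

    def incr(self, key):
        # Only contends with other writers mapped to the same shard.
        shard = hash(key) % len(self.locks)
        with self.locks[shard]:
            self.counts[shard] += 1

    def total(self):
        # Acquire all locks (in a fixed order, so no deadlock)
        # to take a consistent snapshot.
        for lock in self.locks:
            lock.acquire()
        try:
            return sum(self.counts)
        finally:
            for lock in self.locks:
                lock.release()
```

Shortening the critical section only reduces how long each holder keeps the one lock; striping reduces how often two threads want the same lock at all.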


It's annoying: around 10 years ago, 10GbE was just starting to become standard on bigger NASes, and 10GbE switches were getting cheaper, but then 2.5GbE came out and they all switched to that.


That's because 10GbE tech is not there yet. Everything overheats and drops out all the time, while 2.5GbE just works. Several years from now, this will all change, of course.


10GBASE-T and SFP+ DAC are worlds apart. The latter is now extremely power efficient. But even modern 10GBASE-T parts are considerably improved.


Speak for yourself. I have AQC cards in a PC and a Mac, Intel gear in my servers, and I can easily sustain full speed.


The SFP+ transceivers are hot and I mean literally.


Counter-counterexample: when my wife was new to the city, she took a bus ride and a girl randomly struck up a conversation with her. They hit it off and talked for the whole ride. My wife thought the girl sounded really nice (they actually got into some deep conversations) and wanted to exchange numbers so they could hang out some time. Then the girl asked if she had heard about the Book of Mormon…


Sorry to hear that. But I was responding to the comment that "Spontaneous, innocent chit chat is dead, both online and offline because everyone’s hustling now". Just one counter example is needed to prove that statement false. Of course, maybe my new contact will betray me and turn out to be a proselytising fundamentalist, but not yet...


> it wasn't a widespread GCP outage.

If this wasn't widespread, what is?

Incident affecting API Gateway, Agent Assist, AlloyDB for PostgreSQL, Apigee, Apigee Edge Private Cloud, Apigee Edge Public Cloud, Apigee Hybrid, Cloud Data Fusion, Cloud Firestore, Cloud Logging, Cloud Memorystore, Cloud Monitoring, Cloud Run, Cloud Security Command Center, Cloud Shell, Cloud Spanner, Cloud Workstations, Contact Center AI Platform, Contact Center Insights, Data Catalog, Database Migration Service, Dataform, Dataplex, Dataproc Metastore, Datastream, Dialogflow CX, Dialogflow ES, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Console, Google Cloud DNS, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Storage, Google Compute Engine, Identity Platform, Identity and Access Management, Looker Studio, Managed Service for Apache Kafka, Memorystore for Memcached, Memorystore for Redis, Memorystore for Redis Cluster, Persistent Disk, Personalized Service Health, Pub/Sub Lite, Speech-to-Text, Text-to-Speech, Vertex AI Search


Our entire infra in GCP stayed up just fine, we just couldn't manage anything. IDK what to tell you. Many of the things you list here were not down at all.


That it wasn’t down for you does not mean it wasn’t down for others or even almost everyone. Certainly, Google wouldn’t have listed the services as having an outage if nobody was impacted. You can’t extrapolate from “works for me” to “it must have been working for everyone”.


Dude, I literally was an SRE there. I'm well aware of how this stuff works.

If some of those things listed had actual widespread outages, it would have been much much worse.


I don't understand your argument? Wasn't GCP's own status page calling them outages? Some of our upstream providers (who use GCP) were definitely affected and down.

As a former SRE there, is "widespread outage" a specific, special kind of classification that's not obvious to the public just by looking at the status page...? Or what do you mean?


So weird to argue when Google themselves listed these as having an outage.


The ads will be subliminal so you won't even know that you're watching an ad.


I didn’t say I would be aware of it.



Just noticed the physics simulator at the top is interactive. Then I was stacking squares on top of each other to see how tall I could make it, and started throwing things at it Angry Birds style. Fun stuff.


Something no one seems to have realised yet is that the hero simulation at the top of the page is using reservoir sampling to colour 3 of the shapes black.
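Reservoir sampling picks k items uniformly at random from a stream without knowing its length in advance, which is exactly what you need to recolor 3 shapes out of however many spawn. A minimal sketch of Algorithm R (the function name is mine, not from the page):

```python
import random

def reservoir_sample(stream, k=3):
    """Return k items chosen uniformly from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Keep item i with probability k/(i+1), evicting a random slot.
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each item ends up selected with probability k/n, however long the stream turns out to be, using only O(k) memory.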


> It has long been established that predictive models can be transformed into lossless compressors and vice versa. Incidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models. Since these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models. We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively.

https://arxiv.org/pdf/2309.10668

Transformers are also used in the top algorithm right now on the Large Text Compression Benchmark. https://bellard.org/nncp/nncp.pdf
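The prediction-compression link is that an entropy coder driven by a predictor p can emit about -log2 p(symbol) bits per symbol, so a better predictor directly means a smaller output. A toy sketch, using an adaptive Laplace-smoothed unigram byte model as a stand-in for a real language model (the function name is mine):

```python
import math
from collections import Counter

def ideal_code_length_bits(text):
    """Shannon code length of `text` under an adaptive, Laplace-smoothed
    unigram byte model: sum over symbols of -log2 p(symbol | history).
    An arithmetic coder paired with this predictor would achieve roughly
    this many bits."""
    counts = Counter()
    seen = 0
    alphabet = 256  # byte-level symbols
    total_bits = 0.0
    for b in text.encode("utf-8"):
        # Predict the next byte from counts observed so far.
        p = (counts[b] + 1) / (seen + alphabet)
        total_bits += -math.log2(p)
        counts[b] += 1
        seen += 1
    return total_bits
```

Even this trivial predictor codes repetitive text in far fewer bits than varied text of the same length; swapping in an LLM's next-token distribution is the same construction with a much sharper p.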


Interesting video about what goes on behind the unofficial coverage: https://www.youtube.com/watch?v=Jp2Qgw44MsA


"Are you sure you want to install OSM? Don't you know Google Maps is better? You should try Google Maps"

