It's annoying: around 10 years ago, 10GbE was just starting to become standard on bigger NAS boxes, and 10GbE switches were getting cheaper, but then 2.5GbE came out and they all switched to that.
That's because 10GbE tech is not there yet. Everything overheats and drops out all the time, while 2.5GbE just works. Several years from now, this will all change, of course.
Counter-counter example. When my wife was new to the city, she took a bus ride and a girl randomly struck up a conversation with her. They hit it off and talked for the whole ride. My wife thought the girl seemed really nice, they actually got into deep conversations, and she wanted to exchange numbers so they could hang out some time. Then the girl asked if she had heard about the Book of Mormon…
Sorry to hear that. But I was responding to the comment that "Spontaneous, innocent chit chat is dead, both online and offline because everyone’s hustling now". Just one counterexample is needed to prove that statement false. Of course, maybe my new contact will betray me and turn out to be a proselytising fundamentalist, but not yet...
Incident affecting API Gateway, Agent Assist, AlloyDB for PostgreSQL, Apigee, Apigee Edge Private Cloud, Apigee Edge Public Cloud, Apigee Hybrid, Cloud Data Fusion, Cloud Firestore, Cloud Logging, Cloud Memorystore, Cloud Monitoring, Cloud Run, Cloud Security Command Center, Cloud Shell, Cloud Spanner, Cloud Workstations, Contact Center AI Platform, Contact Center Insights, Data Catalog, Database Migration Service, Dataform, Dataplex, Dataproc Metastore, Datastream, Dialogflow CX, Dialogflow ES, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Console, Google Cloud DNS, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Storage, Google Compute Engine, Identity Platform, Identity and Access Management, Looker Studio, Managed Service for Apache Kafka, Memorystore for Memcached, Memorystore for Redis, Memorystore for Redis Cluster, Persistent Disk, Personalized Service Health, Pub/Sub Lite, Speech-to-Text, Text-to-Speech, Vertex AI Search
Our entire infra in GCP stayed up just fine, we just couldn't manage anything. IDK what to tell you. Many of the things you list here were not down at all.
That it wasn’t down for you does not mean it wasn’t down for others or even almost everyone. Certainly, Google wouldn’t have listed the services as having an outage if nobody was impacted. You can’t extrapolate from “works for me” to “it must have been working for everyone”.
I don't understand your argument? Wasn't GCP's own status page calling them outages? Some of our upstream providers (who use GCP) were definitely affected and down.
As a former SRE there, is "widespread outage" a specific, special kind of classification that's not obvious to the public just by looking at the status page...? Or what do you mean?
Just noticed the physics simulator at the top is interactive. I started stacking squares on top of each other to see how tall I could make the stack, then began throwing things at it Angry Birds style. Fun stuff.
Something no one seems to have realised yet is that the hero simulation at the top of the page is using reservoir sampling to colour 3 of the shapes black.
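For anyone who hasn't run into it: reservoir sampling keeps a uniform random sample of k items from a stream without knowing the stream's length up front, which is exactly what you'd want for picking 3 shapes out of however many the simulation spawns. A minimal sketch (the function name and shape-id usage are my own, not from the page's source):

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a kept item with probability k / (i + 1).
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# E.g. pick 3 shape ids to colour black out of a stream of 100 shapes:
black_shapes = reservoir_sample(range(100), 3)
```

Each item ends up in the final sample with probability exactly k/n, and it takes O(k) memory no matter how long the stream runs.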
> It has long been established that predictive models can be transformed into lossless
> compressors and vice versa. Incidentally, in recent years, the machine learning
> community has focused on training increasingly large and powerful self-supervised
> (language) models. Since these large language models exhibit impressive predictive
> capabilities, they are well-positioned to be strong compressors. In this work, we
> advocate for viewing the prediction problem through the lens of compression and
> evaluate the compression capabilities of large (foundation) models. We show
> that large language models are powerful general-purpose predictors and that the
> compression viewpoint provides novel insights into scaling laws, tokenization,
> and in-context learning. For example, Chinchilla 70B, while trained primarily on
> text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of
> their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC
> (30.3%), respectively.