I don't know the state of the Borg fleet or how Google Maps serves vector maps for SK today. That's why I said upfront that I was talking about the state a decade ago. Back then, what I heard was very specific: restrictions on map source data. Not tiles, not realtime traffic, not Gmail, not account data, not search logs, not ads, etc. Even then, it was already an old issue.
Tax reasons might have arisen since, but maybe we're talking about different laws? I don't know. Do you?
The fact that there's no mention of Seoul at "Discover our data center locations" https://www.google.com/about/datacenters/locations/ tells me that its GCP region is most likely rented space in someone else's buildings and there are no full-blown Borg clusters with their usual tenants. Choosing where to build was always a complex process. There have been cases where announcements were made and land was bought, but nothing ever got built.
My explanation was that... "Borg couldn't handle domestic clusters"? What does that even mean? You could probably bring up a toy single-master Borg cell on a workstation with the right 283 command-line flags and a toy Chubby cluster, but that still won't give you any map tiles.
Back in the days before incremental map updates, the whole globe's data came from a pipeline of pipelines and mapreduces, stitched together by a number of teams (the project's codename started with O). It was tracked with advanced technology (Google Sheets) by PMs who had a taste for suffering. The latency was measured in weeks and months. If you missed the build, your data or code changes would take who knows how long to go live. I think that sometimes builds might have been abandoned altogether. These were the days when mobile Maps, if they existed at all, lagged desktop tiles by weeks or months (there wasn't just one tile set...). Same with navigation. The amount of data has only increased since, but there's obviously a much more advanced process at work nowadays. I don't know what they changed or how.
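For a rough sense of the shape of such a build (this is purely illustrative, not Google's actual code; all the names and the tile scheme here are made up), think of one stage of it as a classic MapReduce: map every feature to a tile key, shuffle by key, then reduce each tile's features into an output blob. The real thing chained many stages like this, which is where the whole-world batch latency came from:

```python
from collections import defaultdict

TILE_SIZE_DEG = 1.0  # toy tile granularity; real tiles are a zoom-level quadtree

def map_phase(features):
    """Map step: emit (tile_key, feature) pairs."""
    for f in features:
        key = (int(f["lat"] // TILE_SIZE_DEG), int(f["lng"] // TILE_SIZE_DEG))
        yield key, f["name"]

def reduce_phase(pairs):
    """Shuffle by tile key, then 'render' each tile."""
    tiles = defaultdict(list)
    for key, name in pairs:  # the shuffle: group all values under their key
        tiles[key].append(name)
    # a real reducer would rasterize or build vector tiles; we just sort names
    return {key: sorted(names) for key, names in tiles.items()}

features = [
    {"name": "road A", "lat": 37.5, "lng": 127.0},  # Seoul-ish
    {"name": "road B", "lat": 37.6, "lng": 127.2},
    {"name": "road C", "lat": 35.1, "lng": 129.0},  # Busan-ish
]
tiles = reduce_phase(map_phase(features))
# Every feature only goes live in the next whole-world build; miss the
# build and you wait for the following one.
```

The point of the sketch is the batch granularity: there's no way to update one tile in isolation, because each output depends on a global shuffle over all inputs.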
Anyway, back then, to build the SK tiles with the same tools you would have needed Borg, yes, but also Bigtable, GFS and a wide assortment of ancillary services. How are they doing it now? Maybe they finally got a license. Maybe they got a sufficiently limited subset of the full feed. Maybe they did a major rewrite of the code, pipelines and even binaries to make it feasible for that chunk of the planet to be handled by a bunch of bare Linux machines on GCP or AWS in South Korea. You might be able to deduce something by comparing updates there vs the rest of the world, but that takes time and dedication.