Usually the “big” qualifier is a function of RAM, not hard disk space. Getting hundreds of petabytes of data onto persistent storage in a “large” room has been possible for many years now.
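To put a rough number on that, here's a back-of-envelope sketch; the drive, chassis, and rack figures are my own assumptions for illustration, not anyone's actual hardware:

```python
# Rough back-of-envelope, assuming ~20 TB commodity drives and
# ~60-drive 4U top-loading chassis (illustrative figures, not vendor specs).
drive_tb = 20                  # capacity per commodity drive, TB (assumed)
drives_per_chassis = 60        # drives in one 4U chassis (assumed)
chassis_per_rack = 10          # 4U chassis per standard 42U rack (assumed)

target_pb = 100                # lower bound of "hundreds of petabytes"
drives_needed = target_pb * 1000 / drive_tb
chassis_needed = drives_needed / drives_per_chassis
racks_needed = chassis_needed / chassis_per_rack

print(f"{drives_needed:.0f} drives, {chassis_needed:.0f} chassis, ~{racks_needed:.1f} racks")
# -> 5000 drives, 83 chassis, ~8.3 racks (before replication/parity overhead)
```

Even allowing 3x for replication and parity, that's a few dozen racks, which fits comfortably in one room.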
What I'm trying to say is: as soon as you can fit the data and processing unit(s) into one well-cooled room in a datacenter, managed by two guys per shift, it is not a "Big" problem anymore. Making it all local would probably speed up their queries/analytics enormously as well.
"Big data" is anything that's too big to cram into a standard database and still access in reasonable times. There's a practical definition, it doesn't just mean "YUUUGE".
That website doesn't list a price, but I doubt that running a rack of these would be cost-effective. I wouldn't be surprised if those drives have a cost per terabyte at least a couple of times that of commodity drives.
My point is, you could fit all of Uber's data (I know, I know: replication, sync, etc.) into racks in a SINGLE well-cooled room in a datacenter, managed by two guys per shift.
And this is not "Big" as far as I can see.
It would probably speed up their queries/analytics as well, with everything being local.