
To me, the term "digital twin" evokes memories of "big data":

* Big data - falls prey to the fallacy that quantity of data can allay concerns about quality, sampling bias, and unclear directions of causality

* Digital twin - falls prey to the fallacy that if you simulate EVERYTHING it will solve ALL YOUR PROBLEMS AT ONCE

...both the sort of thing that computer scientists come up with before they learn something about empirical science (and I should know, having made that mistake myself).

People model things because they need to predict stuff. To make a good model, you need to know what it is you're predicting, and outside of physics, a model designed to predict X will rarely be great at predicting Y: the map is not the territory, and a model designed specifically for Y will do better.
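
To make that concrete, here's a toy sketch of the identifiability problem: a "twin" calibrated against one observable (X) has no reason to get right the parameters that drive a different observable (Y), while a model fitted for Y directly does fine. All the numbers, names, and mechanisms below are made up for illustration; this isn't any real digital-twin tooling.

    # Hypothetical toy city: X = daily car trips, Y = bus crowding.
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.uniform(1_000, 10_000, size=200)

    # Ground truth the modeller doesn't know: two different mechanisms.
    true_car_rate, true_bus_rate = 2.3, 0.7
    car_trips = true_car_rate * population + rng.normal(0, 500, size=200)  # X
    bus_load = true_bus_rate * population + rng.normal(0, 300, size=200)   # Y

    # One shared "twin" with two parameters, calibrated only against X.
    car_rate_hat = np.polyfit(population, car_trips, 1)[0]  # well identified by X
    bus_rate_hat = 1.5  # never calibrated: stays a prior guess

    # Ask the X-calibrated twin about Y, vs. a model fitted for Y directly.
    twin_err = np.mean((bus_rate_hat * population - bus_load) ** 2)
    bus_rate_fit = np.polyfit(population, bus_load, 1)[0]
    direct_err = np.mean((bus_rate_fit * population - bus_load) ** 2)
    print(f"twin's Y error: {twin_err:.0f}  purpose-built Y error: {direct_err:.0f}")

Run it and the X-calibrated twin's error on Y comes out far larger than the purpose-built model's, not because the twin is "less detailed" but because calibrating on X never constrained the parameter that matters for Y.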

The term "digital twin" however implies that someone is mistaking the map for the territory. Ok I see the point of trying to predict lots of outcomes from one model, X,Y,Z,A,B,C... if the model was explicitly designed to do this then "digital twin" is just a buzzword for a package of models predicting XYZABC, with standardized data formats, not duplicating effort, and hopefully taking care to avoid all the pitfalls we've learned the hard way from 70 years of transport planning.

But if it's anything more than a buzzword, it carries the implication that we just build a magic "twin" and can then predict anything we care to name, which I suspect is not going to work. That said, I'd be interested to hear a counterpoint from anyone more involved in one of these.


