Hacker News
Cities Want 'Digital Twins' to Manage Traffic (citylab.com)
84 points by pseudolus on July 19, 2019 | 55 comments


"Digital twin" is a bizarre term for this. They're talking about modeling traffic flow based on real-time traffic data.


Real-time modeling that incorporates real-time data is starting to be called digital twinning in various industries. E.g., each car off the assembly line has the specific torque of each of its bolts traveling along with it as a digital twin.

Power plants with sensors on all the equipment, used to predict preventive-maintenance needs, are being called digital twins too.
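The "digital twin travels along with the car" idea above can be sketched as a per-vehicle record accumulating as-built measurements. This is a minimal illustration only; the class and field names are made up, not any manufacturer's schema:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleTwin:
    """A per-vehicle record that accumulates as-built measurements."""
    vin: str
    bolt_torques_nm: dict = field(default_factory=dict)  # bolt id -> measured torque (N·m)

    def record_torque(self, bolt_id: str, torque_nm: float) -> None:
        self.bolt_torques_nm[bolt_id] = torque_nm

# Record the exact torque applied on the line, not just "within spec".
twin = VehicleTwin(vin="TESTVIN0000000001")
twin.record_torque("head_bolt_3", 89.7)
print(twin.bolt_torques_nm["head_bolt_3"])
```

The point is that the twin stores the measured value for this particular car, rather than the nominal spec shared by every car.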


Is this only applied to 'real time' modelling or is 'near real time' also included?

Curious because I used to write software for IIoT predictive analysis (~3 years ago) and never heard the term 'digital twin.' Wondering whether it applies to NRT or if this is a different beast.


I work in a field adjacent to all this. My org used to use the term "real time" to mean anything up to and including near-real time. We've recently switched to calling our work "in time" prediction to clarify that we're not referring to real-time in the RTOS sense. Digital twins are a hot topic in aerospace maintenance right now.


I looked up the sales material for the software team I used to work on - https://sw.aveva.com/blog/visualize-asset-performance-manage...

Digital twin is right in the subtitle XD


Use of the term in searches seems to have picked up quite a bit over the past couple of years, so might not have been in such common use ~3yrs ago:

https://trends.google.com/trends/explore?date=today%205-y&q=...


I always forget that this is possible to look up - ty


Digital Twin is essentially a buzzword, like Cloud. It can mean whatever you want it to, I'm afraid.


I think I've figured out the IBM-style buzzwords like "mobile productivity cloud", and I'm making good progress on all the ones in SJC airport ads, but the one that's still got me is "digital transformation". Shows up all the time in my promoted tweets.

Didn't everyone digitally transform in like the 90s?



You can't just affix the word "realtime" to everything and expect it to make sense. And of course every car manufactured in the last 30 years was built using tools with exact torques; no worker is pressing on a drill until it feels just right.


The exact torque measurement, within a tolerance, is what's recorded, and it differs for each car. Less trivially, the vibrations, driving history, etc. can be put on the digital twin.


It’s the new buzzword for non-tech Gartner/EY/Deloitte consultants. It’s basically the new blockchain/RPA/BigData/AI. So far only RPA has brought us any value, and it comes with a ton of technical debt and huge amounts of panic when the Human Resources have been relocated and the robot suddenly fails.

I’m sure digital twins will be equally awesome.


Is it bizarre? It's a general term that can be applied in many contexts.

A digital twin is a high-detail, precise simulacrum of a physical object or objects and their relationships. It's more than modeling traffic flow; it's a way to refer to the practice of digitizing the state of objects.

I've heard the term before in the context of manufacturing, so it doesn't sound odd to me.


The real benefit is when you can run simulations at scale using “digital twins” in different configurations.

- how many people will need to buy diapers this month, based on the similarities they have with people who just started buying diapers last month?

- given past failures of all machine components, what is the expected uptime of this facility in different configurations?
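The second question can be sketched as a tiny Monte Carlo simulation over per-component failure probabilities, run once per candidate configuration. All the numbers here are made up for illustration:

```python
import random

def simulated_uptime(fail_probs, trials=10_000, seed=42):
    """Fraction of simulated days on which no component fails."""
    rng = random.Random(seed)
    up = sum(
        all(rng.random() > p for p in fail_probs)  # day survives only if every part does
        for _ in range(trials)
    )
    return up / trials

# Configuration A: three components in series; configuration B swaps
# the least reliable component for a better (hypothetical) part.
config_a = [0.01, 0.02, 0.05]
config_b = [0.01, 0.02, 0.01]

print(simulated_uptime(config_a), simulated_uptime(config_b))
```

The value of the "twin" framing is that the same model, fed real failure histories, can be rerun under configurations that have never physically existed.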


It's an odd term, but one that inadvertently brings attention to concerns such as privacy and identity persistence.

Such concerns are far less obvious to laypeople when terms like "real-time modeling of traffic flows" are used. Perhaps the choice of term is fortunate after all.


Though the term digital twin was occasionally used as a concept in the virtual reality and manufacturing communities in the mid-1990s, Dr. Michael Grieves explicitly defined the term in 2001 for a digital version of a physical system as part of an overall product lifecycle process.

The term has been widely used in manufacturing since then and has been key sales messaging for companies like IBM, Dassault Systèmes, and Siemens, amongst others. United Technologies, for example, has been demonstrating digital twin concepts since 2005, routing in-flight engine telematics into 3D virtual engine simulations for troubleshooting and maintenance.

The term regularly crops up in manufacturing, maintenance, smart cities, IoT and more recently AI related proposals and projects and generally means a virtual simulation that uses real data to simulate a complex system.

As pointed out by many over the years - the concepts are also described by Prof. David Gelernter in his book 'Mirror Worlds: Or the Day Software Puts the Universe in a Shoebox...How It Will Happen and What It Will Mean', first published in 1991.

So perhaps less bizarre and more just less familiar on HN.


IIRC it's called a "digital twin" because they basically use real-world data to create similar, plausible fake data, and then use that "real-ish" data to model things. It's thought to reduce privacy risks, since the "real-ish" data is not tied to any actual person, so you restrict the exposure of real, sensitive, de-anonymizable data to the model-generation step.

I have no idea if this actually works out in practice, though, in regards to both how much this actually protects privacy and whether or not "real-ish" data proves to be accurate enough to be useful.
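One naive version of the "real-ish data" idea is to fit a simple distribution to the real measurements and resample from it, so no synthetic record corresponds to an actual trip. This is a sketch only; real systems would presumably use far more sophisticated generative models, and the trip durations below are invented:

```python
import random
import statistics

def synthesize(real_values, n, seed=0):
    """Draw n Gaussian samples matching the real data's mean/stdev."""
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Made-up "real" trip durations in minutes.
real_trip_minutes = [12.0, 15.5, 9.0, 22.0, 14.5, 11.0, 18.0]
fake = synthesize(real_trip_minutes, n=1000)

# The synthetic sample roughly preserves the aggregate statistics
# without any row mapping back to a real trip.
print(round(statistics.mean(fake), 1), round(statistics.mean(real_trip_minutes), 1))
```

Whether this actually protects privacy is exactly the open question above: a richer generative model that preserves more structure also risks memorizing and leaking real records.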


To me, the term "digital twin" evokes memories of "big data".

* Big data - falls prey to the fallacy that sheer quantity of data can allay concerns about quality, sampling bias, and unclear directions of causality

* Digital twin - falls prey to the fallacy that if you simulate EVERYTHING it will solve ALL YOUR PROBLEMS AT ONCE

...both the sort of thing that computer scientists come up with before they learn something about empirical science (and I should know, having made that mistake myself).

People model things because they need to predict stuff. To make a good model you need to know what it is you're predicting, and outside of physics, a model designed to predict X will rarely be great at predicting Y because the map is not the territory and a model designed specifically for Y will do better.

The term "digital twin", however, implies that someone is mistaking the map for the territory. OK, I see the point of trying to predict lots of outcomes (X, Y, Z, A, B, C...) from one model; if the model was explicitly designed to do this, then "digital twin" is just a buzzword for a package of models predicting X, Y, Z, A, B, C, with standardized data formats, no duplicated effort, and hopefully care taken to avoid all the pitfalls we've learned the hard way from 70 years of transport planning.

But if it's anything more than a buzzword it carries the implication that we just build a magic "twin" and can then predict anything we care to name, which I suspect is not going to work. That said, I'd be interested to hear a counterpoint from anyone more involved in one of these.


The CEO of Ford Motor Company, Jim Hackett, has claimed that for autonomous vehicles to work resiliently at scale, they require:

1. vehicle-centered sensors

2. a way for vehicles to share their sensors with nearby vehicles,

3. detailed models of the cities in which they operate, and

4. wireless networks capable of transporting all this data to where it's needed.

He calls all this the "mobility operating system."

In the interview I heard (I wish I could remember where, sorry) Mr. Hackett said he believes the defensible intellectual property for autonomous cars will be those detailed city models.

So, I am delighted to hear that municipalities are getting ahead of this issue with respect to city models.


Isn’t an issue with no. 2 that bad actors could spoof or inject bad data, causing problems for cars reading this data in good faith? You can discard one abnormal data point, but not if it’s coordinated.


Do you worry about some random asshole kid hurling rebar into rush-hour traffic? Reverse-engineering automotive tech, setting up transmitters, and spoofing traffic data to throw a virtual rebar into rush-hour traffic could become an epidemic unless we figure out a way to fight such attacks.


The difference between these two scenarios is that the asshole kid throwing rebar is easily detectable and, to do it, has to be at the location of the crime.

The asshole spoofing traffic data, depending on how the transmitters are set up, could potentially do it from a long distance away, and may actually be hard to detect.


Emergency vehicles in some cities can emit a signal that changes lights from red to green. It's a felony to spoof those devices, but people still get caught doing it.

I'm sure a system can be made resilient to a single bad actor, but, yes, those attack vectors will have to be thought through.



Forgetting to use your turn signal, or leaving it on, or running at night without headlights, is "injecting bad data" into the present system.


Yes. But that’s why you need the model and the car-centric sensors. “Trust but verify.”


If they need to verify, why do they need to talk to peer-vehicle sensors? Ideally, a car a quarter mile ahead gives the cars behind a heads up that something is different (construction, a ladder or junk in the road, a stalled car ahead, icy conditions, etc.), but if this look-ahead data isn’t reliable, or at least can’t be trusted, what are you verifying it against? Not your own sensors, because then why even get this fleet data to begin with?


It can at least bias your priors ahead of time. When you (as a human) are using Waze or Google maps and you see a speed trap ahead of you, or a traffic jam, or construction or whatever, obviously you can't blindly trust that. But you can slow down and prepare, and if and when that occurrence comes to pass, you're better prepared to deal with it.


Trust but verify


Thanks. I was on the phone and autocorrect got me.


Are those not two different kinds of models though? For self-driving cars, the most important city models and those to which the Ford CEO was referring would surely be high-precision records of the physical layout of the city. Whereas the models in the article are simulating how people / cars / ... move around the city.


Hackett said the city models were likely to be slowly-changing stuff.

Human drivers also gather data from nearby vehicles. That's the point of brake lights, turn signals, and rearview mirrors, for example. And, yes, human drivers also spoof that data: mostly by forgetting to use those signals correctly.


The ideal is a high-precision representation, and then let each model use what it requires. Better aggregated than extrapolated. This is the central idea of our modeling platform at Aimsun.



I just read point 3 and had an awful moment of thinking, "sorry I can't drive in Toronto. I didn't get the DLC subscription for its map data and I never learned to drive."


It's very strange that we're able to drive without 2, 3, and 4.


We can't drive without 2, 3, and 4. We already have them.

2. a way for vehicles to share their sensors with nearby vehicles,

The sensors are the driver's senses. Turn signals, brake lights, and horns relay the consequences of those sensors to other drivers.

3. detailed models of the cities in which they operate, and

Most drivers in any given place have a detailed mental model of the streets they use. Signage contributes a lot to the model. Traffic lights contribute temporal information, as do orange barrels and other ways of noting temporary alterations to the model.

And, consider London taxi drivers. They have the "knowledge," a detailed model of their city.

4. wireless networks capable of transporting all this data to where it's needed.

Sight and sound are generally reliable transport-layer network media.


One of the interesting things proved in cybernetics (the Conant-Ashby "good regulator" theorem) is that, in order to operate efficiently, a self-governing system must have a model of itself.

A digital twin is a digital "voodoo doll", as Aza Raskin (Jef's son) discusses in "Humane: A New Agenda for Tech" (44 min. watch) posted on April 23, 2019 https://humanetech.com/newagenda/

A self-aware city is an efficient, healthy, safe city.


> A self-aware city is an efficient, healthy, safe city.

I don't see how efficient, healthy, and safe follow from self-aware.

Self-awareness _may_ enable such things, but it is easy to see how it could also enable a dystopia.


Just don't think differently than the city managers and you'll be fine.

/s


If you have no bad thoughts to hide, you have nothing to fear!


Yeah, has no one played Watch Dogs? This is the literal prelude to DedSec.


(Still) looking for a link to the PDF of it, but I’m reminded of David Gelernter’s "Mirror Worlds" concept from the 1990s (https://en.wikipedia.org/wiki/Mirror_world); I encountered it in one of my classes.

This is the second time something on HN actively reminded me of it funnily enough https://news.ycombinator.com/item?id=19532236


Google result #1 for query:

David’s Gelernter’s “Mirror World” filetype:pdf

http://s3.amazonaws.com/arena-attachments/2745202/074de53e96...


What if the future resides in active transport + mass transit (and not cars)...


Should each vehicle be autonomous and make its own decisions based on the data it can capture around it (light/sound, i.e. radio waves) and possibly open, trusted data (maps, traffic, weather, etc.), rather than dumbly following a control center that can fail, be abused, or get hacked? I guess whoever controls the traffic data source could influence the vehicles' decisions. Vehicles could talk to each other and propagate traffic information, but again, vehicles would have to trust each other. It comes down to a trust issue.


Should the car protect the driver or the other people around it?

Take a hypothetical situation where the car is driving lawfully and a family jumps into the road. Assume the car has only two options. Does the computer hit the family or swerve into the telephone pole, even if the computer believes turning will kill the driver?

Two cars talk to each other and realize one will need to turn to avoid an accident. Assume only one driver will survive. Which car does what? What if both computers turn and there are bystanders on both sides?


Yeah, these points are constantly brought up with self-driving car tech and are honestly extremely boring. The cars will just slam on the brakes as the most effective way to avoid an accident. No software is going to look at the number of passengers, or the value of passengers, and decide which car needs to be sacrificed for the greater good.

Self-driving cars don't need to be programmed to be the ultimate moral arbiters of life. They just need to be safer than human drivers. What would optimal human drivers do in the instance that a family jumps out into the middle of the road? Probably just slam on the brakes. Not send their car head first into a semi trailer moving in the opposite direction.


Exactly. We already advise people that swerving is significantly more dangerous than just slamming on the brakes and maintaining some semblance of car control. That advice still applies when a computer is doing the driving. Longitudinal grip is a lot more reliable in various conditions than lateral grip, so that's what you go with.


I am not sure the self-driving software weights scenarios based on casualties. It should just follow what it has been trained on... mimicking human driving that has been labelled as correct. To go back to your first scenario: as a human, seeing a family jump in front of my car, in a split second I would probably have tried to avoid them, even if as a result it would have killed me. Now, someone reviewing my driving could decide in hindsight that it was not the right decision, and not train the self-driving system with my sacrificial driving. But in the end, it would just be one brain (the "self-driving AI") making split-second decisions, for better or worse... hopefully statistically better than humans in general. If cars talk to each other (again, assuming trust, and that cars do not lie to each other), that could be used as one extra input to the car's self-driving brain. Indirectly, you could consider that as controlling a car's self-driving brain, if you know for sure how the car is going to react. Each car would make its own decisions and adjust according to the consequences.


Why not both? Central command, except when specific thresholds are exceeded.


Decentralized command?


Here in the UK we have 'smart motorways' that show lanes as closed and, over a few miles, guide traffic into queues, but people still don't obey them completely and end up contributing to the messy traffic.


How anyone thinks that cities are competent enough to do any of the fantastical scenarios described in the article is beyond me. This is going to be a huge waste of taxpayer dollars, and we're going to end up with an expensive boondoggle that makes things worse than if they had just left things alone and let the marketplace sort things out. Once such systems are baked into place, it's going to be hard to undo the damage they cause.



