Hacker News

We need to start thinking about what criminal justice will look like when digital recordings are all considered hearsay. That's bound to happen.

And perhaps it should... if average people have the technology now, how long has it been in the hands of police departments, newspapers, militaries, and intelligence agencies?

We tend to focus on fakes with high-profile people like politicians and celebrities, but it can clearly be done with everyday people as well. The author was able to produce a fake video segment of a well-lit, animated conversation involving his wife. She's directly in front of the camera, speaking and making hand motions for several seconds. How much easier would it be to insert a face into grainy security camera footage, or a shaky clip of a large crowd at a political rally?



Not much will change. People have been able to convincingly fake still photos almost since cameras were invented, yet courts accept some photos as evidence. What matters is the provenance, chain of evidence, and testimony of witnesses about the validity of the photo. Videos will work the same way.


The presence of one source can be easily dismissed, but if you have 500 independent sources that produce video of the same event from their own angles and timespans in a mutually consistent way, it's far more likely that the recorded event happened as opposed to all of these different sources lying.


Ahh you are undervaluing the power of automation.

With a few sources you can build an accurate single model of an event. You then edit that event as you see fit and then generate as many "independent" sources from different viewing locations as you want. Then upload them to different social media sites at different times.


I am sure you can do that. The hard part would be building consistent histories to back up those uploads. An upload from my facebook account, verified by other incidental data like GPS tracking on my personal phone, would merit treatment as independent corroboration, whereas a random reddit upload from a throwaway account would not.

And the real verification would come from ultra-high-res captures from drone cams, self-driving car cams, traffic cams, etc that are constantly running and recording everything, so that you can't realistically fake something without simultaneously compromising uber, amazon, walmart, etc.


> but if you have 500 independent sources that produce video of the same event

This only works in large events. What if you film a politician lying at a small event, but his image team makes 20 fake videos showing the event in a different light? I guess you're the liar then.


Firstly, people already know politicians lie. Nobody is going to care about your video in the first place.

Secondly, your analogy is wrong. The new hypothetical you present is equivalent to the old model where you are all giving verbal accounts and his staffers lied. If you are going to compare to an old-tech scenario where you one-up them with a more detailed recording than they have, you need to do the same with new technology to keep the comparison accurate.

Thirdly, why aren't you the liar?

Fourthly, you can always go one level up. Show the footage of them constructing doctored footage.


Really? How much harder is it to run the OP's script with 500 input videos as opposed to only one? Especially if they're very similar? How many people have the expertise needed to identify a faked video, and how much training would they need?


At that point there is no such thing as identifying faked videos. Instead you'd rely on the abundance of independent videos and see how consistent they are. Generating a lot of them doesn't do you any good unless you can insert them into the black boxes of the 3 planes and 40 cars in the area, the databases of all the drone operators, the internal storage of all the nearby smartphones, etc.


The great part is that using a GAN means we cannot automatically distinguish a fake from a real one... because that's the whole point of the GAN.
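To make that concrete: for a fixed generator, the Bayes-optimal discriminator from the original GAN formulation is D*(x) = p_data(x) / (p_data(x) + p_g(x)), which collapses to 0.5 everywhere once the generator's distribution matches the data distribution. A toy sketch with 1-D Gaussians (the distributions here are invented for illustration, not from the thread):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_discriminator(x, p_data, p_gen):
    """Bayes-optimal discriminator for a fixed generator:
    D*(x) = p_data(x) / (p_data(x) + p_gen(x))."""
    d = p_data(x)
    g = p_gen(x)
    return d / (d + g)

# Real data: N(0, 1). A poorly trained generator: N(2, 1).
real = lambda x: gaussian_pdf(x, 0.0, 1.0)
early_gen = lambda x: gaussian_pdf(x, 2.0, 1.0)

# Early in training, even the best discriminator can separate real from fake:
print(optimal_discriminator(-1.0, real, early_gen))  # close to 1: confidently "real"

# Once the generator matches the data distribution, D*(x) = 0.5 everywhere,
# i.e. the best possible fake detector is a coin flip:
converged_gen = real
for x in (-1.0, 0.0, 3.0):
    assert abs(optimal_discriminator(x, real, converged_gen) - 0.5) < 1e-12
```

The caveat is that real deepfake generators don't reach that equilibrium, which is why forensic detectors still find artifacts today.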


The difference is that with testimony, the witness can be as moral and honest as possible but still misreport what they heard. With faked audio, the recording is either accurate or someone purposely fabricated evidence. That means that if the source of a recording could be verified, we know that it is very close to true. With witnesses, there is always doubt.


You're right to be skeptical of witnesses, but recordings are also not either/or. Even if you have some sort of digital signature scheme such that a camera can somehow sign its footage and the time and location it was recorded (a scheme that currently is fairly uncommon), how could it encode the circumstance in which the footage was recorded? Who is responsible for maintaining the signature scheme, and why can we trust them? In the absence of any aspect of that scheme, why shouldn't we closely scrutinize video and audio evidence as much as we scrutinize witness testimony?
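For illustration, here is a minimal sketch of what such a signing scheme might look like. It uses an HMAC with a shared per-device secret for simplicity; the key, field names, and payload shape are all invented for this example, and a real camera would use an asymmetric signature so verifiers never hold the secret:

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret, provisioned at manufacture.
DEVICE_KEY = b"example-device-key"

def sign_clip(frames: bytes, timestamp: str, location: str) -> dict:
    """Bundle a hash of the footage with capture metadata and an HMAC tag.
    Note what this covers: the bytes, time, and place -- not the
    circumstances of the recording, which no signature can attest to."""
    payload = {
        "sha256": hashlib.sha256(frames).hexdigest(),
        "timestamp": timestamp,
        "location": location,
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_clip(frames: bytes, payload: dict) -> bool:
    """Recompute the tag; any edit to the frames or metadata breaks it."""
    claimed = dict(payload)
    tag = claimed.pop("tag")
    claimed["sha256"] = hashlib.sha256(frames).hexdigest()
    msg = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

clip = b"raw video bytes"
sig = sign_clip(clip, "2019-01-01T12:00:00Z", "37.77,-122.42")
assert verify_clip(clip, sig)                          # untampered: passes
assert not verify_clip(b"doctored video bytes", sig)   # edited: fails
```

Even granting all of that, the scheme only proves the clip left a particular device unmodified; it says nothing about whether the scene in front of the lens was staged, which is the parent's point about circumstance.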


The thing is that we have to believe someone when trying to understand what happened. A person, no matter how honest, can make mistakes. With a recording, all that is necessary is to make sure that it is genuine.


> And perhaps it should... if average people have the technology now, how long has it been in the hands of police departments, newspapers, militaries, and intelligence agencies?

To be honest with you, when I was younger I bought into this.

Nowadays... it kinda seems like with the open source community, consumers are getting stuff first. Not always, of course, but often enough.

And police departments are one category that I can be quite certain are lagging behind; they are definitely not getting stuff before consumers have it.


Yes. As a grad student in ML, on a few occasions I worked with sponsors under the US military trying to help them use some code from projects I'd open sourced, and it was always a struggle.

Frankly, the suggestion that government agencies -- let alone local police departments -- have sophistication in AI/ML years ahead of publicly known work from academia, industry, or even side projects on reddit, will verge on comical to anyone who's actually worked with these folks.


You're not talking about a monolith. Sure, if you've met 20 cops, it's unlikely that you've met any cops who are technological geniuses. Most LEOs haven't had a reason compelling enough to figure out how to fake a video. Can we be sure that no LEOs have? If we can be sure of that, can we also be sure that no intelligence agencies have? That seems unlikely.


Or to say it another way:

Only a few cops will have the ability to edit videos; the problem is that almost all the cops in a department will lie for a fellow officer on the stand.


The longer time goes on, the more you realize that what companies can get to the public in a year for $1000 takes 10 years and a multi-billion dollar contract for any government (other than priority bureaus like the NSA) to obtain. Like good luck getting anything through government bureaucracy and shitty subcontractors.


You're right, lumping police departments and newspapers in with intelligence agencies is pretty silly.


> if average people have the technology now, how long has it been in the hands of police departments, newspapers, militaries, and intelligence agencies?

Most of those people are average. Intelligence agencies can be assumed to have more advanced technologies in a number of areas but the gap between what they have and what's out in the more public world has narrowed.

We've already had image tampering technologies for a long time but there's not been a significant problem with fakes in the press or as evidence.

It is an important issue anyway, and figuring out ways to differentiate the genuine from the fake, with cryptographic signing in cameras for example, is worthwhile.


Ultimately, though, don't you have the same problem with things like contracts? Even with digital signatures, there is not actually a reliable way to determine whether a signature is authentic. And countless people are going to jail because of complete junk science like bite mark analysis. This seems like the least of our problems.


Then the defendant could make his/her point by doing the same but with the face of the judge or prosecutor.



