Ah yes, the feature designed to prevent the product from working.
Back in the 90s our product--and many of our competitors in EDA--integrated with FlexLM (then Globetrotter, then Macrovision, then I lost track). It was clever and flexible, especially for seats, site licenses, and combining different vendors onto one license server, all before the Internet was serious.
In reality, it was painful for users and they lost so much time fighting it: someone forgot to "check in" a license before using it, or the site license server was down, or you had the wrong feature line, or dozens of other failure modes.
FlexLM is still going "strong" and still encumbers many widely used scientific software packages. It's exactly as you describe and incredibly user-hostile -- albeit somehow the "best" DRM scheme out of many bad options, in a space that often includes hardware dongles like iLok. It's incredibly frustrating to e.g. pay >$1m for a piece of kit and get a DRM-encumbered UI for it, deliberately designed to make life awkward.
Of course, the obvious issue with the public-key approach listed as "best" in the article is that all it takes for an attacker to overcome the scheme is to replace your embedded public key with theirs (and sign licenses with their own private key) -- or to patch a JMP over every do_license_check() call in the binary. It's fundamentally an approach that is always hackable.
>all it takes for an attacker to overcome the scheme is to replace your embedded public key with theirs (and sign licenses with their own private key) -- or to patch a JMP over every do_license_check() call in the binary. It's fundamentally an approach that is always hackable.
That might be true in theory, but is non-trivial in practice. The code responsible for license verification is often obfuscated and tied to critical functionality, so you can't do a simple find and replace.
I made a killing when I was 17 selling Win32 POS software I developed in my bedroom with VB6(?), because I hit the moment when 1) everyone wanted the Windows look and 2) all the big players (with ASCII-art UX) were succumbing to FlexLM pushing dongle licenses, and those never worked.
This doesn't mention the method I used back in the 90s when I was writing apps for Acorn's RISC OS. During purchase, you seed a random number generator with the user's name (which is required for purchase), and then generate a random number of whatever length you want. That's their license key.
The application does the same thing, asks for their name and license key, and then confirms that the generated key matches what they entered.
It's trivial to crack, but I just decided that you can never stop the really determined crackers, and a trivial scheme would be enough to incentivize honest users to pay up. Displaying the user's name prominently on the splash screen acts as a guilt-inducer for those sharing keys.
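A minimal Python sketch of this name-seeded scheme (the original was RISC OS code; the digit-only key format and lengths here are illustrative assumptions):

```python
import random

def make_key(name: str, length: int = 16) -> str:
    """Derive a license key deterministically from the user's name
    by seeding a PRNG with it and drawing digits."""
    rng = random.Random(name.lower())
    return "".join(str(rng.randrange(10)) for _ in range(length))

def check_key(name: str, key: str) -> bool:
    """The application runs the same derivation and compares."""
    return make_key(name, len(key)) == key
```

As noted, this is trivially crackable -- the keygen is literally the same code as the checker -- which is exactly the point of an honesty-based scheme.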
> Displaying the user's name prominently on the splash screen acts as a guilt-inducer for those sharing keys.
This is IMO an absolute genius insight, whoever came up with it first. It's psychologically meaningful, not just for the guilt, but also for the feeling of ownership.
In many ways, I miss these days of software being complete and local. Today, there are few interactions where I get that nice feeling, and for good reason – whenever the feudal lords decide my account is no bueno, the fun stops.
One issue with showing the name today, are customers who start their (legally licensed) software live on stream, (unintentionally) displaying their real (full) name, serial number, etc.
You can use the product without the key indefinitely. But every 10 times you save, it shows a prompt for buying the software.
Not even intrusive, you can discard it easily, and be productive for months.
But after months of reading that you don't pay, the guilt sets in. After all, you have been enjoying the product, since you have been using it for months.
It also helped expose some famous musicians during interviews in their studios, where the pirate user name was prominently visible on some VST plugin GUIs seen in the background of the video.
In my experience, most of the people happily using cracked software were either kids, or the hopelessly immoral, and it wasn't worth chasing after either of those categories.
"most people", I agree. Being a broke kid, I recall trying to figure out how to break different schemes. BBS Doors were my main target. I was horrible at it, but some things were learned.
I was hired as a computer assembly tech at a local computer store when young. There was a customer who was frequently delinquent. The boss asked me to write a program to encrypt the hard drive if the customer hadn't paid. This customer ran a business, so I was very nervous about my lack of confidence that their data would not be lost. As a compromise, only filenames and extensions were changed. IIRC, the customer was billed weekly. Once the money was received, a code was given to the customer. This code would update a file on their system. If, after two weeks, no payment was received, the program would recurse through the directories and rename data files with a simple XOR. This would allow files to be recovered manually if needed. I'm not certain, but I think this was done in dbase 2000. It was the only compiler we had access to at the time.
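The reversible XOR rename described above might look like this (a hypothetical Python sketch; the original was dbase, and real code would also have to avoid producing characters that are illegal in filenames):

```python
def xor_name(name: str, key: int = 0x2A) -> str:
    """XOR each character code with a single-byte key.
    Applying the same function twice restores the original,
    so recovery only requires knowing the key."""
    return "".join(chr(ord(c) ^ key) for c in name)

scrambled = xor_name("REPORT.DBF")   # unreadable on disk
restored = xor_name(scrambled)       # back to "REPORT.DBF"
```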
Hopelessly immoral? I've used plenty of cracked software in my younger days and not once did I consider it immoral in the least. Some people just draw a line in the sand in a different place for software.
I just don't care at all about someone's "lost potential revenue". I consider it to be the same as skipping commercials with watching TV, or adblocking the web today.
I think the point is that you don't get to just declare what defines "hopelessly immoral".
An argument could be made that inflicting DRM on everyone who is not a thief, and on all posterity long after any legal or moral copyright has expired while the thing is still encrypted or broken, is itself hopelessly immoral.
It even feels alien to me when someone claims to feel bad for pirating something. Most of the time it's just so much easier. And no please don't start on the usual "it's like stealing from a physical store". It's not. Stealing implies that the owner loses something.
I love cracking these homegrown license schemes, people come up with really interesting stuff and as you say it usually only takes a few hours. Can't crack flex though :(
It can until there's hard hardware chain-of-trust enforcement (like UEFI) with signed images.
Without signed executables, if someone is willing to spend the time to find the exact spots in a disassembler, they can either negate the condition or skip over the license check entirely. That assumes the check happens in only one location, and every upgrade of that particular file would likely need to be re-patched.
The lowest-effort, semi-automated way to find it is to trace an initial failing run and then binary-search by inverting the logic of branch instructions.
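As a toy illustration of inverting a branch: on x86, the short JE (0x74) and JNE (0x75) opcodes differ only in the lowest bit, so each candidate flip is a one-byte patch (a hypothetical sketch; real tools work through a traced list of branch addresses, bisecting until the check inverts):

```python
def flip_branch(code: bytearray, offset: int) -> None:
    """Invert a short conditional jump in place: JE (0x74) <-> JNE (0x75)."""
    op = code[offset]
    if op not in (0x74, 0x75):
        raise ValueError("not a JE/JNE opcode at this offset")
    code[offset] = op ^ 0x01  # the two opcodes differ in the lowest bit

# e.g. a check that jumps to the "unlicensed" path when the test fails:
code = bytearray(b"\x84\xc0\x74\x10")  # test al, al ; je +0x10
flip_branch(code, 2)                   # now: test al, al ; jne +0x10
```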
> It can until there's hard hardware chain-of-trust enforcement (like UEFI) with signed images.
Until they make it so we need corporation or government signatures to run software on "our" machines. Until they finally destroy our computing freedom.
This has been the case for over a decade on smartphones, and corporations like Microsoft are currently hard at work bringing it to traditional computers.
Not at all. Android phones had relatively good user freedom for a long time. You could root them, intercept system calls and return fake data to apps, intercept their network traffic and see what it's phoning home about or even debug the apps themselves. Life was good, the device was mostly ours, we were in control and all these app developers answered to us whether they wanted to or not.
Only now is hardware remote attestation putting an end to all that.
Not really. Some devices can be rooted/flashed; others are not rootable in practice. There's also another vector: SafetyNet. If Google decides it can't trust the device, many apps will stop working, sometimes even things like the Camera app (e.g. https://www.xda-developers.com/bootloader-unlocking-no-longe... ). Bypassing SafetyNet is going to get harder and harder, and a device that doesn't pass SafetyNet is going to become more and more useless (e.g. the McDonald's app will not run without it).
> It can until there's hard hardware chain-of-trust enforcement (like UEFI) with signed images.
And then hackers will graduate to side-channel attacks... it's a game of cat and mouse. The best bet, IMHO, is to make the cost of cracking exceed the value of what's being cracked.
With hardware root trust the side channel attacks look something like breaking into Apple's most secured facilities and ordering multiple senior employees around at gunpoint.
Hardware remote attestation means your local physical device is no more accessible than a distant server. Dumping RAM does not work, tapping the bus does not work, writing your own firmware does not work. It is exactly like an ISP performing a machine-in-the-middle attack on a TLS connection: impossible without some way of obtaining the certificate's private key.
> It can until there's hard hardware chain-of-trust enforcement (like UEFI) with signed images.
That can only happen if Treacherous Computing wins. If you actually have control over the things you own, you'd be able to tell your device to lie about the first stage's signature.
N=1 but back in my broke teenager years I definitely preferred keygens over cracks. With antivirus flagging everything crack or keygen related as a virus, I considered it important to only run the keygen once (in a sandbox/VM) to get the install done. Replacing DLLs and EXEs with a cracked version always came with risks that keygens don't.
If you wanted to be really cautious, you could diff the original and cracked binaries and reverse engineer the difference. We did that on a project for a client once (a major software corporation) and the cracked version of their software literally just had one opcode replaced.
The fellow working on the project with me had a lot of specialized domain experience in such things, and told me the actual cracking teams would never put malware in their own cracks, because it would ruin their reputation in the scene.
I'm sure some cracks have had malware added after release by other people, of course, but I wonder how much of it was fearmongering by software companies.
I completely trust the cracking teams themselves. They're not wasting their talents just to install a keylogger on four random machines halfway across the world.
The problem is, the scene usually isn't about making piracy mainstream. Many release groups only released their cracks within closed circles to show off that they managed to crack a game. The cracks that made it out to the web often came through leaks, hosted on a shady site or distributed through torrents. For an outsider, these cracked files weren't all that trustworthy.
That, together with antivirus software being made completely unreliable because of false positives, made it very hard to trust the .exes/.dlls that I used. Piracy is still a major infection vector today, and I imagine it will always be because of all the fake cracks on SEO hacked websites and shady operations necessary for the original crackers to distribute a crack without copyright lawyers sending you very scary letters.
True in an absolute sense, but making the cracking sufficiently difficult, painful, or risky will at least deter a lot of the people who have the means to pay.
It’s not perfect but it does make a significant difference.
As with physical security, you often don't even need to attack the lock if you can instead compromise the latching mechanism.
Most local software cracking doesn't involve fixing the license key check or the crypto code at all. You just remove the conditional jump that exits the program if the check fails.
Downloading sketchy binaries is one thing; using a generated license key is another. Many users would avoid the first out of fear of downloading a virus (and for good reason!). But using some online keygen, or running a keygen once in a VM, is not a big deal.
IMO the best way to implement a license check is asymmetric crypto (so it's impossible to write a keygen) plus online activation (so it's impossible to share a key).
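A toy sketch of the asymmetric half of that idea: the vendor signs a digest of the customer's identity with a private key, and the shipped binary verifies with the embedded public key. This uses textbook RSA with tiny numbers and a one-byte digest so it fits under the toy modulus -- purely illustrative; a real implementation would use a vetted library (e.g. Ed25519) with proper hashing and padding:

```python
import hashlib

# Tiny textbook RSA key pair (trivially factorable; illustration only).
N, E, D = 3233, 17, 2753  # n = 61 * 53, e*d = 1 mod phi(n)

def sign_license(email: str) -> int:
    """Vendor side: sign a short digest of the customer identity.
    Only the vendor knows the private exponent d."""
    digest = hashlib.sha256(email.encode()).digest()[0]
    return pow(digest, D, N)

def verify_license(email: str, sig: int) -> bool:
    """Client side: verify with the embedded public key (n, e).
    No keygen is possible without factoring n to recover d."""
    digest = hashlib.sha256(email.encode()).digest()[0]
    return pow(sig, E, N) == digest
```

Patching out the call to verify_license() in the binary still defeats this, of course, which is the point of the surrounding thread.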
>IMO the best way to implement a license check is asymmetric crypto (so it's impossible to write a keygen) plus online activation (so it's impossible to share a key).
I believe from my experiences downloading CLASS and MYTH game rips from sketchy IRC channels on DALnet that even this method can be subverted by simply patching out the online activation check.
The only feasible way to combat this that I'm aware of is to have some required code/data live on the company's server and thus force always-online to use the software.
20 years ago, I added a simple licensing feature to our business software: a readable XML file with all the licensed features, their timeframes, and a signature. The license was pretty cool, as it allowed us to do timed feature sets. Of course, our public key was obfuscated in the code, but it wouldn't have been too hard to replace; the goal was just to keep honest customers from easy abuse. The readability of the license key was a great plus. This worked very well for us, and we managed to score some huge customers.
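A hypothetical sketch of such a readable license with timed feature sets (Python, with an HMAC standing in for the real public-key signature described above; the XML layout, key, and names are all illustrative):

```python
import hashlib
import hmac
import xml.etree.ElementTree as ET
from datetime import date

LICENSE = """<license customer="ACME Corp">
  <feature name="reporting" expires="2099-12-31"/>
  <feature name="export" expires="2001-06-30"/>
</license>"""

def signature(doc: str, key: bytes) -> str:
    """Detached signature over the whole license document."""
    return hmac.new(key, doc.encode(), hashlib.sha256).hexdigest()

def feature_enabled(doc: str, sig: str, key: bytes, name: str) -> bool:
    """Reject tampered documents first, then check the feature's timeframe."""
    if not hmac.compare_digest(signature(doc, key), sig):
        return False
    for feat in ET.fromstring(doc).iter("feature"):
        if feat.get("name") == name:
            return date.today() <= date.fromisoformat(feat.get("expires"))
    return False
```

The nice property is exactly what the comment describes: the customer can read what they licensed and until when, while edits to the XML invalidate the signature.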
The part about merkle trees is not readable for me (tested on Firefox and chromium) due to a `color: white` being applied to the corresponding <code> tags.
The article says nothing about protecting the license checks from being disabled. That's the secret sauce in a few companies, they do really interesting work.
Are these supposed to be practical defenses, or a list of defenses the author thinks are valid? Misusing hashes and cryptography to obscure your licensing scheme doesn’t address the fundamental weaknesses that your software is running on someone else’s computer, and you have no control of it once it leaves your servers.
To this day, I wonder if the crack I found in the 90s or early 00s for some kind of forensic software was real. It was a file containing precise measurements for a hole you had to drill into the CD you burned the software on to circumvent the copy protection … It sounds like a joke now, but back then I was impressed.
I do the last one, and I have never found one of my keys on the internet, but I have found cracks where they probably just decompiled the program and made the key-check function return true, or used some other simple bypass.