I'm thinking the current generation of AI (deep CNNs, not symbol-based reasoning) is enough to be a paradigm-changer in the future by allowing us to record empirical knowledge.
Computers are good at recording structured knowledge - who owns what, which merchandise goes where, the kinds of things that have been recorded on stone tablets since time immemorial. But human experts also have a different type of knowledge, empirical, accumulated through years of experience. Artisan crafters "know" how good some material is by sense of touch, smell, sight, without always being able to say WHY or HOW this wood is better than that wood for this table. This is why apprenticeship with a master was a key part of developing an artisan - you'd absorb SOME of this empirical knowledge from a master who took a lifetime to develop it.
Humanity took a huge step forward by moving a lot of empirical knowledge into structured knowledge through the use of (mathematical?) models and books. Instead of an apprenticeship with a mason, a builder now goes to school and learns how to structurally design a building based on construction codes. This allows huge scaling of knowledge, at the expense of missing subtle details which are not modelled (spherical cows in a void, right?).
With this article it just occurred to me that Deep Learning may be the tool to record this type of empirical knowledge. And it's going to scale out - in a way the human version never did - because digital copying is cheap, and the knowledge doesn't die with the artisan who developed it - it only gets more accurate. The models we build of reality will get more and more accurate - the spherical cows will grow legs in air, not in a void.
Humanity will start doing more and more things without understanding why, but because "the computer said so", and things work out when the computer says they will. Black boxes will explode in usage, and God forgive those that may be on the side where The computer says No.
> Humanity will start doing more and more things without understanding why, but because "the computer said so", and things work out when the computer says they will. Black boxes will explode in usage, and God forgive those that may be on the side where The computer says No.
This paragraph scares me. I'm not convinced that technology getting good enough to where it doesn't need to be understood to be used is a good thing in most cases. Especially when it's a black box.
What should scare you is that it's becoming apparent that procedures are being put in place so that when the 'computer says no', the person affected isn't given the information they need to fix the problem they have.
Banking computer closed your account. Why? We can't tell you why.
Google or Apple rejected your app from their app store. Why? They won't tell you why.
Me trying to get my prescription refilled at Walgreens: the Walgreens computer shows they never got it from the doctor. The doctor sent it 5 times. Turned out the computer was marking it invalid each time. The pharmacist isn't allowed to tell either me or my doctor why.
They will not tell you, not out of malice, but because they don't know. It's literally a black box, and we don't understand the whys.
And it doesn't take a deep CNN to become a black box - any sufficiently complex algorithm will do. I doubt anybody in the world understands all the details of how the latest CPU actually works, for example.
This is the Fifth Law of Robotics: A robot shall be able to explain how it reached any decision.
And there is the Fourth Law of Robotics: No robot shall have access to means of unlimited self-reproduction.
> If we can't ask the AI to explain how it came to its conclusion it is not really intelligent, is it?
Will it matter whether the AI is "really" intelligent when companies that implement the black box to spec thrive, while the companies that insist on having a human understand everything fall behind and fail?
We do that all the time with humans! “Having a hunch”, “trusting years of experience”.
Or think of the “uncanny valley” in 3D graphics -- most people can tell when a rendered image looks slightly off somehow, but most people can’t pinpoint the exact problems. “Something about the eyes just isn’t quite right”, etc.
most people can tell when a rendered image looks slightly off somehow, but most people can’t pinpoint the exact problems.
But experienced people can. I've worked at a 3D animation studio and we had this 'old guy' who started his career as a classically trained artist and sculptor. He could look at a character model that we all agreed felt a bit 'off' and point out that the problem was how you'd modeled the muscles connecting the shoulder to the arm. Hell, sometimes he'd look at a model that we all felt was fine and suggest tweaking the bridge of the nose, and that made it much better.
True, but they did not decide how to do it and did not implement it. I would expect the Facebook engineer who implemented it to be able to explain it to me, at least the part which he did.
>Humanity will start doing more and more things without understanding why, but because "the computer said so", and things work out when the computer says they will. Black boxes will explode in usage, and God forgive those that may be on the side where The computer says No.
I don't have empirical evidence, but I think people are overstating the harm of opaque AIs.
On the one hand, opaque bureaucratic processes already exist today. It's not like the bank that refuses you a loan will tell you "oh, but we would totally have granted it if you were 20% richer and also not a woman, so come back after you've achieved that and we'll give you the loan"!
On the other hand, even opaque models can be studied and systematized. You can't run a simulation of a judge on 100 sample cases, then on the same 100 sample cases with names changed to sound like immigrants, and numerically measure the judge's bias. You can do that with a ML model, and compensate in various ways.
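The counterfactual test described above can be sketched in a few lines. Everything here is hypothetical: `score` is a deliberately biased stand-in for an opaque model, and the names and fields are made up purely so the measurement has something to detect.

```python
# Hypothetical sketch: measure name-based bias by scoring the same cases
# twice, with only the name swapped. `score` is a toy stand-in model with
# an injected bias, NOT any real system.

def score(case):
    base = 0.5 + 0.01 * case["prior_offenses"]
    if case["name"] in {"Mohammed", "Aisha"}:
        base += 0.1  # injected bias, so the test below can find it
    return base

cases = [{"name": "John", "prior_offenses": n} for n in range(100)]
swapped = [{**c, "name": "Mohammed"} for c in cases]

# Mean score shift attributable to the name alone.
bias = sum(score(s) - score(c) for c, s in zip(cases, swapped)) / len(cases)
print(f"mean score shift from name swap: {bias:.3f}")
```

The same pairing trick works on a real model because, unlike a human judge, the model can be queried as many times as you like on identical inputs.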
It's not like the bank that refuses you a loan will tell you
That's because they don't want to tell you, not because they don't know. The bank knows exactly why they refused to give you a loan and exactly which series of policy changes led to that outcome. They also know exactly how to change the policy to achieve a different outcome.
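That difference is easy to make concrete. A conventional policy engine can always report exactly which rule fired; the rules below are entirely made up for illustration, not any bank's actual policy.

```python
# Illustrative sketch with invented policy rules: a rule-based decision
# engine can attach an explicit reason to every denial, unlike a learned
# black box.

RULES = [
    ("income below threshold", lambda a: a["income"] < 30_000),
    ("debt-to-income too high", lambda a: a["debt"] / a["income"] > 0.4),
    ("recent default on file", lambda a: a["recent_default"]),
]

def decide(applicant):
    # Collect the name of every rule that triggers; any hit means denial.
    reasons = [name for name, rule in RULES if rule(applicant)]
    return ("denied", reasons) if reasons else ("approved", [])

status, reasons = decide(
    {"income": 50_000, "debt": 25_000, "recent_default": False}
)
print(status, reasons)  # → denied ['debt-to-income too high']
```

So when an institution using rules like these says "we can't tell you why", that's a disclosure choice, not a technical limitation.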
We have been using objects of the kind described in your last paragraph, called compilers, since the 1950s, and with the increasing number of portability-focused high-performance DSLs/frameworks like tensorflow or OneAPI, we are only going further in this direction. But even 70 years after the advent of compilers, there are still people who know how to open up the machine, improve it, and fix it, and there probably always will be.
I don't see how machine learning, at least in its current non-AGI state, will be any different. It's just that your average end-user will have no idea how to "open up the machine", but that's also true for compiler technology today.
There need not be material differences if there are sufficient differences in scale. Of course we already rely on algorithmic black boxes, in fact I'd argue that we have been relying on algorithmic black boxes since before we had computers (we just call them "traditions" instead). But if a technology like neural nets expands the applicability by a sufficient margin, the resulting societal change can be huge.
Your first paragraph is spot on. Some people are hell-bent on developing AGI (and these people should exist), and there are some doing SOTA-chasing by tweaking hyperparameters to the extreme and overfitting the data (these works are _mostly_ useless), but most people do not realize that the amount of AI we have right now, right this day, is enough to bring in fundamental and permanent paradigm changes in as many fields as you can think of.
And you are onto something when you say that "empirical records" can be learned by ML models, and a huge amount of grunt work is required for that. Every Tom, Dick, and Harry can overfit to MNIST today; it used to be hard once. And note the amount of grunt work it took to build MNIST - thousands of human hours. Building datasets does not pay you back instantly, but it has benefits for all. I know some people who are paying money out of their own pockets, creating niche datasets, and making them available under the MIT License.
I hope that more and more companies and people do that for really niche fields.
And I don't completely agree with not understanding the things that DL models do while relying on them. We have started to understand much of it. ML interpretation is a field without much success yet, but I am hopeful. We did not know jack about how very deep CNNs work, but that changed with the Zeiler-Fergus paper[0]. Later with things like Grad-CAM[1]. Now we are trying to understand latent representations fully. We (even I) can create GAN-generated pictures with different hair color, different kinds of glasses, etc. from scratch.
I read no more than two days ago that Microsoft and Peking Uni researchers found a way to identify "knowledge neurons" in unsupervised pretrained embeddings in NLP, and they can edit facts with that [2].
So, I am optimistic about our "interpretation" future.
I am facing this exact problem at work - I have a tool that scans thousands of files and tells me No. Why? I don't know why. The algorithm it performs is split between 10-20 other subtools (microservices, right?), making following the decision tree almost impossible.
What happens when this kind of tool calculates your credit score, social score, or the proceeds of your savings?
> New AI tool calculates materials’ stress and strain based on photos
Surely not - more accurately:
New AI tool intuits materials’ stress and strain based on photos
or
New AI tool guesses materials’ stress and strain based on photos
An experienced engineer could probably also guess roughly what shape the stress and strain gradients might take in a shape, but you wouldn't call such a guess 'calculation'.
Guesses poorly too, based on the examples. The crack growth example is almost laughable. These are nowhere close to where real cracks would form. Cracks start on the inside corner of brittle joints.
The bar stretch is also completely wrong, there are no stress concentrations at the top and bottom; it should be uniform stress or concentrated strain in the center depending on what they're attempting to show.
The question is whether it can get better. Even small steps aggregate to great changes over time.
First CGA monitors were bulky, energy hungry and blurry at the same time. 40 years of continuous development, and I am staring at a nice 4K screen that does not strain my eyes and is substantially more energy efficient.
Thank you for bringing sanity to the comment section. I'm getting really weary of all the "AI" snake oil (I'm a data scientist). There's no way that this will generalize to:
1. materials not in the training set
2. stresses substantially different from those in the training set
So basically, it just memorizes a few patterns and can interpolate between the ones it's seen before. Big deal.
A good rule of thumb for detecting this variety of bullshit is this: given infinite time and resources, would an intelligent enough human be able to perform the task given the input? In this case, the answer is probably no.
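The "memorize and interpolate" point is easy to demonstrate with any flexible fitted model. The toy below (a high-degree polynomial standing in for a neural net; the setup is entirely illustrative) interpolates well inside its training range and falls apart outside it - the analogue of a new material or an unseen stress regime.

```python
import numpy as np

# Toy illustration: fit a flexible model (degree-9 polynomial) to samples
# of sin(x) drawn from [0, 2*pi], then compare its error inside vs.
# outside the training range.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

x_in = np.linspace(0.5, 5.5, 100)               # within the training range
x_out = np.linspace(3 * np.pi, 4 * np.pi, 100)  # well outside it
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
print(f"max error inside: {err_in:.2e}, outside: {err_out:.2e}")
```

Inside the training range the fit is excellent; outside, the error is larger by orders of magnitude, even though the underlying function is as regular as they come.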
This can be useful if you use it to automate review of regular inspection photos - you might be able to get the computer to recognize strain that would be hard to detect without a strain gauge.
True, nothing stops me from wrapping a brick in a T-shirt and fooling the algorithm. Might be useful as a higher-level classifier or bucketing heuristic.
Sorry, but I don't want a black box for stress testing bridges - or any other infrastructure for that matter. Do black boxes have their place in design? Sure. In validation? Absolutely not.
Then don't use it for that! Why are so many naysayers dumping on interesting things? Engineers aren't idiots who are going to randomly rely on unknown techniques. Even traditional FEA is backed up by hand calculations.
It's not even really a black box. It's learned a subset of Abaqus's functionality, so its scope is known.
It doesn't appear to be stress testing anything; it's [edit: removed calculating] estimating where a structure or material is stressed. Stress testing is a physical process, and the article gives no indication that stress testing could or would be replaced by this.
Edit: Further elaboration from the article: product designers, for example, could test the viability of their ideas before passing the project along to an engineering team
This would seem to clearly indicate that the intended application is faster iteration in design, not a replacement for a rigorous engineering process that would include appropriate testing.
True, it can be quite concerning. I know that PDEs have issues too, but they have served very well over the years and will continue to do so. I think there is no shortcut to having a solid understanding of the principles at play.
Agreed. In situations where you have pictures to analyze but no structure, or a greatly damaged structure, it makes sense.
It might even be good to analyze inspection photos for regular inspections to draw attention to situations where there might be more strain than anticipated and a human could be flagged to inspect more closely.
That's not the application. ML accelerated modeling allows for instantaneous iteration. It dramatically speeds up creative engineering and scientific work. You take the final product and run it through a classical simulator as a final QC - though in my experience (in a different domain, but similar principle) the ML model outputs tend to be smooth/continuous and ≈98% MSE accurate. Of course you need to carefully train your models to span the input space, but for finite element/finite difference modeling this is relatively straightforward.
ML model outputs tend to be smooth/continuous and ≈98% MSE accurate
In my experience building and working with such models in similar domains, they tend to be right in all the easy and uninteresting cases where there are no problems and in all the cases where the problem is super obvious, and wrong in all the hard and subtle corner cases that you actually care about.
The other problem is that traditional engineers hate models they don't understand. If you can't explain, at least roughly, why it gave the answer it gave, then they won't trust it and won't use it.
>The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
This is pretty fresh tech, but industry is already using it. We are doing something approximately similar where I work. Instead of running compute intensive finite element/finite difference simulations of physically dependent systems, a neural network (typically something structured like a transformer) is trained to output the calculations up to 6 orders of magnitude faster in our applications.
This allows modeling-dependent science and engineering solutions to be iterated on in real time - you can see the results of your edits as you manipulate your models. And the results, at least in our applications, have some ≈98% MSE accuracy. It isn't surprising in hindsight - deep neural networks are universal function approximators, and finite modeling is as close to pure mathematics as you can get in an industry setting. It feels like a perfect use case for deep neural nets.
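The surrogate idea can be shown end-to-end on a toy problem. This is a minimal sketch under invented assumptions (a 1D Poisson equation and a least-squares surrogate standing in for the transformer), not the commenter's actual system; because the toy solution map happens to be linear in its parameters, even this cheap surrogate reproduces the classical solver almost exactly.

```python
import numpy as np

N = 50  # interior grid points

def fd_solve(left, right, source):
    """Classical finite-difference solver for -u'' = source on (0, 1)
    with boundary values u(0) = left, u(1) = right."""
    h = 1.0 / (N + 1)
    A = (np.diag(np.full(N, 2.0))
         - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2
    b = np.full(N, float(source))
    b[0] += left / h**2    # fold boundary values into the RHS
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

# Training set: sample the 3-parameter input space, run the slow solver.
rng = np.random.default_rng(1)
params = rng.uniform(-1, 1, size=(200, 3))
fields = np.array([fd_solve(*p) for p in params])

# Surrogate: map parameters directly to the solution field. A neural net
# plays this role when the map is nonlinear; least squares suffices here.
X = np.hstack([params, np.ones((len(params), 1))])
W, *_ = np.linalg.lstsq(X, fields, rcond=None)

test_p = np.array([0.3, -0.7, 0.5])
pred = np.append(test_p, 1.0) @ W   # instant, no linear solve needed
exact = fd_solve(*test_p)
mse = float(np.mean((pred - exact) ** 2))
print(f"surrogate MSE vs. classical solver: {mse:.2e}")
```

The surrogate answers with one matrix-vector product instead of a full solve, which is where the orders-of-magnitude speedup in iteration comes from; the final design still goes through the classical solver as QC.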