Why dump billions of dollars then? Nowhere else to spend it? Effective marketing?[1] Is no one asking this question?
"... and critically: there's no one to hold responsible for getting it wrong."
Could this be part of "AI"'s appeal? A dream of absolving businesses and individuals from accountability.[2]
1. "What's more, artificial research teams lack an awareness of the specific business processes and tasks that could be automated in the first place. Researchers would need to develop an intuition of the business processes involved. We haven't seen this happen in too many areas."
2. Including the ones who designed the "AI" system.
> Why dump billions of dollars then? Nowhere else to spend it? Effective marketing? Is no one asking this question?
Because whoever does achieve the next unlock - should it happen - will receive an unimaginably large windfall. This is the classic intent of venture capital. In fact, I'd suggest that AI is one industry where VC is doing exactly what it does best: taking extremely risky bets with a large potential upside.
> Could this be part of "AI"'s appeal? A dream of absolving businesses and individuals from accountability.
Presently, this seems to be one of its biggest drawbacks, not part of its appeal. If I have an employee do something stupid, I can say that an employee did something stupid. People might wonder why they were allowed to do that stupid thing, and what we're going to do to prevent it from happening again, but the explanation of the source is satisfactory. We're fallible, and we understand the fallibility of others (generally speaking).
AI is not like that at all. If my automation does something stupid, I still carry all the blame, yet I have no one to pass it off to. "We don't understand why our AI did this really stupid thing" is, frankly, not a satisfying response (nor should it be). Businesses employing AI are certainly not absolved of accountability; they're arguably exposed to more of it, since they can't pass the blame on to another fallible human and must take direct accountability for a system they built but don't fully understand.