Is anyone else finding their company is asking teams to “insert ai everywhere any way you can”?
That’s a sign of a problem, imho. The hype is so high that the directive is to use AI everywhere regardless of fit. I’m a believer in AI, but shoehorning it into everything because that currently boosts stock prices seems insane.
* blocking every known LLM URL for fear of leaking information to it
* not wanting to hire expensive data scientists for any in-house development
I even asked an Engineering Manager at Meta how much their own team uses Llama day to day to multiply their productivity. The answer: they don't use it at all, and they weren't aware of any internal tooling that uses it for work.
This kind of fits the narrative of some of the Mag7 earnings calls, where they more or less say "we aren't sure where the revenue will ever come from... but it's a game-theory-style arms race where we can't afford to NOT be there if someone figures out how to make revenue in the space".
So the big guys are buying GPUs, building out datacenters, developing and training models, etc... just in case.
Maybe LLMs will change some niche dramatically, maybe it will reshape society, or maybe nothing.
More "revolutionary" developments end up like crypto, voice assistants, IoT, and smart homes than end up like smartphones, the web, or the PC.
The only case I can think of where AI will be a clearly positive revolution is self-driving cars; revolutionizing transport will have huge implications. The next thing would be robots, but that's just making people lazy in the household and replacing jobs. Crypto, voice assistants, IoT, and smart homes are bad examples, as these still have a good chance to grow; they will probably replace smartphones the way smartphones did PCs.
I semi-purposely left out self-driving cars; the revolution they have provided/are providing/will provide again... remains to be seen. Waymo is nearly a 20-year-old project at this point and is seemingly quite great in 2 cities, serving ~1% of the US population. Those cities also happen to be in warm climates, so there's a whole slew of environments / "edge cases" they just don't have to deal with. Maybe 20 more years?
So it's both outperformed what pessimists might have said (never work) and vastly underperformed what the median enthusiast projected (it's always just a few years away). I'd wager we are still teaching teens how to drive even 20 years from now.
LLMs are ~7 years old, so maybe another decade until they're useful, if we go by the self-driving-car learning rate?
Meanwhile at Google, 50% of code characters are from LLM autocomplete: https://research.google/blog/ai-in-software-engineering-at-g... That's a little disconcerting; maybe I need to up my code-review game. Also, I don't personally use them at all. Am I really missing out? Sometimes I wonder.
Yes. I was recently on an internal project that wanted to use LLMs in a way that was appropriate: to evaluate whether changes between two versions of a text were semantically meaningful. Limited to that scope, it would have been a really valuable tool.
We had a directive from management, for political reasons, to use AI in the tool as much as possible to show how innovative and forward-thinking the company is. That led to a bunch of poorly-thought-out choices, and while the project is in production and has internal users... I don't think it was particularly successful.
Not all of that is due to the "use AI" directive; there were also poor technology and deployment-stack choices that made things overly complicated and cost us a bunch of time.
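For what it's worth, the core of that semantic-diff use case is simple: embed both versions, compare the embeddings, and flag edits whose similarity drops below a threshold. Here's a minimal sketch of that logic; the `embed` function, the `0.9` threshold, and the function names are my own illustrative assumptions, with a bag-of-words count vector standing in for a real LLM embedding API:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words count vector.
    A production version would call an LLM embedding API here instead."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_meaningful_change(old, new, threshold=0.9):
    """Flag an edit as semantically meaningful when similarity drops below threshold."""
    return cosine_similarity(embed(old), embed(new)) < threshold

# A filler-word tweak barely moves the vector; swapping content words moves it a lot.
print(is_meaningful_change("the report is due friday", "the report is due on friday"))  # False
print(is_meaningful_change("the report is due friday", "the report was cancelled"))     # True
```

The bag-of-words stand-in obviously can't tell "not due friday" from "due friday"; that's exactly the gap a real LLM embedding (or an LLM asked to judge the diff directly) would close.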
A common issue when new tech comes out: the people who know the tech, but not their company's business, focus on the tech. Many of them will get promotions and make their way up in the company. The company will likely lose millions, and the person will leave to damage another company. If the company is lucky, someone who takes the time to understand the company's customers will come in, throw away what their predecessor built, and solve some real problems. If the company is unlucky, it will double down on the over-complicated solutions and lose to a startup that ignored the sexy stuff and focused on the customer problem.
I work at a financial services company that is quite behind its peers tech-wise, and watching the internal politics of AI has been fascinating.
Management seems to see this as their opportunity to catch up on the cheap.
Rather than having modern tech systems and a properly staffed engineering department, let's just, uh... have non-technical people do AI hackathons! Also, instead of automating Excel-jockey jobs with server-side data pipelines, what if we... you guessed it... gave the Excel jockeys AI!
I can imagine this playing out in a lot of industries where the underdogs think it's a shortcut, and yet...
This is happening at my company. Thankfully I'm senior enough to push back on most of the requests to add AI as I still haven't found a good use case for it in our product.
"There's a new technology out there that makes new things possible; let's explore whether it makes sense to integrate it into what we're doing." Not only is that the correct attitude, in the long run it's the only attitude that keeps companies alive, assuming they have exposure to tech.
See the internet/web revolution, the mobile revolution, etc.
I recently had the opposite, where the CEO of the startup was killing almost all ideas of adding AI to their products. Perhaps AI is just polarizing. Some companies are jumping at the opportunity. But older startups like the one I was working for are being ultra-conservative with spend and are maintaining a wait-and-see attitude.
That's expected behavior with any new wave, right? I've seen the same with microservices, ORMs, SPAs, etc. ("use this hammer any way you can!") and with product trends (crypto, mobile apps, SoLoMo, etc.). It's normal. Companies live and die on how well they surf hype cycles.
Yes. No one learned any lessons the last time around with "put everything on the blockchain".
Or maybe they did learn you can make a profit off of hype alone, but it's not making the end user or anyone else's life better as a result. Who cares - line goes up, people get promotions.