wordpad's comments | Hacker News

Some places will just ask you to make up the hours by not working some other days, but you're still expected to complete the same work.

Reminds me of unlimited vacation policies. Great on paper.


He is making the point that something extremely powerful can be simple and obvious. Importing libraries is an obvious way to manage code complexity and dependencies.

Skills do that for prompts.


How are vibe coding platforms solving this?

As far as I can tell, they aren't.

> If your proposal doesn't align with leadership vision or the product they want to grow...

Well, you factor that in too? And be willing to change focus if that's the feedback.

In my experience (I make tools for the network and security guys), that's why you don't propose only one thing. We often have one new project a year: we propose multiple ways to go about it, leadership asks us to explore 2-3 solutions, we come back with data and propose our preferred one, and leadership says 'ok' (after a very technical two-hour meeting) and proposes minor alterations (or sometimes they want to alter our database design to make it 'closer' to the user experience...).

This can still be okay, but you have to be correct in a way that the company values. It also has to be done without working against the rest of the company: breaking the law or sabotaging some other product are both out. What the company values is most commonly money, but there are other things it values at times.

Just like the .com bust that followed companies going online, there is hype, but there is also real value.

Even slow, non-tech legacy industry companies are deploying chatbots across every department: HR, operations, IT, customer support. Their leadership is already planning to cut 50-90% of staff from most departments over the next decade. It matters because these initiatives are receiving internal funding, which will flow out to AI companies to deploy and scale this tech.


The "legacy" industry companies are not immune to hype. Some of those AI initiatives will provide some value, but most of them look like complete flops: trying to deploy a solution without any idea of what the problem or product is yet.

Right, but this is consumer-side hype.

Even if AI is vaporware, mostly hype and little value, it will take a while for the hype to fizzle out, and by then AI might start delivering on its promise.

They've got a long runway.


> Even slow non-tech legacy industry companies are deploying chatbots across every department - HR, operations, IT, customer support

Yes, and customers fucking hate it. They want to talk to a person on the damn phone.


I've not met anyone who doesn't just increment a digit at the end every 6 months.

And any password length requirement beyond 8 characters just ends up being a logical extension of the 8-character password (like putting 1234 at the end); if 16 characters are required, people just type their standard password in twice.

If any of the old passwords (potentially from unrelated applications) gets leaked, it's almost trivial to guess the current one.
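To make that concrete, here's a toy sketch (the leaked password and the mutation rules are made up for illustration, just mirroring the patterns described above) of how few guesses those personal patterns leave an attacker:

    # Toy illustration: one hypothetical leaked password plus the common
    # "personal pattern" mutations from above yields a tiny guess list.
    def candidate_guesses(leaked: str) -> list[str]:
        guesses = []
        # The every-6-months bump: increment a trailing digit.
        if leaked and leaked[-1].isdigit():
            head, last = leaked[:-1], int(leaked[-1])
            guesses += [head + str((last + bump) % 10) for bump in range(1, 10)]
        # Meeting longer length requirements the lazy way.
        guesses.append(leaked + "1234")
        guesses.append(leaked * 2)
        return guesses

    print(candidate_guesses("Summer2023!4"))  # hypothetical leaked value

That's roughly a dozen candidates to try, versus the astronomical keyspace the complexity rules were supposed to guarantee.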


Yeah, that's kinda my point: increasing the complexity requirements counter-intuitively reduces, or at least doesn't change, the actual level of security provided.

It's a wetware limitation. It's not that we don't have methods that could improve it; they're just not yet implemented at this specific point of contact, interestingly.


And you don't think a dozen basically-scam ventures built around the technology justify extreme scepticism?


huh?


This works because the problem could be broken down into a prompt that rarely hallucinates.

Most real-world prompts can't be reduced to something so consistent and reliable.

Their key finding was that the number of votes grows linearly with the number of prompts you're trying to chain.

However, the issue is that the number of votes you need grows exponentially with the hallucination rate.
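As a rough back-of-the-envelope for that second point (a toy model assuming independent samples and a plain majority vote, not the paper's actual setup), finding the smallest odd number of votes that keeps the majority's error under 1% shows how fast the required count blows up as the per-sample hallucination rate rises:

    # Toy model: n independent samples, each wrong ("hallucinating") with
    # probability p; how large must n be for a plain majority vote to be
    # wrong with probability <= 1%? Not the paper's setup, just intuition.
    from math import comb

    def p_majority_wrong(n: int, p: float) -> float:
        need = (n + 1) // 2  # wrong answers needed to win the vote (n odd)
        return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))

    def votes_needed(p: float, delta: float = 0.01) -> int:
        n = 1
        while p_majority_wrong(n, p) > delta:
            n += 2  # keep n odd to avoid ties
        return n

    for p in (0.05, 0.10, 0.20, 0.30, 0.40, 0.45):
        print(f"hallucination rate {p:.2f} -> {votes_needed(p)} votes")

In this toy model a 5% hallucination rate needs only a handful of votes, while rates approaching 50% need hundreds and eventually diverge, which is the intuition behind voting becoming impractical for noisier prompts.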


Heck yes, we want it; can't imagine browsing without it.

It's like going from YouTube to TikTok: for most of the content we consume, you could cut 90% of it without losing anything of value.


Well, maybe you lose your ability to focus, or your sanity, or your privacy, but yeah.


"I want to liquefy my brain on short form video" is quite a take.


So you can get 10x the brain damage in the same amount of time. That's not a great case for it.


Why? The physics of large discrete objects (such as a robot) isn't very complicated.

I thought it was fast, accurate OCR that was holding everything back.


The problem becomes complicated once the large discrete objects are not actuated. Even worse if the large discrete objects are not consistently observable because of occlusions or other sensor limitations. And almost impossible if the large discrete objects are actuated by other agents with potentially adversarial goals.

Self-driving cars, an application in which the physics is simple and arguably two-dimensional, have taken more than a decade to get to a deployable solution.

