
Rewind your mind to 2019 and imagine reading a post that said

“The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student.”

with regard to interacting with the equivalent of Alexa. That's a remarkable difference in five years.



The first profession AI seems on track to decimate is programming, in particular the brilliant-but-remote individual contributor. There is an obvious conflict of interest in this forum.


I see this theory a lot, but mostly from people who haven't tried pair coding with a quality LLM. In fact, these LLMs give experienced developers superpowers; you can be crazy productive with them.

If you think we are close to the maximum useful software in the world already, then maybe. I do not believe that. Seeing software production and time costs drop one to two orders of magnitude means we will have very different viable software production processes. I don’t believe for a second that it disenfranchises quality thinkers; it empowers them.


I totally agree. There are, for example, so many companies out there that rely on fully manual internal processes simply because they cannot currently afford to hire programmers to solve their problems for them. The ROI just isn't there.

Reduce costs by an order of magnitude or two, and suddenly there's a whole heap more projects that become profitable.


I abandoned 3D art after witnessing DALL-E 2's capabilities, and I've observed the ripple effects across creative fields. Initially, photographers and fellow artists dismissed AI as a non-threat. That turned out to be misguided optimism. Now, with Midjourney producing such impressive work, the majority of us have become largely obsolete. These days, I'm noticing developers exhibiting the same denial. From my perspective, they're on a similar trajectory. This AI revolution is impacting creative and technical industries far more rapidly and dramatically than most anticipated.


I was a paid wedding photographer in the 1990s, using a Rolleiflex TLR with 120 roll film. I recently attended a friend's wedding and took along a Fuji GFX100-series camera, effortlessly shooting pictures I could never have taken with the Rollei: shots from terrible angles, at like 5x the resolution, with far, far more dynamic range than 120 film ever had.

30 years after I gave up the Rollei, I'm not obsolete as a photographer, and when there's a quality diffusion model that could take a few of my photos from the event at 100 megapixels, and get prompted by me as to what I want to see out of them creatively, I will still not be obsolete, even as a photographer, but most certainly not obsolete as an artist. In fact, I'll have more tools available for my art, with new skills needed, and different workflows.

As to abandoning 3D art -- your call. If you love it, why not see how these new tools open up your art? If you don't love some of the new tools, no problem, don't use them. I still shoot medium format film sometimes. If you were planning on a long-term creative career without staying on top of technical advances in your field, that hasn't been possible for at least a few centuries.


Sure, Midjourney's work looks visually impressive, but have you seen evidence that it really is displacing professional 3D artists?

Are legitimate companies genuinely switching to Midjourney over hiring artists now, or is Midjourney usage still mostly happening in places that previously wouldn't have commissioned custom illustrations at all (instead using things like stock photography)?


I used to be an illustrator and I know from speaking to my former colleagues that they have, in fact, lost work to AI image generation services. Illustration is seen as a cost center by anyone higher than front line art directors and taste normally stops at that level as well. I think this will eventually end up with a bimodal distribution of undifferentiated AI slop and those who use high-quality human illustration to signal a commitment to taste, design, or maybe even luxury, but the economic consequences of that shift are already in motion.


It raises the bar of 'professional 3D artist'.

There are hundreds of thousands of "3D workers" working behind the scenes to create 3D models for makeshift ads, and as far as I know many of them (including my high school mate) have already been displaced by Midjourney and lost their jobs. This used to be a big industry, but it is now almost entirely wiped out by AI.


> This used to be a big industry, but it is now almost entirely wiped out by AI.

To my knowledge, 3D art wasn't that huge an industry to begin with. One of my friends went to college to research 3D physics models and never landed a job in the field, long before the AI wave hit. Unless you're a freelancer or a salaried Pixar employee, being a 3D artist is extremely difficult, with extraordinarily low job security, AI or no AI.

I think "almost entirely wiped out by AI" is hyperbole, because the primary employers of these artists will still be hiring, and products like Sora are a good decade away from Toy Story quality. AI will be a substitute product for people who didn't even want 3D art in the first place.


Before it can replace the brilliant programmer, it needs to be able to replace the mediocre programmer. There is so much programming and other tech/it related work that businesses or people want, but can't justify paying even low tech salaries in America for.

So far, there is little chance of a non-technical person developing a technical solution to their problems using AI.


> Before it can replace the brilliant programmer, it needs to be able to replace the mediocre programmer

Nope. Compensation is exponential. Being able to replace a top performer with a few mediocre devs pair coding with an LLM is more than fine for 90% of use cases.


A mediocre programmer won't be able to judge the allegedly expert level output any better than a non-programmer, so I don't see how that would work.

I think it is more likely that great programmers might just increase their productivity even more with these tools, which will make their value even greater.


> mediocre programmer won't be able to judge the allegedly expert level output any better than a non-programmer, so I don't see how that would work

Sure. Plenty of businesses are doing it anyway, particularly in the commercial automation sector, which numerically hires the most people.

> more likely that great programmers might just increase their productivity

For those in high-productivity, high-margin businesses, yes. For most of the world, no—the surplus productivity doesn’t outweigh the compensation and concentration risk.

I broadly expect a spate of age-discrimination lawsuits in the near future, because most businesses don't need a few stars. In the meantime, I've watched a lot of people find that two people in Brazil plus an LLM equals one very good (but not brilliant) WFH coder.


This makes no sense; there are problems that 'brilliant' programmers can solve and that no number of mediocre ones can. Just like you can't substitute Mozart with 100 mediocre composers.


> there are problems that 'brilliant' programmers can solve and that no number of mediocre ones can

These people will continue to have value. But most businesses don't have problems that can be profitably solved only by brilliant coders.


So, how many people listen to Mozart and how many to Taylor Swift?


Now, or in 300 years?


> Just like you can't substitute Mozart with 100 mediocre composers.

Commercially, you can. After all, that's the current music business.


I would expect a top performer with LLM access to be able to produce even more of a multiple of the work of a mediocre developer with LLM access.

If a top performer can produce 5x or more of the value, I would expect companies to continue to value top performers.


The programmers who will find LLMs most useful are going to be those who prior to LLMs were copying and pasting from Stack Overflow, and asking questions online about everything they were doing - tasks that LLMs have precisely replaced (it has now memorized all that boilerplate code, consensus answers, and API usage examples).

The developers who will find LLMs the least useful are the "brilliant" ones who never found any utility in any of that stuff, partly because they are not reinventing the wheel for the 1000th time, but instead addressing more challenging and novel problems.


It's very much the opposite.

LLMs free me from the nuts and bolts of the "how": for example, I don't have to manually type out a loop. I just write a comment and the loop magically appears. Sometimes I don't have to prompt it at all.
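
For example (a hypothetical TypeScript illustration; the types and data below are made up, not the poster's code), the kind of comment-to-loop completion being described looks roughly like this:

    interface Order {
      placedAt: Date;
      lineTotal: number;
    }

    const orders: Order[] = [
      { placedAt: new Date(Date.now() - 5 * 86_400_000), lineTotal: 40 },  // 5 days ago
      { placedAt: new Date(Date.now() - 45 * 86_400_000), lineTotal: 25 }, // 45 days ago
    ];

    // You type only the next comment; the assistant suggests the loop under it.
    // Sum the line totals for every order placed in the last 30 days.
    const cutoff = Date.now() - 30 * 86_400_000;
    let total = 0;
    for (const order of orders) {
      if (order.placedAt.getTime() >= cutoff) {
        total += order.lineTotal;
      }
    }
    console.log(total); // 40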

With my brain freed from the drudgery of everyday programming, I have more mental cycles to dedicate to higher concerns such as overall architecture, and I'm just way more productive.

For experienced programmers this is a godsend.

Less experienced developers lack the ability to mentally "see" how software should be architected in a way that balances the concerns, so writing a loop a bit faster isn't as much of an advantage. They also lack the reflexes to instantly decide whether generated code is correct or incorrect.

LLMs are limited by the user's decision speed, the LLM generates code for you but you have to decide whether to accept or reject. If it takes me 1 second to decide to accept code that would have taken me 10 seconds to physically type, then I'm saving 9 seconds, which really adds up. For a junior developer, LLMs may give negative productivity if it takes them longer to decide if the LLM's version is correct than it would have taken them to type whatever they were going to write in the first place.


> LLMs are limited by the user's decision speed

This is obviously the critical point. It's not whether the LLM can do something (you can always give it a go), but whether that actually saves you time. If it takes longer to verify the LLM's code for correctness than to write it yourself, then there is no productivity gain.

I guess this partly also hinges on how much you care about correctness beyond "does it seem to work". For a prototype maybe that's enough, but for work use you probably should check for API "contractual correctness", corner cases, vulnerabilities, etc., or anything that you didn't explicitly specify to the LLM (or even if you did!). If you are writing the code yourself then these multifaceted requirements are all in your head, but with the LLM you'll need to spell them all out (or iterate and refine), and it may well have been faster just to code it yourself (cf. working with an intern with negative productivity).

If you fail to review the LLM's code thoroughly enough and leave bugs in it to be discovered later, maybe in production, then the cost of that, in both time and money, will far outweigh any saving over just having written it correctly yourself in the first place. Again, this is more of a concern for production code than for hobbyist or prototype stuff, but having to fix bugs is always slower than getting it right the first time.

For myself, it seems that for anything complex it's always the design that takes time, not the coding, and the coding in the end (once the detailed design has been worked out) just comes down to straightforward methods and functions that are mostly simple to get right first time. What would be useful, but of course does not yet exist, would be an AGI peer programmer that operated more like a human than a language model, who I could discuss the requirements and design with, and then maybe delegate the coding to as well.


I like to think I'm more of a "challenging and novel problems" developer than a "copy and paste from Stack Overflow" developer, and I've been finding LLMs extremely useful for over two years at this point.

Notes: https://simonwillison.net/tags/ai-assisted-programming/


Yeah, I was gonna say this is not how I see this going. The copy/paste dev is replaced by the novel dev using LLM for the stuff they used to hire interns and juniors for.

In law, this sort of thing already happened with the rise of better research tools. The work L1s used to do a generation ago just does not exist now. An attorney with experience gets the results faster on their own now. With all the pipeline and QoL issues that go with that.


That makes some sense, but seems to be answering a different question of whose jobs may be in jeopardy from LLMs, as opposed to who might currently find them useful.

Note though that not all companies see it this way - the telecom I work at is hoping to replace senior onshore developers with junior offshore ones leveraging "GenAI"! I agree that the opposite makes more sense - the seniors are needed, and it's the juniors whose work may be more within reach of LLMs.

I really can't see junior developer positions wholesale disappearing though - more likely them just leveraging LLM/AI-enhanced dev tools to be more productive. Maybe in some companies where there are lots of junior developers they may (due to increased productivity) need fewer in the future, but the productivity gains to be had at this point seem questionable ... as another poster commented, the output of an LLM is only as useful as the skill of the person reviewing it for correctness.


I think we all assume each individual company will need fewer developers to do the same work they're doing now. The question is do they have fewer devs or do more work. And if it is have fewer devs, will that open up the door for more small companies to be competitive as well, since they need fewer devs and have less competition for talent from people with deep pockets.

I find a lot of the AI discussion seems to land in the "lump of labor" fallacy camp though.


I am a skeptic. What would you say would be the easiest way for me to change my mind?


How many of the latter type are there? In my 15 years of experience I would say 95%+ of all developers belong to your first category.


95% sounds way high, but maybe I'm wrong. I think it's part generational - old school programmers are used to having to develop algorithms/etc from scratch, and the younger generation seem to have been taught in school to be more system integrators assembling solutions out of cut and paste code and relying on APIs to get stuff done (with limited capability to DIY if such an API does not exist).

But not all younger programmers can be Stack Overflow cut-n-pasters, because not all (and surely not 95%!) programming jobs are amenable to that approach. There are lots of jobs where people are developing novel solutions, interacting with proprietary or uncommon hardware and software, etc, where the solution does not exist on Stack Overflow (and by extension not in an LLM trained on Stack Overflow).


No, the first profession AI was on track to decimate was artists, but that didn’t really happen.

AI just destroyed Shutterstock.


The large majority of professional writers and artists produce thankless commodity output for things like TV advertisements, games, SEO content. These jobs should be threatened.


They get paid pretty low wages so it's not even clear that AIs will be cheaper. Consider also that you still need a human to evaluate their output, make adjustments, etc.


Freelance writers are having a hard time:

https://www.reddit.com/r/freelanceWriters/comments/12ff5mw/i...

https://www.reddit.com/r/freelanceWriters/comments/17zms9f/w...

> "It pretty much has killed most small jobs in writing."

> "entry-level writing jobs have ceased to exist."

... There isn't an infinite amount of demand for commodity writing/art/music/vfx, and AI inference is pretty cheap and rapidly getting cheaper.


Is most code being written the equivalent of high-art or Shutterstock?


I think most code being written is like a custom car made out of the most cost effective parts available.

Not pretty, but it gets the job done for the specific use cases of a given business.

Real production code doesn't have a Shutterstock equivalent.

If you think most code is stock, then you just haven’t had enough experience in industry yet.


I actually like that analogy. It's somewhere in between. Enough that LLMs can help in many ways, but the current models are still far away from doing everything.


Yeah, they’re not useless, but I don’t really see them replacing the profession of programming.

Just another tool in the kit.


I believe LLMs decimating the role of the software engineer requires AGI, and the second that happens, all jobs get decimated.

What it may do is change the job requirements. Web/JS decimated (reduced by 90% or more) MFC C++ jobs, after all.

The programmer doesn't just write Python. That is the how... not the what.


It's going to be incredible watching you people write way more code than you can feasibly maintain.


Once we have AI-based language servers that can track entire repositories, which will happen at some point in the future, I think maintaining projects will actually be far easier than it is right now.


The second it doesn't work, you'll be like, "my time is too valuable to invest in debugging this, I need a nerd to delegate this to."


The conflict of interest might have something to do with the fact that OpenAI's CEO/founder was once a major figure in Y Combinator. But I think you wanted to insinuate that the conflict of interest ran in the other direction.

Once ChatGPT can even come close to replacing a junior engineer, you can retry your claim. The progression of the tech underlying ChatGPT will be sub-linear.


The current driving force of AI is the desire to cut costs. Jobs will be cut even if ChatGPT is nowhere near a junior engineer and that's the problem.


Care to elaborate on the second sentence with any proof?


I would be more inclined to believe that if any superior software were primarily being designed by AI.


I doubt it. It can do some impressive stuff for sure, but I very rarely get a perfectly working answer out of ChatGPT. Don't get me wrong, it's often extremely useful as a starting point and time saver, but it clearly isn't close to replacing anyone vaguely competent.


The important point is, I feel, that most people are not even at the level of intelligence of "a mediocre, but not completely incompetent, graduate student." A mediocre graduate science student, especially of the sort who graduates and doesn't quit, is a very impressive individual compared to the rest of us.

For "us", having such a level of intelligence available as an assistant throughout the day is a massive life upgrade, if we can just afford more tokens.


My sheer productivity boost from these models is miraculous. It's like upgrading from a text editor to a powerful IDE. I've saved a mountain of hours just by removing tedious time sinks -- one-off language syntax, remembering patterns for some framework, migrating code, etc. And this boost applies to nearly all of my knowledge work.

Then I see contrarians claiming that LLMs are literally never useful for anyone, and I get "don't believe your lying eyes" vibes. At this point, such sentiments feel either willfully ignorant, or said in bad faith. It's wild.


> At this point, such sentiments feel either willfully ignorant, or said in bad faith.

I feel exactly the same, but in the opposite direction.

As someone who's been programming for 17 years and working professionally for 10, I'm unable to get any huge productivity boosts from AI tools. They're better than Google + Stack Overflow for asking random questions in a specific context, and they're good for repetitive, but not identical, syntax. That's about where the gains end for me.

Maybe at this point I'm just that fast at looking up documentation. Maybe the languages/problems I'm facing aren't well represented in the training data, but I just don't see this amazing advancement.

I’d really love to see, live, someone programming who really gets these big productivity gains.


Right: in my experience, the time it takes to verify that the code it wrote for you is correct is more than it would take to just write it in the first place. A big exception is if you're working in a new domain (e.g., a new language or framework). Then it's obviously much faster, and I do derive value from it. But I don't spend a very large % of my time doing that.

I would speculate it's a productivity boost for programmers specifically working in areas that they are new to (or haven't really mastered yet). One question I have is whether overly relying on LLMs will reduce the ability to master a domain, and thus hurt your long-term skill. It might seem silly, like complaining that no one knows assembly anymore because of compilers, but I think it's different than just another layer of abstraction.


Most of the gains come from using Copilot; do you use that?


I have it and tried it for a while. I have it turned mostly off now, except for rare boilerplate-heavy cases.

It kept generating annoyingly wrong code: things with subtly wrong, misleading names; missing edge cases; ignoring immediate same-file context; etc. I found that it slowed me down, so I turned it off.


This is my exact experience as well.


Which language?


Same experience, but with TypeScript and Go. They gave me a 60-day trial (IIRC), I used it for two days, disabled it for the next 58 days, and after that removed it from the editor.


I get really good results with TypeScript and Python. Like it knows exactly what I want to do, I feel like I think exactly as Copilot does. Maybe I am the statistical average...

Makes me wonder if people who don't like Copilot output will not like my natural output as well.


Feel free not to share, I don’t want you to get dogpiled, but if you would humor me,

Could you share any code on GitHub (or pastebin or whatever) that you wrote with the help of AI?

Or could you share what kind of experience you have with programming (how many years, what domain you work in, etc)


The projects I do are mostly frontend in React and backend with TypeScript/Node.js.

I have around 10+ years of professional experience, although I did on-and-off hobby coding before that, starting about 15 years ago.

It's mostly API endpoints, calling a database, third party APIs, data transformation, aggregation type of things.

Then either UI according to what designers provide or whatever I want to do for my side projects.

It's of course a wildly bigger productivity multiplier for side projects, since then it's mostly about typing things out: you know exactly what you want to do, and being a little off doesn't matter.

I don't want to share any of my actual code right now, but one example is a React component that needs to fetch some data, e.g. using @tanstack/react-query. It then does the loading-handling and error-handling boilerplate for me, some of which I change to what I specifically need for that situation, but I need very few keystrokes to get the initial boilerplate out, and during edits it also gives me decent suggestions. And it will create the component prop types based on the args I pass to the component, etc.
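
Something like this minimal sketch (assuming @tanstack/react-query v5; the endpoint, prop names, and types here are hypothetical, not the poster's actual code):

    import { useQuery } from "@tanstack/react-query";

    interface User {
      id: string;
      name: string;
    }

    interface UserListProps {
      teamId: string;
    }

    export function UserList({ teamId }: UserListProps) {
      // The queryKey includes the prop, so the data refetches when it changes.
      const { data, isLoading, error } = useQuery({
        queryKey: ["users", teamId],
        queryFn: async (): Promise<User[]> => {
          const res = await fetch(`/api/teams/${teamId}/users`);
          if (!res.ok) throw new Error(`Request failed: ${res.status}`);
          return res.json();
        },
      });

      // The loading/error boilerplate described above.
      if (isLoading) return <p>Loading...</p>;
      if (error) return <p>Something went wrong.</p>;

      return (
        <ul>
          {data?.map((user) => (
            <li key={user.id}>{user.name}</li>
          ))}
        </ul>
      );
    }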

Then with backend, it's really good at data transformations, e.g. combining different datasets, reducing, etc.
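
For instance (a toy sketch; the dataset names and shapes are hypothetical), the kind of join-and-reduce it is good at autocompleting:

    interface Customer {
      id: string;
      region: string;
    }

    interface Order {
      customerId: string;
      amount: number;
    }

    const customers: Customer[] = [
      { id: "c1", region: "EU" },
      { id: "c2", region: "US" },
    ];

    const orders: Order[] = [
      { customerId: "c1", amount: 10 },
      { customerId: "c2", amount: 20 },
      { customerId: "c1", amount: 5 },
    ];

    // Join the two datasets, then aggregate revenue per region.
    const regionById = new Map<string, string>();
    for (const c of customers) regionById.set(c.id, c.region);

    const revenueByRegion = orders.reduce<Record<string, number>>((acc, o) => {
      const region = regionById.get(o.customerId) ?? "unknown";
      acc[region] = (acc[region] ?? 0) + o.amount;
      return acc;
    }, {});

    console.log(revenueByRegion); // { EU: 15, US: 20 }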

How well it picks the correct libraries and patterns depends on the project and, I think, on how much I've navigated around. I'm not fully sure how the context is passed exactly, so usually I will feel it out and adapt code where necessary.


Yes, I find Copilot is nice for things like TanStack Query. It's like better snippets.

At my job we have this pretty clean SOA-type architecture backed by a MongoDB database. Copilot has trouble building the more complicated, domain-specific queries on its own, I've found.

I do occasionally ask ChatGPT how to write a certain query in the general case and apply that to what I'm writing. I also don't really like mongosh's docs.


Hi there - I'm a PM at MongoDB that works on the MongoDB Shell. I'm curious to hear your thoughts on the issues you're currently facing with mongosh docs and how we could make them better for you. Thanks for taking the time to leave feedback!


Golang and Python, mainly.

For Rust it failed spectacularly. So bad that it's not worth discussing, lol.


I tried it for a while and thought it was helping a lot. Then I happened to use an IDE without it and realized it was increasing my rate of syntax tokens per hour but reducing the rate of features implemented per hour. In particular I was constantly rewriting boilerplate instead of ever writing helper functions.


I use it for those refactors I mentioned in my comment.

It’s autocomplete++, except without knowledge of the rest of my codebase.


I tried it. It ended up just being slightly better, significantly slower autocomplete.


> I see contrarians claiming that LLMs are literally never useful for anyone

While I don't doubt that there's at least one person that has said this, what you're saying doesn't conflict with the things I and many others in the "skeptic" camp have said. LLMs are useful for a very specific set of tasks. The tasks you've listed are a tiny sliver of all the tasks that AI could potentially be doing. Would it be a good idea to consult an LLM if your mother is passed out on the floor? Probably not. The problem I have is with extrapolating from the current successes to conclude that many more tasks will be done by AI in five years.


Thing is, I'm used to hearing a very similar sentiment on how e.g. using vim keybindings is so literally going to make me a 10x 100x whatever rockstar developer - and it's like what, enabling me to edit text a bit faster? And it's always anecdotes that yeah, from-qualia you feel so fast. But from-qualia I run like a marathon runner and sound like a radio host.

I personally did find some use cases for it and it does a decent job of cutting out minor gruntwork for me. But the experience itself screams to me that whatever gains I'm feeling I'm getting are all in my head.


> using vim keybindings is so literally going to make me a 10x 100x whatever rockstar developer - and it's like what, enabling me to edit text a bit faster?

Yes, to me LLMs are exactly like this: going from nano to vim.


Nano is borderline unusable, so that's like... a lot?


Holy hyperbole, clearly I picked the right example...


I don't think basic vim usage (which is all I know, really) makes anyone super efficient. I don't think typing/editing speed is generally an important factor in programmer productivity or 'coding speed'.

It's just that every time I use nano it's (a) unintentional, as it's opened via EDITOR; (b) sort-of coerced, because most distros installing it by default also think it's somehow too much to install Vim or Emacs alongside it; and (c) extremely, painfully awkward, because I've invested at least a couple of years of practice into every other editor I use.

If I spent a year using nano every day, and if I evolved a config file and read the manual during that time, I might eventually reach a place where using nano didn't feel cumbersome and irritating, but why would I do that if I already use Emacs and Vim every day? If I learn a 'new' editor it's going to be something extensible that I could see myself programming in every day: Emacs without evil; or one of the newer modal editors with a reversed sentence order, like kakoune and Helix; or, hell, VSCode.

So nano is likely doomed to remain forever cumbersome and irritating for me, somewhere on the level of typing on a touchscreen instead of a real keyboard.


I'm a contrarian who believes your anecdote, and could even imagine that 5% of LLM users feel the same way, but thinks (a) these systems are about half as good as they're ever going to get, (b) we're past the point of diminishing returns, and (c) what we do have isn't worth the energy costs of running it, let alone creating it in the first place.


I think there may be a set of people that have figured out, 1) how to interact with LLMs; and 2) what in their lives is improved when interacting with LLMs. I am in the group that has not found the best use case for my own life, and have never needed it for improving anything I need to get done. Always looking for suggestions, though!


> I think there may be a set of people that have figured out, 1) how to interact with LLMs....

1) is all about experimenting, which is what Tao is doing.

Having a playful and open-minded attitude is like 80% of the game.


It's a system that is designed to convince you.


Anyone intelligent enough to make a living programming likely has more than enough IQ to become a mediocre, somewhat competent graduate student in math.

They just don't have the background, and probably lack the interest to dedicate a few years of study to get to that level.


That's an interesting take. Personally, I'd say that graduate-level math is orders of magnitude harder than the significant majority of programming. And I mean that it's inherently harder, i.e. not due to lack of background.


We are more limited by our emotions, and then our skills in learning and acquiring knowledge.

Intelligence is probably a distant third.


Nah. Dogs are far better than most humans emotionally; their intelligence is their limitation. Also, "skills in learning and acquiring knowledge" is basically intelligence.


>A mediocre graduate science student, especially of the sort who graduates and doesn't quit, is a very impressive individual compared to the rest of us.

Incorrect. A university degree shows a good work ethic, a certain character, and an ability to manage time. It's not a measure of being better than the rest of humanity, and it's not a good measure of intelligence either. If you only want to view the world through credentials: academia doesn't consider your intelligence until you have a Ph.D. and X years of work in your field, and industry only uses degrees as an entry requirement for junior roles, favoring and caring only about your years of experience after that. Given that statement, I can only assume you haven't been to university. You are mistaken to think, especially in the times we're in now, that the elite class is any more knowledgeable than you are.


Here are the key points outlining why thewanderer1983's response misinterprets noch's comment and contains inaccuracies:

    Misinterpretation of the Original Point:
        Intelligence vs. Moral Superiority: Noch discusses the intelligence level of a mediocre graduate science student compared to the general population. Thewanderer1983 misreads this as a claim of moral or inherent superiority over "the rest of humanity," which was not implied.

    Conflation of Educational Levels:
        University Graduates vs. Graduate Students: The response conflates undergraduate university graduates with graduate science students. Noch specifically refers to graduate students who have pursued advanced degrees, which typically require higher levels of specialization and intellectual rigor.

    Incorrect Assessment of Intelligence Measures:
        Graduate Studies as a Measure of Intelligence: Successfully completing graduate studies, especially in science, often requires significant intellectual capability. Dismissing this as "not a good measure of intelligence" overlooks the challenges inherent in advanced academic work.

    Irrelevant Focus on Credentials and Industry Practices:
        Credentials vs. Intelligence Discussion: Noch's comment centers on intelligence levels, not merely on holding credentials. Bringing up how industry values experience over degrees shifts the focus away from the original discussion about intelligence.

    Unfounded Assumptions About Noch's Background:
        Ad Hominem Attack: Suggesting that Noch hasn't been to university is an unfounded personal assumption that does not contribute to the argument and detracts from a respectful discourse.

    Introduction of the 'Elite Class' Notion:
        Straw Man Argument: Thewanderer1983 introduces the concept of an "elite class," which Noch did not mention. This misrepresents the original comment and argues against a point that wasn't made.

    Overgeneralizations About Academia and Industry:
        Academia's Recognition of Intelligence: Claiming that academics don't consider intelligence until one has a Ph.D. and years of work is an overgeneralization. Intelligence is recognized and valued at various academic levels.
        Industry's View on Graduates: Stating that industry only uses graduates as an entry requirement ignores the significant roles that advanced degree holders often play in innovation and leadership within industries.

    Ignoring the Core Benefit Highlighted:
        AI as a Life Upgrade: Noch emphasizes how access to AI with the intelligence level of a graduate student is a substantial benefit for most people. Thewanderer1983 fails to address this key point, instead focusing on unrelated issues.

    Misunderstanding of the Value of Graduate Education:
        Work Ethic vs. Intellectual Achievement: While a good work ethic is important, graduate education in science also demands high intellectual capability, critical thinking, and problem-solving skills.

    Logical Fallacies:
        Red Herring: The discussion about industry preferences and academic credentials diverts from the main argument about the intelligence level of graduate students.
        Ad Hominem: Attacking Noch's presumed lack of university experience instead of addressing the argument presented.


If you couldn't be bothered to write this comment, I can't be bothered to read it.


but you did bother to comment on it. :)


An excellent example of an LLM (or an imitated LLM output) that fiercely defends the status quo, is overly verbose, does not come to the point, makes incorrect assumptions and lectures from a high horse.

LLMs are good for mediocre poems and presidential speeches that have no shame.


Yeah, well, you know, that's just like, uh, your opinion, man.


I can play this silly game also.

Let’s evaluate the correctness of Thewanderer’s argument in detail:

    Distinction Between Credentials and Intelligence:
        Correctness: Thewanderer is correct in stating that a university degree is not a definitive measure of intelligence. Intelligence is a complex trait that encompasses various cognitive abilities, problem-solving skills, creativity, and emotional intelligence. Academic credentials primarily reflect one’s ability to succeed in a structured educational environment, which is just one aspect of intelligence.

    Value of Real-World Experience:
        Correctness: The argument that real-world experience is crucial is accurate. Many industries value practical experience and skills over formal education. For example, in technology and business sectors, hands-on experience, problem-solving abilities, and adaptability are often more important than academic qualifications alone. This is supported by numerous studies and industry practices that prioritize experience and performance over degrees.

    Critique of Credentialism:
        Correctness: Thewanderer’s critique of credentialism is valid. Over-reliance on academic credentials can overlook the diverse talents and skills that individuals without formal degrees may possess. This perspective is supported by the growing recognition of alternative education paths, such as vocational training, apprenticeships, and self-directed learning, which can also lead to successful careers.

    Inclusivity and Egalitarianism:
        Correctness: Promoting inclusivity and valuing diverse forms of knowledge is a correct and progressive stance. Intelligence and capability are not confined to those with advanced degrees. Many successful individuals in various fields do not have formal academic credentials but have achieved significant accomplishments through experience, self-learning, and practical skills.

    Encouragement of Self-Worth:
        Correctness: Encouraging individuals to value their own experiences and knowledge is a positive and correct approach. It fosters confidence and self-worth, which are important for personal and professional growth. Recognizing the value of diverse experiences and perspectives contributes to a more inclusive and equitable society.
In summary, Thewanderer’s argument is correct in several key aspects:

    It accurately distinguishes between academic credentials and broader measures of intelligence.
    It correctly emphasizes the importance of real-world experience.
    It validly critiques the overemphasis on academic credentials.
    It promotes an inclusive and egalitarian view of intelligence.
    It encourages self-worth and confidence in one’s abilities.
These points collectively support a well-rounded and accurate perspective on intelligence and capability.


> I can play this silly game also.

Please could you share your prompt or a link to the conversation?

I'm genuinely puzzled that you're more interested in doubling down and justifying yourself and making new points (different from what I initially presented) than understanding the other person's point of view.

If you share your prompt, I'll have a better understanding of your motivations and whether you are arguing in good faith.

As far as silly games go: if you honestly believe a game is silly, you shouldn't play it, unless you want to win silly prizes.


Rewind your mind to 1950 and imagine reading that the future is chatting with bots about solving math homework.


They would be wondering why it took so long.


Which is why I think the AI era isn't hype but very much real. Jensen said AI has reached its iPhone moment.

We won't have AGI or ASI, whatever definitions people attach to those terms, in the next 5-10 years. But I often like to refer to AI as Assisted or Augmented Intelligence, and it will provide enough value to drive computer and smartphone sales for at least another 5-10 years, or 3-4 cycles.


Terry is a genius who can get that value out of an LLM.

Average Joe can't do anything like that yet, both because he won't be as good at prompting the model, and because his problems in life aren't text-based anyway.


I think this is where multimodal LLMs are so powerful. The ability to speak to the LLM directly with your voice is huge.


Rewind your mind to 1850 and imagine seeing a lightbulb.


To be honest, I have gotten 100x more useful answers out of Siri's WolframAlpha integration than I ever have out of ChatGPT. People don't want a "not completely incompetent graduate student" responding to their prompts, they want NLP that reliably processes information. Last-generation voice assistants could at least do their job consistently, ChatGPT couldn't be trusted to flick a light switch on a regular basis.


I use both for different things. WolframAlpha is great for well-defined questions with well-defined answers. LLMs are often great for anything that doesn't fall into that.


I use Home Assistant with the Extended OpenAI integration from HACS. Let me tell you, it's orders of magnitude better than generic voice assistants. It can understand my requests fairly flexibly without me having a literal memory of every device in the house. I can ask for complex tasks, like turning every light in the basement on without there being a "basement" zone, by inferring from the names. I have air quality sensors throughout, and I can ask it to turn on the fan in areas with low air quality, and it literally does it without me programming an automation.

Usually Alexa will order 10,000 rolls of toilet paper and ship them to my boss when I ask it to turn on the bathroom fan.

Personally though, the utility of this level of skill (beginner grad in many areas) for me is in areas where I have undergraduate-level questions. While I literally never ask it questions in my field, I do for many other fields I don't know well, to help me learn. Over the summer my family traveled and I was home alone, so I fixed and renovated tons of stuff I didn't know how to do. I wore a headset and had the voice mode of ChatGPT on; I just asked it questions as I went and it answered. This enabled me to complete dozens of projects I didn't know how to even start otherwise. If I had had to stop and search the web, sift through forums and SEO hellscapes, read loosely related instructions, and try to synthesize my answers, I would have gotten two rather than thirty projects done.


How does this square with what Terence Tao (TFA) writes about o1? Is this meant to say there's a class of problems that o1 is still really bad at (or worse at than intuition says it should be, at least)? Or is this "he said, she said" time for hot topics again on HN?


o1-preview is still quite a specialized model, and you can come up with very easy questions that it fails embarrassingly, despite its success on seemingly much more difficult tests like olympiad programming/math questions.

You certainly shouldn't think of it like having access to a graduate student whenever you want, although hopefully that's coming.


Wait til you generate WolframAlpha queries from natural language using Claude 3.5 and use it to interpret results as well.
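
A minimal sketch of that pipeline in TypeScript, assuming the @anthropic-ai/sdk client and WolframAlpha's Short Answers API; the model name, prompt wording, and env var names are illustrative:

    import Anthropic from "@anthropic-ai/sdk";

    // Assumes ANTHROPIC_API_KEY and WOLFRAM_APPID are set in the environment.
    const anthropic = new Anthropic();

    async function askWolfram(question: string): Promise<string> {
      // 1. Ask Claude to rewrite the question as a concise WolframAlpha query.
      const rewrite = await anthropic.messages.create({
        model: "claude-3-5-sonnet-20240620",
        max_tokens: 100,
        messages: [{
          role: "user",
          content: `Rewrite this as a concise WolframAlpha query. Reply with the query only.\n\n${question}`,
        }],
      });
      const first = rewrite.content[0];
      const query = first.type === "text" ? first.text.trim() : question;

      // 2. Send the query to WolframAlpha's Short Answers API (plain-text result).
      const url = `https://api.wolframalpha.com/v1/result?appid=${process.env.WOLFRAM_APPID}&i=${encodeURIComponent(query)}`;
      const answer = await (await fetch(url)).text();

      // 3. Have Claude interpret the raw result in the context of the question.
      const interpretation = await anthropic.messages.create({
        model: "claude-3-5-sonnet-20240620",
        max_tokens: 300,
        messages: [{
          role: "user",
          content: `Question: ${question}\nWolframAlpha answer: ${answer}\nBriefly explain the answer.`,
        }],
      });
      const out = interpretation.content[0];
      return out.type === "text" ? out.text : answer;
    }

    askWolfram("How long does light take to cross the Milky Way?").then(console.log);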


I've tried the ChatGPT integration and it was kinda just useless. On smaller datasets it told me nothing that wasn't obviously apparent from the charts and tables; on larger datasets it couldn't do much besides basic key/value retrieval. Asking it to analyze a large time-series table was an exercise in futility, I remain pretty unimpressed with current offerings.


Then you have a skill issue. 10 million people are paying for GPT monthly because a large share of them are getting useful value out of it. WolframAlpha has been out for a while and didn't take off, for a reason. "GPT couldn't be trusted to flick a light switch on a regular basis" pretty much implies you are not serious, or that your knowledge of the capabilities of LLMs is dated or derived from things you have read.


WolframAlpha is a free service, really kind of an ad for all the (curated, accurate) datasets built into Wolfram Language.

Wolfram Research is a profitable company btw


FACT: The technology is inherently unreliable in its current form. And the weakness is built in; it's not going to go away anytime soon.


The same is true of search engines, yet they are still incredibly useful.


Not the same technology at all, until recently at least.

EDIT: Looks like I hurt someone's feelings by killing their unicorn. It was going to happen sooner or later, and pretending isn't very constructive. In fact, pretending this technology is reliable is a very risky thing to do.


Even more amazing, there are plenty - PLENTY - of posters here who routinely either completely shit on LLMs or casually dismiss them as "hype", "useless", and what have you.

I've been saying this for quite some time now, but some people are in for a very rude awakening when the SOTA models 5-10 years from now are able to completely replace senior devs and engineers.

Better buckle up, and start diversifying your skills.


The way I see it, these models, especially o1, are an intelligence booster. If you start with zero, it gives you back zero. Especially if you're genuinely trying to use it and not just trying to do some gotcha stuff.


Not sure how this post is evidence of AIs replacing senior devs.


Diversifying to what? When AI can fully replace senior developers, the world as we know it is over. Best case, capitalism enters terminal decline: buy rifles. Worst case, hope that whatever comes out the other side is either benevolent or implodes quickly.


I mean, pay several hundred to thousands of grad students to do RLHF for several years and you get a corpus of grad-student text. I'm not surprised at all. AI companies hire grad students to do RLHF in every subject (chemistry, physics, math, etc.).

The grad students write the prompts and correct the model, and all of that is fed into a "more advanced" model. It's corpora of text. Repeat this for every grade level and subject.

Ask a model that's being trained on grad-level chemistry work a simple math question and it will probably get it wrong. They aren't "smart". It's aggregations of text and ways to sample and then predict.


Except you're talking about a general-purpose foundation model that's doing all these subjects at once. It's not like you choose a subject-specific model with Claude or o1.

The key isn't whether these things are smart or not. The key is that they've put out something that can answer basic grad-level questions on almost any subject. For people who don't have a graduate-level education in any subject, this is a remarkable tool.

I don't know why the statement "wow, this is useful and a remarkable step forward" is always met with "yeah, but it's not actually smart." So? Half of all humans have an IQ below 100; they're not smart either. Is that their value? For a machine, being able to produce accurate answers to most basic graduate-level questions is science fiction, regardless of whether it's "smart."

The NLP feat alone is stunning, and going from basically one step above gibberish to "basic grad school" in two years is a jaw-dropping rate of change. I suspect folks who quibble over whether it's "real intelligence" or simply a stochastic parrot have lost the ability to dream.


Well, yeah: once each project, e.g. "grad-level math", "K-12 math", "undergrad math", "K-12 chemistry", etc., is sufficient, they are all fed into a larger, more powerful model.

Maybe my RLHF work does make it harder for me to dream, but I teach models math, which means a lot of prompt writing, and yet I have not found a way to have the model teach me math I don't know yet (and there's a lot I don't know). It's fun to play around with, but I still gravitate toward the isolated texts, not the aggregation, as too much is lost or averaged, in my opinion/experience. But hey, maybe I'm overtrained on the traditional learning methods.



