Oh no, a slippery slope of compassion! To wit, the long title of the bill:
"An Act to make provision for an Animal Sentience Committee with functions relating to the effect of government policy on the welfare of animals as sentient beings."
I take your point, but this feels pretty defeatist to me.
I don’t personally believe that “we have not historically done this well, if at all” is a good reason to stop trying to manage the negative externalities of emerging technology; quite the opposite, in my opinion.
However, "the future technology is completely unpredictable and we don't understand it" is probably a fairly good reason we will only make worse mistakes in that regard going forward.
I cannot imagine a scenario where it doesn’t drop off.
The massive recent improvements in GPT’s performance are a result of giving the model enormous numbers of parameters and a wealth of training data. That’s it.
Surely this paradigm cannot scale up indefinitely. Moore’s law is moribund.
Are we going to build a supercomputer that encircles the globe just for the purpose of trying to make the biggest LLM we can?
Also, even if we could scale indefinitely, there is no reason to suspect that an LLM with hundreds of quadrillions of parameters will somehow magically become an AGI.
It’s tempting to think that the line will go up forever, but that just doesn’t square with reality.
Anyone whose response to this is something like “well, maybe all the brain is doing is just the same thing that LLMs do” is fundamentally underestimating the complexity of the human brain by many orders of magnitude.
It is the most complex system in the known universe and we do not understand it at all.
I am generally optimistic about AI, but to me it is the absolute height of delusional hubris to think that superintelligence is likely to somehow “fall out” of a language model, just as soon as we make one large enough and give it enough training data.
To come to such a conclusion reveals a failure to grasp the magnitude of the problem.
> Surely this paradigm cannot scale up indefinitely.
It doesn't have to. It just has to scale up long enough to start causing real societal problems, if it isn't already there.
Also, it's kind of annoying to see you dismiss any of this criticism as 'alarmist' in thread after thread when, if you look back a year, the state we are in today was said to be at least several years away, at best, by the same people who keep harping on the fact that this isn't AGI yet. The point is: it doesn't have to be AGI to do massive damage, and from that perspective it might as well be. I don't particularly care whether I get bitten by the cat or by the dog; I care about being bitten.
I didn’t dismiss anything. I didn’t say anything was alarmist. Please don’t put words in my mouth.
You didn’t say anything about societal problems. You wondered if the growth will ever stop, and I tried my best in good faith to give the reasons why I believe that it will.
If the question is “when will the models be powerful enough to cause societal problems”, then that is a completely different question and I think the answer to it is clearly “they already are”. (But not because they are superintelligent or anything close to AGI.)
I see now you were making reference to a comment I made in reply to someone else.
Yes, that was something that I said and I stand by it.
I do not see any reason to be concerned about AI as an existential threat at the present moment.
I have explained in my previous comment why I feel this is the case; if you feel that this view is recklessly dismissive and wish to change my mind, then I invite you to do the same.
Edit: I’m sorry for any excessive crispiness or combativeness in my tone; I can see now from your post history that you are likely arguing in good faith.
I have grown weary of arguing against concern trolls lately on this topic, so I may have misjudged your initial comment based on its brevity. Sorry about that.
First off: labeling those you don't agree with as concern trolls is pretty rude, but since HN etiquette requires reading a comment in the best possible light, I'll take it that you meant trolls encountered somewhere else rather than on HN. The number of concern trolls here is vanishingly low; most people on HN who are concerned about something are concerned for good reasons, even if those reasons are not readily apparent to you without further engagement.
As for my own concerns: we have a bit of a problem with this AI thing, and whether or not it is AGI is immaterial: I judge a technology by the effect it is having. We have not yet made a dent in dealing with the weaponization of social media, and we are only beginning to come to terms with the mobile revolution and the internet we now take for granted. Given that it took us a good 30 years to get to this point, and that the current crop of AI tools has been on the scene for a little over two years, it looks as though there is still a very long way to go before we have internalized the changes this technology brings.
And it isn't exactly standing still either; it's a fast-moving target that redefines what it is and isn't, and what it can and cannot do, in the space of months. We are now well into what I would lightly characterize as an AI arms race, and during arms races the rate of change can go through the roof compared to what it was before. You only have to look at nature to see many such examples.
And already ChatGPT and similar tools by other vendors are changing the landscape in visible ways. It doesn't have to be an existential threat to be capable of profound and possibly negative social impact. And whether it is AGI or not is also not all that important.
Those cautioning some pacing of the release of these tools are not doing so because they are concern trolls but because they look a little further than just 'hey, cool new tech' to the effect this can have on our societies, some of which are already precariously balanced and have a whole pile of other stuff to deal with. Least of all the fall-out of COVID (which we definitely have not yet dealt with), an energy crisis and a war. And that's before we get into climate change.
Releasing a tool that could easily be weaponized by either side (or both) in such an environment could well have repercussions; being able to foresee them would help us decide whether or not they are going to be beneficial. Like all tools, this one is dual use: it may help or it may well hinder. Initially, social media was a nice way to re-acquaint with family and friends, some of whom may have been lost or out of touch for ages. These days it is a weapon for mass manipulation on a scale that we have not seen before.
Something similar - or far worse - could easily happen with these new AI tools, and personally I would like to have the previous crisis settled before trying on the next. There is a limit to how much of this stuff we can deal with at the same time and - again, just speaking for myself here, but there may be others who feel the same - I am rapidly approaching the limit of how much of all this I can still comprehend, internalize, and deal with while still staying on top of it all. It is, in a single word, overwhelming, and those who want to pretend it is all inconsequential are - in my opinion, once more - not thinking about it hard enough.
Firstly, LLMs are an embarrassingly parallel problem, so yes, you can actually get quite far simply by throwing more hardware at it. The catch is that the cost is not linear - e.g. with naive attention you need 4x more VRAM for a 2x larger context window. But if doing that unlocks more useful emergent properties, it may well be a worthwhile trade-off - and it doesn't have to spontaneously become an AGI for that to happen. So I think we'll be playing this game for quite a long time.
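The "4x VRAM for 2x context" catch comes from the attention score matrix being quadratic in sequence length. A back-of-the-envelope sketch (the head count and precision here are hypothetical round numbers; only the scaling law matters):

```python
# Naive self-attention materializes an (n_tokens x n_tokens) score
# matrix per head, so memory for the scores alone grows quadratically
# with context length. Constants below are illustrative, not from any
# specific model.

def attention_score_bytes(n_tokens: int, n_heads: int = 32,
                          bytes_per_value: int = 2) -> int:
    """Bytes needed just for the raw attention score matrices."""
    return n_tokens * n_tokens * n_heads * bytes_per_value

small = attention_score_bytes(4_096)   # a 4k context window
large = attention_score_bytes(8_192)   # doubled to 8k

print(large / small)  # prints 4.0 - doubling context quadruples this term
```

The linear terms (weights, activations) don't blow up the same way, which is why techniques like FlashAttention and sparse attention target exactly this quadratic component.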
> massive recent improvements in GPT’s performance are a result of giving the model enormous numbers of parameters and a wealth of training data. That’s it.
That's extremely dismissive of the labor that went into converting AGI from "impossible" to "expensive".
Any profession which involves digesting, analysing and organising information will be heavily affected by GPT.
If you are the guy manually writing code to extract and interpret information from CSVs or articles, it might take you hours longer than the guy who gets the chatbot to do it in seconds.
I predict GPT will be a necessary tool for staying competitive in most middle-class jobs within the next year.
If you’re in an open powerboat, your best chance would be to apply full throttle and get the hell away from the storm as fast as you can, like the parent commenter did.
If you manage to find yourself in a lightning storm in a small, open sailboat, you’ve made a pretty big mistake, but essentially the same rule applies.
On larger vessels, if you’re belowdecks and aren’t touching anything metal, a lightning strike is usually a survivable incident, especially if your boat has a thoughtfully designed lightning protection system. All this involves is giving the lightning a low-resistance path to ground through your boat’s keel and into the water - one that doesn’t run through anything expensive.
edit: I realize now maybe you didn’t mean specifically in the “on a boat” situation. Hopefully this comment is maybe interesting to fellow salty sea dogs. Yarr.
Yesterday I asked my own computer how to crop a video that was recorded vertically and published in widescreen format, and it spat out an ffmpeg command that worked perfectly.
To be fair, I would have called that magic a few months ago.
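For the curious, the kind of command it produces can be reconstructed by hand. A minimal sketch, assuming a 1920x1080 widescreen file whose real content is a 9:16 vertical strip centered in the frame (the filenames and dimensions are hypothetical; ffmpeg's crop filter takes `crop=w:h:x:y`):

```python
# Build an ffmpeg crop command for a vertical video that was
# pillarboxed into a widescreen frame.

def crop_command(src: str, dst: str, frame_w: int, frame_h: int) -> str:
    # Keep a 9:16 strip at full frame height, centered horizontally.
    # Round the width down to an even number, which common codecs require.
    crop_w = ((frame_h * 9) // 16 // 2) * 2
    x = (frame_w - crop_w) // 2
    return (f"ffmpeg -i {src} -vf crop={crop_w}:{frame_h}:{x}:0 "
            f"-c:a copy {dst}")

print(crop_command("in.mp4", "out.mp4", 1920, 1080))
# ffmpeg -i in.mp4 -vf crop=606:1080:657:0 -c:a copy out.mp4
```

`-c:a copy` passes the audio through untouched, so only the video is re-encoded.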
It's good, but if I had looked at the ffmpeg man pages I would have got the same result and learned something. My issue with ChatGPT is that you don't learn: you get a solution, but you'll still depend on the tool if you need the same thing later.
I don't mind when a solution is provided by an open-source framework, but I'm very cautious when this solution is given by a closed-source application from a random company.
This is the problem with programming by SO or search engine. I think it's a terrible habit, and developers who do it are cheating their future selves.
I truly understand the urge that "I just need an answer, right now!", but in chasing the quick fix, you aren't bettering yourself as a developer. And we all know that in our industry, if you aren't constantly improving, you're becoming worse.
I agree with you and I also started using ChatGPT a lot since the GPT4 release.
I think there are different problems that need different solutions. Sometimes I'm hacking around in a language I don't know and don't want to know; I just have to fix/implement something in that language because program/library X happens to use it. I don't really care about learning, I just want the damn problem solved.
But most of my time is spent in languages I expect to keep using for a long time, and there it makes sense to spend time coming up with a solution myself by reading long manuals and other reference documentation, or even purchasing books and working through them.
But, it doesn't make sense to do that for everything.
Or another way to look at it is: it's not worth learning if you're just doing it once or twice. By the 10th time you've asked ChatGPT I presume some learning/remembering will take place.
That's cool, but you could have probably figured out how to do that with a google search as well, albeit not as quickly. So your example speeds up something that was already possible, but doesn't necessarily introduce anything novel.
It may be comparable to the invention of the search engine: instead of looking up stuff in dictionaries and encyclopedias people could find information online way faster. The accuracy of the information back then, however, was not as great as it is nowadays. For a long time teachers told kids not to trust anything they read on Wikipedia because “anyone can edit it” so it was best to verify information with another source. Probably still a good idea to do that.
IMO what makes GPT even cooler than a search engine or online forums is you don’t have to wait for a human to respond to your highly technical or niche questions, in some cases GPT can just generate an accurate response to your prompt on the fly. exciting stuff
My main exposure to Ruffle has been through its use on Homestarrunner.com, where it’s been deployed to maintain a working archive of their old cartoons (complete with interactive Easter eggs, which obviously isn’t possible on YouTube).
It was buggy initially but now seems to work very well.
I’m a big fan, and echo the sentiments of others here who remember Flash fondly as a powerful and singularly accessible creative tool.
To anyone working on Ruffle: I value the work y’all are doing immensely!
You may be interested to know that people whose job requires them to be epistemologically rigorous do not think in this way:
https://en.wikipedia.org/wiki/Bayesian_epistemology#Fundamen...