Hacker News

Not tired yet, because for the first time since GPT-2, we're talking about a thing I can actually play with along with everyone else, instead of just reading about the fun people with the right connections and enough spare $$$ are having.


Well, you could play with Stable Diffusion, too. Moreover, you could install it on your computer - something that's unlikely to happen with the offspring of the so-called "Open" AI.


Not sure why this got downvoted, this is 100% correct. Stable Diffusion is far more open than ChatGPT.


UX matters. ChatGPT is: 1) register on the OpenAI website, 2) have fun. Stable Diffusion? It's free and easy, provided I can figure out where to download the current model (and not one of the weaker, stripped-down versions or alternatives) and have a GPU farm to run it on.

IIRC, SD was one of the two or three image generation models that were published in the span of several days. Back then I tried to get my hands on any of them, and found only waitlists, wishlists, and one or two "lite" versions that could not deliver anything close to the interesting results everyone with the right access was showcasing and talking about.


There are fairly stable web UIs now, and you can be up and running in about five minutes: https://github.com/AUTOMATIC1111/stable-diffusion-webui

Tweaking and installing new models will take some additional effort, but there has been a veritable explosion in freely available resources, e.g. https://rentry.co/sdupdates3

Also, it runs fine on mid-range consumer GPUs, just with limited batch sizes.
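For reference, the basic setup is roughly the following sketch (script and directory names per the AUTOMATIC1111 repo; the specific checkpoint filename is just an example, and the model must be downloaded separately):

```shell
# Clone the web UI (assumes git and a supported Python are installed)
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# Place a Stable Diffusion checkpoint, downloaded separately (e.g. from
# Hugging Face), into the models directory, for example:
#   models/Stable-diffusion/v1-5-pruned-emaonly.safetensors

# First launch creates a virtualenv and installs dependencies, then
# serves the UI locally (by default at http://127.0.0.1:7860)
./webui.sh        # on Windows: webui-user.bat
```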


Not sure what you mean. The day Stable Diffusion was released, I downloaded the officially released model and ran it on the consumer GPU I already owned for gaming. I can use it infinitely and without restriction. That is significantly more free and open than having to sign up for a website that requires me to agree to its terms of service, rate-limits me, and will eventually start charging me for access.


Then you recall incorrectly. Stable Diffusion models are capable of running on laptops with even modest discrete GPUs. I'm running it using the AUTOMATIC1111 GitHub repo on a laptop that's over four years old, and a 50-step generation only takes about 15 seconds.

You need zero technical acumen to be able to install it, just the ability to follow basic instructions. Maybe you should ask ChatGPT to help you.


If, like many others, you are also tweeting/sharing your findings, you are doing free marketing for them. Expect the floodgates to slam shut once there's enough guerrilla-marketed interest for them to open up paid plans.


That won't last. They're already tightening the API limits and have tweeted about excessive costs.


Even as it stands today, I’d pay US$50/month for it, knowing it’s going to improve over time.

I could go as high as US$100/month if I could be sure to get that back, but at the moment I see it as a great tool for learning and getting started on various topics.

It’s clearly not perfect, but I still find the help invaluable.



