Hacker News
CEO Update: Paving the road forward with AI and community at the center (stackoverflow.blog)
31 points by joebiden2 on June 4, 2023 | hide | past | favorite | 48 comments


Overall, a bad sign for Stack Overflow. When executives write gobbledygook paragraphs such as:

“ Since my last quarterly update, companies across nearly every sector have experienced significant transformation—whether it’s a more aggressive focus on profitability or a shift in product strategy due to the acceleration of generative AI (GenAI). Thematically, however, one thing has remained the same: companies are committed to driving productivity and efficiency throughout their organizations. At Stack Overflow, we continue to help our customers and community deliver both.”

The unclear and muddy writing reveals that their thinking is likewise unclear.

Generalities instead of specifics. Vague platitudes. This is poor writing.

StackExchange is facing an existential threat, and there is nothing in this letter that names it, identifies steps they'll take to address it, or explains how they will thrive in this new future.

It is not unexpected, but a confirmation of what we all might fear.


A lot of people have gotten very high up on the corporate totem pole for their ability to string together vague muddy bullshit that sounds pleasantly corporate but says nothing. The ability to write this way has become this weird corporate class signifier. I’ve seen normal people promoted into executive management and then all their emails start sounding like this. It’s almost like someone prompted ChatGPT to write a company strategy email without telling the tool what the company does.

I guarantee nobody at the company dares to critique this guy’s writing like you did. His sycophants at the SVP level nod along, pretending the emperor has clothes, and that they all understand what to do now, as they share similar, vague thoughts with their own teams.


same with politicians tbh


There's an existential threat, but I think it's more than just GenAI. SO is such an easy target for many reasons: the creation of an extremely hostile culture toward users (how do you change it now?); old information that's no longer relevant, yet the "close culture" prevents anyone from posting "duplicate" questions even when they're needed; a reputation among the newer generation that SO is "read only". High schoolers learning to program would never dare to ask a question there. As a business, SO has no value.


> Creation of an extremely hostile culture to users (how do you change it now?)

By changing the community guidelines to prohibit such behavior, and then aggressively banning users who engage in it.

Of all the problems Stack Overflow faces, this one would be the easiest to solve, but the people who run the site have made it abundantly clear long ago that they just don't give a fuck.


"companies across nearly every sector"... err, what?

No, just companies built on user generated data which in turn is used to train the models that will replace them.


My favorite is executives whose products are paying customers of generative AI (aka no moat), yet they parrot platitudes as if they were in the driver's seat.


Appreciate how you dug into the text. It's the same rigor each site owner should take to maintain a coherent work environment in this age.

Otherwise we will all just speak gobbledygook, nodding along while our tools do the job for us.


This assumes AI will be able to offer support outside the trained corpus of human knowledge and generate code against post-training-date APIs. These will probably still require human support and feedback so that subsequent AIs can support them too.

This means companies like stack overflow will require licensing and compensation for AI corporations to use new data - it won't be free anymore.

Data hoarding is the future, not open data sharing (because everyone will fear being replaced). We will see a more closed future behind paywalls. It will be very expensive for subsequent versions of ChatGPT etc. to be trained with up to date knowledge outside of Wikipedia etc.


> Data hoarding is the future, not open data sharing (because everyone will fear being replaced). We will see a more closed future behind paywalls. It will be very expensive for subsequent versions of ChatGPT etc. to be trained with up to date knowledge outside of Wikipedia etc.

Well StackExchange seems pretty poorly positioned then because their data is under the Creative Commons.

(Although notably, I only spend time writing answers on StackExchange because I feel like I'm contributing to an open corpus of knowledge; the license is a major part of that.)


> StackExchange seems pretty poorly positioned then because their data is under the Creative Commons

(Not a lawyer.) The license legally allows others to reproduce the data, so they can't be sued for copyright infringement. But it wouldn't stop SO from implementing rate limits, a registration wall, or a paywall. They didn't undertake a mission to freely and openly spread your answers to as many people (or bots) as possible; they only promised not to sue for copyright.


It's all available for the taking: https://archive.org/details/stackexchange

They could stop updating this going forward, but the moat of content they've built from their launch until today is freely available.


> “They are creating this culture on Stack Overflow where it’s become a really safe place to ask a question, a safe place to provide an answer, and try to get people closer to their solution, and I think that’s part of what’s made it really successful and where we found a lot of value.”

Is this satire? Stack Overflow is infamous for how hostile its culture is towards people asking questions. And I'm sure that the CEO is well aware of that. The fact that they're dropping a quote like this one into the post without a single word even attempting to address SO's well-known culture problems is a slap in the face to anyone familiar with the platform.


Anyone care to explain to me what is meant by "safe place"? Was it not safe before? Are Q&A platforms generally known to be unsafe to post on? Genuinely curious. It's such a peculiar choice of words that it certainly has me intrigued.


The "safe place" stuff is a reference to the recently implemented new Code of Conduct, which bends over backwards to accommodate gender pronouns etc.

https://stackoverflow.blog/2019/10/10/iterating-on-inclusion...

Why SO thinks this is the biggest problem they should be solving, or why it took four years to roll out, is beyond me; for one, I can't ever recall discussing or caring about anybody's genitals or lack thereof when posting coding Q&As.


Hell, I don't even recall ever seeing a pronoun on SO. Most of the discourse is about the answers themselves.


I dislike the proliferation of this word. "Safe" has started to mean "free from critical commentary" in a lot of contexts. It's hampering objective discourse all over the place.


It used to mean physically safe (e.g. a gay bar), at least safe from outsiders. Once you move to "emotionally safe", it basically becomes so vague as to be meaningless.


I believe it's called 'virtue signalling', it's a hot phrase (though perhaps past its peak by a few years) and associating yourself with it is only really a positive thing.

What does it actually mean? Yeah, probably nothing. In fact as others say it might ironically actually not be true (an 'unsafe place', 'hostile to newcomers') in SO's case.


StackOverflow is notoriously unpleasant for non-power users. A selection of miserable anecdata over the years:

https://news.ycombinator.com/item?id=16934942

https://news.ycombinator.com/item?id=20859332

https://news.ycombinator.com/item?id=2166021


The answers from SO users on that last thread show how much they lack self-awareness. Nowadays the only way I end up on SO is through a google link, I 100% consider it a read-only site.


> Is this satire? Stack Overflow is infamous for how hostile its culture is towards people asking questions.

I don't think any CEO or PR department would willingly admit such things...


That may be true, but there is usually a limit to how much hypocrisy such posts contain. I'm pretty sure that after the Deepwater Horizon oil spill, BP didn't put out press releases containing quotes that praised the company's stellar environmental record. The above is pretty much the equivalent of that for Stack Overflow.


They’re saying that it’s a safe space because you and your gender/pronouns/etc will be highly respected as your question is closed as redundant to an unrelated question from fifteen years ago.

They have no idea how to solve the real underlying problems, so they're rearranging the deck chairs on the Titanic.


I thought you were making a joke, but this is actually serious (also posted in another thread):

https://stackoverflow.blog/2019/10/10/iterating-on-inclusion...

I thought it was funny as satire, now it's sad.


I can just imagine their internal mod training materials:

    Problematic: "user64738's answer is an obvious duplicate of [URL], his answer is stupid, and he is stupid."

    Consider changing to: "user64738's answer is an obvious duplicate of [URL], *their* answer is stupid, and *they* are stupid."
Real life is becoming satire. I can't wait to read business books 40 years from now. "In the beginning of the 21st century, corporations erased trillions of dollars of shareholder value. But don't forget: While they were incinerating money and laying off workers, they all changed their internal technical processes to rename 'master' to 'main' and changed their codes of conduct to stop misgendering people."


You can’t fix a problem management won’t acknowledge and management will never acknowledge a problem they don’t know how to fix. It’s a vicious cycle.


it's true. half of my time on stack overflow is convincing mods who don't understand my question that it's not a duplicate.


I'm mostly active in a corner of SO where questions are rarely closed. They are just mostly left without answers. Looking at most of them, they describe extremely convoluted setups, involving so many parts that likely nobody (except the author) has.


Look, someone asked the same question 12 years ago, just install jQuery and copy-paste the answer, ok?


> We want to be able to continue to invest back into our community, and that’s why we’re also exploring what data monetization looks like in the future. LLMs are trained off Stack Overflow data, which our massive community has contributed to for nearly 15 years. We should be compensated for that data so we can continue to invest back in our community.

How exactly are you planning on investing that back into the community? It sounds like you just want it to be YOU who captures the value of 15 years of community investment. That's fine, SO is a business, and people agreed to the ToS, but this is laughably blatant bs.


The blog post is entirely marketing and tries to obfuscate or ignore the trouble AI-generated content is causing on SO. The company has enacted a deeply unpopular policy this week that essentially makes it impossible to moderate most AI-generated posts on Stack Overflow.

Laying off 10% of the company after putting another 10% of it on developing AI-anything looks more like panic to me. AI-generated content is threatening one half of their business model, and additionally causing moderation issues on the public sites.


Nothing here directly addresses the issue of users migrating away from the flow of `Google search -> SO` to `ChatGPT/Phind`.

Maybe SO should just acquihire Phind. Use Phind as the gatekeeper to filter out questions with well-established answers before another doe-eyed user makes the fatal mistake of asking a poorly formed question and gets excoriated by power user #44956.
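A crude sketch of that gatekeeper idea: before a draft question is posted, compare it against existing titles and surface likely duplicates first. Everything here is illustrative; `difflib` string similarity is a stand-in for the semantic search a tool like Phind would actually use, and the sample titles are invented.

```python
# Hedged sketch of a pre-post "gatekeeper" that surfaces likely duplicates.
# difflib's string similarity is a crude proxy for real semantic search.
import difflib

# Invented sample corpus of existing question titles:
EXISTING_TITLES = [
    "How do I sort a dictionary by value?",
    "What does the yield keyword do?",
    "How to clone a git repository?",
]

def likely_duplicates(draft_title, titles=EXISTING_TITLES, cutoff=0.6):
    """Return existing titles similar to the draft, best match first."""
    return difflib.get_close_matches(draft_title, titles, n=3, cutoff=cutoff)

matches = likely_duplicates("How can I sort a dict by its values?")
print(matches)
```

A real system would show these matches to the asker ("is your question one of these?") before the post ever reaches reviewers.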


Are users migrating to Phind? ChatGPT was all over HN when it came out, but this comment is the first time I hear about Phind.


It is funny to read all this "SO is doomed and AI is good enough". It shows that people have basically no clue how LLMs work, ascribing them superpowers (which is of course congruent with the hype).

If the free human labor solving diverse coding problems in public repositories stops, any LLM trained on it will gradually degrade, e.g. it will have no material around new libraries, new languages, etc.


Will it though? Maybe "All You Need Is Documentation"™, and you can transpose previous coding queries and answers onto new languages.


If they monetize content, it would be nice for the community to get a revenue share for its content authorship and moderation.


In my opinion, SO doesn't really have much of a choice here. Not only has the quality of questions and answers been on the decline, the entire SO experience is notoriously hostile for newcomers. I find that ChatGPT/GitHub Copilot tend to give much better answers for the vast majority of programming-related questions. It's much faster, easier, and less intimidating to ask your tools questions, knowing that you'll most likely get a high-quality answer and not get admonished for having asked the question in the first place.


> I find that ChatGPT/GitHub Copilot tend to give much better answers for the vast majority of programming related questions

I'm not so sure about this.

Firstly, ChatGPT seems to hallucinate quite a lot, which can result in me wasting a lot of time, especially when the hallucinations aren't immediately apparent.

Secondly, presumably the content of Stack Overflow has been used to train ChatGPT... so if Stack Overflow and all of its human-generated content disappeared, to be replaced by AI-generated content, that could be a bad thing for ChatGPT in the long run.


Anyone else think Stackoverflow has been in a decline the past decade?

Stack Overflow hardly surfaces when I'm searching for help, and when I do find answers it's usually on a blog or a hosted blog somewhere. Most Stack Overflow answers are 4-5 years old and outdated.

Perhaps it's simply that my engineering has improved in the last decade and I no longer search for help very often, and when I do it's outside the scope of Stack Overflow.


Funny anecdote: I use a Chrome extension that adds ChatGPT answers to all Stack Overflow posts.

That way you get useful answers on unanswered posts with 0 replies.

Stack Overflow is doomed.


I feel kinda less sad that I didn't get the job at SO that I interviewed for last summer.

With tools that allow you to download pre-trained models and run them on premises, I start to doubt the future of SO for Teams.


I was expecting an expensive data export feature and updated ToS


Here is an example page I made inspired by StackOverflow and Quora.

I think StackOverflow could do something like that; have a bot account owned by StackOverflow that would respond to posts.

Here is the example I made: https://doc.nstr.no/bitcoin_background


The other day I saw that Quora is already officially showing you ChatGPT answers at the top.

Here for example: https://www.quora.com/What-is-Arch-Linux

I think they are digging their own grave with this. Their only advantage is all the SEO work they did to show up in google searches.


This could actually work quite well.

Imagine if every SO question automatically had a ChatGPT-generated answer applied to it, but in wiki fashion, so it could be modified to remove errors, add additional context, point out hallucinations etc.

At the same time, humans could still answer too, providing alternative/better solutions, which ultimately are fed back into ChatGPT.
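The flow being proposed could be sketched roughly like this. Nothing here is a real Stack Overflow or LLM API; `generate_draft()` is a stand-in for a model call, and the data model is invented for illustration.

```python
# Hypothetical sketch of the wiki-style flow: an AI drafts an answer,
# humans then revise it in place with the edit history preserved.
from dataclasses import dataclass, field

@dataclass
class WikiAnswer:
    body: str
    ai_drafted: bool = True          # flagged so readers know the provenance
    revisions: list = field(default_factory=list)

    def edit(self, editor: str, new_body: str):
        """A human revises the draft wiki-style, keeping prior versions."""
        self.revisions.append((editor, self.body))
        self.body = new_body
        self.ai_drafted = False      # design choice: human edits drop the AI flag

def generate_draft(question: str) -> str:
    # Stand-in for an LLM call; a real system would prompt a model here.
    return f"[AI draft] A possible approach to: {question}"

answer = WikiAnswer(body=generate_draft("How do I reverse a string in Python?"))
answer.edit("user64738", "Use s[::-1]; slicing with a step of -1 reverses it.")
print(answer.ai_drafted, len(answer.revisions))  # False 1
```

The accepted, human-corrected bodies would then be the clean feedback signal fed back into training.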


It could work, but which human would be so self-hating as to overtly work towards replacing himself?


Humans mostly wouldn't care about things so complex or remote as e.g. robots replacing them.

E.g. truck drivers wouldn't actively destroy cameras attached to their trucks, even if you tell them that the video feeds from these cameras could be fed to models that might allow driverless trucks, which might eventually replace them. The link between a camera and elimination of their work is too remote and too vague.



