After seeing this graph about the fall of Stack Overflow, it’s clear to me that I’m not the only one who has stopped using it in favor of LLM (large language model) alternatives. With the rise of generative AI-powered initiatives like OverflowAI by Stack Overflow, and the potential of ChatGPT by OpenAI, the landscape of programming knowledge sharing is evolving. These AI models can generate human-like text and provide answers to programming questions. It’s fascinating to witness the potential of AI as a game-changing tool in programming problem-solving. So, I’m curious: which AI have you replaced Stack Overflow with? Share your favorite LLM or generative AI platform for programming in the comments below!
I replaced it with online docs, GitHub Issues, Reddit, and Stack Overflow.
Many languages/libraries/tools have great documentation now. 10 years ago that wasn’t the case, or at least I didn’t know how to find/read documentation. 10 years ago Stack Overflow answers were also better; now many are obsolete due to being 10 years old :).
Good documentation is both more concise and more thorough than any Q&A or ChatGPT output, and more likely to be accurate (it certainly should be, in any half-decent documentation, though not always).
If online documentation doesn’t work, I try to find the answer on GitHub Issues, Reddit, or a different forum. And sometimes that forum is Stack Overflow. More recently, I’ve noticed that on many questions the most upvoted answer has been edited to reflect recent changes; and even when an answer is out of date, there’s usually a comment saying so.
That said, I never ask questions on Stack Overflow, and I rarely answer them: there are way too many bad questions out there, most of the good ones already have answers or are really tricky, and the community still lives up to its rude reputation. Though I will say the other Stack Exchange sites are much better.
So far, I’ve only used LLMs when my question was too detailed to search for, and/or I had run out of options. There are a few issues: I don’t like writing out the full question (although I’m sure GPT works with query terms, so I’ll probably try that); GPT-4’s output is too verbose and explains basic context I already know, so it’s mostly filler; and I still have a hard time trusting GPT-4, because I’ve had it hallucinate before.
With documentation you have the expectation that the information is accurate, and with forums you have other people who will comment if an answer is wrong; with LLMs you have neither.