The original post was removed, hence the archive link.
HN figures the real issue was the lack of testing/monitoring, not specifically the use of ChatGPT. But the kind of person who’s ok with letting spicy autocomplete write their customer acquisition code is probably not the kind of person who knows how to test and monitor.
I actually tried letting ChatGPT-4o write some tests the other day.
Easily 50% of the tests were wrong. They ignored DB uniqueness constraints or even datatypes. In a few cases, they just hallucinated field names that didn’t exist.
I ended up spending just as much time cleaning up the cruft as writing them. I could easily see someone just starting out letting the code go through.
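To illustrate the kind of mistake described above (a hypothetical sketch using SQLite, not the actual generated code): a test built on the assumption that duplicate values are fine will collide with a `UNIQUE` constraint the model never accounted for.

```python
import sqlite3

# Hypothetical schema: a "users" table with a UNIQUE email column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# A generated test that ignores the uniqueness constraint might insert the
# same email again and then assert that two rows exist. The database
# rejects the insert instead, so the test's premise is simply wrong.
try:
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

assert duplicate_allowed is False  # the constraint fires; the "test" would fail
```

A reviewer who doesn’t know the schema might wave such a test through, which is exactly the failure mode being described.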
Now that’s the kind of bad hot take I read awful.systems for! Let’s all call ourselves “engineers” but write no documents except emoji-laden jokes, and produce no work except the copy-pasted excreta from a chatbot!
https://news.ycombinator.com/item?id=40627558