While all is 'calm and steady and boring' with the next kernel, Linux creator Torvalds tells an Open Source Summit crowd exactly how he feels about almost everything else.
Did not know the thing about purposefully adding rogue tabs to kconfig files to catch poorly written parsers. That’s fucking hilarious and I’d love to have the kind of clout to get away with something like that rather than having to constantly work around other people’s mistakes.
I write a lot of scripts that engineers need to run. I used to really try to make things ‘fail soft’ so that even if one piece failed the rest of the script would keep running and let you know which components failed and what action you needed to take to fix the problem.
Eventually I ran into so many issues with people assuming that any error that didn’t stop the script was safe to ignore — and crucial manual steps kept being missed — that I had to start making scripts ‘fail hard’ and stop completely when a step failed, because that was the only way to get people to reliably perform the required manual step.
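The fail-soft vs. fail-hard tradeoff described above can be sketched in a few lines of Python. The step functions here are hypothetical stand-ins for whatever the real script does; assume `sync_config` is the step that fails.

```python
import sys

def check_disk():
    return True   # hypothetical step; pretend it succeeds

def sync_config():
    return False  # hypothetical step; pretend it fails

def restart_service():
    return True   # hypothetical step; pretend it succeeds

STEPS = [check_disk, sync_config, restart_service]

def run_fail_soft():
    # Fail soft: run every step, collect failures, report at the end.
    # Easy to ignore -- the script still "finishes successfully".
    failures = [step.__name__ for step in STEPS if not step()]
    if failures:
        print("WARNING: these steps failed, manual action needed:", failures)
    return failures

def run_fail_hard():
    # Fail hard: stop at the first failure so it cannot be skimmed past.
    for step in STEPS:
        if not step():
            sys.exit(f"FATAL: step {step.__name__!r} failed; fix it and rerun")
        print(f"ok: {step.__name__}")
```

The fail-hard version forces the reader to deal with `sync_config` before anything later runs, which is exactly the behavioral fix the comment describes.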
Trying to predict and account for other people’s behavior is really tricky, particularly when a high level of precision is required.
It is a developer milestone :) when you learn that resilience means recovering only from situations you understand perfectly, and failing fast on everything else. Repeat that 1000 times and you have something.
Soft failures add complexity and ambiguity to your system, as they create many paths and states you have to consider. It’s generally a good idea to keep exception handling simple, by failing fast and hard.
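A small illustration of that state explosion, using a hypothetical port parser. The fail-fast version has exactly two outcomes; the fail-soft version hands every caller an extra `None` state to reason about.

```python
def parse_port_fail_fast(value: str) -> int:
    # Fail fast: either return a valid port or raise immediately.
    # Callers see exactly two outcomes: a good value or an exception.
    port = int(value)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def parse_port_fail_soft(value: str):
    # Fail soft: swallow errors and return None, pushing an extra
    # "maybe invalid" state onto every single caller downstream.
    try:
        port = int(value)
    except ValueError:
        return None
    return port if 1 <= port <= 65535 else None
```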
Here is a nice paper that highlights some exception-handling issues in complex systems:
https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
Always fail soft in underlying code and hard in user space IMHO
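One way to read that split, sketched with hypothetical names: the library layer never exits the process itself — it reports failure to its caller — while the application layer at the top converts that into a hard stop.

```python
import sys

class ConfigError(Exception):
    """Raised by the library layer instead of killing the process."""

# "Underlying code": report failure to the caller, never call exit().
def load_config(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        raise ConfigError(f"cannot read config {path!r}") from exc

# "User space": the application decides, and it decides to fail hard.
def main(path: str) -> None:
    try:
        config = load_config(path)
    except ConfigError as exc:
        sys.exit(f"fatal: {exc}")
    print(f"loaded {len(config)} bytes of config")
```

The library stays reusable (a caller might retry, fall back to defaults, or abort), while the end user still gets an unambiguous hard failure.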
This is why I enjoy programming libraries only I will ever use. “Do I need to account for user ignorance and run a bunch of early exit conditions at the beginning of this function to avoid throwing an exception? Naww, fuck it, I know what I’m doing.”
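The guard-clause tradeoff being joked about, sketched with a hypothetical function: the public version front-loads early exits with clear messages; the private version trusts the caller and lets bad input blow up however it happens to blow up.

```python
def weighted_mean_public(values, weights):
    # Public-facing: guard clauses up front so callers get clear errors.
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    total = sum(weights)
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return sum(v * w for v, w in zip(values, weights)) / total

def weighted_mean_private(values, weights):
    # "I know what I'm doing": no guards. Bad input surfaces as whatever
    # exception falls out (e.g. ZeroDivisionError on all-zero weights).
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```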
It’s the quickest way to prove to yourself that you know what you’re doing… Most of the time, anyway…
Sounds familiar, haha.