Recently I've ended up using structs everywhere as function parameters, basically to get named function parameters and better default arguments. Are there any downsides to this? So far the only annoying thing is having to define those structs.
struct FunParams {
    int i = 5;
    float f = 3.14f;
    std::string s = "hello";
};

void Fun(const FunParams& params) {}

int main() {
    Fun({.s = "hi there"});
}
The problem is that C++ still hasn't lifted a trivial, several-decades-old limitation: you have to pass the named arguments in declaration order.
The usual excuse is "what would the evaluation order be?", but ordinary constructors have the exact same problem and deal with it just fine.
It’s a bit annoying but why is it a problem? You still can skip arguments where you just want the default value. Compared to function arguments you also get defined evaluation order.
Well, you can't exactly have required parameters that way. At least not to my knowledge.
It is possible if they are added as regular function parameters before the struct parameter but somehow I find that a bit ugly…
I'm sure you can come up with some utility class `required` (templated with `T`, Lemmy won't let me) that isn't default constructible but can be implicitly constructed from a `T`, then use this instead of type `T` in the struct definition.
There’s a design pattern aptly called Parameter Object.
https://wiki.c2.com/?ParameterObject
Parameter Object is a popular solution for problems such as:
- Exploding number of function arguments. As a general rule of thumb, if you need to pass more than 3 arguments, you just extract them into a Parameter Object to handle as a single parameter.
- Combinatorial explosion of test cases. If your function supports multiple input parameters but some combinations of values are invalid/impossible/unsupported (e.g., show/hide window combined with full screen, windowed, or minimized), then instead of wasting time on branch coverage you simply extract a Parameter Object and add validation to it.