• Kajika@lemmy.ml (OP) · ↑86 · 9 months ago

    Took me 2 hours to find out why the final output of a neural network was a bunch of NaNs. This is always very annoying, but I can’t really complain; it makes sense. Just sucks.

    • flying_sheep@lemmy.ml · ↑18 ↓2 · 9 months ago

      I guess you can always just add an assert not data.isna().any() in strategic locations

      • Kajika@lemmy.ml (OP) · ↑31 · 9 months ago

        That could be a nice way. Sadly it was in a C++ code base (using TensorFlow), so no such nice things (it would be slow, too). I skill-issued myself by assuming a struct would be zero-initialized: MyStruct input; is not, while MyStruct input{}; is (that was the fix). Long story.

        • fkn@lemmy.world · ↑13 ↓1 · 9 months ago

          I too have forgotten to memset my structs in C++ TensorFlow after prototyping in Python.

        • TheFadingOne@feddit.de · ↑5 · edited · 9 months ago

          If you use GNU libc, the feenableexcept function, which enables trapping on selected floating-point exceptions, can be useful for catching unexpected/unwanted NaNs.

  • affiliate@lemmy.world · ↑32 · 9 months ago

    this is just like in regular math too. not being a number is just so fun that nobody wants to go back to being a number once they get a taste of it

  • dan@upvote.au · ↑7 · edited · 9 months ago

    Also applies to nulls in SQL queries.

    It’s not fun tracing where nulls come from in a 1500-line data warehouse pipeline query that aggregates 20 different tables.

  • Omega_Haxors@lemmy.ml · ↑7 · edited · 9 months ago

    The funniest thing about NaNs is that they actually carry a payload in their mantissa bits, so in principle you can see what caused one if you look at the binary. Only problem is, due to the nature of NaNs, that payload is almost always going to resolve to “tried to perform arithmetic on a NaN”.

    There are also other special values defined alongside NaN that are sometimes useful, such as +/-INF; note that INF, MAX, MIN, and epsilon are not NaNs, though — they’re ordinary (or infinite) values at the edges of the format.

  • navi@lemmy.tespia.org · ↑3 · 9 months ago

    NaN is such a fun floating-point virus. We got some really wonky gameplay after NaNs hit a few spots.

  • ReakDuck@lemmy.ml · ↑3 · 9 months ago

    While coding my own engine in C++ with OpenGL, I forgot to do something, maybe assign a pointer or pass a variable. In the end a NaN value got copied into the vertices of my Model, which was meant to be a wrapper for data I wanted to read and visualize.

    Printing the entire Model to the terminal, I was confused why everything was suddenly NaN when it had started out fine.