I am implementing a numerical evaluation of some analytical expressions that involve factors like exp(1i*arg(z)/2), where z is in principle a complex number but sometimes turns out to be almost real, i.e. its imaginary part is at the level of floating-point round-off. I have implemented the computation in both Python and C++ and find that the results sometimes disagree: the tiny imaginary part of such an "almost real" number can differ in sign between the two implementations, and since arg has a branch cut along the negative real axis (arg(-1+0j) = pi, but arg(-1-0j) = -pi), that sign flip changes the result significantly. Is there any commonly used protocol to mitigate these issues?
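To make the problem concrete, here is a minimal Python sketch of both the branch-cut jump and the workaround I have been considering: snapping imaginary parts below a tolerance to +0.0 so both implementations land on the same side of the cut. The function name `snap_to_real` and the tolerance `1e-12` are my own choices for illustration, not an established convention.

```python
import numpy as np

# The branch-cut sensitivity: arg() jumps from +pi to -pi when the
# imaginary part crosses zero at negative real parts.
z_plus = complex(-1.0, 1e-16)    # "almost real", imaginary part +eps
z_minus = complex(-1.0, -1e-16)  # same magnitude, imaginary part -eps
print(np.angle(z_plus))   # close to +pi
print(np.angle(z_minus))  # close to -pi

def snap_to_real(z, tol=1e-12):
    """Force a consistent branch choice for nearly real inputs.

    If |Im(z)| is below a relative tolerance, replace it with +0.0,
    which selects the arg = +pi side of the cut in both languages.
    """
    if abs(z.imag) < tol * max(abs(z.real), 1.0):
        return complex(z.real, 0.0)  # +0.0, not -0.0
    return z

print(np.angle(snap_to_real(z_plus)))   # pi
print(np.angle(snap_to_real(z_minus)))  # pi, now consistent
```

The same snapping would have to be applied identically in the C++ code (e.g. before calling `std::arg`), and of course it silently assumes that imaginary parts below the tolerance are pure round-off noise rather than physically meaningful.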
Many thanks in advance.