I am implementing a numerical evaluation of analytical expressions that involve factors like `exp(1i*arg(z) / 2)`, where `z` is in principle a complex number but sometimes happens to be almost real (i.e. real to floating-point precision, e.g. with an imaginary part of `4.440892098500626e-16j`). I have implemented my computations in both Python and C++ and find that the results sometimes disagree: the small imaginary parts of the "almost real" numbers differ slightly in sign between the two implementations, and then the branch-cut behaviour of `arg(z)` (i.e. `arg(-1+0j) = pi`, but `arg(-1-0j) = -pi`) significantly changes the result … I was wondering if there is any commonly used protocol to mitigate these issues?

Many thanks in advance.
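For concreteness, the discrepancy can be reproduced with Python's `cmath`, which follows the same C99 signed-zero/branch-cut conventions as C++. One common-sense mitigation (an illustrative sketch, not a standard protocol; the helper name `arg_snapped` and the `tol` threshold are my own choices) is to snap a negligible imaginary part to `+0.0` before taking the argument, so both signs of the rounding noise land on the same side of the cut:

```python
import cmath
import math

def arg_snapped(z, tol=1e-12):
    # Illustrative mitigation: if the imaginary part is negligible relative
    # to the real part, force it to +0.0 so that +eps*1j and -eps*1j end up
    # on the same side of the branch cut along the negative real axis.
    if abs(z.imag) <= tol * max(1.0, abs(z.real)):
        z = complex(z.real, 0.0)
    return cmath.phase(z)

# The raw principal argument jumps by ~2*pi across the cut ...
print(cmath.phase(-1 + 4.440892098500626e-16j))  # ~ +pi
print(cmath.phase(-1 - 4.440892098500626e-16j))  # ~ -pi

# ... while the snapped version agrees on both sides.
print(arg_snapped(-1 + 4.440892098500626e-16j))  # pi
print(arg_snapped(-1 - 4.440892098500626e-16j))  # pi
```

The catch, of course, is choosing `tol`: if the tiny imaginary part is physically meaningful rather than rounding noise, snapping it away silently changes the answer.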
`pi` or `-pi`? If it is a phase, and you use it as such, you shouldn't care about the range of the phase; i.e. you have to cut somewhere. Anyway, if, for some reason, you depend on that, why can't you "normalize" the phase into the range you want? `fmod(arg(z)+2*pi, 2*pi)`.
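As a sketch of that normalization (shown in Python for concreteness; `math.fmod` has the same semantics as C's `fmod` for these positive arguments):

```python
import cmath
import math

def arg_zero_two_pi(z):
    # Map the principal argument from (-pi, pi] into [0, 2*pi); this moves
    # the branch cut from the negative real axis to the positive real axis.
    return math.fmod(cmath.phase(z) + 2.0 * math.pi, 2.0 * math.pi)

# Both sides of the negative real axis now give essentially the same phase:
print(arg_zero_two_pi(-1 + 1e-16j))  # ~ pi
print(arg_zero_two_pi(-1 - 1e-16j))  # ~ pi
```

Note this does not remove the discontinuity, it only relocates it: numbers that are "almost positive real" would now suffer the same sign-sensitivity near 0 vs. 2*pi.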
In my case the phase enters as `exp(1i*arg(z) / 2)`, so that the phase `+pi/2` or `-pi/2` matters …

… `csqrt` does). If the difference matters, you have to consider the multivalued function or else move the branch cut.
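One way to make that concrete (an illustrative identity, assuming the principal branch throughout): `exp(1i*arg(z)/2)` is exactly the phase of the principal square root, so computing it via `sqrt` delegates the branch-cut convention to `cmath.sqrt` in Python and `std::sqrt`/`csqrt` in C++, which follow the same C99 rules:

```python
import cmath
import math

def half_angle_factor(z):
    # exp(1j*arg(z)/2) via the principal square root: sqrt(z) has modulus
    # sqrt(|z|) and phase arg(z)/2, so dividing out the modulus leaves the
    # half-angle phase factor with csqrt-compatible branch-cut behaviour.
    # (Undefined at z = 0, like arg itself.)
    return cmath.sqrt(z) / math.sqrt(abs(z))

# Agrees with the direct formula:
z = -3 + 4j
print(half_angle_factor(z))
print(cmath.exp(0.5j * cmath.phase(z)))
```

This makes the two implementations consistent with each other, but both still respect the sign of a tiny imaginary part; if that sign is noise, some snapping or renormalization of `z` itself is still needed.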