The code that determines the result type, in particular the precision (and, for numeric types, the scale), was incorrectly setting the scale to 0. The logic has been changed so that precision and scale are set to the maximum values found among the arguments, without losing integer digits.
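A minimal sketch of the corrected rule, in Python rather than the engine's own code, with a hypothetical `combine_numeric_types` helper: the result takes the largest scale among the arguments while reserving enough integer digits that no argument's integer part is truncated.

```python
def combine_numeric_types(args):
    """args: iterable of (precision, scale) pairs, one per argument.

    Returns the (precision, scale) of the result type.
    Illustrative only; not the actual engine implementation.
    """
    # Keep the finest fractional resolution among the arguments.
    result_scale = max(scale for _, scale in args)
    # Widest integer part (digits left of the decimal point) among them.
    int_digits = max(precision - scale for precision, scale in args)
    # Precision must cover both, so integer digits are never lost.
    return int_digits + result_scale, result_scale

# NUMERIC(10, 2) combined with NUMERIC(6, 4):
# integer digits = max(8, 2) = 8, scale = max(2, 4) = 4 -> NUMERIC(12, 4)
print(combine_numeric_types([(10, 2), (6, 4)]))  # (12, 4)
```

Note that simply taking the maximum precision (10 here) would have lost fractional digits; the fix instead widens the precision to cover both the largest integer part and the largest scale.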