S R
2011-10-20 09:05:49 UTC
There is a DLL on a Windows (Intel, 32-bit) server that contains a function
called EntConvertInt, which takes an Int16 and an Int32 argument and returns a double.
I want to create a Java version of this, but cannot get the numbers to work.
I think the Int16 is the exponent (including the sign bit) and the Int32 is the
mantissa.
I think what the DLL does is convert the ints to binary or hex, and then combines
the two into a double.
Examples of inputs and expected results are:

Int16   Int32        Result
129     0            1
138     369098752    600
29069   379652669    4820.13
I tried converting the inputs to binary strings (Integer.toBinaryString(x)),
then joining the two into one string and using Double.longBitsToDouble, but the
result is way off.
I have tried a number of combinations: taking the 48-bit binary "string" and
adjusting the sizes of the exponent and mantissa fields to see which
combination gives the right answer, but to no avail.
I need some pointers on how this can be done in Java as I am all out of ideas.
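For what it's worth, all three sample rows happen to be consistent with reading the pair as a Borland/Turbo Pascal Real48-style value: the low byte of the Int16 as a biased exponent (bias 129, with 0 meaning zero), the top bit of the Int32 as the sign, and the remaining 31 bits as the fraction below an implicit leading 1. This is only a guess from the three data points, and the method name below is a stand-in, not the DLL's actual code:

```java
public class EntConvert {
    /**
     * Hypothetical decoder assuming a Real48-like layout:
     * low byte of the Int16 = biased exponent (bias 129, 0 means zero),
     * top bit of the Int32 = sign, low 31 bits = fraction under an
     * implicit leading 1.
     */
    public static double entConvertInt(short exp, int mantissa) {
        int e = exp & 0xFF;                // only the low byte is the exponent
        if (e == 0) {
            return 0.0;                    // Real48 convention: exponent 0 means zero
        }
        double sign = (mantissa < 0) ? -1.0 : 1.0;            // top bit is the sign
        double frac = (mantissa & 0x7FFFFFFF) / 2147483648.0; // fraction out of 2^31
        return sign * (1.0 + frac) * Math.pow(2.0, e - 129);
    }

    public static void main(String[] args) {
        System.out.println(entConvertInt((short) 129, 0));           // 1.0
        System.out.println(entConvertInt((short) 138, 369098752));   // 600.0
        System.out.println(entConvertInt((short) 29069, 379652669)); // ~4820.13
    }
}
```

Working through the second row under that assumption: 138 - 129 = 9, so the scale is 2^9 = 512, and 369098752 / 2^31 = 0.171875, giving 1.171875 * 512 = 600. The third row only matches if the high byte of the Int16 is ignored (29069 & 0xFF = 141), which is why the sketch masks the exponent.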