I hope this helps. I found a solution in the half library from OpenEXR. There are basically two options; OpenEXR uses option a) below:

a) Use a 16-bit unsigned short type to store the half-precision float, together with a lookup table of precomputed values that is used when converting a float to a half and a half back to a float.

I used this way:

b) Simply lose the extra precision of a single-precision float to get a half-precision value, but store it in a native "float" type. Leave the exponent untouched, since we are still using a float (single precision) to store the reduced-precision half data.
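A minimal sketch of option b), assuming the IEEE 754 single-precision bit layout; the function name is my own, and the simple round-half-up (and the lack of NaN/Inf handling) are simplifications:

```c
#include <stdint.h>
#include <string.h>

/* Option b) sketch: keep the value in a native float, but round its
   mantissa down from float's 23 bits to half's 10 bits. The exponent
   field is left as float's, so only precision is reduced, not range.
   Assumes IEEE 754 binary32; does not special-case NaN or infinity. */
float float_to_reduced_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret the bits safely */
    bits += 1u << 12;                    /* round half-up at the cut point */
    bits &= ~((1u << 13) - 1u);          /* clear the 13 discarded mantissa bits */
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Note that the rounding add can carry into the exponent field; for finite values that carry is exactly what correct rounding requires.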
How are double-precision floating-point numbers converted to single-precision floating-point format?
This one helps. The most common floating-point formats are the binary floating-point formats specified in the IEEE 754 standard, so I will answer your question for those formats. There are also decimal floating-point formats in the new (2008) version of the standard, and there are formats outside the IEEE 754 standard, but the 754 binary formats are by far the most common. Some information about rounding, and links to the standard, are on this Wikipedia page. Converting double precision to single precision is treated the same as rounding the result of any operation. (E.g., an addition, multiplication, or square root has an exact mathematical value, and that value is rounded according to the rules to produce the result returned from the operation. For purposes of conversion, the input value is the exact mathematical value, and it is rounded.)
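A tiny illustration of that: the cast itself performs the rounding (round-to-nearest, ties-to-even in the default mode), so an exactly representable value survives unchanged while 0.1 gets rounded a second time. The function name here is just for illustration:

```c
/* Sketch: double -> float conversion is itself a rounding operation.
   The input double is treated as an exact value and rounded to the
   nearest representable float (ties to even, in the default mode). */
float round_to_single(double d)
{
    return (float)d;
}
```

For example, 0.5 is exact in both formats and converts losslessly, whereas the double nearest to 0.1 is not representable as a float, so converting it changes the value.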
Floating point precision in Visual Studio 2008 and Xcode
I think the issue is the following. Edit to actually answer the question: I doubt there is a GUARANTEED way to always have the same calculation produce the exact same result with different compilers - the compiler WILL split/combine the various steps of a calculation as it sees fit. /Edit. This all comes down to EXACTLY how the compiler optimises and arranges the instructions. Unless you know very well what you are doing (and what the compiler will do), any floating-point calculation will need to allow for small errors introduced during the calculation steps. Note, however, that I would expect even the lowest level of optimisation to have the compiler calculate a1 * a2 ONCE, not twice. This is called "CSE" or "common sub-expression" optimisation (the same calculation being done several times in a block of code). So I'm guessing you are testing this in a "non-optimised" build. (There are cases where the compiler may not optimise things because it would produce a different result, but this doesn't look like one of those to me.)
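To make CSE concrete, here is the transformation done by hand (the function and variable names are illustrative); the optimiser rewrites the first form into the second, computing a1 * a2 only once:

```c
/* The same expression a1 * a2 appears twice; without CSE it may be
   evaluated twice. */
double before_cse(double a1, double a2, double b, double c)
{
    return (a1 * a2 + b) + (a1 * a2 + c);
}

/* What the optimiser effectively produces: the shared subexpression is
   hoisted into a temporary and reused. */
double after_cse(double a1, double a2, double b, double c)
{
    double t = a1 * a2;   /* computed once */
    return (t + b) + (t + c);
}
```

Both forms produce identical results here because the subexpression is bit-for-bit the same value; that is why the compiler is allowed to do it even for floating point.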
Using single precision floating-point with FFTW in Visual Studio
I hope this helps you. For single-precision routines in FFTW you need to use routines with an fftwf_ prefix rather than fftw_, so your call to fftw_plan_dft_r2c_2d() should actually be fftwf_plan_dft_r2c_2d().
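A minimal sketch of a single-precision 2-D r2c transform, assuming FFTW 3.x linked with the single-precision library (-lfftw3f); the sizes n0, n1 and the function name are illustrative. Every identifier, including the complex type and the allocators, takes the fftwf_ prefix:

```c
#include <fftw3.h>   /* link with -lfftw3f for the single-precision build */

/* Illustrative sketch: plan, execute, and clean up a 2-D real-to-complex
   transform entirely through the fftwf_ (float) API. */
void run_single_precision_fft(int n0, int n1)
{
    float         *in  = fftwf_alloc_real((size_t)n0 * n1);
    fftwf_complex *out = fftwf_alloc_complex((size_t)n0 * (n1 / 2 + 1));

    for (size_t i = 0; i < (size_t)n0 * n1; i++)
        in[i] = 0.0f;                      /* fill with real input data */

    fftwf_plan plan = fftwf_plan_dft_r2c_2d(n0, n1, in, out, FFTW_ESTIMATE);
    fftwf_execute(plan);

    fftwf_destroy_plan(plan);
    fftwf_free(in);
    fftwf_free(out);
}
```

Mixing prefixes (an fftw_plan holding an fftwf_ transform, or double buffers passed to float routines) will compile in some setups but misbehave at runtime, so keep the whole call chain on one prefix.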
Convert single precision floating point to half precision floating point
This helps sometimes. Having got a biased exponent of -10, you need to create a denormalized number (with 0 in the exponent field) by shifting the mantissa bits right by 11. That gives you 00000 00000 11000... for the mantissa bits, which you then round up to 00000 00001 - the smallest possible denorm number.
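A bit-level sketch of that case, assuming the IEEE 754 layouts (half: 5 exponent bits with bias 15, 10 mantissa bits; float: 8 exponent bits with bias 127, 23 mantissa bits). The function name and the simple round-half-up are my own choices, and it only handles positive floats already known to land in half's denormal range (it does not guard against shifts past the word width for much smaller inputs):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: encode a small positive float as a half-precision denormal.
   When the biased half exponent is <= 0 (e.g. -10 in the example), the
   exponent field must be 0, so the mantissa - with its implicit leading
   1 restored - is shifted right far enough to absorb the deficit, then
   rounded. */
uint16_t float_to_half_denorm(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    int unbiased    = (int)((bits >> 23) & 0xFF) - 127; /* float exponent  */
    int half_biased = unbiased + 15;                    /* e.g. -10 here   */

    /* 23 - 10 drops float's extra mantissa bits; 1 - half_biased adds the
       extra shift needed to express the value with exponent field 0.
       For half_biased == -10 that is 13 + 11 = 24. */
    uint32_t mant  = (bits & 0x007FFFFF) | 0x00800000;  /* implicit 1 back */
    int      shift = (23 - 10) + (1 - half_biased);

    uint32_t kept    = mant >> shift;
    uint32_t rounder = (mant >> (shift - 1)) & 1u;      /* round half-up   */
    return (uint16_t)(kept + rounder);                  /* exponent = 0    */
}
```

For the value in the answer (mantissa 1.1 binary at biased half exponent -10, i.e. 1.5 * 2^-25), the kept field is all zeros and the shifted-out bits start with 11, so rounding produces 00000 00001 - the smallest denorm, 2^-24.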