Test data consists of single-precision floating-point numbers in a text file. These must be read into a testbench and applied to a 32-bit std_logic_vector input port. The numbers can be read in as the "real" type, but how does one obtain their 32-bit IEEE 754 representation to store in a std_logic_vector?
A floating-point number consists of a sign, an exponent, and a mantissa. Here the decimal value must be converted back into the actual bit-level representation it has inside the hardware, and that bit pattern stored in a std_logic_vector.
Here are a few numbers from the text file: -0.951835638, -0.154052139, 0.007186272. I need to convert these into their 32-bit binary representation and store them in a std_logic_vector. These numbers were originally single-precision floating point, with an 8-bit exponent and a 23-bit mantissa, and that is the binary layout I want when I store them in the std_logic_vector.
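As a hand-worked example of the target layout (my own arithmetic, not from the original post): -0.951835638 normalizes to -1.903671276 x 2^(-1), so the sign bit is 1, the biased exponent is 127 - 1 = 126 = "01111110", and the 23 fraction bits hold the rounded encoding of 0.903671276.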
Best Answer
The testbench below is the answer to my question. The result of the conversion is stored in slv. The input file contains a column of real numbers of the kind mentioned in the question, i.e. single-precision floating-point numbers generated by a program probably written in C or C++. As expected, there is a difference between real1 and real2, which is another thing I wanted to check.
The float_generic_pkg, which is the basis for float_pkg, defines float32 as follows (the next three snippets are quoted from memory of the IEEE VHDL-2008 sources, so check your simulator's copy for the exact text):
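    subtype float32 is float (8 downto -23);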
Where the float type is:
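    subtype float is (resolved) UNRESOLVED_float;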
Where UNRESOLVED_float is actually:
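    type UNRESOLVED_float is array (INTEGER range <>) of STD_ULOGIC;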
This means that the floating-point type is fundamentally just an array of std_ulogic (with a negative index range for the fraction bits), bit-compatible with a std_logic_vector, and thus not at all the same thing as a real!
Anyway, the code sample below answers the question.
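A minimal sketch of such a testbench, assuming VHDL-2008 and an input file with one number per line (the file name input.txt, the entity name, and the 10 ns pacing are my own illustrative choices):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.float_pkg.all;   -- VHDL-2008; provides to_float/to_slv/to_real
    use std.textio.all;

    entity float_read_tb is
    end entity float_read_tb;

    architecture sim of float_read_tb is
      -- Result of the conversion: the raw IEEE 754 single-precision bit pattern.
      signal slv : std_logic_vector(31 downto 0);
    begin
      stimulus : process
        file infile    : text open read_mode is "input.txt";  -- one number per line
        variable l     : line;
        variable real1 : real;  -- value as parsed from the file
        variable real2 : real;  -- value after the round trip through single precision
      begin
        while not endfile(infile) loop
          readline(infile, l);
          read(l, real1);                     -- textio parses the decimal text as a real
          slv   <= to_slv(to_float(real1));   -- real -> float32 -> plain 32-bit vector
          real2 := to_real(to_float(real1));  -- round trip; differs from real1 by the rounding error
          wait for 10 ns;                     -- illustrative pacing; match the DUT's clocking
        end loop;
        wait;
      end process stimulus;
    end architecture sim;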
The gist of all this is these two lines (shown here with the sketch's names):
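    read(l, real1);                    -- parse the decimal text as a VHDL real
    slv <= to_slv(to_float(real1));    -- round to single precision, then take the raw bits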
The slv signal must be declared as a 32-bit vector; otherwise, the conversions will fail at runtime with a width mismatch. Note that we need float_pkg for its conversion functions, but the float type itself is never declared anywhere in this case: the port stays a plain std_logic_vector.