In hardware design, there is no built-in floating-point. It all comes down to how "wide" you choose to make x1 and x2 in terms of bits.
Of course, you get a more precise result by widening the multiplication operands, but the cost (speed/area) may well be too high for a market-targeted product design to afford.
Technically speaking, you can define x1 as, say, 32 bits wide and treat the lower 16 LSBs as the fractional part, so it can represent values smaller than 1. This is fixed-point representation: the logic has to keep track of the radix-point position (bit 16 in this example) in every x1-related calculation.
PS: if you are only building a behavioral model (like a PLL) or a testbench-related block, and you don't care whether it is synthesizable, then by all means use the "real" type for x1 and x2.