Interactive visualization of various floating point formats.
You can click on the bits to toggle them, or use the input box to enter a floating point number.
Good old IEEE 754 32-bit floating point number.
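As a sketch of how the visualization maps bits to a value, here is a minimal Python decoder for the normal-number case (subnormals, infinities, and NaN are left out), using the standard split of 1 sign bit, 8 exponent bits (bias 127), and 23 mantissa bits:

```python
import struct

def decode_f32(x: float) -> None:
    # Reinterpret the float32 bit pattern as a 32-bit unsigned integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, bias 127
    mantissa = bits & 0x7FFFFF        # 23 bits
    # Normal numbers: value = (-1)^sign * 1.mantissa * 2^(exponent - 127)
    value = (-1) ** sign * (1 + mantissa / 2**23) * 2.0 ** (exponent - 127)
    print(f"{sign} {exponent:08b} {mantissa:023b} -> {value}")

decode_f32(0.15625)  # 0 01111100 01000000000000000000000 -> 0.15625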
TensorFloat-32 (TF32) is NVIDIA's 19-bit format with an 8-bit exponent and 10-bit mantissa, so it keeps float32's range with float16's precision. More info on TensorFloat-32.
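Since TF32 shares float32's exponent, converting to it amounts to dropping the low 13 mantissa bits. A rough sketch of that rounding (round-to-nearest-even; real tensor-core hardware may differ on edge cases, and NaN/infinity are not handled here):

```python
import struct

def round_f32_to_tf32(x: float) -> float:
    # Sketch: drop the low 13 mantissa bits with round-to-nearest-even.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 0xFFF + ((bits >> 13) & 1)   # add half ulp, breaking ties to even
    bits &= ~0x1FFF                      # zero the 13 discarded bits
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFFFFFF))[0]
```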
IEEE 754 half precision floating point number.
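Python's struct module understands half precision directly (format code "e"), so the 1/5/10 bit split (bias 15) can be inspected the same way as float32:

```python
import struct

bits = struct.unpack("<H", struct.pack("<e", 0.1))[0]
sign = bits >> 15                # 1 bit
exponent = (bits >> 10) & 0x1F   # 5 bits, bias 15
mantissa = bits & 0x3FF          # 10 bits
print(f"{sign} {exponent:05b} {mantissa:010b}")  # 0 01011 1001100110
```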
BFloat16 has a larger range than float16 but less precision: it keeps float32's 8-bit exponent but only 7 mantissa bits. Invented by Google Brain.
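Because bfloat16 is just the top half of a float32, conversion only has to round the low 16 bits away. A minimal sketch (round-to-nearest-even, ignoring NaN edge cases):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)  # round-to-nearest-even on low 16 bits
    return (bits >> 16) & 0xFFFF         # bfloat16 is the top half of float32

print(hex(f32_to_bf16_bits(1.0)))   # 0x3f80
print(hex(f32_to_bf16_bits(3.14)))  # 0x4049 (~3.140625)
```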
OCP FP8 E4M3 is a compact floating point format with a 4-bit exponent and a 3-bit mantissa. It supports NaN (all exponent and mantissa bits set) but not infinity. OCP FP8 Specification.
OCP FP8 E5M2 is a compact floating point format with a 5-bit exponent and a 2-bit mantissa. It follows IEEE 754 conventions, so it supports both NaN and infinity. OCP FP8 Specification.
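The two OCP FP8 formats above differ in how the all-ones exponent is used: E5M2 reserves it for infinity and NaN like IEEE 754, while E4M3 reclaims almost all of it for normal numbers (only S.1111.111 is NaN, which is how it reaches a maximum of 448). A sketch decoder for both, written against my reading of the OCP spec:

```python
def decode_ocp_fp8(bits: int, exp_bits: int, man_bits: int) -> float:
    # Decode one 8-bit pattern of OCP FP8 E4M3 (exp_bits=4) or E5M2 (exp_bits=5).
    sign = -1.0 if bits >> (exp_bits + man_bits) else 1.0
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1          # 7 for E4M3, 15 for E5M2
    if exp_bits == 5 and exp == 0b11111:      # E5M2 is IEEE-like
        return sign * (float("inf") if man == 0 else float("nan"))
    if exp_bits == 4 and exp == 0b1111 and man == 0b111:
        return float("nan")                   # E4M3: only S.1111.111 is NaN
    if exp == 0:                              # subnormals (and zero)
        return sign * man * 2.0 ** (1 - bias - man_bits)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

print(decode_ocp_fp8(0x7E, 4, 3))  # 448.0, the E4M3 maximum
print(decode_ocp_fp8(0x7B, 5, 2))  # 57344.0, the E5M2 maximum
```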
FP8 E4M3 FNUZ is a floating point format with a 4-bit exponent and a 3-bit mantissa. FNUZ means finite (no infinities) with an unsigned zero: the bit pattern that would be negative zero encodes NaN instead.
FP8 E5M2 FNUZ is a floating point format with a 5-bit exponent and a 2-bit mantissa. Like E4M3 FNUZ, its only NaN is the would-be negative zero pattern, and it doesn't support infinity.
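Reclaiming infinity and negative zero lets the FNUZ formats shift the exponent bias up by one compared with their OCP counterparts. A sketch decoder, following the conventions of ml_dtypes' float8_e4m3fnuz and float8_e5m2fnuz:

```python
def decode_fp8_fnuz(bits: int, exp_bits: int, man_bits: int) -> float:
    # Sketch of FNUZ decoding; bias is one larger than in the OCP formats.
    if bits == 1 << (exp_bits + man_bits):    # the would-be -0 pattern is the NaN
        return float("nan")
    sign = -1.0 if bits >> (exp_bits + man_bits) else 1.0
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    bias = 1 << (exp_bits - 1)                # 8 for E4M3 FNUZ, 16 for E5M2 FNUZ
    if exp == 0:                              # subnormals (and +0); no -0 exists
        return sign * man * 2.0 ** (1 - bias - man_bits)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

print(decode_fp8_fnuz(0x7F, 4, 3))  # 240.0, the E4M3 FNUZ maximum
```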
OCP FP6 E2M3 is a floating point format with a 2-bit exponent and a 3-bit mantissa. Doesn't support NaN or infinity. OCP Microscaling Formats Specification.
OCP FP6 E3M2 is a floating point format with a 3-bit exponent and a 2-bit mantissa. Doesn't support NaN or infinity. OCP Microscaling Formats Specification.
OCP FP4 (E2M1) is a floating point format with a 2-bit exponent and a 1-bit mantissa. Doesn't support NaN or infinity. OCP Microscaling Formats Specification.
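Since the microscaling formats reserve nothing for NaN or infinity, every bit pattern is a finite number, so the whole format is easy to enumerate. A sketch for FP4 E2M1 (bias 1); the two FP6 formats decode the same way with wider fields:

```python
def decode_fp4_e2m1(bits: int) -> float:
    # Every OCP FP4 pattern is finite: no NaN, no infinity.
    sign = -1.0 if bits >> 3 else 1.0
    exp = (bits >> 1) & 0b11
    man = bits & 0b1
    if exp == 0:                      # subnormal (and zero)
        return sign * man * 0.5
    return sign * (1 + man / 2) * 2.0 ** (exp - 1)  # bias = 1

print([decode_fp4_e2m1(b) for b in range(8)])
# [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```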