The decimal type has no counterpart in C or C++. It is a 128-bit data type that can represent values from ±1.0 × 10⁻²⁸ to approximately ±7.9 × 10²⁸ with 28–29 significant digits, which makes it particularly suitable for financial or scientific calculations requiring a high level of precision.
According to the C# specification, a decimal with an absolute value smaller than 1.0M [6] is exact to the 28th decimal place. A decimal with an absolute value equal to or greater than 1.0M is exact to 28 or 29 significant figures.
[6] You suffix a numeric literal with M or m to indicate that the value should be interpreted as a decimal.
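The M suffix can be sketched in a few lines; the variable names here are illustrative. The sketch also shows the exactness described above: because decimal stores 0.1 exactly, repeated addition accumulates no binary rounding error, unlike double.

```csharp
using System;

// The M (or m) suffix marks a decimal literal; without it,
// 0.1 would be a double and could not be assigned to a decimal.
decimal rate = 0.1M;

// Adding 0.1M ten times yields exactly 1.0M.
decimal sum = 0M;
for (int i = 0; i < 10; i++)
    sum += rate;
Console.WriteLine(sum == 1.0M);   // True
```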
At first sight, it may appear that you can implicitly cast a float or double to a decimal. But note that although the decimal type has greater precision, it has a smaller range than the float and double types. As a result, you cannot implicitly cast a float or double to a decimal, nor can a decimal be implicitly cast to a float, double, or any other numeric type; an explicit cast is required in each direction. Table 9.4 gives the ranges of the different types.
| Simple type | Approximate range |
|---|---|
| float | ±1.5 × 10⁻⁴⁵ to ±3.4 × 10³⁸ |
| double | ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸ |
| decimal | ±1.0 × 10⁻²⁸ to ±7.9 × 10²⁸ |
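The explicit casts described above can be sketched as follows; the values chosen are illustrative. A double that lies outside decimal's range cannot be converted at all, and the cast throws an OverflowException at run time.

```csharp
using System;

double d = 3.5;
decimal m = (decimal)d;      // explicit cast required: double -> decimal
double back = (double)m;     // explicit cast required: decimal -> double

double big = 1e30;           // larger than decimal's maximum of ~7.9 × 10^28
try
{
    decimal tooBig = (decimal)big;   // exceeds decimal's range
}
catch (OverflowException)
{
    Console.WriteLine("value is out of decimal's range");
}
```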