int

The type int is a 32-bit, signed, two's-complement number, as used in virtually every modern CPU. It is the type you should choose by default whenever you carry out integer arithmetic.

example declaration:

int i;

range of values: –2,147,483,648 to 2,147,483,647
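
These bounds are also available in code as constants on the wrapper class java.lang.Integer:

System.out.println(Integer.MIN_VALUE);   // prints -2147483648
System.out.println(Integer.MAX_VALUE);   // prints 2147483647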

literals: int literals come in three varieties, illustrated in the example below:

  • A decimal literal, e.g., 10 or -256

  • With a leading zero, meaning an octal literal, e.g., 077777

  • With a leading 0x, meaning a hexadecimal literal, e.g., 0xA5 or 0Xa5

Case has no significance for any of the letters that can appear in integer literals. If you use octal or hexadecimal and provide a literal that sets the leftmost bit of the receiving number, it represents a negative number. (The arithmetic types are stored in a binary format known as “two's-complement”; google for full details.)
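
A short sketch of all three varieties, with the value each literal denotes noted in a comment (the variable names are just for illustration):

int dec  = 10;          // decimal literal
int neg  = -256;        // unary minus applied to a decimal literal
int oct  = 077777;      // octal literal: equals 32,767 decimal
int hex  = 0xA5;        // hexadecimal literal: equals 165 decimal
int hex2 = 0Xa5;        // same value; case does not matter
int all  = 0xFFFFFFFF;  // leftmost bit set, so this int is -1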

An int literal is a 32-bit quantity, even if its value would fit into a smaller type. Provided its actual value is within range for a smaller type, however, an int literal can be assigned directly to something with fewer bits, such as byte, short, or char. If you try to assign an int literal that is too large into a smaller type, the compiler will insist that you write an explicit conversion, termed a “cast.”
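
A minimal sketch of these assignment rules (the variable names are illustrative):

byte b = 100;             // fine: 100 is within byte range (-128 to 127)
short s = 32000;          // fine: within short range (-32,768 to 32,767)
char c = 65;              // fine: within char range (0 to 65,535); c holds 'A'
// byte bad = 200;        // compile-time error: 200 is out of byte range
byte forced = (byte) 200; // explicit cast compiles; the stored value becomes -56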

When you cast from a larger integer type to a smaller one, the high-order bits are simply dropped. Integer variables and literals can be assigned to floating-point variables without a cast.
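
A brief illustration of both points, with the stored values noted in comments:

int big = 0x12345678;          // 305,419,896 decimal
short chopped = (short) big;   // high-order 16 bits dropped: 0x5678 = 22,136
float f = big;                 // no cast needed; very large values may lose precision
double d = 10;                 // an int literal widens to double without a cast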
