Computer programs work by manipulating values, such as the number 3.14 or the text “Hello World.” The kinds of values that can be represented and manipulated in a programming language are known as types, and one of the most fundamental characteristics of a programming language is the set of types it supports. When a program needs to retain a value for future use, it assigns the value to (or “stores” the value in) a variable. A variable defines a symbolic name for a value and allows the value to be referred to by name. The way that variables work is another fundamental characteristic of any programming language. This chapter explains types, values, and variables in JavaScript.
JavaScript types can be divided into two categories: primitive types and object types. JavaScript’s primitive types include numbers, strings of text (known as strings), and Boolean truth values (known as booleans). A significant portion of this chapter is dedicated to a detailed explanation of the numeric (§2.1) and string (§2.2) types in JavaScript. Booleans are covered in §2.3.
The special JavaScript values null and undefined are primitive values, but they are not numbers, strings, or booleans. Each value is typically considered to be the sole member of its own special type. §2.4 has more about null and undefined. ECMAScript 6 adds a new special-purpose type, known as Symbol, that enables the definition of language extensions without harming backward compatibility. Symbols are covered briefly in §2.5.
Any JavaScript value that is not a number, a string, a boolean, a symbol, null, or undefined is an object. An object (that is, a member of the type object) is a collection of properties where each property has a name and a value (either a primitive value, such as a number or string, or an object). One very special object, the global object, is covered in §2.6, but more general and more detailed coverage of objects is in Chapter 5.
An ordinary JavaScript object is an unordered collection of named values. The language also defines a special kind of object, known as an array, that represents an ordered collection of numbered values. The JavaScript language includes special syntax for working with arrays, and arrays have some special behavior that distinguishes them from ordinary objects. Arrays are the subject of Chapter 6.
In addition to basic objects and arrays, JavaScript defines a number of other useful object types. Set represents a set of values. Map represents a mapping from keys to values. Various “typed array” types facilitate operations on arrays of bytes and other binary data. The RegExp type represents textual patterns and enables sophisticated matching, searching, and replacing operations on strings. The Date type represents dates and times and supports rudimentary date arithmetic. Error and its subtypes represent errors that can arise when executing JavaScript code. All of these types are covered in Chapter 9.
JavaScript differs from more static languages in that functions and classes are not just part of the language syntax: they are themselves values that can be manipulated by JavaScript programs. Like any JavaScript value that is not a primitive value, functions and classes are a specialized kind of object. They are covered in detail in Chapters 7 and 8.
The JavaScript interpreter performs automatic garbage collection for memory management. This means that a JavaScript programmer never needs to worry about destruction or deallocation of objects or other values. When a value is no longer reachable—when a program no longer has any way to refer to it—the interpreter knows it can never be used again and automatically reclaims the memory it was occupying.
JavaScript is an object-oriented language. Loosely, this means that rather than having globally defined functions to operate on values of various types, the types themselves define methods for working with values. To sort the elements of an array a, for example, we don’t pass a to a sort() function. Instead, we invoke the sort() method of a:

a.sort();    // The object-oriented version of sort(a).
Method definition is covered in Chapter 8. Technically, it is only JavaScript objects that have methods. But numbers, strings, boolean, and symbol values behave as if they have methods. In JavaScript, null and undefined are the only values that methods cannot be invoked on.
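For instance, invoking a method on a string or number value works just as if the value were an object; a brief illustration:

```javascript
// Primitive values behave as if they have methods:
let up = "hello".toUpperCase();  // => "HELLO": string method on a primitive
let fixed = (1.234).toFixed(2);  // => "1.23": number method on a primitive

// But null and undefined have no methods:
// null.toString()               // This would throw a TypeError
```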
JavaScript’s object types are mutable and its primitive types are immutable. A value of a mutable type can change: a JavaScript program can change the values of object properties and array elements. Numbers, booleans, null, and undefined are immutable: it doesn’t even make sense to talk about changing the value of a number, for example. Strings can be thought of as arrays of characters, and you might expect them to be mutable. In JavaScript, however, strings are immutable: you can access the text at any index of a string, but JavaScript provides no way to alter the text of an existing string. The differences between mutable and immutable values are explored further in §2.7.
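The distinction shows up directly in code; a short sketch (the variable names are illustrative):

```javascript
// Objects and arrays are mutable: their contents can change in place.
let point = { x: 1, y: 1 };
point.x = 2;               // Alters the existing object
let list = [1, 2, 3];
list[0] = 0;               // Alters the existing array

// Strings are immutable: methods that appear to modify a string
// actually return a new string and leave the original untouched.
let s = "hello";
let t = s.toUpperCase();   // t is "HELLO" ...
s;                         // ... but s is still "hello"
```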
JavaScript converts values liberally from one type to another. If a program expects a string, for example, and you give it a number, it will automatically convert the number to a string for you. And if you use a non-boolean value where a boolean is expected, JavaScript will convert accordingly. The rules for value conversion are explained in §2.8. JavaScript’s liberal value conversion rules affect its definition of equality, and the == equality operator performs type conversions as described in §2.9.1.
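A few of these automatic conversions in action (the behavior is defined by the language; the variable names here are illustrative):

```javascript
let msg = "x is " + 10;  // The number 10 converts to "10" => "x is 10"
let prod = "7" * "4";    // Both strings convert to numbers => 28
let ok = Boolean("");    // => false: the empty string converts to false
"10" == 10;              // => true: == converts the string to a number
"10" === 10;             // => false: === never converts its operands
```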
Constants and variables allow you to use names to refer to values in your programs. Constants are declared with const and variables are declared with let (or with var in older JavaScript code). JavaScript constants and variables are untyped: declarations do not specify what kind of values will be assigned. Variable declaration and assignment are covered in §2.9.
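A quick sketch of the two declaration forms (TAU and count are illustrative names):

```javascript
const TAU = 2 * Math.PI;  // A constant: the name always refers to this value
let count = 0;            // A variable declared with let
count = count + 1;        // Variables can be reassigned...
count = "one";            // ...and, being untyped, can hold any kind of value
```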
As you can see from this long introduction, this is a wide-ranging chapter that explains a lot of fundamental details about how data is represented and manipulated in JavaScript. We’ll begin by diving right in to the details of JavaScript numbers and text.
JavaScript’s primary numeric type, Number, is used to represent integers and to approximate real numbers. JavaScript represents numbers using the 64-bit floating-point format defined by the IEEE 754 standard, which means it can represent numbers as large as ±1.7976931348623157 × 10³⁰⁸ and as small as ±5 × 10⁻³²⁴.
The JavaScript number format allows you to exactly represent all integers between −9007199254740992 (−2⁵³) and 9007199254740992 (2⁵³), inclusive. If you use integer values larger than this, you may lose precision in the trailing digits. Note, however, that certain operations in JavaScript (such as array indexing and the bitwise operators described in Chapter 3) are performed with 32-bit integers. If you need to exactly represent larger integers, see §2.2.5.
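The loss of precision beyond this range is easy to demonstrate:

```javascript
// Every integer up to 2**53 is exactly representable:
let max = 2 ** 53;             // => 9007199254740992
// Beyond it, distinct integers collapse into the same approximation:
let same = (max + 1 === max);  // => true: 2**53 + 1 is not representable
// Number.isSafeInteger() tests for the exactly-representable range:
Number.isSafeInteger(max - 1); // => true
Number.isSafeInteger(max);     // => false
```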
When a number appears directly in a JavaScript program, it’s called a numeric literal. JavaScript supports numeric literals in several formats, as described in the following sections. Note that any numeric literal can be preceded by a minus sign (-) to make the number negative.
In a JavaScript program, a base-10 integer is written as a sequence of digits. For example:
0
3
10000000
In addition to base-10 integer literals, JavaScript recognizes hexadecimal (base-16) values. A hexadecimal literal begins with 0x or 0X, followed by a string of hexadecimal digits. A hexadecimal digit is one of the digits 0 through 9 or the letters a (or A) through f (or F), which represent values 10 through 15. Here are examples of hexadecimal integer literals:

0xff       // => 255: (15*16 + 15)
0xBADCAFE  // => 195939070
In ECMAScript 6 and later, you can also express integers in binary (base 2) or octal (base 8) using the prefixes 0b and 0o (or 0B and 0O) instead of 0x:

0b10101    // => 21: (1*16 + 0*8 + 1*4 + 0*2 + 1*1)
0o377      // => 255: (3*64 + 7*8 + 7*1)
Floating-point literals can have a decimal point; they use the traditional syntax for real numbers. A real value is represented as the integral part of the number, followed by a decimal point and the fractional part of the number.
Floating-point literals may also be represented using exponential notation: a real number followed by the letter e (or E), followed by an optional plus or minus sign, followed by an integer exponent. This notation represents the real number multiplied by 10 to the power of the exponent.
More succinctly, the syntax is:

[digits][.digits][(E|e)[(+|-)]digits]

For example:

3.14
2345.6789
.333333333333333333
6.02e23        // 6.02 × 10²³
1.4738223E-32  // 1.4738223 × 10⁻³²
JavaScript programs work with numbers using the arithmetic operators that the language provides. These include + for addition, - for subtraction, * for multiplication, / for division, and % for modulo (remainder after division). ECMAScript 2016 adds ** for exponentiation. Full details on these and other operators can be found in Chapter 3.

In addition to these basic arithmetic operators, JavaScript supports more complex mathematical operations through a set of functions and constants defined as properties of the Math object:
Math.pow(2,53)           // => 9007199254740992: 2 to the power 53
Math.round(.6)           // => 1.0: round to the nearest integer
Math.ceil(.6)            // => 1.0: round up to an integer
Math.floor(.6)           // => 0.0: round down to an integer
Math.abs(-5)             // => 5: absolute value
Math.max(x,y,z)          // Return the largest argument
Math.min(x,y,z)          // Return the smallest argument
Math.random()            // Pseudo-random number x where 0 <= x < 1.0
Math.PI                  // π: circumference of a circle / diameter
Math.E                   // e: The base of the natural logarithm
Math.sqrt(3)             // => 3**0.5: the square root of 3
Math.pow(3, 1/3)         // => 3**(1/3): the cube root of 3
Math.sin(0)              // Trigonometry: also Math.cos, Math.atan, etc.
Math.log(10)             // Natural logarithm of 10
Math.log(100)/Math.LN10  // Base 10 logarithm of 100
Math.log(512)/Math.LN2   // Base 2 logarithm of 512
Math.exp(3)              // Math.E cubed
ECMAScript 6 defines more functions on the Math object:
Math.cbrt(27)     // => 3: cube root
Math.hypot(3, 4)  // => 5: square root of sum of squares of all arguments
Math.log10(100)   // => 2: Base-10 logarithm
Math.log2(1024)   // => 10: Base-2 logarithm
Math.log1p(x)     // Natural log of (1+x); accurate for very small x
Math.expm1(x)     // Math.exp(x)-1; the inverse of Math.log1p()
Math.sign(x)      // -1, 0, or 1 for arguments <, ==, or > 0
Math.imul(2, 3)   // => 6: optimized multiplication of 32-bit integers
Math.clz32(0xf)   // => 28: number of leading zero bits in a 32-bit integer
Math.trunc(3.9)   // => 3: convert to an integer by truncating fractional part
Math.fround(x)    // Round to nearest 32-bit float number
Math.sinh(x)      // Hyperbolic sine. Also Math.cosh(), Math.tanh()
Math.asinh(x)     // Hyperbolic arcsine. Also Math.acosh(), Math.atanh()
Arithmetic in JavaScript does not raise errors in cases of overflow, underflow, or division by zero. When the result of a numeric operation is larger than the largest representable number (overflow), the result is a special infinity value, which JavaScript prints as Infinity. Similarly, when the absolute value of a negative value becomes larger than the absolute value of the largest representable negative number, the result is negative infinity, printed as -Infinity. The infinite values behave as you would expect: adding, subtracting, multiplying, or dividing them by anything results in an infinite value (possibly with the sign reversed).
Underflow occurs when the result of a numeric operation is closer to zero than the smallest representable number. In this case, JavaScript returns 0. If underflow occurs from a negative number, JavaScript returns a special value known as “negative zero.” This value is almost completely indistinguishable from regular zero and JavaScript programmers rarely need to detect it.
Division by zero is not an error in JavaScript: it simply returns infinity or negative infinity. There is one exception, however: zero divided by zero does not have a well-defined value, and the result of this operation is the special not-a-number value, printed as NaN. NaN also arises if you attempt to divide infinity by infinity, take the square root of a negative number, or use arithmetic operators with non-numeric operands that cannot be converted to numbers.
JavaScript pre-defines global constants Infinity and NaN to hold the positive infinity and not-a-number values, and these values are also available as properties of the Number object:

Infinity                   // A positive number too big to represent
Number.POSITIVE_INFINITY   // The same value
1/0                        // => Infinity
Number.MAX_VALUE * 2       // => Infinity; overflow!
-Infinity                  // A negative number too big to represent
Number.NEGATIVE_INFINITY   // The same value
-1/0                       // => -Infinity
-Number.MAX_VALUE * 2      // => -Infinity
NaN                        // The not-a-number value
Number.NaN                 // The same value, written another way
0/0                        // => NaN
Number.MIN_VALUE/2         // => 0: underflow!
-Number.MIN_VALUE/2        // => -0: negative zero
-1/Infinity                // => -0: also negative 0
-0

// The following Number properties are defined in ECMAScript 6
Number.parseInt()          // Same as the global parseInt() function
Number.parseFloat()        // Same as the global parseFloat() function
Number.isNaN(x)            // Is x the NaN value? Works like x !== x
Number.isFinite(x)         // Is x a number and finite?
Number.isInteger(x)        // Is x an integer?
Number.isSafeInteger(x)    // Is x an integer -(2**53) < x < 2**53?
Number.MIN_SAFE_INTEGER    // => -(2**53 - 1)
Number.MAX_SAFE_INTEGER    // => 2**53 - 1
Number.EPSILON             // => 2**-52: smallest difference between numbers
The not-a-number value has one unusual feature in JavaScript: it does not compare equal to any other value, including itself. This means that you can’t write x === NaN to determine whether the value of a variable x is NaN. Instead, you should write x != x or Number.isNaN(x). Those expressions will be true if, and only if, x is NaN. The global function isNaN() is similar. It returns true if its argument is NaN, or if that argument is a non-numeric value that cannot be converted to a number. The related function Number.isFinite() returns true if its argument is a number other than NaN, Infinity, or -Infinity. The global isFinite() function returns true if its argument is, or can be converted to, a finite number.
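These distinctions are worth seeing side by side:

```javascript
let x = 0/0;            // x holds NaN
x === NaN;              // => false: NaN is not equal even to itself
x !== x;                // => true: only NaN has this property
Number.isNaN(x);        // => true: x really is the NaN value
Number.isNaN("dozen");  // => false: no conversion is performed
isNaN("dozen");         // => true: "dozen" converts to NaN
Number.isFinite("10");  // => false: a string is not a finite number
isFinite("10");         // => true: "10" converts to the finite number 10
```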
The negative zero value is also somewhat unusual. It compares equal (even using JavaScript’s strict equality test) to positive zero, which means that the two values are almost indistinguishable, except when used as a divisor:
let zero = 0;      // Regular zero
let negz = -0;     // Negative zero
zero === negz      // => true: zero and negative zero are equal
1/zero === 1/negz  // => false: Infinity and -Infinity are not equal
There are infinitely many real numbers, but only a finite number of them (18437736874454810627, to be exact) can be represented exactly by the JavaScript floating-point format. This means that when you’re working with real numbers in JavaScript, the representation of the number will often be an approximation of the actual number.
The IEEE-754 floating-point representation used by JavaScript (and just about every other modern programming language) is a binary representation, which can exactly represent fractions like 1/2, 1/8, and 1/1024. Unfortunately, the fractions we use most commonly (especially when performing financial calculations) are decimal fractions: 1/10, 1/100, and so on. Binary floating-point representations cannot exactly represent numbers as simple as 0.1.
JavaScript numbers have plenty of precision and can approximate 0.1 very closely. But the fact that this number cannot be represented exactly can lead to problems. Consider this code:
let x = .3 - .2;  // thirty cents minus 20 cents
let y = .2 - .1;  // twenty cents minus 10 cents
x === y           // => false: the two values are not the same!
x === .1          // => false: .3-.2 is not equal to .1
y === .1          // => true: .2-.1 is equal to .1
Because of rounding error, the difference between the approximations of .3 and .2 is not exactly the same as the difference between the approximations of .2 and .1. It is important to understand that this problem is not specific to JavaScript: it affects any programming language that uses binary floating-point numbers. Also, note that the values x and y in the code above are very close to each other and to the correct value. The computed values are adequate for almost any purpose: the problem arises when we attempt to compare values for equality.
If these floating-point approximations are problematic for your programs, consider using scaled integers. For example, you might manipulate monetary values as integer cents rather than fractional dollars.
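A brief sketch of the scaled-integer approach (the variable names are illustrative):

```javascript
// Fragile: fractional dollars accumulate binary rounding error
let total = 0.10 + 0.20;                 // => 0.30000000000000004, not 0.3

// More robust: work in integer cents, which stay exact,
// and convert to dollars only for display
let cents = 10 + 20;                     // => 30: exact integer arithmetic
let display = (cents / 100).toFixed(2);  // => "0.30"
```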
As of mid-2019, a new JavaScript numeric type, known as BigInt, is nearing standardization. Although it has not yet been formally standardized, it has been implemented in Chrome, Firefox, and Node, and there is an implementation in progress in Safari. As the name implies, BigInt is a numeric type whose values are integers. The type was added to JavaScript mainly to allow the representation of 64-bit integers, which are required for compatibility with many other programming languages and APIs. But BigInt values can have thousands or even millions of digits, should you need to work with numbers that large. (Note, however, that BigInt implementations are not suitable for cryptography because they do not attempt to prevent timing attacks.)
BigInt literals are written as a string of digits followed by a lowercase letter n. By default, they are in base 10, but you can use the 0b, 0o, and 0x prefixes for binary, octal, and hexadecimal BigInts:

1234n                // A not-so-big BigInt literal
0b111111n            // A binary BigInt
0o7777n              // An octal BigInt
0x8000000000000000n  // => 2n**63n: A 64-bit integer
You can use BigInt() as a function for converting regular JavaScript numbers or strings to BigInt values:

BigInt(Number.MAX_SAFE_INTEGER)      // => 9007199254740991n
let string = '1' + '0'.repeat(100);  // 1 followed by 100 zeros.
BigInt(string)                       // => 10n**100n: one googol
Arithmetic with BigInt values works like arithmetic with regular JavaScript numbers, except that division drops any remainder, truncating toward zero:

1000n + 2000n         // => 3000n
3000n - 2000n         // => 1000n
2000n * 3000n         // => 6000000n
3000n / 997n          // => 3n: the quotient is 3
3000n % 997n          // => 9n: and the remainder is 9
(2n ** 131071n) - 1n  // A Mersenne prime with 39457 decimal digits
Although the standard +, -, *, /, %, and ** operators work with BigInt, it is important to understand that you may not mix operands of type BigInt with regular number operands. This may seem confusing at first, but there is a good reason for it. If one numeric type were more general than the other, it would be easy to define arithmetic on mixed operands to simply return a value of the more general type. But neither type is more general than the other: BigInt can represent extraordinarily large values, making it more general than regular numbers. But BigInt can only represent integers, making the regular JavaScript number type more general. There is no way around this problem, so JavaScript sidesteps it by simply not allowing mixed operands to the arithmetic operators.
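Attempting to mix the two types raises a TypeError; converting one operand explicitly resolves the problem:

```javascript
1n + 2n;         // => 3n: two BigInt operands are fine
// 1n + 2;       // TypeError: arithmetic cannot mix BigInt and number
1n + BigInt(2);  // => 3n: explicit conversion allows mixing the values
Number(1n) + 2;  // => 3: converting the BigInt to a number also works
```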
Comparison operators, by contrast, do work with mixed numeric types (but see §2.9.1 for more about the difference between == and ===):

1 < 2n    // => true
2 > 1n    // => true
0 == 0n   // => true
0 === 0n  // => false: the === operator checks for type equality as well
The bitwise operators (described in §3.8.3) generally work with BigInt operands. None of the functions of the Math object accept BigInt operands, however.
JavaScript defines a simple Date class for representing and manipulating the numbers that represent dates and times. Because JavaScript Dates are objects rather than primitive types, however, they are not covered in this chapter. See §9.4 for details.
The JavaScript type for representing text is the string. A string is an immutable ordered sequence of 16-bit values, each of which typically represents a Unicode character. The length of a string is the number of 16-bit values it contains. JavaScript’s strings (and its arrays) use zero-based indexing: the first 16-bit value is at position 0, the second at position 1 and so on. The empty string is the string of length 0. JavaScript does not have a special type that represents a single element of a string. To represent a single 16-bit value, simply use a string that has a length of 1.
To include a string literally in a JavaScript program, simply enclose the characters of the string within a matched pair of single or double quotes (' or "). Double-quote characters may be contained within strings delimited by single-quote characters, and single-quote characters may be contained within strings delimited by double quotes. Here are examples of string literals:

""  // The empty string: it has zero characters
'testing'
"3.14"
'name="myform"'
"Wouldn't you prefer O'Reilly's book?"
"This string\nhas two lines"
"π is the ratio of a circle's circumference to its diameter"
In ECMAScript 6, you can also delimit strings with backticks (`), and strings delimited this way have a special syntax that allows JavaScript expressions to be embedded within the strings. This new kind of string literal is covered in §2.3.4.
The original versions of JavaScript required string literals to be written on a single line, and it is common to see JavaScript code that creates long strings by concatenating single-line strings with the + operator. As of ECMAScript 5, however, you can break a string literal across multiple lines by ending each line but the last with a backslash (\). Neither the backslash nor the line terminator that follows it are part of the string literal. If you need to include a newline character in a string literal, use the character sequence \n (documented below):

// A string representing 2 lines written on one line:
"two\nlines"

// A one-line string written on 3 lines:
"one\
 long\
 line"
Note that when you use single quotes to delimit your strings, you must be careful with English contractions and possessives, such as can’t and O’Reilly’s. Since the apostrophe is the same as the single-quote character, you must use the backslash character (\) to “escape” any apostrophes that appear in single-quoted strings (escapes are explained in the next section).
In client-side JavaScript programming, JavaScript code may contain strings of HTML code, and HTML code may contain strings of JavaScript code. Like JavaScript, HTML uses either single or double quotes to delimit its strings. Thus, when combining JavaScript and HTML, it is a good idea to use one style of quotes for JavaScript and the other style for HTML. In the following example, the string “Thank you” is single-quoted within a JavaScript expression, which is then double-quoted within an HTML event-handler attribute:
<button onclick="alert('Thank you')">Click Me</button>
The backslash character (\) has a special purpose in JavaScript strings. Combined with the character that follows it, it represents a character that is not otherwise representable within the string. For example, \n is an escape sequence that represents a newline character. Another example, mentioned above, is the \' escape, which represents the single quote (or apostrophe) character. This escape sequence is useful when you need to include an apostrophe in a string literal that is contained within single quotes. You can see why these are called escape sequences: the backslash allows you to escape from the usual interpretation of the single-quote character. Instead of using it to mark the end of the string, you use it as an apostrophe:
'You\'re right, it can\'t be a quote'
Table 2-1 lists the JavaScript escape sequences and the characters they represent. Two escape sequences are generic and can be used to represent any character by specifying its Latin-1 or Unicode character code as a hexadecimal number. For example, the sequence \xA9 represents the copyright symbol, which has the Latin-1 encoding given by the hexadecimal number A9. Similarly, the \u escape represents an arbitrary Unicode character specified by four hexadecimal digits; \u03c0 represents the character π, for example.
Sequence | Character represented
---|---
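A few of these escape sequences in use (a brief illustration):

```javascript
let copyright = "\xA9";      // => "©": \xA9 is the Latin-1 copyright symbol
let pi = "\u03c0";           // => "π": \u takes four hexadecimal digits
let contraction = 'can\'t';  // => "can't": \' escapes the apostrophe
let lines = "one\ntwo";      // \n embeds a newline character
```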