© Jesper Wisborg Krogh 2020
J. W. Krogh, MySQL 8 Query Performance Tuning, https://doi.org/10.1007/978-1-4842-5584-1_18

18. Locking Theory and Monitoring

Jesper Wisborg Krogh1 
(1)
Hornsby, NSW, Australia
 

Together with the optimizer that was discussed in the previous chapter, locks are possibly the most complex topic of query optimization. When locks show their worst side, they can cause gray hairs for even the best expert on locks. However, do not despair. This chapter will introduce you to most of the knowledge of locks you will need – and possibly some more. After you have read this chapter, you should be able to start investigating locks and use that to gain further knowledge.

The chapter starts out discussing why locks are needed and the lock access levels. The largest section of the chapter then goes through the most commonly encountered locks in MySQL. The other half of the chapter discusses why lock requests may fail, how to reduce the impact of locks, and how to monitor locks.

Note

Most of the examples include the statements to reproduce the important parts of the outputs (some data will by nature differ from case to case). Since the interesting parts of locking often include more than one connection, the prompts for the queries have been set to indicate which connection to use for which queries when that is important. For example, Connection 1> means that the query should be executed by the first of your connections.

Why Are Locks Needed?

A world where locking in databases is not needed can seem perfect. However, the price would be so high that only few use cases could accept it, and it is impossible for a general-purpose database such as MySQL. If you do not have locking, you cannot have any concurrency. Imagine that only one connection is ever allowed to the database (you can argue that this in itself is a lock and thus the system is not lock-free anyway) – that is not very useful for most applications.

Note

Often what is called a lock in MySQL is really a lock request which can be in a granted or pending state.

When you have several connections executing queries concurrently, you need some way to ensure that the connections do not step on each other’s toes. That is where locks enter the picture. You can think of locks in the same way as traffic signals in road traffic that regulate access to the resources to avoid accidents. In a road intersection, it is necessary to ensure that two cars do not cross each other’s path and collide. In a database, it is necessary to ensure two queries’ access to the data does not conflict.

As there are different levels of controlling the access to an intersection – yielding, stop signs, and traffic lights – there are different lock types in a database.

Lock Access Levels

The lock access level determines which kind of access a given lock allows. It is also sometimes called the lock type, but since that can be confused with the lock granularity, the term lock access level is used here.

There are essentially two access levels: shared or exclusive. The access levels do what their names suggest. A shared lock allows other connections to also get a shared lock. This is the most permissive lock access level. An exclusive lock only allows that one connection to get the lock. A shared lock is also known as a read lock, and an exclusive lock is also known as a write lock.

MySQL also has a concept called intention locks which specify the intention of a transaction. An intention lock can be either shared or exclusive. Intention locks are discussed in more detail when implicit table locks are covered in the next section that goes through the main lock granularity levels in MySQL.

Lock Granularity

MySQL uses a range of different lock granularities (also called lock types) to control access to the data. Using different lock granularities makes it possible to allow concurrent access to the data to the greatest extent possible. This section will go through the main granularity levels used by MySQL.

User-Level Locks

User-level locks are an explicit lock type the application can use to protect, for example, a workflow. They are not often used, but they can be useful for some complex tasks where you want to serialize access. All user locks are exclusive locks and are obtained using a name which can be up to 64 characters long.

You manipulate user-level locks with a set of functions:
  • GET_LOCK(name, timeout): Obtains a lock by specifying the name of the lock. The second argument is a timeout in seconds; if the lock is not obtained within that time, the function returns 0. If the lock is obtained, the return value is 1. If the timeout is negative, the function will wait indefinitely for the lock to become available.

  • IS_FREE_LOCK(name): Checks whether the named lock is available or not. The function returns 1 if the lock is available and 0 if it is not available.

  • IS_USED_LOCK(name): This is the opposite of the IS_FREE_LOCK() function. The function returns the connection id of the connection holding the lock if the lock is in use (not available) and NULL if it is not in use (available).

  • RELEASE_ALL_LOCKS(): Releases all user-level locks held by the connection. The return value is the number of locks released.

  • RELEASE_LOCK(name): Releases the lock with the provided name. The return value is 1 if the lock is released, 0 if the lock exists but is not owned by the connection, or NULL if the lock does not exist.
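As a minimal sketch of how these functions are typically combined to serialize a workflow, the following obtains a lock, performs the protected work, and releases the lock again (the lock name my_app_maintenance is made up for the example):
mysql> SELECT GET_LOCK('my_app_maintenance', 10);
-- Returns 1 if the lock was obtained within 10 seconds and 0 on timeout.
-- Perform the work that must be serialized here.
mysql> SELECT RELEASE_LOCK('my_app_maintenance');
-- Returns 1 when the lock was held by this connection and is now released.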

It is possible to obtain multiple locks by invoking GET_LOCK() multiple times. If you do that, be careful to ensure locks are obtained in the same order by all users as otherwise a deadlock can occur. If a deadlock occurs, an ER_USER_LOCK_DEADLOCK error (error code 3058) is returned. An example of this is shown in Listing 18-1.
-- Connection 1
Connection 1> SELECT GET_LOCK('my_lock_1', -1);
+---------------------------+
| GET_LOCK('my_lock_1', -1) |
+---------------------------+
|                         1 |
+---------------------------+
1 row in set (0.0100 sec)
-- Connection 2
Connection 2> SELECT GET_LOCK('my_lock_2', -1);
+---------------------------+
| GET_LOCK('my_lock_2', -1) |
+---------------------------+
|                         1 |
+---------------------------+
1 row in set (0.0006 sec)
Connection 2> SELECT GET_LOCK('my_lock_1', -1);
-- Connection 1
Connection 1> SELECT GET_LOCK('my_lock_2', -1);
ERROR: 3058: Deadlock found when trying to get user-level lock; try rolling back transaction/releasing locks and restarting lock acquisition.
Listing 18-1

A deadlock for user-level locks

When Connection 2 attempts to get the my_lock_1 lock, the statement will block until Connection 1 attempts to get the my_lock_2 lock triggering the deadlock. If you obtain multiple locks, you should be prepared to handle deadlocks. Note that for user-level locks, a deadlock does not trigger a rollback of the transaction.

The granted and pending user-level locks can be found in the performance_schema.metadata_locks table with the OBJECT_TYPE column set to USER LEVEL LOCK as shown in Listing 18-2. The locks listed assume you left the system as it was at the time the deadlock in Listing 18-1 was triggered. Note that some values such as OBJECT_INSTANCE_BEGIN will be different for you.
mysql> SELECT *
         FROM performance_schema.metadata_locks
        WHERE OBJECT_TYPE = 'USER LEVEL LOCK'\G
*************************** 1. row ***************************
          OBJECT_TYPE: USER LEVEL LOCK
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: my_lock_1
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2600542870816
            LOCK_TYPE: EXCLUSIVE
        LOCK_DURATION: EXPLICIT
          LOCK_STATUS: GRANTED
               SOURCE: item_func.cc:4840
      OWNER_THREAD_ID: 76
       OWNER_EVENT_ID: 33
*************************** 2. row ***************************
          OBJECT_TYPE: USER LEVEL LOCK
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: my_lock_2
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2600542868896
            LOCK_TYPE: EXCLUSIVE
        LOCK_DURATION: EXPLICIT
          LOCK_STATUS: GRANTED
               SOURCE: item_func.cc:4840
      OWNER_THREAD_ID: 62
       OWNER_EVENT_ID: 25
*************************** 3. row ***************************
          OBJECT_TYPE: USER LEVEL LOCK
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: my_lock_1
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2600542870336
            LOCK_TYPE: EXCLUSIVE
        LOCK_DURATION: EXPLICIT
          LOCK_STATUS: PENDING
               SOURCE: item_func.cc:4840
      OWNER_THREAD_ID: 62
       OWNER_EVENT_ID: 26
3 rows in set (0.0086 sec)
Listing 18-2

Listing user-level locks

The OBJECT_TYPE for user-level locks is USER LEVEL LOCK, and the lock duration is EXPLICIT as it is up to the user or application to release the lock again. In row 1, the connection with Performance Schema thread id 76 has been granted the my_lock_1 lock, and in row 3 thread id 62 is waiting (pending) for it to be granted. Thread id 62 also has a granted lock which is included in row 2.

The next level of locks involves non-data table-level locks. The first of these that will be discussed is the flush lock.

Flush Locks

A flush lock will be familiar to most who have been involved in taking backups. It is taken when you use the FLUSH TABLES statement and lasts for the duration of the statement unless you add WITH READ LOCK, in which case a shared (read) lock is held until the lock is explicitly released. An implicit table flush is also triggered at the end of the ANALYZE TABLE statement. The flush lock is a table-level lock. The read lock taken with FLUSH TABLES WITH READ LOCK is discussed later under explicit locks.
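As a sketch of how the read lock is typically used around a backup, the statements could look as follows (the steps between locking and unlocking are placeholders for the actual backup work):
mysql> FLUSH TABLES WITH READ LOCK;
-- The instance can now only be read. Copy the data files or record
-- the binary log position as required by the backup method.
mysql> UNLOCK TABLES;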

A common cause of lock issues for the flush lock is long-running queries. A FLUSH TABLES statement cannot flush a table as long as there is a query that has the table open. This means that if you execute a FLUSH TABLES statement while there is a long-running query using one or more of the tables being flushed, then the FLUSH TABLES statement will block all other statements needing any of those tables until the lock situation has been resolved.

Flush locks are subject to the lock_wait_timeout setting. If it takes more than lock_wait_timeout seconds to obtain the lock, MySQL will abandon the lock. The same applies if the FLUSH TABLES statement is killed. However, due to the internals of MySQL, a lower-level lock called the table definition cache (TDC) version lock cannot always be released until the long-running query completes.1 That means that the only way to be sure the lock problem is resolved is to kill the long-running query, but be aware that if the query has changed many rows, it may take a long time to roll back the query.

When there is lock contention around the flush lock, both the FLUSH TABLES statement and the queries started subsequently will have the state set to “Waiting for table flush.” Listing 18-3 shows an example of this involving three queries. To reproduce the scenario yourself, start out executing the three queries with the prompt set to Connection N> with N being 1, 2, or 3 representing three different connections. The query against sys.session is done in a fourth connection. All queries must be executed before the first completes (takes three minutes).
-- Connection 1
Connection 1> SELECT *, SLEEP(180) FROM world.city WHERE ID = 130;
-- Connection 2
Connection 2> FLUSH TABLES world.city;
-- Connection 3
Connection 3> SELECT * FROM world.city WHERE ID = 201;
-- Connection 4
Connection 4> SELECT thd_id, conn_id, state,
                     current_statement
                FROM sys.session
               WHERE current_statement IS NOT NULL
                     AND thd_id <> PS_CURRENT_THREAD_ID()\G
*************************** 1. row ***************************
           thd_id: 61
          conn_id: 21
            state: User sleep
current_statement: SELECT *, SLEEP(180) FROM world.city WHERE ID = 130
*************************** 2. row ***************************
           thd_id: 62
          conn_id: 22
            state: Waiting for table flush
current_statement: FLUSH TABLES world.city
*************************** 3. row ***************************
           thd_id: 64
          conn_id: 23
            state: Waiting for table flush
current_statement: SELECT * FROM world.city WHERE ID = 201
3 rows in set (0.0598 sec)
Listing 18-3

Example of waiting for a flush lock

The example uses the sys.session view; similar results can be obtained using performance_schema.threads and SHOW PROCESSLIST. In order to reduce the output to only include the queries of relevance for the flush lock discussion, the current thread and threads without ongoing queries are filtered out.

The connection with conn_id = 21 is executing a slow query that uses the world.city table (a SLEEP(180) was used to ensure it took a long time). In the meantime, conn_id = 22 executed a FLUSH TABLES statement for the world.city table. Because the first query still has the table open (it is released once the query completes), the FLUSH TABLES statement ends up waiting for the table flush lock. Finally, conn_id = 23 attempts to query the table and thus must wait for the FLUSH TABLES statement.

Another non-data table lock is a metadata lock.

Metadata Locks

Metadata locks are one of the newer lock types in MySQL. They were introduced in MySQL 5.5, and their purpose is to protect the schema, so it does not get changed while queries or transactions rely on the schema to be unchanged. Metadata locks work at the table level, but they should be considered as an independent lock type to table locks as they do not protect the data in the tables.

SELECT statements and DML queries take a shared metadata lock, whereas DDL statements take an exclusive lock. A connection takes a metadata lock on a table when the table is first used and keeps the lock until the end of the transaction. While the metadata lock is held, no other connection is allowed to change the schema definition of the table. However, other connections that execute SELECT statements and DML statements are not restricted. Usually the biggest gotcha with respect to metadata locks is idle transactions preventing DDL statements from starting their work.

If you encounter a conflict around a metadata lock, you will see the query state in the process list set to “Waiting for table metadata lock.” An example of this including queries to set up is shown in Listing 18-4.
-- Connection 1
Connection 1> SELECT CONNECTION_ID();
+-----------------+
| CONNECTION_ID() |
+-----------------+
|              21 |
+-----------------+
1 row in set (0.0003 sec)
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> SELECT * FROM world.city WHERE ID = 130\G
*************************** 1. row ***************************
         ID: 130
       Name: Sydney
CountryCode: AUS
   District: New South Wales
 Population: 3276207
1 row in set (0.0005 sec)
-- Connection 2
Connection 2> SELECT CONNECTION_ID();
+-----------------+
| CONNECTION_ID() |
+-----------------+
|              22 |
+-----------------+
1 row in set (0.0003 sec)
Connection 2> OPTIMIZE TABLE world.city;
-- Connection 3
Connection 3> SELECT thd_id, conn_id, state,
                     current_statement,
                     last_statement
                FROM sys.session
               WHERE conn_id IN (21, 22)\G
*************************** 1. row ***************************
           thd_id: 61
          conn_id: 21
            state: NULL
current_statement: SELECT * FROM world.city WHERE ID = 130
   last_statement: SELECT * FROM world.city WHERE ID = 130
*************************** 2. row ***************************
           thd_id: 62
          conn_id: 22
            state: Waiting for table metadata lock
current_statement: OPTIMIZE TABLE world.city
   last_statement: NULL
2 rows in set (0.0549 sec)
Listing 18-4

Example of waiting for table metadata lock

In this example, the connection with conn_id = 21 has an ongoing transaction and in the previous statement queried the world.city table (the current statement in this case is the same as it is not cleared until the next statement is executed). While the transaction is still active, conn_id = 22 has executed an OPTIMIZE TABLE statement which is now waiting for the metadata lock. (Yes, OPTIMIZE TABLE does not change the schema definition, but as a DDL statement it is still affected by the metadata lock.)

It is convenient when it is the current or last statement that is the cause of the metadata lock. In more general cases, you can use the performance_schema.metadata_locks table with the OBJECT_TYPE column set to TABLE to find granted and pending metadata locks. Listing 18-5 shows an example of granted and pending metadata locks using the same setup as in the previous example. Chapter 22 goes into more detail about investigating metadata locks.
-- Connection 3
Connection 3> SELECT *
                FROM performance_schema.metadata_locks
               WHERE OBJECT_SCHEMA = 'world'
                     AND OBJECT_NAME = 'city'\G
*************************** 1. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2195760373456
            LOCK_TYPE: SHARED_READ
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:6014
      OWNER_THREAD_ID: 61
       OWNER_EVENT_ID: 53
*************************** 2. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2194784109632
            LOCK_TYPE: SHARED_NO_READ_WRITE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: PENDING
               SOURCE: sql_parse.cc:6014
      OWNER_THREAD_ID: 62
       OWNER_EVENT_ID: 26
2 rows in set (0.0007 sec)
-- Connection 1
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0003 sec)
Listing 18-5

Example of metadata locks

In the example, thread id 61 (the same as conn_id = 21 from the sys.session output) owns a shared read lock on the world.city table due to an ongoing transaction, and thread id 62 is waiting for a lock as it is trying to execute a DDL statement on the table.

A special case of metadata locks are locks taken explicitly with the LOCK TABLES statement.

Explicit Table Locks

Explicit table locks are taken with the LOCK TABLES and the FLUSH TABLES WITH READ LOCK statements. With the LOCK TABLES statement, it is possible to take shared or exclusive locks; FLUSH TABLES WITH READ LOCK always takes a shared lock. The tables remain locked until they are explicitly released with the UNLOCK TABLES statement. When FLUSH TABLES WITH READ LOCK is executed without listing any tables, the global read lock (i.e., affecting all tables) is taken. While these locks also protect the data, they are considered as metadata locks in MySQL.

Explicit table locks, other than FLUSH TABLES WITH READ LOCK in connection with backups, are not often used with InnoDB as InnoDB's sophisticated lock features are in most cases superior to handling locks yourself. However, if you really do need to lock entire tables, explicit locks can be useful as they are very cheap for MySQL to check.

An example of a connection taking an explicit read lock on the world.country and world.countrylanguage tables and a write lock on the world.city table is
mysql> LOCK TABLES world.country READ,
                   world.countrylanguage READ,
                   world.city WRITE;
Query OK, 0 rows affected (0.0500 sec)
When you take explicit locks, you are only allowed to use the tables you have locked and in accordance with the requested locks. This means you will get an error if you take a read lock and attempt to write to the table (ER_TABLE_NOT_LOCKED_FOR_WRITE) or if you try to use a table you did not take a lock for (ER_TABLE_NOT_LOCKED), for example:
mysql> UPDATE world.country
          SET Population = Population + 1
        WHERE Code = 'AUS';
ERROR: 1099: Table 'country' was locked with a READ lock and can't be updated
mysql> SELECT *
         FROM sakila.film
        WHERE film_id = 1;
ERROR: 1100: Table 'film' was not locked with LOCK TABLES

Since explicit locks are considered metadata locks, the symptoms and information in the performance_schema.metadata_locks table are the same as for implicit metadata locks.

Another table-level lock but handled implicitly is plainly called a table lock.

Implicit Table Locks

MySQL takes implicit table locks when a table is queried. Table locks do not play a large role for InnoDB tables except for flush, metadata, and explicit locks as InnoDB uses record locks to allow concurrent access to a table as long as the transactions do not modify the same rows (roughly speaking – as the next subsections show – there is more to it than that).

InnoDB does however work with the concept of intention locks at the table level. Since you are likely to encounter those when investigating lock issues, it is worth familiarizing yourself with them. As mentioned in the discussion of lock access levels, intention locks mark what the intention of the transaction is. If you use an explicit LOCK TABLES statement, the table will be locked directly with the access level you have requested.

For locks taken by transactions, an intention lock is taken first, and it may then be upgraded if needed. To get a shared lock, the transaction first takes an intention shared lock and then the shared lock. Similarly, for an exclusive lock, an intention exclusive lock is first taken. Some examples of intention locks are as follows:
  • A SELECT ... FOR SHARE statement takes an intention shared lock on the tables queried. The SELECT ... LOCK IN SHARE MODE syntax is a synonym.

  • A SELECT ... FOR UPDATE statement takes an intention exclusive lock on the tables queried.

  • A DML statement (not including SELECT) takes an intention exclusive lock on the modified tables. If a foreign key column is modified, an intention shared lock is taken on the parent table.

Two intention locks are always compatible with each other. This means that even if a transaction has an intention exclusive lock, it will not prevent another transaction from taking an intention lock. It will however stop the other transaction from upgrading its intention lock to a full lock. Table 18-1 shows the compatibility between the lock types. Shared locks are denoted S and exclusive locks X. Intention locks are prefixed I, so IS is an intention shared lock and IX is an intention exclusive lock.
Table 18-1

InnoDB lock compatibility

 

                           Exclusive (X)   Intention Exclusive (IX)   Shared (S)   Intention Shared (IS)
Exclusive (X)                    ✗                    ✗                    ✗                  ✗
Intention Exclusive (IX)         ✗                    ✓                    ✗                  ✓
Shared (S)                       ✗                    ✗                    ✓                  ✓
Intention Shared (IS)            ✗                    ✓                    ✓                  ✓

In the table, a checkmark indicates that the two locks are compatible, whereas a cross mark indicates that the two locks conflict with each other. Intention locks only conflict with the full shared and exclusive locks. An exclusive lock conflicts with all other locks including both intention lock types. A shared lock conflicts only with an exclusive lock and an intention exclusive lock.

Why are the intention locks even necessary? They allow InnoDB to resolve the lock requests in order without blocking compatible operations. The details are beyond the scope of this discussion. The important thing is that you know that the intention locks exist, so when you see them you know where they come from.

The table-level locks can be found in the performance_schema.data_locks table with the LOCK_TYPE column set to TABLE. Listing 18-6 shows an example of an intention shared lock.
-- Connection 1
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> SELECT *
                FROM world.city
               WHERE ID = 130
                 FOR SHARE;
Query OK, 1 row affected (0.0010 sec)
-- Connection 2
Connection 2> SELECT *
                FROM performance_schema.data_locks
               WHERE LOCK_TYPE = 'TABLE'\G
*************************** 1. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098223824:1720:2195068346872
ENGINE_TRANSACTION_ID: 283670074934480
            THREAD_ID: 61
             EVENT_ID: 81
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2195068346872
            LOCK_TYPE: TABLE
            LOCK_MODE: IS
          LOCK_STATUS: GRANTED
            LOCK_DATA: NULL
1 row in set (0.0354 sec)
-- Connection 1
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0003 sec)
Listing 18-6

Example of an InnoDB intention shared lock

This shows an intention shared lock on the world.city table. Notice that the ENGINE is set to INNODB and that LOCK_DATA is NULL. The values of the ENGINE_LOCK_ID, ENGINE_TRANSACTION_ID, and OBJECT_INSTANCE_BEGIN columns will be different if you execute the same query.

As mentioned, InnoDB’s main access level protection is at the record level, so let’s look at those.

Record Locks

Record locks are often called row locks; however, they cover more than just rows as they also include index and gap locks. These are typically the locks that are meant when talking about InnoDB locks. They are fine-grained locks that aim at locking the least amount of data possible while still ensuring data integrity.

A record lock can be shared or exclusive and affects just the rows and indexes accessed by the transaction. The duration of exclusive locks is usually the transaction, with an exception being, for example, delete-marked records used for uniqueness checks in INSERT INTO ... ON DUPLICATE KEY and REPLACE statements. For shared locks, the duration can depend on the transaction isolation level as discussed in “Transaction Isolation Levels” in the section “Reduce Locking Issues.”

Record locks can be found using the performance_schema.data_locks table that was also used to find intention locks at the table level. Listing 18-7 shows an example of the locks from updating rows in the world.city table using the secondary index CountryCode.
-- Connection 1
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> UPDATE world.city
                 SET Population = Population + 1
               WHERE CountryCode = 'LUX';
Query OK, 1 row affected (0.0009 sec)
Rows matched: 1  Changed: 1  Warnings: 0
-- Connection 2
Connection 2> SELECT *
                FROM performance_schema.data_locks\G
*************************** 1. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098223824:1720:2195068346872
ENGINE_TRANSACTION_ID: 117114
            THREAD_ID: 61
             EVENT_ID: 121
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2195068346872
            LOCK_TYPE: TABLE
            LOCK_MODE: IX
          LOCK_STATUS: GRANTED
            LOCK_DATA: NULL
*************************** 2. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098223824:507:30:1112:2195068344088
ENGINE_TRANSACTION_ID: 117114
            THREAD_ID: 61
             EVENT_ID: 121
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: CountryCode
OBJECT_INSTANCE_BEGIN: 2195068344088
            LOCK_TYPE: RECORD
            LOCK_MODE: X
          LOCK_STATUS: GRANTED
            LOCK_DATA: 'LUX', 2452
*************************** 3. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098223824:507:20:113:2195068344432
ENGINE_TRANSACTION_ID: 117114
            THREAD_ID: 61
             EVENT_ID: 121
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: PRIMARY
OBJECT_INSTANCE_BEGIN: 2195068344432
            LOCK_TYPE: RECORD
            LOCK_MODE: X,REC_NOT_GAP
          LOCK_STATUS: GRANTED
            LOCK_DATA: 2452
*************************** 4. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098223824:507:30:1113:2195068344776
ENGINE_TRANSACTION_ID: 117114
            THREAD_ID: 61
             EVENT_ID: 121
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: CountryCode
OBJECT_INSTANCE_BEGIN: 2195068344776
            LOCK_TYPE: RECORD
            LOCK_MODE: X,GAP
          LOCK_STATUS: GRANTED
            LOCK_DATA: 'LVA', 2434
4 rows in set (0.0005 sec)
-- Connection 1
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0685 sec)
Listing 18-7

Example of InnoDB record locks

The first row is the intention exclusive table lock that has already been discussed. The second row is a next-key lock (more on this shortly) on the CountryCode index for the value (‘LUX’, 2452) where ‘LUX’ is the country code used in the WHERE clause and 2452 is the primary key id added to the nonunique secondary index. The city with ID = 2452 is the only city matching the WHERE clause, and the primary key record (the row itself) is shown in the third row of the output. The lock mode is X,REC_NOT_GAP which means it is an exclusive lock on the record but not on the gap.

What is a gap? An example is shown in the fourth row of the output. Gap locks are so important that the discussion of the gap lock is split out into its own subsection.

Gap Locks, Next-Key Locks, and Predicate Locks

A gap lock protects the space between two records. This can be in the row through the clustered index or in a secondary index. Before the first record in an index page and after the last in the page, there are pseudo-records called the infimum record and supremum record, respectively. Gap locks are often the lock type causing the most confusion. Experience from studying lock issues is the best way to become familiar with them.

Consider the query from the previous example:
UPDATE world.city
   SET Population = Population + 1
 WHERE CountryCode = 'LUX';

This query changes the population of all cities with CountryCode = 'LUX'. What happens if a new city is inserted between the update and the commit of the transaction? If the UPDATE and INSERT statements commit in the same order they are executed, everything is in principle fine. However, if the changes are committed in the opposite order, the result is inconsistent, as it would be expected that the inserted row had also been updated.

This is where the gap lock comes into play. It guards the space where new records (including records moved from a different position) would be inserted, so it is not changed until the transaction holding the gap lock is completed. If you look at the last columns of the fourth row in the output from the example in Listing 18-7, you can see an example of a gap lock:
           INDEX_NAME: CountryCode
OBJECT_INSTANCE_BEGIN: 2195068344776
            LOCK_TYPE: RECORD
            LOCK_MODE: X,GAP
          LOCK_STATUS: GRANTED
            LOCK_DATA: 'LVA', 2434

This is an exclusive gap lock on the CountryCode index for the value (‘LVA’, 2434). Since the query requested to update all rows with the CountryCode set to “LUX”, the gap lock ensures that no new rows are inserted for the “LUX” country code. The country code “LVA” is the next value in the CountryCode index, so the gap between “LUX” and “LVA” is protected with an exclusive lock. On the other hand, it is still possible to insert new cities with CountryCode = 'LVA'. In some places this is referred to as a “gap before record” which makes it easier to understand how the gap lock works.
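You can observe the effect of the gap lock yourself. Assuming the transaction from Listing 18-7 is still open, an attempt in a second connection to insert a new city for Luxembourg blocks until the first transaction commits or rolls back (the ID and city data are made up for the example):
-- Connection 2
Connection 2> INSERT INTO world.city
              VALUES (4090, 'Esch-sur-Alzette', 'LUX',
                      'Luxembourg', 27870);
-- Blocks: the insert intention lock must wait for the gap lock
-- on the CountryCode index held by Connection 1.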

Gap locks are taken to a much lesser degree when you use the READ COMMITTED transaction isolation level rather than REPEATABLE READ or SERIALIZABLE. This is discussed further in “Transaction Isolation Levels” in the section “Reduce Locking Issues.”

Related to gap locks are next-key locks and predicate locks. A next-key lock is the combination of a record lock and a gap lock on the gap before the record. This is actually the default lock type in InnoDB, and thus you will just see it as S and X in the lock outputs. In the example that has been discussed in this and the previous subsection, the lock on the CountryCode index for the value (‘LUX’, 2452) and the gap before it is an example of a next-key lock. The relevant parts of the output in Listing 18-7 from the performance_schema.data_locks table are
*************************** 2. row ***************************
           INDEX_NAME: CountryCode
            LOCK_TYPE: RECORD
            LOCK_MODE: X
          LOCK_STATUS: GRANTED
            LOCK_DATA: 'LUX', 2452
*************************** 3. row ***************************
           INDEX_NAME: PRIMARY
            LOCK_TYPE: RECORD
            LOCK_MODE: X,REC_NOT_GAP
          LOCK_STATUS: GRANTED
            LOCK_DATA: 2452
*************************** 4. row ***************************
           INDEX_NAME: CountryCode
            LOCK_TYPE: RECORD
            LOCK_MODE: X,GAP
          LOCK_STATUS: GRANTED
            LOCK_DATA: 'LVA', 2434

So to recapitulate, row 2 is the next-key lock, row 3 is the record lock on the primary key (the row), and row 4 is a gap lock between “LUX” and “LVA” (or a before-LVA gap lock).

A predicate lock is similar to a gap lock but applies to spatial indexes where an absolute ordering cannot be made and thus a gap lock does not make sense. Instead of a gap lock, for spatial indexes in the REPEATABLE READ and SERIALIZABLE transaction isolation levels, InnoDB creates a predicate lock on the minimum bounding rectangle (MBR) used for the query. This will allow consistent reads by preventing changes to the data within the minimum bounding rectangle.
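There is no spatial data in the world sample database, so the following is only a sketch using a made-up table with a spatial index. In the REPEATABLE READ or SERIALIZABLE transaction isolation level, the locking read takes predicate locks based on the minimum bounding rectangle of the search area instead of gap locks:
mysql> CREATE TABLE db1.geoms (
         id INT UNSIGNED NOT NULL PRIMARY KEY,
         g GEOMETRY NOT NULL SRID 0,
         SPATIAL INDEX (g)
       );
mysql> START TRANSACTION;
mysql> SELECT id
         FROM db1.geoms
        WHERE MBRContains(
                ST_GeomFromText('POLYGON((0 0,0 10,10 10,10 0,0 0))'),
                g)
          FOR UPDATE;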

One final lock type related to records that you should know is insert intention locks.

Insert Intention Locks

Remember that for table locks, InnoDB has intention locks for whether the transaction will use the table in a shared or exclusive manner. Similarly, InnoDB has insert intention locks at the record level. InnoDB uses these locks – as the name suggests – with INSERT statements to signal the intention to other transactions. As such, the lock is on a yet to be created record (so it is a gap lock) rather than on an existing record. The use of insert intention locks can help increase the concurrency at which inserts can be performed.

You are not very likely to see insert intention locks in lock outputs unless an INSERT statement is waiting for a lock to be granted. You can force a situation where this happens by creating a gap lock in another transaction that will prevent the INSERT statement from completing. The example in Listing 18-8 creates a gap lock in Connection 1 and then in Connection 2 attempts to insert a row which conflicts with the gap lock. Finally, in a third connection, the lock information is retrieved.
-- Connection 1
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0004 sec)
Connection 1> SELECT *
                FROM world.city
               WHERE ID > 4079
                 FOR UPDATE;
Empty set (0.0009 sec)
-- Connection 2
Connection 2> SELECT PS_CURRENT_THREAD_ID();
+------------------------+
| PS_CURRENT_THREAD_ID() |
+------------------------+
|                     62 |
+------------------------+
1 row in set (0.0003 sec)
Connection 2> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 2> INSERT INTO world.city
              VALUES (4080, 'Darwin', 'AUS',
                      'Northern Territory', 146000);
-- Connection 3
Connection 3> SELECT *
                FROM performance_schema.data_locks
               WHERE THREAD_ID = 62\G
*************************** 1. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098220336:1720:2195068326968
ENGINE_TRANSACTION_ID: 117144
            THREAD_ID: 62
             EVENT_ID: 119
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2195068326968
            LOCK_TYPE: TABLE
            LOCK_MODE: IX
          LOCK_STATUS: GRANTED
            LOCK_DATA: NULL
*************************** 2. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 2195098220336:507:29:1:2195068320072
ENGINE_TRANSACTION_ID: 117144
            THREAD_ID: 62
             EVENT_ID: 119
        OBJECT_SCHEMA: world
          OBJECT_NAME: city
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: PRIMARY
OBJECT_INSTANCE_BEGIN: 2195068320072
            LOCK_TYPE: RECORD
            LOCK_MODE: X,INSERT_INTENTION
          LOCK_STATUS: WAITING
            LOCK_DATA: supremum pseudo-record
2 rows in set (0.0005 sec)
-- Connection 1
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0004 sec)
-- Connection 2
Connection 2> ROLLBACK;
Query OK, 0 rows affected (0.0004 sec)
Listing 18-8

Example of an insert intention lock

Connection 2 has the Performance Schema thread id 62, so in Connection 3, it is possible just to query for that thread and exclude the locks taken by Connection 1. Notice that for the RECORD lock, the lock mode includes INSERT_INTENTION – the insert intention lock. In this case, the data locked is the supremum pseudo-record, but that can also be the value of the primary key depending on the situation. If you recall the next-key lock discussion, then X means a next-key lock, but this is a special case as the lock is on the supremum pseudo-record, and it is not possible to lock that, so effectively it is just a gap lock on the gap before the supremum pseudo-record.

Another lock that you need to be aware of when inserting data is the auto-increment lock.

Auto-increment Locks

When you insert data into a table that has an auto-increment counter, it is necessary to protect the counter so two transactions are guaranteed to get unique values. If you use statement-based logging to the binary log, there are further restrictions as the auto-increment value is recreated for all rows except the first when the statement is replayed.

InnoDB supports three lock modes, so you can adjust the amount of locking according to your needs. You choose the lock mode with the innodb_autoinc_lock_mode option which takes the values 0, 1, and 2 with 2 being the default in MySQL 8. It requires a restart of MySQL to change the value. The meaning of the values is summarized in Table 18-2.
Table 18-2

Supported values for the innodb_autoinc_lock_mode option

Value   Mode          Description
0       Traditional   The locking behavior of MySQL 5.0 and earlier. The lock is held until the end of the statement, so values are assigned in repeatable and consecutive order.
1       Consecutive   For the INSERT statement where the number of rows is known at the start of the query, the required number of auto-increment values is assigned under a lightweight mutex, and the auto-increment lock is avoided. For statements where the number of rows is not known, the auto-increment lock is taken and held to the end of the statement. This was the default in MySQL 5.7 and earlier.
2       Interleaved   The auto-increment lock is never taken, and the auto-increment values for concurrent inserts may be interleaved. This mode is only safe when binary logging is disabled or binlog_format is set to ROW. It is the default value in MySQL 8.

The higher the value of innodb_autoinc_lock_mode, the less locking is performed. The price to pay is an increased number of gaps in the sequence of auto-increment values and, for innodb_autoinc_lock_mode = 2, the possibility of interleaved values. Unless you cannot use row-based binary logging or have special needs for consecutive auto-increment values, it is recommended to use the value of 2.
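You can check the current value with SELECT @@global.innodb_autoinc_lock_mode. To change it, set the option in the MySQL configuration file and restart MySQL; a minimal sketch of the configuration is
[mysqld]
# Only safe when binary logging is disabled or binlog_format = ROW
innodb_autoinc_lock_mode = 2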

That concludes the discussion of user-level, metadata, and data-level locks. There are a couple other locks related to backups that you should know of.

Backup Locks

The backup lock is an instance-level lock; that is, it affects the system as a whole. It is a new lock introduced in MySQL 8. The backup lock prevents statements that can make a backup inconsistent while still allowing other statements to be executed concurrently with the backup. The statements that are blocked include
  • Statements that create, rename, or remove files. This includes CREATE TABLE, CREATE TABLESPACE, RENAME TABLE, and DROP TABLE statements.

  • Account management statements such as CREATE USER, ALTER USER, DROP USER, and GRANT.

  • DDL statements that do not log their changes to the redo log. This, for example, includes adding an index.

A backup lock is created with the LOCK INSTANCE FOR BACKUP statement, and the lock is released with the UNLOCK INSTANCE statement. It requires the BACKUP_ADMIN privilege to execute LOCK INSTANCE FOR BACKUP. An example of obtaining the backup lock and releasing it again is
mysql> LOCK INSTANCE FOR BACKUP;
Query OK, 0 rows affected (0.00 sec)
mysql> UNLOCK INSTANCE;
Query OK, 0 rows affected (0.00 sec)
Note

At the time of writing, taking a backup lock and releasing it is not allowed when using the X Protocol (connecting through the port specified with mysqlx_port or the socket specified with mysqlx_socket). Attempting to do so returns an ER_PLUGGABLE_PROTOCOL_COMMAND_NOT_SUPPORTED error: ERROR: 3130: Command not supported by pluggable protocols.

Additionally, statements that conflict with the backup lock also take the backup lock. Since DDL statements sometimes consist of several steps, for example, rebuilding a table in a new file and renaming the file, the backup lock can be released between the steps to avoid blocking LOCK INSTANCE FOR BACKUP for longer than necessary.

Backup locks can be found in the performance_schema.metadata_locks table with the OBJECT_TYPE column set to BACKUP LOCK. Listing 18-9 shows an example of a query waiting for a backup lock held by LOCK INSTANCE FOR BACKUP.
-- Connection 1
Connection 1> LOCK INSTANCE FOR BACKUP;
Query OK, 0 rows affected (0.00 sec)
-- Connection 2
Connection 2> OPTIMIZE TABLE world.city;
-- Connection 3
Connection 3> SELECT *
                FROM performance_schema.metadata_locks
               WHERE OBJECT_TYPE = 'BACKUP LOCK'\G
*************************** 1. row ***************************
          OBJECT_TYPE: BACKUP LOCK
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2520402231312
            LOCK_TYPE: SHARED
        LOCK_DURATION: EXPLICIT
          LOCK_STATUS: GRANTED
               SOURCE: sql_backup_lock.cc:101
      OWNER_THREAD_ID: 49
       OWNER_EVENT_ID: 8
*************************** 2. row ***************************
          OBJECT_TYPE: BACKUP LOCK
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 2520403183328
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: PENDING
               SOURCE: sql_base.cc:5400
      OWNER_THREAD_ID: 60
       OWNER_EVENT_ID: 19
2 rows in set (0.0007 sec)
-- Connection 1
Connection 1> UNLOCK INSTANCE;
Query OK, 0 rows affected (0.00 sec)
Listing 18-9

Example of a conflict for the backup lock

In the example, the connection with thread id 49 owns the backup lock, whereas the connection with thread id 60 is waiting for it. Notice that LOCK INSTANCE FOR BACKUP holds a shared lock, whereas the DDL statement requests an intention exclusive lock.

Related to the backup lock is the log lock which has also been introduced to reduce locking during backups.

Log Locks

When you create a backup, you typically want to include information about the log positions the backup is consistent with. In MySQL 5.7 and earlier, you needed the global read lock while obtaining this information. In MySQL 8, the log lock was introduced to allow you to read information such as the executed global transaction identifiers (GTIDs), the binary log position, and the log sequence number (LSN) for InnoDB without taking a global read lock.

The log lock prevents operations that make changes to log-related information. In practice this means commits, FLUSH LOGS, and similar. The log lock is taken implicitly by querying the performance_schema.log_status table. It requires the BACKUP_ADMIN privilege to access the table. Listing 18-10 shows an example output of the log_status table.
mysql> SELECT *
         FROM performance_schema.log_status\G
*************************** 1. row ***************************
    SERVER_UUID: 59e3f95b-e0d6-11e8-94e8-ace2d35785be
          LOCAL: {"gtid_executed": "59e3f95b-e0d6-11e8-94e8-ace2d35785be:1-5343", "binary_log_file": "mysql-bin.000033", "binary_log_position": 3874615}
    REPLICATION: {"channels": []}
STORAGE_ENGINES: {"InnoDB": {"LSN": 7888992157, "LSN_checkpoint": 7888992157}}
1 row in set (0.0004 sec)
Listing 18-10

Example output of the log_status table

That concludes the review of the main lock types in MySQL. What happens when a query requests a lock, but it cannot be granted? Let’s consider that.

Failure to Obtain Locks

The whole idea of locks is to restrict access to objects or records to prevent conflicting operations from executing concurrently. That means that sometimes a lock cannot be granted. What happens in that case depends on the lock requested and the circumstances. Metadata locks (including explicitly requested table locks) operate with a timeout. InnoDB record locks support both a timeout and explicit deadlock detection.

Note

Whether two locks are compatible with each other is very complex to determine. It becomes particularly interesting as the relationship is not symmetric, that is, a lock may be allowed in the presence of another lock, but not vice versa. For example, an insert intention lock must wait for a gap lock, but a gap lock does not have to wait for an insert intention lock. Another example (of lack of transitivity) is that a gap plus record lock must wait for a record-only lock, and an insert intention lock must wait for a gap plus record lock, but an insert intention lock does not need to wait for a record-only lock.

It is important to understand that failures to obtain locks are a fact of life when working with databases. In principle you can use very coarse-grained locks and avoid failed locks except for timeouts – this is what the MyISAM storage engine does, with very poor write concurrency as a result. However, in practice, fine-grained locks are preferred to allow for high concurrency of write workloads, which also introduces the possibility of deadlocks.

The conclusion is that you should always make your application prepared to retry obtaining a lock or to fail gracefully. This applies whether it is an explicit or implicit lock.

Tip

Always be prepared to handle failures to obtain locks. Failing to get a lock is not a catastrophic error and should not normally be considered a bug. That said, as discussed in the section “Reduce Locking Issues,” there are techniques to reduce lock contention that are worth having in mind when developing an application.

The rest of this chapter will discuss the specifics of table-level timeouts, record-level timeouts, and InnoDB deadlocks.

Metadata and Backup Lock Wait Timeouts

When you request a flush, metadata, or backup lock, the attempt to get the lock will time out after lock_wait_timeout seconds. The default timeout is 31536000 seconds (365 days). You can set the lock_wait_timeout option dynamically, both at the global and session scopes, which allows you to adjust the timeout to the specific needs of a given process.

When a timeout occurs, the statement fails with the error ER_LOCK_WAIT_TIMEOUT (error number 1205). For example:
mysql> LOCK TABLES world.city WRITE;
ERROR: 1205: Lock wait timeout exceeded; try restarting transaction

The recommended setting for the lock_wait_timeout option depends on the requirements of the application. It can be an advantage to use a small value to prevent the lock request from blocking other queries for a long time. This will typically require you to implement handling of a lock request failure, for example, by retrying the statement. A large value can on the other hand be useful to avoid having to retry the statement. For the FLUSH TABLES statement, also remember that it interacts with the lower-level table definition cache (TDC) version lock, which may mean that abandoning the statement does not allow subsequent queries to progress. In that case, it can be better to have a high value for lock_wait_timeout to make it clearer what the lock relationship is.
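For example, a backup or maintenance script that should give up quickly rather than leave other queries blocked behind it could lower the timeout for its own session before flushing (the value of five seconds is just an example):
mysql> SET SESSION lock_wait_timeout = 5;
mysql> FLUSH TABLES;
-- If FLUSH TABLES fails with ER_LOCK_WAIT_TIMEOUT (error 1205),
-- retry later or give up instead of blocking other queries.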

InnoDB Lock Wait Timeouts

When a query requests a record-level lock in InnoDB, it is subject to a timeout similarly to the timeout for flush, metadata, and backup locks. Since record-level lock contention is more common than table-level lock contention, and record-level locks increase the potential for deadlocks, the timeout defaults to 50 seconds. It can be set using the innodb_lock_wait_timeout option which can be set both for the global and session scopes.

When a timeout occurs, the query fails with the ER_LOCK_WAIT_TIMEOUT error (error number 1205) just like for a table-level lock timeout. Listing 18-11 shows an example where an InnoDB lock wait timeout occurs.
-- Connection 1
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 130;
Query OK, 1 row affected (0.0005 sec)
Rows matched: 1  Changed: 1  Warnings: 0
-- Connection 2
Connection 2> SET SESSION innodb_lock_wait_timeout = 3;
Query OK, 0 rows affected (0.0004 sec)
Connection 2> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 130;
ERROR: 1205: Lock wait timeout exceeded; try restarting transaction
-- Connection 1
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0003 sec)
Listing 18-11

Example of an InnoDB lock wait timeout

In this example, the lock wait timeout for Connection 2 is set to 3 seconds, so it is not necessary to wait the usual 50 seconds for the timeout to occur.

When the timeout occurs, the innodb_rollback_on_timeout option defines how much of the work done by the transaction is rolled back. When innodb_rollback_on_timeout is disabled (the default), only the statement that triggered the timeout is rolled back. When the option is enabled, the whole transaction is rolled back. The innodb_rollback_on_timeout option can only be configured at the global level, and it requires a restart to change the value.

Caution

It is very important that a lock wait timeout is handled as otherwise it may leave the transaction with locks that are not released. If that happens, other transactions may not be able to acquire the locks they require.

It is in general recommended to keep the timeout for InnoDB record-level locks low. Often it is best to lower the value from the default 50 seconds. The longer a query is allowed to wait for a lock, the larger the potential for other lock requests to be affected which can lead to other queries stalling as well. It also makes deadlocks more likely to occur. If you disable deadlock detection (discussed next), you should use a very small value for innodb_lock_wait_timeout such as one or two seconds as you will be using the timeout to detect deadlocks. Without deadlock detection, it is also recommended to enable the innodb_rollback_on_timeout option.

Deadlocks

Deadlocks sound like a very scary concept, but you should not let the name deter you. Just like lock wait timeouts, deadlocks are a fact of life in the world of high-concurrency databases. What it really means is that there is a circular relationship between the lock requests. The only way to resolve the gridlock is to force one of the requests to abandon. In that sense, a deadlock is no different from a lock wait timeout. In fact, you can disable deadlock detection, in which case one of the locks will end up with a lock wait timeout instead.

So why are there deadlocks at all if they are not really needed? Since deadlocks occur when there is a circular relationship between the lock requests, it is possible for InnoDB to detect them as soon as the circle is completed. This allows InnoDB to tell the user immediately that a deadlock has occurred without having to wait for the lock wait timeout to happen. It is also useful to be told that a deadlock has occurred as it often provides opportunities to improve the data access in the application. You should thus consider deadlocks a friend rather than a foe. Figure 18-1 shows an example of two transactions querying a table which causes a deadlock.
Figure 18-1

Example of two transactions causing a deadlock

In the example, transaction 1 first updates the row with ID = 130 and then the row with ID = 3805. In between, transaction 2 first updates the row with ID = 3805 and then the row with ID = 130. This means that by the time transaction 1 tries to update ID = 3805, transaction 2 already has a lock on the row. Transaction 2 can also not proceed as it cannot get a lock on ID = 130 because transaction 1 already holds that. This is a classic example of a simple deadlock. The circular lock relationship is also shown in Figure 18-2.
Figure 18-2

The circular relationship of the locks causing the deadlock

In this figure, it is clear which locks are held by transactions 1 and 2, which locks are requested, and how the conflict can never be resolved without intervention. That makes it qualify as a deadlock.

In the real world, deadlocks are often more complicated. In the example that has been discussed here, only primary key record locks have been involved. In general, often secondary keys, gap locks, and possible other lock types are also involved. There may also be more than two transactions involved. The principle, however, remains the same.

Note

A deadlock may even occur with as little as one query for each of two transactions. If one query reads the records in ascending order and the other in descending order, it is possible to get a deadlock.

When a deadlock occurs, InnoDB chooses the transaction that has “done the least work” to become a victim. You can check the trx_weight column in the information_schema.INNODB_TRX view to see the weight used by InnoDB (the more work done, the higher weight). In practice this means that the transaction that holds the fewest locks will be rolled back. When this occurs, the query in the transaction that is chosen as the victim fails with the error ER_LOCK_DEADLOCK returned (error code 1213), and the transaction is rolled back to release as many locks as possible. An example of a deadlock occurring is shown in Listing 18-12.
-- Connection 1
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 130;
Query OK, 1 row affected (0.0006 sec)
Rows matched: 1  Changed: 1  Warnings: 0
-- Connection 2
Connection 2> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 2> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 3805;
Query OK, 1 row affected (0.0006 sec)
Rows matched: 1  Changed: 1  Warnings: 0
Connection 2> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 130;
-- Connection 1
Connection 1> UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 3805;
ERROR: 1213: Deadlock found when trying to get lock; try restarting transaction
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0438 sec)
-- Connection 2
Connection 2> ROLLBACK;
Query OK, 0 rows affected (0.0438 sec)
Listing 18-12

Example of a deadlock

In most cases, the automatic deadlock detection is great to avoid queries stalling for longer than necessary. Deadlock detection is not free though. For MySQL instances with a very high query concurrency, the cost of looking for deadlocks can become significant, and you are better off disabling deadlock detection, which is done by setting the innodb_deadlock_detect option to OFF. That said, in MySQL 8.0.18 and later, the deadlock detection has been moved to a dedicated background thread which improves the performance.

If you do disable deadlock detection, it is recommended to set innodb_lock_wait_timeout to a very low value such as one second to quickly detect lock contention. Additionally, enable the innodb_rollback_on_timeout option to ensure the locks are released.
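As a sketch, the corresponding settings in the MySQL configuration file could look as follows (the timeout of two seconds is just an example; innodb_rollback_on_timeout requires a restart to take effect):
[mysqld]
innodb_deadlock_detect = OFF
innodb_lock_wait_timeout = 2
innodb_rollback_on_timeout = ON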

Now that you have an understanding of how locks work and how lock requests can fail, you need to consider how you can reduce the impact of locking.

Reduce Locking Issues

It is important to have locks in mind when you write an application and design the schema for its data and access. The strategies to reduce locking include keeping transactions small, adding indexes, accessing records in a consistent order, choosing the transaction isolation level, and preemptive locking.

Tip

Do not get carried away optimizing locks. If you only occasionally encounter lock wait timeouts and deadlocks, it is usually better to retry the query or transaction than to spend time avoiding the issue. How frequent is too frequent depends on your workload, but several retries every hour will not be an issue for many applications.

Transaction Size and Age

An important strategy to reduce lock issues is to keep your transactions small and to avoid delays that keep the transactions open for longer than necessary. Among the most common causes of lock issues are transactions that modify a large number of rows or that are active for much longer than necessary.

The size of the transaction is the amount of work the transaction does, particularly the number of locks it takes, but the time the transaction takes to execute is also important. As some of the other topics in this discussion will show, you can partly reduce the impact through indexes and the transaction isolation level. However, it is also important to keep the overall result in mind. If you need to modify many rows, ask yourself whether you can split the work into smaller batches or whether it is required that everything is done in the same transaction. It may also be possible to split out some preparation work and do it outside the main transaction.

The duration of the transaction is also important. One common problem is connections using autocommit = 0. This starts a new transaction every time a query (including SELECT) is executed without an active transaction, and the transaction is not completed until an explicit COMMIT or ROLLBACK is executed (or the connection is closed). Some connectors disable auto-commit by default, so you may be using this mode without realizing it, which can leave transactions open for hours by mistake.
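A quick way to check the auto-commit mode for your connection and to enable it for the session is shown in this minimal sketch:
SELECT @@autocommit;
SET autocommit = ON;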

Tip

Enable the autocommit option unless you have a specific reason to disable it. When auto-commit is enabled, InnoDB can also detect for many SELECT queries that they are read-only transactions and reduce the overhead of the query.

Another pitfall is to start a transaction and perform slow operations in the application while the transaction is active. This can be data that is sent back to the user, interactive prompts, or file I/O. Make sure that you do these kinds of slow operations when you do not have an active transaction open in MySQL.

Indexes

Indexes reduce the amount of work performed to access a given row. This makes indexes a great tool to reduce locking, as only the records accessed while executing the query will be locked.

Consider a simple example where you query cities with the name Sydney in the world.city table:
START TRANSACTION;
SELECT *
  FROM world.city
 WHERE Name = 'Sydney'
   FOR SHARE;
The FOR SHARE option is used to force the query to take a shared lock on the records read. By default, there is no index on the Name column, so the query will perform a full table scan to find the rows needed in the result. Without an index, there are 4103 record locks (some are duplicates):
mysql> SELECT INDEX_NAME, LOCK_TYPE,
              LOCK_MODE, COUNT(*)
         FROM performance_schema.data_locks
        WHERE OBJECT_SCHEMA = 'world'
              AND OBJECT_NAME = 'city'
        GROUP BY INDEX_NAME, LOCK_TYPE, LOCK_MODE;
+------------+-----------+-----------+----------+
| INDEX_NAME | LOCK_TYPE | LOCK_MODE | COUNT(*) |
+------------+-----------+-----------+----------+
| NULL       | TABLE     | IS        |        1 |
| PRIMARY    | RECORD    | S         |     4103 |
+------------+-----------+-----------+----------+
2 rows in set (0.0210 sec)
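If you want to reproduce the difference yourself, you can add the index and remove it again afterward so the remaining examples in this chapter still apply. A minimal sketch (roll back the open transaction before altering the table, then re-execute the SELECT ... FOR SHARE):
ROLLBACK;
ALTER TABLE world.city ADD INDEX (Name);
-- ... re-run the transaction and inspect performance_schema.data_locks ...
-- Drop the index again when you are done:
-- ALTER TABLE world.city DROP INDEX Name;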
If you add an index on the Name column, the lock count decreases to a total of three record locks:
mysql> SELECT INDEX_NAME, LOCK_TYPE,
              LOCK_MODE, COUNT(*)
         FROM performance_schema.data_locks
        WHERE OBJECT_SCHEMA = 'world'
              AND OBJECT_NAME = 'city'
        GROUP BY INDEX_NAME, LOCK_TYPE, LOCK_MODE;
+------------+-----------+---------------+----------+
| INDEX_NAME | LOCK_TYPE | LOCK_MODE     | COUNT(*) |
+------------+-----------+---------------+----------+
| NULL       | TABLE     | IS            |        1 |
| Name       | RECORD    | S             |        1 |
| PRIMARY    | RECORD    | S,REC_NOT_GAP |        1 |
| Name       | RECORD    | S,GAP         |        1 |
+------------+-----------+---------------+----------+
4 rows in set (0.0005 sec)

On the flip side, more indexes provide more ways to access the same rows, which can potentially increase the number of deadlocks.

Record Access Order

Ensure that, to as large a degree as possible, your transactions access the records in the same order. In the deadlock example discussed earlier in this chapter, what led to the deadlock was that the two transactions accessed the rows in opposite order. If they had accessed the rows in the same order, there would have been no deadlock. This also applies when you access records in different tables. A sketch of a consistent access order based on that example follows.
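Based on the deadlock example from earlier in the chapter, a minimal sketch of a consistent access order is to have every transaction update the row with the lowest primary key value first:
-- Both transactions take the locks in the same (ascending ID) order
START TRANSACTION;
UPDATE world.city SET Population = Population + 1 WHERE ID = 130;
UPDATE world.city SET Population = Population + 1 WHERE ID = 3805;
COMMIT;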

Ensuring the same access order is by no means a trivial task. Different access orders may even happen when you perform joins and the optimizer decides on different join orders for two queries. If different join orders lead to excessive lock issues, you can consider using the optimizer hints described in Chapter 17 to tell the optimizer to change the join order, but you should of course also have the query performance in mind in such cases.

Transaction Isolation Levels

InnoDB supports several transaction isolation levels. Different isolation levels have different lock requirements: in particular, REPEATABLE READ and SERIALIZABLE require more locks than READ COMMITTED.

The READ COMMITTED transaction isolation level can help with locking issues in two ways. Far fewer gap locks are taken, and rows that are accessed during a DML statement but not modified have their locks released after the statement has completed. For REPEATABLE READ and SERIALIZABLE, locks are only released at the end of the transaction.

Note

It is often said that the READ COMMITTED transaction isolation level does not take gap locks. That is a myth and not correct. While far fewer gap locks are taken, there are still some that are required. This, for example, includes when InnoDB performs a page split as part of the update. (Page splits are discussed in Chapter 25.)

Consider an example where the population of the city named Sydney is changed using the CountryCode column to limit the query to one country. This can be done with the following query:
START TRANSACTION;
UPDATE world.city
   SET Population = 5000000
 WHERE Name = 'Sydney'
       AND CountryCode = 'AUS';
There is no index on the Name column, but there is one on CountryCode. So the update requires a scan of part of the CountryCode index. Listing 18-13 shows an example of executing the query in the REPEATABLE READ transaction isolation level.
-- Connection 1
Connection 1> SET transaction_isolation = 'REPEATABLE-READ';
Query OK, 0 rows affected (0.0003 sec)
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> UPDATE world.city
                 SET Population = 5000000
               WHERE Name = 'Sydney'
                 AND CountryCode = 'AUS';
Query OK, 1 row affected (0.0005 sec)
Rows matched: 1  Changed: 1  Warnings: 0
-- Connection 2
Connection 2> SELECT INDEX_NAME, LOCK_TYPE,
                     LOCK_MODE, COUNT(*)
                FROM performance_schema.data_locks
               WHERE OBJECT_SCHEMA = 'world'
                     AND OBJECT_NAME = 'city'
               GROUP BY INDEX_NAME, LOCK_TYPE, LOCK_MODE;
+-------------+-----------+---------------+----------+
| INDEX_NAME  | LOCK_TYPE | LOCK_MODE     | COUNT(*) |
+-------------+-----------+---------------+----------+
| NULL        | TABLE     | IX            |        1 |
| CountryCode | RECORD    | X             |       14 |
| PRIMARY     | RECORD    | X,REC_NOT_GAP |       14 |
| CountryCode | RECORD    | X,GAP         |        1 |
+-------------+-----------+---------------+----------+
4 rows in set (0.0007 sec)
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0725 sec)
Listing 18-13

The locks held in the REPEATABLE READ transaction isolation level

Fourteen record locks are taken on each of the CountryCode index and the primary key, and one gap lock is taken on the CountryCode index. Compare this to the locks held after executing the query in the READ COMMITTED transaction isolation level as shown in Listing 18-14.
-- Connection 1
Connection 1> SET transaction_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.0003 sec)
Connection 1> START TRANSACTION;
Query OK, 0 rows affected (0.0003 sec)
Connection 1> UPDATE world.city
                 SET Population = 5000000
               WHERE Name = 'Sydney'
                 AND CountryCode = 'AUS';
Query OK, 1 row affected (0.0005 sec)
Rows matched: 1  Changed: 1  Warnings: 0
-- Connection 2
Connection 2> SELECT INDEX_NAME, LOCK_TYPE,
                     LOCK_MODE, COUNT(*)
                FROM performance_schema.data_locks
               WHERE OBJECT_SCHEMA = 'world'
                     AND OBJECT_NAME = 'city'
               GROUP BY INDEX_NAME, LOCK_TYPE, LOCK_MODE;
+-------------+-----------+---------------+----------+
| INDEX_NAME  | LOCK_TYPE | LOCK_MODE     | COUNT(*) |
+-------------+-----------+---------------+----------+
| NULL        | TABLE     | IX            |        1 |
| CountryCode | RECORD    | X,REC_NOT_GAP |        1 |
| PRIMARY     | RECORD    | X,REC_NOT_GAP |        1 |
+-------------+-----------+---------------+----------+
3 rows in set (0.0006 sec)
Connection 1> ROLLBACK;
Query OK, 0 rows affected (0.0816 sec)
Listing 18-14

The locks held in the READ-COMMITTED transaction isolation level

Here the record locks are reduced to one lock on each of the CountryCode index and primary key. There are no gap locks.

Not all workloads can use the READ COMMITTED transaction isolation level. If you must have SELECT statements return the same result when executed multiple times in the same transaction or have different queries correspond to the same snapshot in time, you must use REPEATABLE READ or SERIALIZABLE. However, in many cases, it is an option to reduce the isolation level, and you can choose different isolation levels for different transactions. If you are migrating an application from Oracle Database, you are already using READ COMMITTED, and you can also use it in MySQL.
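One way to choose the isolation level for a single transaction is to set it just before starting the transaction; a minimal sketch using the UPDATE from this section:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
UPDATE world.city
   SET Population = 5000000
 WHERE Name = 'Sydney'
       AND CountryCode = 'AUS';
COMMIT;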

Preemptive Locking

The last strategy that will be discussed is preemptive locking. If you have a complex transaction executing several queries, it can in some cases be an advantage to execute a SELECT ... FOR UPDATE or SELECT ... FOR SHARE query to take locks on the records you know you will need later in the transaction. Another case where it can be useful is to ensure you access the rows in the same order for different tasks.
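A minimal sketch of preemptive locking, using the rows from the deadlock example (which rows to lock up front of course depends on your transaction):
START TRANSACTION;
-- Take exclusive locks on the rows that will be modified later
SELECT *
  FROM world.city
 WHERE ID IN (130, 3805)
   FOR UPDATE;
-- ... the rest of the transaction's work ...
COMMIT;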

Preemptive locking is particularly effective to reduce the frequency of deadlocks. One drawback is that you will end up holding the locks for longer. Overall, preemptive locking is a strategy that should be used sparingly, but when used for the right cases, it can be powerful to prevent deadlocks.

The final topic of this chapter is to review how to monitor locks.

Monitoring Locks

There have already been several examples of querying information about the locks held. This section will review the sources already mentioned as well as introduce some extra ones. Chapter 22 goes further by showing examples of investigating lock issues. The monitoring options can be divided into four groups: the Performance Schema, the sys schema, status metrics, and InnoDB lock monitoring.

The Performance Schema

The Performance Schema contains the source of most of the lock information available except for deadlocks. Not only can you use the lock information in the Performance Schema directly; it is also used for two lock-related views in the sys schema.

The information is available through four tables:
  • data_locks: This table contains details of table and record locks at the InnoDB level. It shows all locks that are currently held or pending.

  • data_lock_waits: Like the data_locks table, it shows locks related to InnoDB, but only those waiting to be granted, together with information about which thread is blocking the request.

  • metadata_locks: This table contains information about user-level locks, metadata locks, and similar. To record information, the wait/lock/metadata/sql/mdl Performance Schema instrument must be enabled (it is enabled by default in MySQL 8). The OBJECT_TYPE column shows which kind of lock is held.

  • table_handles: This table holds information about which table locks are currently in effect. The wait/lock/table/sql/handler Performance Schema instrument must be enabled for data to be recorded (this is the default). This table is less frequently used than the other tables.

The metadata_locks table is the most generic of the tables, and it supports a wide range of locks ranging from the global read lock to low-level locks such as those for the access control list (ACL) cache. Table 18-3 summarizes the possible values of the OBJECT_TYPE column in alphabetical order with a brief explanation of the locks each value represents. A sample query against the table follows Table 18-3.
Table 18-3

Object types in the performance_schema.metadata_locks table

ACL_CACHE: For the access control list (ACL) cache.
BACKUP_LOCK: For the backup lock.
CHECK_CONSTRAINT: For the names of CHECK constraints.
COLUMN_STATISTICS: For histograms and other column statistics.
COMMIT: For blocking commits. It is related to the global read lock.
EVENT: For stored events.
FOREIGN_KEY: For the foreign key names.
FUNCTION: For stored functions.
GLOBAL: For the global read lock (triggered by FLUSH TABLES WITH READ LOCK).
LOCKING_SERVICE: For locks acquired using the locking service interface.
PROCEDURE: For stored procedures.
RESOURCE_GROUPS: For the resource groups.
SCHEMA: For schemas/databases. These are similar to the metadata locks for tables except they are for a schema.
SRID: For the spatial reference systems (SRIDs).
TABLE: For tables and views. This includes the metadata locks discussed in this chapter.
TABLESPACE: For tablespaces.
TRIGGER: For triggers (on tables).
USER_LEVEL_LOCK: For user-level locks.
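As an example of using the table, the following minimal sketch summarizes the metadata locks currently registered, grouped by object type (the output depends entirely on your workload):
SELECT OBJECT_TYPE, LOCK_TYPE, LOCK_STATUS, COUNT(*)
  FROM performance_schema.metadata_locks
 GROUP BY OBJECT_TYPE, LOCK_TYPE, LOCK_STATUS;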

The data in the Performance Schema tables is the raw lock data. Often when you investigate lock issues or monitor for lock issues, it is more interesting to determine if there are any lock waits. For that information, you need to use the sys schema.

The sys Schema

The sys schema has two views that take the information in the Performance Schema tables and return the lock pairs where one lock cannot be granted because of the other lock. Thus, they show where there are problems with lock waits. The two views are innodb_lock_waits and schema_table_lock_waits.

The innodb_lock_waits view uses the data_locks and data_lock_waits tables in the Performance Schema to return all cases of lock waits for InnoDB record locks. It shows information such as which lock the connection is trying to obtain and which connections and queries are involved. The view also exists as x$innodb_lock_waits if you need the information without formatting.
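A quick check for ongoing InnoDB lock waits can look like the following minimal sketch (the columns chosen are a small subset of those available in the view):
SELECT wait_age, locked_table, locked_type,
       waiting_pid, waiting_query,
       blocking_pid, blocking_query
  FROM sys.innodb_lock_waits;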

The schema_table_lock_waits view works in a similar way but uses the metadata_locks table to return lock waits related to schema objects. The information is also available unformatted in the x$schema_table_lock_waits view.

Chapter 22 includes examples of using both views to investigate lock issues.

Status Counters and InnoDB Metrics

There are several status counters and InnoDB metrics that provide information about locking. These are mostly used at the global (instance) level and can be useful to detect an overall increase in lock issues. A great way to monitor all of these metrics together is to use the sys.metrics view. Listing 18-15 shows an example of retrieving the metrics.
mysql> SELECT Variable_name,
              Variable_value AS Value,
              Enabled
         FROM sys.metrics
        WHERE Variable_name LIKE 'innodb_row_lock%'
              OR Variable_name LIKE 'Table_locks%'
              OR Type = 'InnoDB Metrics - lock';
+-------------------------------+--------+---------+
| Variable_name                 | Value  | Enabled |
+-------------------------------+--------+---------+
| innodb_row_lock_current_waits | 0      | YES     |
| innodb_row_lock_time          | 595876 | YES     |
| innodb_row_lock_time_avg      | 1683   | YES     |
| innodb_row_lock_time_max      | 51531  | YES     |
| innodb_row_lock_waits         | 354    | YES     |
| table_locks_immediate         | 4194   | YES     |
| table_locks_waited            | 0      | YES     |
| lock_deadlocks                | 1      | YES     |
| lock_rec_lock_created         | 0      | NO      |
| lock_rec_lock_removed         | 0      | NO      |
| lock_rec_lock_requests        | 0      | NO      |
| lock_rec_lock_waits           | 0      | NO      |
| lock_rec_locks                | 0      | NO      |
| lock_row_lock_current_waits   | 0      | YES     |
| lock_table_lock_created       | 0      | NO      |
| lock_table_lock_removed       | 0      | NO      |
| lock_table_lock_waits         | 0      | NO      |
| lock_table_locks              | 0      | NO      |
| lock_timeouts                 | 1      | YES     |
+-------------------------------+--------+---------+
19 rows in set (0.0076 sec)
Listing 18-15

Lock metrics

As you can see, not all of the metrics are enabled by default. Those that are not enabled can be enabled using the innodb_monitor_enable option as discussed in Chapter 7. The innodb_row_lock_%, lock_deadlocks, and lock_timeouts metrics are the most interesting. The row lock metrics show how many locks are currently waiting and statistics for the amount of time in milliseconds spent on waiting to acquire InnoDB record locks. The lock_deadlocks and lock_timeouts metrics show the number of deadlocks and lock wait timeouts that have been encountered, respectively.
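As a minimal sketch, the lock-related InnoDB metrics can be enabled with a wildcard (Chapter 7 discusses the innodb_monitor_enable option in more detail):
SET GLOBAL innodb_monitor_enable = 'lock%';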

InnoDB Lock Monitor and Deadlock Logging

InnoDB has for a long time had its own lock monitor with the lock information returned in the InnoDB monitor output. By default, the InnoDB monitor includes information about the latest deadlock as well as locks involved in lock waits. By enabling the innodb_status_output_locks option (disabled by default), all locks will be listed; this is similar to what you have in the Performance Schema data_locks table.

To demonstrate the deadlock and transaction information, you can re-create the deadlock from Listing 18-12 and create a new ongoing transaction that has updated a single row by primary key in the world.city table:
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.0002 sec)
mysql> UPDATE world.city
          SET Population = Population + 1
        WHERE ID = 130;
Query OK, 1 row affected (0.0005 sec)
Rows matched: 1  Changed: 1  Warnings: 0
You generate the InnoDB lock monitor output using the SHOW ENGINE INNODB STATUS statement. Listing 18-16 shows an example of enabling all lock information and generating the monitor output. The complete InnoDB monitor output is also available from this book’s GitHub repository in the file listing_18_16.txt.
mysql> SET GLOBAL innodb_status_output_locks = ON;
Query OK, 0 rows affected (0.0022 sec)
mysql> SHOW ENGINE INNODB STATUS\G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2019-11-04 17:04:48 0x6e88 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 51 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 170 srv_active, 0 srv_shutdown, 62448 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 138
OS WAIT ARRAY INFO: signal count 133
RW-shared spins 1, rounds 1, OS waits 0
RW-excl spins 109, rounds 1182, OS waits 34
RW-sx spins 24, rounds 591, OS waits 18
Spin rounds per wait: 1.00 RW-shared, 10.84 RW-excl, 24.63 RW-sx
------------------------
LATEST DETECTED DEADLOCK
------------------------
2019-11-03 19:41:43 0x4b78
*** (1) TRANSACTION:
TRANSACTION 5585, ACTIVE 10 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 37, OS thread handle 28296, query id 21071 localhost ::1 root updating
UPDATE world.city
                 SET Population = Population + 1
               WHERE ID = 130
*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 159 page no 28 n bits 248 index PRIMARY of table `world`.`city` trx id 5585 lock_mode X locks rec but not gap
Record lock, heap no 26 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
 0: len 4; hex 80000edd; asc     ;;
 1: len 6; hex 0000000015d1; asc       ;;
 2: len 7; hex 01000000f51aa6; asc        ;;
 3: len 30; hex 53616e204672616e636973636f2020202020202020202020202020202020; asc San Francisco                 ; (total 35 bytes);
 4: len 3; hex 555341; asc USA;;
 5: len 20; hex 43616c69666f726e696120202020202020202020; asc California          ;;
 6: len 4; hex 800bda1e; asc     ;;
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
...
------------
TRANSACTIONS
------------
Trx id counter 5662
Purge done for trx's n:o < 5661 undo n:o < 0 state: running but idle
History list length 11
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 284075292758256, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 284075292756560, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 284075292755712, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 5661, ACTIVE 60 sec
2 lock struct(s), heap size 1136, 1 row lock(s), undo log entries 1
MySQL thread id 40, OS thread handle 2044, query id 26453 localhost ::1 root
TABLE LOCK table `world`.`city` trx id 5661 lock mode IX
RECORD LOCKS space id 160 page no 7 n bits 248 index PRIMARY of table `world`.`city` trx id 5661 lock_mode X locks rec but not gap
Record lock, heap no 41 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
 0: len 4; hex 80000082; asc     ;;
 1: len 6; hex 00000000161d; asc       ;;
 2: len 7; hex 01000001790a72; asc     y r;;
 3: len 30; hex 5379646e6579202020202020202020202020202020202020202020202020; asc Sydney                        ; (total 35 bytes);
 4: len 3; hex 415553; asc AUS;;
 5: len 20; hex 4e657720536f7574682057616c65732020202020; asc New South Wales     ;;
 6: len 4; hex 8031fdb0; asc  1  ;;
...
Listing 18-16

The InnoDB monitor output

Near the top is the section LATEST DETECTED DEADLOCK which includes details of the transactions and locks involved in the latest deadlock and when it occurred. If no deadlocks have occurred since the last restart of MySQL, this section is omitted. Chapter 22 will include an example of investigating a deadlock.

Note

The deadlock section in the InnoDB monitor output only includes information for deadlocks involving InnoDB record locks. For deadlocks involving user-level locks, there is no equivalent information.

A little further down the output, there is the section TRANSACTIONS which lists the InnoDB transactions. Do note that transactions that are not holding any locks (e.g., pure SELECT queries) are not included. In the example, there is an intention exclusive lock held on the world.city table and an exclusive lock on the row with the primary key equal to 130 (the 80000082 in the record lock information for the first field means the row with the value 0x82, which is the same as 130 in decimal notation).

Tip

Nowadays the lock information in the InnoDB monitor output is better obtained from the performance_schema.data_locks and performance_schema.data_lock_waits tables. The deadlock information is however still very useful.

You can request the monitor output to be dumped every 15 seconds to stderr. You enable the dumps by enabling the innodb_status_output option. Do note that the output is quite large, so be prepared for your error log to grow quickly if you enable it. The InnoDB monitor output can also easily end up hiding messages about more serious issues.

If you want to ensure you record all deadlocks, you can enable the innodb_print_all_deadlocks option. This causes deadlock information like that in the InnoDB monitor output to be printed to the error log every time a deadlock occurs. This can be useful if you need to investigate deadlocks, but it is recommended to only enable it on demand to avoid the error log becoming very large and potentially hiding other problems.
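A minimal sketch of enabling the option while investigating and disabling it again afterward:
SET GLOBAL innodb_print_all_deadlocks = ON;
-- ... reproduce the issue and collect the deadlock output from the error log ...
SET GLOBAL innodb_print_all_deadlocks = OFF;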

Caution

Be careful if you enable regular outputs of the InnoDB monitor or information about all deadlocks. The information may easily hide important messages logged to the error log.

Summary

The topic of locks is large and complex. Hopefully this chapter has helped you get an overview of why locks are needed and the various kinds of locks.

The chapter started out asking why locks are needed. Without locks, it is not safe to have concurrent access to the schema and data. Metaphorically speaking, database locks work in the same way as traffic lights and stop signs in road traffic. They regulate access to the data, so transactions can be sure there will not be a collision with another transaction causing inconsistent results.

There are two access levels to the data: shared access, also known as read access, and exclusive access, also known as write access. These access levels exist for various lock granularities ranging from a global read lock to record and gap locks. Additionally, InnoDB uses intention shared and intention exclusive locks at the table level.

It is important to work at reducing the number of locks the application needs and to reduce the impact of the locks required. The strategies to reduce lock issues essentially boil down to doing as little work as possible in a transaction, by using indexes and splitting large transactions into smaller ones, and to holding the locks for as short a time as possible. It is also important to attempt to access the data in the same order for different tasks in the application; otherwise, unnecessary deadlocks may occur.

The final part of the chapter went through the lock monitoring options in the Performance Schema, the sys schema, status metrics, and the InnoDB monitor. Most of the monitoring is best done using the Performance Schema tables and the sys schema views. The exception is for deadlocks where the InnoDB monitor is still the best option.

This is the conclusion of Part IV. It is time to become more practical with query analysis, starting with finding the queries that are candidates for optimization.
