Loading Data Efficiently

Most of the time you'll probably be concerned about optimizing SELECT queries, because they are the most common type of query and because it's not always straightforward to figure out how to optimize them. By comparison, loading data into your database is straightforward. Nevertheless, there are strategies you can use to improve the efficiency of data-loading operations. The basic principles are as follows:

  • Bulk loading is faster than single-row loading because the index cache need not be flushed after each record is loaded; it can be flushed at the end of the batch of records.

  • Loading is faster when a table has no indexes than when it is indexed. If there are indexes, not only must the record be added to the data file, but also each index must be modified to reflect the addition of the new record.

  • Shorter SQL statements are faster than longer statements because they involve less parsing on the part of the server and because they can be sent over the network from the client to the server more quickly.

Some of these factors may seem minor (the last one in particular), but if you're loading a lot of data, even small efficiencies make a difference. We can use the preceding general principles to draw several practical conclusions about how to load data most quickly:

  • LOAD DATA (all forms) is more efficient than INSERT because it loads rows in bulk. Index flushing takes place less often, and the server needs to parse and interpret only one statement, not several. (The first example following this list shows the statement's basic forms.)

  • LOAD DATA is more efficient than LOAD DATA LOCAL. With LOAD DATA, the file must be located on the server and you must have the FILE privilege, but the server can read the file directly from disk. With LOAD DATA LOCAL, the client reads the file and sends it over the network to the server, which is slower.

  • If you must use INSERT, use the form that allows multiple rows to be specified in a single statement:

    INSERT INTO tbl_name VALUES(…),(…),…
    

    The more rows you can specify in the statement, the better. This reduces the total number of statements you need and minimizes the amount of index flushing.

    If you use mysqldump to generate database backup files, use the --extended-insert option so that the dump file contains multiple-row INSERT statements. You can also use --opt (optimize), which turns on --extended-insert among other options. Conversely, avoid the --complete-insert option with mysqldump; the resulting INSERT statements are written for single rows, so they are longer and require more parsing than statements generated without --complete-insert. (The second example following this list shows a typical invocation.)

  • Use the compressed client/server protocol to reduce the amount of data going over the network. For most MySQL clients, this can be specified using the --compress command line option. Generally, this should only be used on slow networks because compression uses quite a bit of processor time.

  • Let MySQL insert default values for you; don't specify columns in INSERT statements that will be assigned the default value anyway. On average, your statements will be shorter, reducing the number of characters sent over the network to the server. In addition, because the statements contain fewer values, the server does less parsing and value conversion.

  • If a table is indexed, you can lessen indexing overhead by using batched inserts (LOAD DATA or multiple-row INSERT statements). These minimize the impact of index updating because the index needs flushing only after all rows have been processed, rather than after each row.

  • If you need to load a lot of data into a new table, it's faster to create the table without indexes, load the data, and then create the indexes. Creating the indexes all at once is quicker than modifying them for each row. (The third example following this list illustrates this workflow.)

  • It may be faster to load data into an indexed table if you drop or deactivate the indexes before loading and rebuild or reactivate them afterward.
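For reference, here is a minimal sketch of the two forms of LOAD DATA; the table name mytbl and the file paths are hypothetical. The first statement reads the file directly on the server host (and requires the FILE privilege); the second reads it on the client host and sends the contents over the network:

    LOAD DATA INFILE '/var/lib/mysql/db/data.txt' INTO TABLE mytbl;
    LOAD DATA LOCAL INFILE '/home/user/data.txt' INTO TABLE mytbl;

By default, LOAD DATA expects one row per line, with column values separated by tabs.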
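As an illustration of the mysqldump options just discussed, a dump-and-reload sequence might look like this (the database name mydb and file name mydb.sql are hypothetical):

% mysqldump --opt mydb > mydb.sql
% mysql mydb < mydb.sql

Because --opt turns on --extended-insert, the dump file contains multiple-row INSERT statements that reload much more quickly than single-row statements would.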
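The following sketch illustrates the create-load-index workflow for populating a new table; the table, column, index, and file names are all hypothetical:

    CREATE TABLE mytbl
    (
        id   INT,
        name CHAR(20)
    );
    LOAD DATA INFILE '/tmp/data.txt' INTO TABLE mytbl;
    CREATE INDEX idx_name ON mytbl (name);

Building idx_name once after the load replaces the per-row index updates that would occur if the index existed during loading.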

If you want to use the strategy of dropping or deactivating indexes for data loading, be prepared to do some experimentation to find out if it is worthwhile. (If you're loading a small amount of data into a large table, building the indexes may well take longer than loading the data.)

You can drop and rebuild indexes with DROP INDEX and CREATE INDEX (a sketch of this approach appears below). An alternative approach is to deactivate and reactivate the indexes by using myisamchk or isamchk. This requires that you have an account on the MySQL server host and that you have write access to the table files. To deactivate a table's indexes, move into the appropriate database directory and run one of the following commands:

% myisamchk --keys-used=0 tbl_name
% isamchk --keys-used=0 tbl_name

Use myisamchk for MyISAM tables that have an index file with an .MYI extension and isamchk for ISAM tables that have an index file with a .ISM extension. After loading the table with data, reactivate the indexes:

% myisamchk --recover --quick --keys-used=n tbl_name
% isamchk --recover --quick --keys-used=n tbl_name

n is the number of indexes the table has. You can determine this value by invoking the appropriate utility with the --description option:

% myisamchk --description tbl_name
% isamchk --description tbl_name

If you decide to use index deactivation and activation, you should use the table repair locking protocol described in Chapter 13, "Database Maintenance and Repair," to keep the server from changing the table at the same time that you are. (You're not repairing the table, but you are modifying it like the table repair procedure does, so the same locking protocol is appropriate.)
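By contrast, if you use the DROP INDEX and CREATE INDEX approach, the statements are executed by the server itself, so you need no direct access to the table files. A minimal sketch, assuming a hypothetical table mytbl with an index idx_name on column name:

    DROP INDEX idx_name ON mytbl;
    LOAD DATA INFILE '/tmp/data.txt' INTO TABLE mytbl;
    CREATE INDEX idx_name ON mytbl (name);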

The preceding data-loading principles also apply to mixed-query environments involving clients performing different kinds of operations. For example, you generally want to avoid long-running SELECT queries on tables that are updated frequently; this causes a lot of contention and poor performance for the writers. A possible way around this, if your writes are mostly INSERT operations, is to add new records to a temporary table and then add those records to the main table periodically. This is not a viable strategy if you need to be able to access new records immediately, but if you can afford to leave them inaccessible for a short time, use of the temporary table will help you in two ways. First, it reduces contention with SELECT queries that are taking place on the main table, so they execute more quickly. Second, it takes less time overall to load a batch of records from the temporary table into the main table than it would to load the records individually; the index cache need be flushed only at the end of each batch, rather than after each individual row.
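A minimal sketch of the periodic transfer, using hypothetical tables log_main and log_temp with identical structure; the tables are locked so that no new rows arrive in log_temp between the copy and the delete:

    LOCK TABLES log_main WRITE, log_temp WRITE;
    INSERT INTO log_main SELECT * FROM log_temp;
    DELETE FROM log_temp;
    UNLOCK TABLES;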

One application for this strategy is when you're logging Web page accesses from your Web server into a MySQL database. In this case, it's probably not a high priority to make sure the entries get into the main table right away.

Another strategy for reducing index flushing is to use the DELAY_KEY_WRITE table creation option for MyISAM tables, if your data are such that it's not absolutely essential for every single record to be present after an abnormal system shutdown. (This might be the case if you're using MySQL for some sort of logging.) The option causes the index cache to be flushed only occasionally rather than after each insert.
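A minimal sketch of the table option; the table and column names are hypothetical, and TYPE = MYISAM is included because the option applies only to MyISAM tables:

    CREATE TABLE log_tbl
    (
        t    TIMESTAMP,
        msg  CHAR(100),
        INDEX (t)
    ) TYPE = MYISAM DELAY_KEY_WRITE = 1;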

If you want to use delayed index flushing on a server-wide basis, start mysqld with the --delay-key-write option. In this case, index block writes for a table are delayed until blocks must be flushed to make room for other index values, until a FLUSH TABLES statement has been executed, or until the table is closed.
