Chapter 7. Log Shipping

As noted in Chapter 3, log shipping is a tried-and-true method for achieving high availability for a Microsoft SQL Server database. There are two types of log shipping: you can code your own or you can use the log shipping functionality provided with the Developer or Enterprise editions of Microsoft SQL Server 2000 (depending on whether you are working in a development or production environment, respectively). This chapter describes how to plan for, configure, and administer a log shipping solution for SQL Server.

More Info

For a refresher on the basic concepts and terminology of log shipping, review Chapter 3.

Uses of Log Shipping

Log shipping has a variety of uses, most of which are related to availability, as follows:

  • Primary or additional form of availability for SQL Server. This is the "no-brainer" use of log shipping. You can use log shipping without any other technology for SQL Server high availability as a primary method of availability for your SQL Servers (for example, if you cannot afford or are not able to use failover clustering as a primary method) or in more of a disaster recovery role as a secondary form of availability (for example, to ensure that a failover cluster, which is local, is protected if the data center is damaged and you need to switch to a remote site).

  • Planned downtime. Planned downtime on the primary database (such as to apply a SQL Server service pack) can be longer than what you can normally tolerate. Instead of waiting for maintenance on the primary server to complete, you can increase availability by switching to the secondary server. Doing so means that you are promoting the secondary database to be the new primary database accessed by applications and clients.

  • Upgrading or migrating SQL Server. Whether you are upgrading from Microsoft SQL Server 7.0 or going from SQL Server 2000 to another instance of SQL Server 2000, log shipping can facilitate highly available upgrades or migrations.

More Info

To see how to log ship from SQL Server 7.0 to SQL Server 2000, see the section "Configuring Log Shipping from SQL Server 7.0 to SQL Server 2000" later in this chapter.

  • A database for read-only access. This scenario applies only in limited cases. Although it is not a high availability use of log shipping, it is possible to use the secondary database for read-only reporting purposes as transaction logs are being applied at scheduled intervals. Using the log shipped database for read-only access is not recommended if your primary goal is high availability and you are generating and applying transaction logs on a very frequent basis; use another method. If you want to use your log shipped database for read-only access, be aware of the following:

    • When restoring the database that will have the transaction logs applied to it, you must restore it with the STANDBY option.

    • Because SQL Server requires exclusive access to apply transaction logs to the database, no users can be accessing the now read-only copy of the database during the transaction log restore. As noted earlier, you can configure the database not to kill the connections to the database, but that means you have to manually kill the connections, adding administrative overhead. This also means that, no matter what, the users cannot have continuous access to the database for read-only use; they will be interrupted at regular intervals.

    • You must ensure that users have the proper user names, logons, and permissions configured to access the secondary.

    • Set the copy and load times appropriately, which means longer intervals between copies and loads of the transaction logs, to allow users sufficient time to use the secondary for reporting queries. This could cause transaction logs to queue, thereby lengthening the time to bring the secondary online as a new primary in a disaster recovery scenario.

    • Secure the secondary in the same way you secure the primary for read-only access.

    • If you disabled the guest account on the secondary before configuring log shipping and you want to use it for read-only purposes, you might cause a failure when logging into the secondary server to access the read-only database.

More Info

See Knowledge Base article 315523, "PRB: Removal of Guest Account May Cause Handled Exception Access Violation in SQL Server," and article 303722, "HOW TO: Grant Access to SQL Logins on a Standby Database When 'guest' User Is Disabled," to see how to enable a database restored with STANDBY to be accessed for read-only purposes. Knowledge Base articles can be found at http://support.microsoft.com.
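The restore sequence described in the points above can be sketched in Transact-SQL. The database, file, and path names here are hypothetical; the key detail is that WITH STANDBY names an undo file, which is what keeps the database readable between transaction log restores.

```sql
-- Initial restore of the full backup, leaving the database readable.
-- Database name and file paths are examples only.
RESTORE DATABASE Inventory
    FROM DISK = 'E:\Backups\Inventory_full.bak'
    WITH STANDBY = 'E:\Backups\Inventory_undo.tuf'

-- Before each transaction log restore, any read-only connections must
-- be ended, because the restore requires exclusive access. Identify
-- sessions with sp_who and kill each one manually, for example:
-- KILL 53

RESTORE LOG Inventory
    FROM DISK = 'E:\Backups\Inventory_log1.trn'
    WITH STANDBY = 'E:\Backups\Inventory_undo.tuf'
```

Each subsequent transaction log is applied the same way, reusing the same undo file, until the database is finally brought online with RESTORE DATABASE ... WITH RECOVERY during a role change.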

Basic Considerations for All Forms of Log Shipping

With log shipping, whether you use the SQL Server 2000 Enterprise Edition functionality or code your own, there are some common points you must think about prior to configuring your log shipping solution.

More Info

Many of the concepts and rules that apply to configuring log shipping are similar or the same as those for upgrading or migrating to SQL Server 2000. For more information, see Chapter 13.

Ask the Right Questions

Before you even think about implementing log shipping, you must determine the answers to certain questions. Without the answers, you cannot accurately plan and set up log shipping from both an availability and performance perspective.

Business Questions

Perhaps the most important questions to ask are not the technical ones but the ones related to why you are implementing log shipping and what the cost to the business will be.

  • How many transactions are generated per hour? Although transactions from an application or business standpoint are ultimately represented as one or many different SQL Server transactions, you need to understand the volume of the business that is planned or occurring and translate that into SQL Server usage. For example, if each business transaction inserts data into three tables, that is a 1:3 ratio from business to SQL Server transactions. Multiply that number by the number of users over time, and you have your total transactions per hour. The eventual number of transactions should account for all work done, not just one type.

  • How much downtime is acceptable to end users? Nearly every business can tolerate some downtime; it is just a matter of how much. Although there is no set number, the measurement is usually in minutes or, at most, hours. This number guides how frequently transaction logs are backed up, copied, and applied. Remember to take any existing service level agreements (SLAs) into account when figuring out acceptable downtime, modify them as necessary, or put new ones in place that reflect the current state of the business.

  • How much data or how many transactions can be lost? Like the downtime question, most businesses are quick to respond, "We cannot lose any data or transactions." The reality, as with downtime, is that some losses can be tolerated; it is just a matter of how many. You will always be transactionally consistent as of the last transaction log applied to the secondary, but you might not be as current as the primary. Again, this number is usually measured in minutes or, at most, hours. The usual comfort level of lost data is anywhere from five minutes to about an hour for most businesses. Remember to take any SLA (new or existing) into account when figuring out acceptable loss of data.

  • How much money is allocated to the project? Money might not seem to be a major factor, because log shipping is conceptually a simple solution to understand. However, if, say, your network is too slow or your secondary is not at the same capacity as your primary, will you need to revamp things, and if so, is that possible? You do not want to attempt to implement anything—including something as simple as log shipping—if you cannot handle it financially.
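One way to ground the transactions-per-hour estimate in real numbers is to sample SQL Server's own counters. This is a sketch for SQL Server 2000, where performance counters are exposed through master.dbo.sysperfinfo; the database name is an example, and because the counter is cumulative, you sample it twice and take the difference.

```sql
-- Sample the cumulative transaction counter for one database, wait a
-- known interval, and sample again. The difference is the number of
-- transactions in that interval. Database name is an example.
DECLARE @start int, @finish int
SELECT @start = cntr_value FROM master.dbo.sysperfinfo
    WHERE counter_name = 'Transactions/sec' AND instance_name = 'Inventory'
WAITFOR DELAY '00:01:00'   -- one minute
SELECT @finish = cntr_value FROM master.dbo.sysperfinfo
    WHERE counter_name = 'Transactions/sec' AND instance_name = 'Inventory'
SELECT (@finish - @start) * 60 AS ApproxTransactionsPerHour
```

Run this several times during peak and off-peak hours; a single sample tells you very little about the true transaction volume.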

Technical Questions

After you understand the nontechnical issues and potential barriers, you can then consider the technical issues and barriers.

  • How large are the transaction log files generated? The size of the transaction logs generated is a crucial bit of information because it directly impacts your network as well as the time to copy and apply the transaction log. If you are consistently generating transaction logs in the 1 GB range (plus or minus), what will the effect be on a slower network? It certainly will not be good, especially if that network, which might only be 100 megabit, is shared with all other traffic and is already heavily used.

  • What is your network topology and speed? Your network can quickly become a barrier to availability in a log shipping solution if your network cannot handle the file sizes generated by the transaction log backup, as evidenced by the previous item.

  • How long does it take to copy the transaction log file? In conjunction with the previous two points, knowing how long it takes to copy the transaction log impacts how you configure your log shipping solution. This measurement is closely related to how fast the transaction logs can be made and then sent across the wire. For example, if it takes you two minutes to back up your transaction log, you should not be attempting copies every minute. That makes no sense. If it takes another two minutes to copy, that number influences how often you can then apply the transaction log.

  • How long does it take to apply each transaction log file? Knowing this number pays off in a disaster recovery scenario. If you know definitively that it takes, say, two minutes to apply each transaction log (on average; some might be longer, some might be shorter) and you have 10 of them queued up, then at the very earliest, your standby server could be brought online with the latest available data in about 20 minutes.

  • What is the capacity and configuration of your secondary server? Ideally, the capacity of your secondary server should be equal to that of the primary if you want things to run with the same horsepower.

  • Will you switch back to the primary if you perform a role change to a secondary server? As with failover clustering, once you have switched to another server, will you incur the availability hit again once the old primary is back in a state where it can assume its original role? If you have the proper capacity on the secondary, this should not be an issue.

  • Do you have domain connectivity? Log shipping only requires that all servers be able to connect to each other. However, some log shipping solutions might require domain connectivity. If you do not have domain connectivity and it is a requirement (such as to use Windows Authentication), this could be a potential barrier for some solutions involving log shipping.

  • Will any maintenance interfere with log shipping? Certain administrative tasks for SQL Server interfere with the ability for SQL Server to generate transaction logs. How often will you run such maintenance to ensure your secondary is as current as it needs to be? Or will you not run it at all?

  • How long will transaction logs be kept? Will they be archived? The transaction logs should be kept in accordance with an overall corporate archival scheme for your backups. Such a policy must be agreed on and set before implementing log shipping, as it will affect the planning stage.

  • Is replication configured on the primary database? If you are looking to combine log shipping and replication on a database, you need to take some considerations into account. For example, if you are looking to use log shipping to protect the publisher, you need to be able to rename the secondary SQL Server instance—something that is not possible with failover clustering.

More Info

For more information on log shipping and replication, see "Log Shipping and Replication" in Chapter 8.
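Several of the questions above—transaction log size, backup time, and apply time—can be answered from the backup history SQL Server already records in msdb. The following query is a sketch; it assumes the primary has been taking transaction log backups for at least a week, and the database name is an example.

```sql
-- Average size and duration of recent transaction log backups for one
-- database. type = 'L' identifies a transaction log backup.
SELECT  COUNT(*)                               AS LogBackups,
        AVG(backup_size) / 1048576.0           AS AvgSizeMB,
        AVG(DATEDIFF(ss, backup_start_date,
                         backup_finish_date))  AS AvgSeconds
FROM    msdb.dbo.backupset
WHERE   database_name = 'Inventory'
        AND type = 'L'
        AND backup_start_date > DATEADD(dd, -7, GETDATE())
```

With the average file size in hand, divide by your realistic network throughput (not the rated speed of the wire) to estimate the copy time per log.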

How Current Do You Need To Be?

To briefly revisit a point mentioned in Chapter 3, log shipping inherently involves some amount of latency; it is not real time. Transactions are only applied to the secondary after they are backed up, copied, and restored from the transaction log. With log shipping, you are only as current as:

  • The last transaction completed on the primary

  • The last transaction log backed up on the primary

  • The last transaction log copied from the primary

  • The last transaction log applied to the secondary

The log shipping secondary is only as current as the last transaction log that is applied, so if the delta is five minutes off of the primary, your secondary should always remain five minutes behind if all is functioning properly. Analyzing these points is fairly straightforward.

  • If SQL Server experiences a failure before the current transaction log is backed up, and the server cannot recover and read that transaction log, those transactions are lost, and you will only be as current as the secondary.

  • If a transaction log is backed up but not copied over to the secondary and either the copy process fails or the hard disk that contains that transaction log is lost, again, you are only as current on the secondary as what is available to it.

  • If a transaction log is copied from the primary and is still accessible to the secondary, it can be applied, so your secondary will be as current as the last transaction in the transaction log.
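The built-in log shipping feature records the state of each of these stages on the monitor server, so the delta can be checked with a query along these lines. The table names are those used by the SQL Server 2000 log shipping monitor in msdb; verify the column names against your installation. The transaction log filenames embed a timestamp, which makes the gap between backup, copy, and load easy to see.

```sql
-- Compare the last log backed up on the primary with the last log
-- copied to and loaded on each secondary.
SELECT  p.primary_server_name,
        p.primary_database_name,
        s.secondary_server_name,
        s.secondary_database_name,
        p.last_backup_filename,
        s.last_copied_filename,
        s.last_loaded_filename
FROM    msdb.dbo.log_shipping_primaries p
JOIN    msdb.dbo.log_shipping_secondaries s
        ON p.primary_id = s.primary_id
```

If last_loaded_filename consistently trails last_backup_filename by more than your agreed delta, investigate the copy and load jobs before a disaster forces the issue.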

Secondary Server Capacity and Configuration

As mentioned briefly earlier, the capacity and configuration of your secondary is crucial for the success of any log shipping solution. If you underpower your secondary server, end users will notice the performance difference. The point of switching servers is to provide, at a minimum, the same quality or level of service you provided before.

Log Shipping More Than One Database to One Secondary

One of the nice features of log shipping is that you can, as long as you have the capacity, use one instance as a standby for multiple databases. This takes two assumptions into account: that you have the capacity to handle the additional workload in the event that all databases would need to be servicing end users at once (not unlike planning failover clustering to ensure that multiple instances do not affect each other in a failover), and that there are no conflicts or security issues in doing so. Some common issues related to log shipping more than one database to a single secondary include the following:

  • Master and msdb databases. Because you cannot have multiple msdb and master databases, it is not possible to restore master or msdb databases from a different server with a different name. Any differences need to be manually reconciled.

  • Logins. You cannot have duplicate login names at the SQL Server level. Also, if more than one log shipped database genuinely uses the same login, but with different security privileges in each database, consolidating them on one secondary might compromise one of the databases; that is not a wise choice.

  • Capacity. Do you have enough horsepower to run all the databases should they all become active?

  • Objects. Do you have any duplicate objects (such as jobs, alerts, and so on) that will conflict with those of other databases? If so, you need to resolve these types of issues prior to configuration.

More Info

Log shipping more than one database to a single standby is not unlike server consolidation. For more information on SQL Server consolidation and upgrades, consult Chapter 13.

Disk Space, Retention, and Archiving

It is very important to plan the disk capacity needed for implementing log shipping. You must know how large your transaction log file sizes are and how much disk capacity you have, and then coordinate those two factors with your corporate archival scheme. During the configuration of the built-in feature of SQL Server 2000, you can configure a retention period for the transaction log files, so it must be known from the start.

More Info

For more information on retention and archiving of backup files, see the topic "Backup Retention" in Chapter 10.
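A rough sizing for the disk that holds the transaction log backups can be derived from the same msdb backup history: sum the log backup sizes over a period equal to your planned retention. This is a sketch; the database name and the four-day retention period are examples.

```sql
-- Total size of transaction log backups over the last 4 days, as an
-- estimate of the disk needed for a 4-day retention period.
SELECT  SUM(backup_size) / 1048576.0 AS TotalMB
FROM    msdb.dbo.backupset
WHERE   database_name = 'Inventory'
        AND type = 'L'
        AND backup_start_date > DATEADD(dd, -4, GETDATE())
```

Add headroom for growth and for periods of unusually heavy activity, such as bulk loads, before committing to a disk size.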

Full-Text Searching and Log Shipping

If the primary database utilizes full-text searching, you must consider how you will handle anything relating to full-text after the role change. The issues are pretty straightforward:

  • Full-text indexes are built as data is inserted into a database. In the event of a catastrophic failure, you cannot even consider moving the index on the primary server to the secondary because it is not available. Even if the primary server were available, what is the state of your secondary database? Is it in the same exact spot, and can you ensure that no changes were made prior to the role change? If the role change is planned and you know the two databases are in exactly the same consistent place, you might be able to copy the full-text index to the secondary. This scenario of a planned role change would also require the following:

    • The secondary database is loaded with the same database identifier (DBID) as the primary, so that the two databases share the same DBID.

    • The catalog must be re-created on the secondary server at least once before the database is loaded in STANDBY or NORECOVERY states. This is required because the MSSearch service places catalog-specific information in the registry when the catalog is rebuilt and repopulated.

    • Catalog files must be copied only after the MSSearch service is stopped on both machines.

    • Catalog files must be copied each time the catalog is repopulated or updated and as often as possible when using Change Tracking.

    • Copying the registry entries mentioned earlier might make the MSSearch service unusable, so they should not be modified.

The steps to do this are listed below. The only steps that you would need to perform (once you establish the backup of the database and the DBID) when using Change Tracking are Steps 4, 5, and 6. These should be performed as often as possible because all the changes made to the full-text catalogs are in files in the FTDATA folder. To roll these changes to the secondary server in log shipping, you need to copy these files over to the secondary machine at regular intervals.

Important

Stopping and starting MSSearch could cause an availability problem from an application perspective and cause perceived unavailability or downtime of your application. Do this with caution after making the proper risk assessment of your business.

  1. Create and populate the full-text catalog on the primary machine.

  2. Back up the primary database.

  3. Restore the primary database on the secondary server. The DBID has to be the same. Follow the procedure in Knowledge Base article 240867, "INF: How to Move, Copy, and Back Up Full-Text Catalog Folders and Files," to restore a database with the same DBID.

  4. Stop the MSSearch service on both the primary and secondary servers.

  5. Copy the SQLxxxxxyyyyy folder under FTDATA over to the location on the secondary server.

  6. Restart the MSSearch service.

  7. Re-create and repopulate the catalogs on the secondary database.

  8. Restore the database backup using the STANDBY option and a .tuf file extension.

  9. Run the wizard and select the existing database restored in Step 8.

  10. Select the appropriate settings for log shipping.

  11. Create a scheduled job to stop MSSearch, copy the catalog files from the FTDATA folder over to the secondary machine, and restart the MSSearch service.
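Step 11's scheduled job could be implemented as a single Transact-SQL job step along these lines. The catalog folder name, share, and paths are examples; xp_cmdshell requires appropriate permissions, MSSearch must be stopped on the secondary as well, and, as the Important note above warns, stopping the service interrupts full-text queries.

```sql
-- Stop MSSearch locally, copy the full-text catalog folder to the
-- secondary, and restart the service. All names and paths are examples.
EXEC master.dbo.xp_cmdshell 'net stop mssearch'
EXEC master.dbo.xp_cmdshell
    'xcopy "C:\Program Files\Microsoft SQL Server\MSSQL\FTDATA\SQL0000100005" "\\SECONDARY\FTDATA\SQL0000100005" /E /Y'
EXEC master.dbo.xp_cmdshell 'net start mssearch'
```

Schedule the job to run after each catalog repopulation, or at regular intervals if you are using Change Tracking.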

  • You cannot take a copy of an older full-text index and apply it to a database that is out of sync with that copy. Although it might seem to work for a while, you will eventually encounter problems. This is not supported or recommended.

  • You cannot build a full-text index on a database that is in NORECOVERY or STANDBY mode, which is required for the transaction logs to be applied. You thus have to start building the index after the database is brought online after the role change. This means some features of the application will not be functioning (which might prove unacceptable) and, depending on the size of the index, it could take quite some time to generate.

More Info

The Microsoft Support Knowledge Base article 240867, "How to Move, Copy, and Back Up Full-Text Catalog Folders and Files," details the process of how to move, copy, and back up full-text files to another server. This article can be found at http://support.microsoft.com.

Recovery Models and Log Shipping

Log shipping requires that the databases participating in log shipping use either the Full or Bulk-Logged recovery model. By default, all databases are configured with Full recovery, as that is the default behavior of the model database. Recovery models were first introduced in SQL Server 2000, and there is no real equivalent of this feature in prior versions of SQL Server. Simple recovery does not allow you to make transaction log backups, and therefore cannot be used with log shipping, because it breaks the log sequence number (LSN) chain. When you apply transaction logs, the process checks that the first LSN in the new backup file comes sequentially after the last LSN applied. In SQL Server 2000, a break in the LSN chain produces an error during the transaction log restore similar to this:

Server: Msg 4305, Level 16, State 1, Line 2
The log in this backup set begins at LSN 6000000007200001, which is too late to
apply to the database. An earlier log backup that includes LSN 6000000005000001
can be restored.
Server: Msg 3013, Level 16, State 1, Line 2
RESTORE LOG is terminating abnormally.

This is the SQL Server 7.0 error:

Server: Msg 4305, Level 16, State 1, Line 2
This backup set cannot be restored because the database has not been rolled for
ward far enough. You must first restore all earlier logs before restoring this
log.
Server: Msg 3013, Level 16, State 1, Line 2
Backup or restore operation terminating abnormally.

With all versions of SQL Server prior to SQL Server 2000, if you did anything such as a Transact-SQL BULK INSERT or a nonlogged operation or if you set the Truncate Log On Checkpoint option (which is no longer valid in SQL Server 2000), you would invalidate regular transaction log backups. First and foremost, with Truncate Log On Checkpoint, it is what it says: when a checkpoint happens, the log is truncated and not backed up. Database administrators (DBAs) frequently like to turn this on without thinking about the recoverability of SQL Server.

From a log shipping perspective, there is no apparent difference between Full and Bulk-Logged, but there are some differences that affect performance. If you are using Bulk-Logged, the transaction logs contain the information added or modified since the last transaction log backup. In addition, the transaction log contains the data pages modified by any bulk operations (such as BULK INSERT or BCP) since the last backup. This means that you will have potentially larger file sizes that need to be backed up, copied, and restored if you are doing bulk operations. This definitely impacts your copy time if you have limited network bandwidth. Consider the following example: if you do a bulk load of data on a single-processor server that runs at 3 MB per second and takes 10 minutes, you may have generated 1800 MB (or 1.8 GB) of changed data pages!

With Full, you still have the same issue as Bulk-Logged with large file sizes. However, one of the benefits of Full is that the data rate is reduced to nearly match your network. Even though you are technically moving a bit more information across the network in total (Bulk-Logged and Full produce about the same file size; Bulk-Logged gathers the data directly from the data file instead of indirectly from the transaction log as redo log records), the secondary server does not lag far behind, because the primary is being loaded much more slowly due to the logging of all inserted data. On a server with eight processors, it stands to reason that if the load rate is eight times faster, the log is generated that much faster, and the time to back up, copy, and apply the logs is affected accordingly.

Caution

If you configure log shipping and then switch the recovery model to Simple at any point after it is up and running, you invalidate log shipping. An example of when this could occur is if you were doing a bulk insert and wanted to minimize your log file size and not have a record of the page changes. The same would also apply if someone—even briefly, whether accidentally or on purpose—switched the recovery model to Simple and immediately changed it back.

More Info

For more information on recovery models, see Chapter 10.
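You can verify the recovery model of a log shipped database at any time, and set it back if it has been changed. DATABASEPROPERTYEX and ALTER DATABASE are the SQL Server 2000 mechanisms for this; the database name is an example. Note that if the model was ever switched to Simple, changing it back does not repair the chain—log shipping must be reinitialized with a new full backup.

```sql
-- Check the current recovery model of the primary database.
SELECT DATABASEPROPERTYEX('Inventory', 'Recovery') AS RecoveryModel

-- If it was switched to Simple, set it back. The LSN chain is already
-- broken at this point, so a new full backup is still required before
-- log shipping can resume.
ALTER DATABASE Inventory SET RECOVERY FULL
```

A periodic job that runs the first query and raises an alert on 'SIMPLE' is a cheap safeguard against the accidental switch described in the Caution above.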

Network Bandwidth

Network bandwidth is a potential barrier to successful log shipping implementation. The three main issues associated with network bandwidth were listed in the earlier section, "Technical Questions." They are:

  • How large are the transaction log files generated?

  • What is your network topology and speed?

  • How long does it take to copy the transaction log file?

Although log shipping is arguably the easiest SQL Server–based high availability method to configure, it can crush a network if your transaction log backup files are huge. That will eventually affect the delta of time for which the primary and secondary are out of sync. Coupled with that, you also need reliable network connections between the participating servers. Intermittent network problems will not increase your availability when it comes to log shipping; they can only hurt it.

Tip

If you are sending large transaction logs on a regular basis, you should configure a separate and dedicated network card as well as a private network (such as a 10.x.x.x network) for sending transaction logs between the primary and secondary servers. This ensures that user traffic will not be affected. You might also want to consider a faster network (such as Gigabit Ethernet) to ensure that your recoverability happens much more quickly.

Logins and Other Objects

Log shipping captures only what is contained in the initial database backup (including users, but not their corresponding server-level logins) as well as any transactions captured in subsequent transaction log backups. Any other objects that reside outside the database, or that are not captured as part of the transaction log, must be dealt with separately.

Tip

As far as objects go, outside of Data Transformation Services (DTS) packages, you should really be placing any objects related to the database in the database itself. Unless there is a logical reason to put the objects outside the database, you will create problems when you are trying to synchronize a secondary server to become a warm standby. If such objects exist, they should be recorded and placed in any document related to the solution. If possible, script the objects and have those scripts available.

Stored Procedures, Extended Stored Procedures, Functions, and More

As just noted, things such as stored procedures should be created in the database itself. Because the creation of a normal stored procedure is captured in the transaction log, it will automatically be created at the secondary. If you have anything residing outside the database, such as in msdb, you need to manually create all of these objects on the secondary server to ensure a successful role change.

Logins

There are two levels of logins that you need to take into account:

  • Server level. These are the top-level logins that the users in a database reference. They reside in the syslogins system table.

  • Database level. These are the users that a connection uses to access the database.

If you do not take both types into account, you could end up with a situation in which you have orphaned users in your database and application-related problems accessing the database after the role change process.

More Info

The section "Step 3: Post-Wizard Configuration Tasks" later in this chapter details one way to configure the transfer of your logins and users.
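Once the server-level logins exist on the secondary, any database users whose security identifiers no longer match show up as orphaned after the role change. In SQL Server 2000, sp_change_users_login can both report and repair them; the user and login names below are examples.

```sql
-- List users in the restored database whose logins do not match.
EXEC sp_change_users_login 'Report'

-- Relink one database user to the server-level login of the same name.
EXEC sp_change_users_login 'Update_One', 'WebUser', 'WebUser'
```

Running the 'Report' action immediately after a role change is a quick way to confirm that applications will be able to connect before you redirect end users.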

DTS Packages

To move DTS packages to another SQL Server, the easiest method is probably to save the package to a file and manually copy and add it to the secondary. There is no automated way to perform this task.

Caution

Make sure that the DTS package is compatible with the version of SQL Server that is the destination. If, for example, functionality or the format of the file changed due to a fix in a service pack (as it did between SQL Server 7.0 Service Pack 1 and SQL Server 7.0 Service Pack 2) and the other server cannot handle that functionality, the package will be broken on the standby, and you need to resolve those issues prior to bringing that secondary online as a primary. Similarly, you might need to fix settings (such as source server names or destination server names) within the DTS package to match the new server. Do not assume that it works unmodified.

Also, the DTS functionality does not exist if your source or destination servers are 64-bit SQL Server 2000 instances. You have to create DTS packages under 32-bit and run them from a 32-bit instance of SQL Server against a 64-bit instance.

Clients, Applications, and Log Shipping

One of the most important considerations for using log shipping is planning for how you will redirect end users and applications to the new server. Unlike failover clustering, where the switch to another physical server is abstracted to an end user or application, a change to another server in most cases is not abstracted. That means the systems and applications accessing SQL Server must tolerate such a change. Like failover clustering, there will be some downtime involved.

Coding Your Application for Log Shipping

The following are some tips for application developers when coding for a solution that will eventually include log shipping:

  • Heed the earlier warning about object placement. If developers create objects outside of the database, the IT staff should be notified and it should be well documented so that this can be accounted for in disaster recovery planning.

  • Make all transactions as small, or atomic, as possible. Atomic queries might increase performance and reduce some resource contention (such as I/O), and they can also help the recovery process and the application of transaction logs go more quickly.

    Note

    This is not to say that longer running transactions are not appropriate, and you should not take one logical transaction and break it up for the sake of making it smaller. Small transactions usually minimize lock contention. I/O will be close to the same as a longer transaction because you are essentially doing the same work, just broken up into smaller chunks. A long running transaction, if it is interrupted (that is, canceled), could require a long undo operation, and it might also prevent log truncation.

  • Similarly, when queries are written, if they cross databases, how will that work after a role change? Can the queries tolerate two databases residing on different servers?

  • With SQL Server 2000, the worry of nonlogged operations does not exist because page changes and such are tracked with Bulk-Logged or Full recovery models (either is required for log shipping). However, those types of operations will impact the log size, so the use of those types of operations should still be taken into account to some degree.

  • Ensure all objects related to the log-shipped database are uniquely named. This prevents problems in the event that multiple databases are log shipped to a single secondary.

  • Do not code your application for a specific SQL Server service pack level, because if your secondary is not the same as your primary, you might encounter issues in a role change.

  • Do not hard-code server names and IP addresses into applications. Hard-coding means you could never name the server anything other than what it is in the application; for example, if you are using failover clustering for both your primary and secondary, you would be unable to rename the server. Make applications flexible enough to handle a server name change.

  • Use fully qualified names for each object to ensure that you are always referencing the correct object. This not only helps your performance, it helps clarify things when you might have more than one database on a SQL Server with the same object name.

  • Code retry logic in your application to attempt a failed transaction again. If this is not possible, at a bare minimum you should post graceful error messages to give your end users a positive experience.
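    A minimal sketch of such retry logic in Transact-SQL follows. SQL Server 2000 has no TRY...CATCH, so checking @@ERROR catches only statement-level errors; a batch-aborting error would still terminate the loop. The table, column, and values are hypothetical.

```sql
-- Hypothetical example: retry a failed statement up to three times.
DECLARE @retries int, @err int
SET @retries = 3
WHILE @retries > 0
BEGIN
    BEGIN TRANSACTION
    UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 1001
    SET @err = @@ERROR
    IF @err = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK                        -- success; stop retrying
    END
    ROLLBACK TRANSACTION
    SET @retries = @retries - 1      -- try again
END
```

    In practice, most retry logic lives in the application tier, where a dropped connection (as happens during a role change) can also be handled by reconnecting before retrying.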

  • One way to mitigate lost transactions in the event of a failure is to use some form of middleware—such as Microsoft Transaction Server, Microsoft BizTalk, Microsoft Message Queue, or Microsoft Distributed Transaction Coordinator—to queue the transactions. This needs to be integrated into your application and overall solution should you choose to do this.

  • If you are using the SQL-DMO transfer operation in your code, it truncates the transaction log, thus breaking log shipping.

    More Info

    For updates or workarounds, see Knowledge Base article 316213, "FIX: SQLDMO Transfer Operation Truncates Transaction Log of Destination Database."

  • Similarly, if you are using the Copy Objects task of DTS in any of your packages, it might break log shipping because it switches the recovery mode to Simple for the database in its operations.

More Info

For updates or workarounds, see Knowledge Base article 308267, "FIX: DTS Copy Objects Task (DMO) Breaks Transaction Log Backup Chain By Switching Recovery Mode to Simple During Transfer."

Role Change

Performing the role change from a SQL Server perspective is straightforward, but redirecting clients so that applications and end users see the least possible impact is not. As with failover clustering, not only is there an interruption in service, you are also switching to a completely different server. You have a few options available to assist in the role change:

  • As mentioned earlier, code a mechanism into the application to set the SQL Server.

  • Code your application to use Open Database Connectivity (ODBC); then all you need to do is change the server that the ODBC Data Source Name (DSN) points to. With this solution, you still have the interruption, but the connection string always stays the same.

  • You can also use Network Load Balancing to abstract the server switch during the role change. This is one of the few uses for which Network Load Balancing works well with SQL Server.

    The goal is to pick a manageable solution that will incur the least downtime.

More Info

For more information on configuring Network Load Balancing for use with log shipping, see Chapter 5.

Security

As with any technology, log shipping has security considerations you must think about up front, because they influence how you plan and deploy a solution. Because all security changes (such as GRANT statements) are recorded in the transaction log, they are applied automatically to the secondary server once log shipping is configured and working properly.

More Info

For additional information or changes to what is written in this section, please reference Knowledge Base article 321247, "HOW TO: Configure Security for Log Shipping," which can be found at http://support.microsoft.com.

SQL Server Startup Account

If the server hosting your SQL Server 2000 instance is part of a domain, and not a workgroup, you should use a domain account to start the SQL Server services. A domain account is absolutely required for certain features, such as failover clustering. If this is a nonclustered instance, you can use a local network account or the Local System account.

  • A local network account is an account created on the server itself; it is not the Local System account. It is used if you are using pass-through security, which means that all SQL Servers participating in log shipping must use the same account name, privileges, and password, created locally on each server. You are bypassing traditional network security when one SQL Server process requests resources from another.

  • Various processes on the server use the Local System account, also known as LocalSystem. SQL Server can use it as well, but modifying it in any way could have consequences outside of SQL Server. LocalSystem cannot be used for network pass-through security because the LocalSystem account's password is different on each server.

More Info

For more information, see the topic "Setting Up Windows Service Accounts" in SQL Server Books Online.

Securing the Transfer of Logins

Because the transfer of logins using the built-in functionality requires a bulk copy out of the syslogins table, the resulting file should be stored in a secure directory. Files generated by the bcp utility are plaintext and not encoded in any way.
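For reference, the bulk copy performed by such a login-transfer job is typically a bcp command along these lines (the server name and output path are hypothetical); anyone with read access to the directory can read the output file, which is why the directory must be secured:

```
bcp master..syslogins out c:\logshipping\syslogins.dat -N -S PRIMARYSERVER -T
```

Here -N writes native format, -S names the source server, and -T uses a trusted (Windows Authentication) connection.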

Securing the Backup Share

You also need to secure the directories that contain your transaction log backups, because the files created by SQL Server are not encrypted in any way. The network share should be secured so that only the servers participating in log shipping can access it.

Log Shipping Across Domains

You can log ship a database across domains. To do this, you should set up two-way trusts between the domains involved in log shipping. This is done through the Active Directory Domains and Trusts tool, located in the Administrative Tools folder on the Start menu, as shown in Figure 7-1. If it is not there, it can be started from %systemroot%\System32\domain.msc. If you cannot establish trusts, you can use pass-through security, which involves the use of the local network account. This cannot be done on a clustered instance of SQL Server.


Figure 7-1. The Active Directory Domains and Trusts tool of Windows 2000.

Authentication for Log Shipping

Microsoft recommends that you use Windows Authentication for the SQL Servers participating in log shipping. If you use Mixed Mode security with the built-in functionality, a login named log_shipping_monitor_probe is created if it does not already exist.

Log Shipping and Database Backups

Will log shipping break if you still implement your daily, weekly, monthly, or other full or differential backup scheme? No. Log shipping can coexist peacefully with any full or differential backup of a database. However, while a full or differential backup is running, a scheduled transaction log backup cannot occur; it happens at the next possible scheduled time after the full or differential backup completes. This means that the transaction log might grow while the full backup takes place (depending on your database usage). On a small database this should be fairly insignificant, but on a very large database it is not trivial: the log takes longer to copy and apply, and the secondary server falls further out of sync than it would under normal log shipping. You need to take this into account in your planning.

The only way a full or differential backup will affect log shipping is if you somehow truncate the transaction log before the next log backup. A good example is when your transaction log is growing quickly and you want to reclaim disk space by issuing BACKUP LOG databasename WITH NO_LOG or by switching to the Simple recovery model.
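As a hedged illustration (the database name and path are hypothetical), truncating the log and then re-establishing the backup chain looks like this:

```sql
-- This truncates the log and BREAKS the log shipping restore chain:
BACKUP LOG MyDatabase WITH NO_LOG

-- To resume log shipping, you must re-establish the chain, starting
-- with a new full backup that is then restored on the secondary:
BACKUP DATABASE MyDatabase TO DISK = 'E:\Backups\MyDatabase_full.bak'
```

Until the new full backup is restored on the secondary with NORECOVERY or STANDBY, subsequent transaction log backups cannot be applied there.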

On the other hand, if you have an existing transaction log backup plan for a database, log shipping will be affected by it. Although Microsoft cannot stop you from setting up multiple transaction log backup jobs for one database (the Database Maintenance Plan Wizard allows only one per database, but when you code your own using Transact-SQL you can technically create an unlimited number), you need to account for each transaction log and how it will be applied to a secondary server. For example, suppose your senior DBA already has a custom SQL Server Agent job that backs up the transaction log for the database named MyDatabase every 15 minutes, and another DBA then configures log shipping using the Database Maintenance Plan Wizard with a transaction log backup frequency of 10 minutes.

  • Problem 1. Log shipping knows nothing about the existing transaction log backup plan.

  • Problem 2. You now have two conflicting transaction log backup jobs. Assuming log shipping is configured correctly, you will more than likely see 4305 errors (a log backup that is too late to apply) almost immediately.

  • Problem 3. Not only does log shipping not know about the existing job, but you have two different schedules. What is the actual required frequency for the database in question? You need to decide on one and then modify the Database Maintenance Plan with the right interval, and you need to delete or disable the custom job.

As you can see by this example, which assumes using the built-in functionality of SQL Server 2000, you can easily create a problem without realizing it right away. The same would be true for any custom solution. Before you configure log shipping, make sure there will be no conflicts in terms of existing backup plans.
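One way to check for existing transaction log backup activity before configuring log shipping is to query the backup history in msdb (the database name is hypothetical):

```sql
-- List recent transaction log backups recorded in msdb
SELECT bs.database_name, bs.backup_start_date, bmf.physical_device_name
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'L'                      -- 'L' = transaction log backup
  AND bs.database_name = 'MyDatabase'
ORDER BY bs.backup_start_date DESC
```

If log backups are being written to unexpected locations or on an unexpected schedule, an existing job is probably already backing up the log.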

If you have some "intrusive" maintenance that prevents the creation of a transaction log backup, you will not get a transaction log backup until these tasks are completed. These tasks more often than not require exclusive access to the database or something similar. A good example of maintenance that will prevent a transaction log backup is the aforementioned full backup as well as a database restore. These maintenance tasks and how to deal with more intrusive maintenance are discussed in more detail in Chapter 14.

Note

In terms of the transaction log backup itself, all committed and uncommitted work is captured in the transaction log. If a long-running transaction fails midway through its execution and you need to bring the secondary server online, that transaction must be rolled back as part of the role change, which affects your recovery time.

More Info

For more information on backups, see Chapter 9 and Chapter 10. To mitigate the creation of false errors by intrusive maintenance, see the section "Intrusive Maintenance" in Chapter 14.

Service Packs and Log Shipping

SQL Server 2000 service packs can be applied to instances that have log shipping with the following caveats, as documented in the Readme file (this was taken from SQL Server 2000 Service Pack 3).

More Info

For more information on SQL Server 2000 service packs and log shipping, see Chapter 13.

If Service Pack 3 Setup detects user databases or filegroups that are not writable, it:

  • Applies the Service Pack 3 replication updates to all writable user databases.

  • Writes a list of the nonwritable databases to the Setup log, which is located at %windir%\Sqlsp.log.

  • Displays the following warning message:

    Setup has detected one or more databases and filegroups which are not
    writable.

This means that you might need to rerun the SQL Server 2000 service pack once the secondary server’s database has been recovered so that it is at the same level as the rest of the server.

Caution

Do not apply any SQL Server 2000 service pack prior to Service Pack 2 on a log shipped server. Service Pack 1 would not apply successfully on a server that had databases that could not be written to.

Files, Filegroups, and Transaction Logs

If you add a file to an existing filegroup while performing regular transaction log backups (as you do with log shipping), you can break the transaction log restore on the secondary. This occurs only if the specific path to the file does not exist on the secondary or if the target file already exists there. If the file system structure is the same on the secondary, this should not be a problem. There is a two-step fix for the problem scenario:

  1. The restore process is now expecting the secondary to have the same physical structure as the primary. Chances are you are seeing this error message:

    [Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 5105:
    [Microsoft][ODBC SQL Server Driver][SQL Server]Device activation
    error. The physical file name 'C:\Program Files\Microsoft SQL
    Server\MSSQL$SQL2K1\data\lsdb_data2.NDF' may be incorrect.
    [Microsoft][ODBC SQL Server Driver][SQL Server]File 'lsdb_data2' can
    not be created. Use WITH MOVE to specify a usable physical file name.
    [Microsoft][ODBC SQL Server

    To solve this problem, restore the next transaction log manually including the WITH MOVE syntax. Then log shipping should continue without any more problems, unless you add another file. Here is an example of the RESTORE ... WITH MOVE statement:

    RESTORE LOG LOGSHIPDB
    FROM DISK = 'path for the transaction log backup file'
    WITH MOVE 'logical name of the new data file'
    TO 'physical name of the new data file (where you want the file to be
    created on the destination server)',
    NORECOVERY
    -- Or use STANDBY instead of NORECOVERY
  2. If you are using the built-in functionality of SQL Server 2000, you might need to see if the load_all column of the table log_shipping_plan_databases is set to 0 and then check the last_loaded_file column. It should be set to the transaction log file you loaded manually. If it is not, manually update the last_loaded_file column. If load_all is set to 1, all should be fine.
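A sketch of the check and manual update described in step 2 might look like the following. The file name is hypothetical, and the exact column names should be verified against the log_shipping_plan_databases table in your msdb before running anything like this.

```sql
-- Inspect the plan's current state for the secondary database
SELECT load_all, last_loaded_file
FROM msdb.dbo.log_shipping_plan_databases
WHERE destination_database = 'LOGSHIPDB'

-- If load_all = 0 and last_loaded_file lags behind, point it at the
-- transaction log you restored manually with WITH MOVE:
UPDATE msdb.dbo.log_shipping_plan_databases
SET last_loaded_file = '\\backupshare\logs\LOGSHIPDB_tlog_200301010015.TRN'
WHERE destination_database = 'LOGSHIPDB'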

Custom Log Shipping Versus Microsoft’s Implementation

One of the decisions you need to make is whether to use Microsoft's implementation or code your own. Both have specific uses and advantages. Whether you use a custom configuration or the built-in feature, you can log ship between all versions of SQL Server 2000 because the file formats and on-disk structures are the same. If a SQL Server service pack or other hotfix ever introduces an incompatibility that prevents log shipping from one version to another, it will be documented. This section details the technological considerations. If your constraints are cost based (that is, you cannot afford or do not need the functionality of SQL Server 2000 Enterprise Edition), you must code your own solution. The technical reasons for coding your own log shipping usually boil down to the following three points for most IT shops:

  • You need to use an alternate protocol such as File Transfer Protocol (FTP) to send the transaction log backups to other servers. Microsoft’s implementation uses a simple copy.

  • You need to compress the transaction log backups that are made. The built-in solution does not involve compression at all, and it cannot be incorporated. A third-party product such as SQL LiteSpeed or PKZIP is needed. In some cases, this might shorten the time it takes to copy and possibly apply the transaction log.

  • You want a fully scriptable configuration process. Although the Microsoft implementation is built on stored procedures, you cannot set it up using stored procedures. You must use the Database Maintenance Plan Wizard.

More Info

For more information on coding a custom log shipping solution, see the section "Creating a Custom Coded Log Shipping Solution" later in this chapter.

Configuring and Administering the Built-In Functionality Using SQL Server 2000 Enterprise Edition

Microsoft recommends that whenever possible, you should use the log shipping functionality provided by SQL Server 2000 Enterprise Edition (or Developer Edition if you are in a development or testing environment). This feature is not present in any other edition of SQL Server 2000; if you want to use log shipping with another edition, you need to code your own solution. This section walks you through the components of what you get when using the built-in log shipping, how to configure it, and how to administer the solution. Figure 7-2 illustrates the workflow of the log shipping feature found in SQL Server 2000 Enterprise Edition.


Figure 7-2. Diagram of the built-in log shipping feature’s workflow.

Important

For the primary server that is the source of the data as well as any secondary server that will have the transaction logs applied to it, you must use SQL Server 2000 Enterprise Edition or SQL Server 2000 Developer Edition. However, Developer Edition cannot be used in a production environment.

Log Shipping Components

The built-in log shipping functionality consists of a database maintenance plan, stored procedures, SQL Server Agent jobs, and tables in msdb. The primary, the secondary (or secondaries), and the Log Shipping Monitor differ in which of these objects exist on them and in which edition of SQL Server they require.

Caution

The stored procedures, tables, and jobs used by log shipping are listed here for informational purposes only. Just because they exist does not mean you should or can use them, as they are used by SQLMAINT behind the scenes. What you can use is detailed later in this section.

Stored Procedures

The following stored procedures are used by log shipping. Most are located in msdb, and some are in master.

  • sp_add_log_shipping_database

  • sp_add_log_shipping_monitor_jobs

  • sp_add_log_shipping_plan

  • sp_add_log_shipping_plan_database

  • sp_add_log_shipping_primary

  • sp_add_log_shipping_secondary

  • sp_can_tlog_be_applied

  • sp_change_monitor_role

  • sp_change_primary_role

  • sp_change_secondary_role

  • sp_create_log_shipping_monitor_account

  • sp_define_log_shipping_monitor

  • sp_delete_log_shipping_database

  • sp_delete_log_shipping_monitor_info

  • sp_delete_log_shipping_monitor_jobs

  • sp_delete_log_shipping_plan

  • sp_delete_log_shipping_plan_database

  • sp_delete_log_shipping_primary

  • sp_delete_log_shipping_secondary

  • sp_get_log_shipping_monitor_info

  • sp_log_shipping_get_date_from_file

  • sp_log_shipping_in_sync

  • sp_log_shipping_monitor_backup

  • sp_log_shipping_monitor_restore

  • sp_remove_log_shipping_monitor

  • sp_remove_log_shipping_monitor_account

  • sp_resolve_logins

  • sp_update_log_shipping_monitor_info

  • sp_update_log_shipping_plan

  • sp_update_log_shipping_plan_database

Tables

Table 7-1 shows the tables used only for log shipping and which server has them populated. The tables are located in msdb.

Table 7-1. Table Usage for Log Shipping

  • log_shipping_databases. Primary: used. Secondary: not used (unless it is also a primary). Monitor: not used (unless it is also a secondary).

  • log_shipping_monitor. Primary: used. Secondary: used. Monitor: not used (unless it is also a secondary).

  • log_shipping_plan_databases. Primary: not used (unless it is also a secondary). Secondary: used. Monitor: not used (unless it is also a secondary).

  • log_shipping_plan_history. Primary: not used (unless it is also a secondary). Secondary: used. Monitor: not used (unless it is also a secondary).

  • log_shipping_plans. Primary: not used (unless it is also a secondary). Secondary: used. Monitor: not used (unless it is also a secondary).

  • log_shipping_primaries. Primary: not used (unless it is also a monitor). Secondary: not used (unless it is also a monitor). Monitor: used.

  • log_shipping_secondaries. Primary: not used (unless it is also a monitor). Secondary: not used (unless it is also a monitor). Monitor: used.

Note

If the monitor is configured on SQL Server 2000 Standard Edition, the only log-shipping-specific tables that will exist are log_shipping_primaries and log_shipping_secondaries.

Jobs

The following SQL Server Agent job exists on the primary for each database being log shipped:

  • Transaction Log Backup Job for DB Maintenance Plan ’database maintenance plan name’. This is a job created as part of the database maintenance plan to back up the transaction log of the primary database on a scheduled basis. It also exists on the secondary if you select the Allow Database To Assume Primary Role option during configuration.

The following SQL Server Agent jobs exist on the secondary for each database being log shipped:

  • Log shipping copy for SQLSERVERPRIMARYSERVERNAME. databasename_logshipping. This job runs on the secondary server and copies the transaction log backup files from the network share on the primary.

  • Log shipping Restore SQLSERVERPRIMARYSERVERNAME. databasename_logshipping. This job exists on the secondary server and restores the transaction logs.

The following SQL Server Agent jobs exist on the monitor server for each database being log shipped:

  • Log Shipping Alert Job—Backup

  • Log Shipping Alert Job—Restore

Log Shipping Monitor

One of the best reasons to use the functionality provided with SQL Server is the Log Shipping Monitor, which provides status about the log shipping process through Enterprise Manager. It is located on a server designated by the person configuring log shipping.

Note

Although the primary and secondary servers for log shipping require SQL Server 2000 Enterprise Edition or Developer Edition, the Log Shipping Monitor can be placed on any version of SQL Server 2000, including SQL Server 2000 Standard Edition. You can use one Log Shipping Monitor to monitor multiple log shipping pairs, so if you have a dedicated server as recommended here, you do not need to worry about configuring a separate Log Shipping Monitor for each pair. However, keep in mind that you need to ensure that the Log Shipping Monitor itself is available or that you know how to move it.

Configuring Log Shipping

This section takes you through the process of configuring log shipping with SQL Server 2000 Enterprise Edition.

On the CD

Use the file SQL_Server_2000_Log_Shipping_Worksheet.doc when planning your installation. This document corresponds to Table 7-2. There is also a checklist to verify all preconfiguration tasks in the file Preconfiguration_Log_Shipping_Checklist.doc.

Table 7-2. Log Shipping Parameters and Recommended Values

The parameters follow, each with the server it affects in parentheses and the recommended value.

  • Back up database as part of the maintenance plan (primary). By default, this is selected in the Specify The Database Backup Plan dialog box. It creates a SQL Server Agent job named DB Backup Job for DB Maintenance Plan 'Configured Plan Name'. You might not need to select this option if a backup plan already exists for the database or if you are configuring only log shipping and not an overall database maintenance plan.

  • Primary server name (primary). Name of the primary server.

  • Secondary server names (secondary). Names of the secondary servers to log ship to.

  • Database to log ship (primary). The database from which the transaction log backups will be generated.

  • Directory to store the backup file (primary). This can be an explicit path (such as C:\Tranlogs) or a Universal Naming Convention (UNC) path. This directory is used by the primary to back up transaction logs, and although it can be located on the primary, it might be smarter from a high availability perspective to place it elsewhere. Some might make this the same as the network share name for the backup directory.

  • Create a subdirectory under the UNC for each database (primary). Set this to Yes; it avoids confusion and makes it easier to find files if you have configured log shipping for more than one database.

  • Delete transaction log files older than a certain time period (primary). The value you choose depends on how long you need to keep older backup files. Prior to deletion, back these files up to a medium that can be stored offsite for archival purposes.

  • Backup file extension (all servers; the default is .trn). Leave as .trn.

  • Network share name for backup directory (primary and secondary). This directory, usually a UNC name, must be accessible by both the primary and the secondary; it is where the secondary accesses the transaction logs. It is basically the same directory as Directory to store the backup file, but with a share associated with it.

  • Transaction log destination directory (secondary; should be a valid UNC on the secondary server). As with Directory to store the backup file, this can be an explicit path (such as C:\Tranlogs) or a UNC path. The transaction logs are copied from the network share to this location on the secondary.

  • Create and initialize new database (secondary). Choose No and do the initial backup, copy, and restore yourself.

  • Database load state (secondary). Set to No Recovery if you just want to apply transaction logs for high availability, or to Standby if you want to make the secondary a read-only reporting server.

  • Terminate user connections in database (secondary). Set to Yes only if Standby is selected for Database load state and you are using the secondary for read-only purposes. If you select Standby but leave this set to No, you have to set up a job to terminate the connections; otherwise, the logs will never be applied. The syntax is as follows:

ALTER DATABASE databasename
 SET SINGLE_USER
 WITH ROLLBACK IMMEDIATE

    If you are using your log-shipped database for reporting, you must reset it to allow users access after the transaction log is applied:

ALTER DATABASE databasename
 SET MULTI_USER

  • Allow database to assume primary role (secondary). Set this to Yes to allow the secondary to become the primary. Set its transaction log backup directory to the same location used as Directory to store the backup file on the original primary.

  • Perform a full backup (primary; applies if not using an existing database). No.

  • Use most recent backup file (primary; applies if not using an existing database). No.

  • Transaction log backup schedule (primary; the default is every 15 minutes). Set this to a lower number for higher volume sites so the files stay smaller. If you set this to a larger number, the files might be large and your standby will not be as close in time to the primary.

  • Copy/load frequency (primary and secondary; the default is 15 minutes). The smaller the number, the more closely the log shipping pair will match. If you are using the secondary as a reporting server, use a higher value, as user sessions would otherwise have to be terminated more frequently.

  • Load delay (secondary; the default is 0 minutes). The amount of time the load process waits after the copy completes before restoring the transaction log. The smaller the number, the more closely the pair will match; however, if the transaction log files are large, you might need to increase this to allow time for the file to copy. If you are using the secondary as a reporting server, you might want to set this higher to stack the log files and give users time to do reporting. The default of 0 means the transaction log is loaded immediately.

  • File retention period. The default value is 24 hours. Configure this to match your corporate archival scheme.

  • Backup alert threshold. If the backups are large, adjust this number accordingly so you do not get false errors. The rule of thumb is to set this to three times the frequency of the backup job.

  • Out of sync alert threshold. Same idea as the backup alert threshold. If the file takes 15 minutes to copy because it is 3 GB and takes 45 minutes to apply, set this number accordingly; an hour or even two might be appropriate in this case. The rule of thumb is to set this to three times the duration of the slowest copy and restore jobs (if you have multiple secondaries).

  • Log Shipping Monitor server. This should be a completely separate server from either the primary or the secondary.

  • Authentication mode for monitor server (monitor). Set to Microsoft Windows if possible. Note that if you change the password for the Windows account SQL Server runs under, you must change it on all servers defined in log shipping (this applies only if all the SQL Servers start under the same Windows account). If you choose SQL Server, the log_shipping_monitor_probe login is created, for which you enter a password. Do not use a blank password.

  • Generate a report (primary). This is optional; you need to configure a directory to hold the reports and decide how long to retain them or whether to e-mail them using SQL Mail.

  • Limit number of history entries in the sysdbmaintplan_history table. Log shipping is verbose, and this table can grow quickly if you generate frequent transaction logs; adjust accordingly. If you allow unlimited growth, ensure msdb is set to autogrow. To clear out the entries, execute the sp_delete_backuphistory stored procedure. Log shipping also puts entries in the Application Event Log, so that might need to be backed up and cleared out from time to time.
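To keep msdb history in check, sp_delete_backuphistory can be run on a schedule; a sketch with a hypothetical 30-day retention follows:

```sql
-- Delete backup history entries older than 30 days (hypothetical cutoff)
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -30, GETDATE())
EXEC msdb.dbo.sp_delete_backuphistory @cutoff
```

Running this regularly via a SQL Server Agent job keeps the history tables from growing without bound.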

Step 1: Restoring the Initial Backup on the Secondary

Although you can use the Database Maintenance Plan Wizard to do the initial backup, copy, and restore of the databases for log shipping, you should do each of these outside of the wizard. The main reason for this procedure is that the way log shipping is coded, it uses your default backup drive (which is usually C$) for the backup. Therefore, if you are trying to initialize a large database, you could run out of space on your drive, causing the operation to fail. Because this is not detected immediately, you could waste time if you get an error two hours into a backup operation. That said, if your database is small to medium sized and you know you have plenty of hard disk room, the wizard might work for you, although it is still probably better to do it on your own.

There is also a known issue with the Database Maintenance Plan Wizard: even if you select NORECOVERY mode during setup and have the wizard perform your backup, copy, and restore, the database is restored with STANDBY. The database is switched to NORECOVERY mode after the first transaction log backup is applied. Therefore, if you need your database to start out in NORECOVERY (even though it will be corrected once logs are applied), the only way to achieve this is to restore the database manually.
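A manual initialization, as recommended above, might look like the following sketch. The database name, logical file names, and all paths are hypothetical; check the actual logical names with RESTORE FILELISTONLY first.

```sql
-- On the primary: take the point-in-time full backup
BACKUP DATABASE LOGSHIPDB
TO DISK = '\\backupserver\share\LOGSHIPDB_full.bak'

-- On the secondary: restore without recovering, so that
-- transaction log backups can be applied afterward
RESTORE DATABASE LOGSHIPDB
FROM DISK = 'E:\Restore\LOGSHIPDB_full.bak'
WITH NORECOVERY,
MOVE 'LOGSHIPDB_Data' TO 'E:\Data\LOGSHIPDB_Data.mdf',
MOVE 'LOGSHIPDB_Log'  TO 'F:\Logs\LOGSHIPDB_Log.ldf'
-- Or, for a read-only standby, use
-- WITH STANDBY = 'E:\Undo\LOGSHIPDB_undo.dat' instead of NORECOVERY
```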

If you do not restore the database with STANDBY or NORECOVERY, you will see an error similar to the following:

Error on loading file c:\backups\Log_Source_tlog_200010090015.TRN : [Microsoft
SQL-DMO (ODBC SQLState: 42000)] Error 4306: [Microsoft][ODBC SQL Server
Driver][SQL Server]The preceding restore operation did not specify WITH
NORECOVERY or WITH STANDBY. Restart the restore sequence, specifying WITH
NORECOVERY or WITH STANDBY for all but the final step. [Microsoft][ODBC SQL
Server Driver][SQL Server]Backup or restore operation terminating abnormally.

One reason for this error is that the database might mistakenly have been placed into read-only mode while it is already recovered, or active. Read-only mode on an active database is just that: read-only; you cannot apply a transaction log to the database. Visually, this appears similar to the way a database restored with STANDBY appears in Enterprise Manager (see Figure 7-3). To see if your database is set to read-only, look at the Options tab in the Properties dialog box for the specific database, as shown in Figure 7-4.
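You can also check programmatically (SQL Server 2000; the database name is hypothetical):

```sql
-- Returns READ_ONLY or READ_WRITE for an active database
SELECT DATABASEPROPERTYEX('LOGSHIPDB', 'Updateability')

-- If the database was mistakenly made read-only while active:
ALTER DATABASE LOGSHIPDB SET READ_WRITE
```

Note that this applies only to an active database placed in read-only mode; a database restored with STANDBY must stay in its loading state for log shipping to work.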


Figure 7-3. How a database that is restored with the STANDBY option appears in Enterprise Manager.


Figure 7-4. The Options tab for a database.

Caution

Please ensure that you are applying the proper database backup to the secondary server. For example, if you apply the wrong point-in-time full backup file, you will not be able to restore transaction logs. A sample error message follows:

[Microsoft SQL-DMO (ODBC SQLState: 42000)]
Error 4305: [Microsoft][ODBC SQL Server Driver][SQL Server]The log in
this backup set begins at LSN 7000000026200001, which is too late to
apply to the database. An earlier log backup that includes LSN
6000000015100001 can be restored. [Microsoft][ODBC SQL Server
Driver][SQL Server]RESTORE LOG is terminating abnormally.

Step 2: Running the Database Maintenance Plan Wizard

Once the point-in-time full backup is restored, you can run the Database Maintenance Plan Wizard. Table 7-2 lists all the parameters associated with configuring log shipping using the Database Maintenance Plan Wizard.

On the CD

For complete installation instructions using the Database Maintenance Plan Wizard, see the file Configuring_Log_Shipping.doc.

Warning

There is a known issue documented in Knowledge Base article 311801, "BUG: Error 3154 Reported in Log Shipping Restore Job Sporadically." If you configure log shipping for more than one database, back up all the databases to the same share, and the transaction log backup names differ only by _tlog, the RESTORE job on the secondary fails. To fix the problem, ensure that each database gets its own directory for backups. See the article for any other relevant information.

Step 3: Post-Wizard Configuration Tasks

After you configure log shipping with the Database Maintenance Plan Wizard, your configuration is not complete. There are a few remaining tasks that must be performed. The four tasks involve the following:

  • Creating the DTS package for transferring database users to the secondary.

  • Creating a job to regularly bcp out syslogins for the system-level logins.

    Remember, as discussed earlier, there are two levels of logins that you need to worry about. Every database user is linked to some server-level login. Without the ability to properly synchronize them, you create what are known as orphans, and after the role change the users will be unable to connect to the database. An orphan occurs when a database user does not have a corresponding server-level login in SQL Server.

  • Creating jobs to eventually run the stored procedures to perform the role change.

  • Modifying sp_resolve_logins.

In the event of a problem, you do not want to be worried about syntax. These steps remove that worry. If you do not want to make each procedure a SQL Server Agent job, you can also create the scripts for each server based on the information provided here, and then put them in a well-known place to have them ready for execution.
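If you suspect orphans already exist, you can list them before relying on the role change. A sketch, assuming the log shipped database is named mylogshipdb and has been recovered:

```sql
-- Run in the log shipped database; reports database users that have
-- no matching server-level login.
USE mylogshipdb
EXEC sp_change_users_login 'Report'
```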

Task 1: Configure Transfer Logins Task DTS Package

In this exercise, you create the DTS package that transfers any database users from the primary to the secondary.

Note

Remember, DTS does not exist for 64-bit editions of SQL Server 2000. You must create DTS packages on a 32-bit instance to run against your 64-bit instances.

  1. In Enterprise Manager, expand the Data Transformation Services control tree.

  2. Right-click Local Packages and choose New Package from the shortcut menu.

  3. In the DTS Package window, go to the Task menu and select Transfer Logins Task.

  4. In the Transfer Logins Properties dialog box, click the Source tab. In the Source Server text box, enter the name of the primary server. Select Use Windows Authentication.

  5. On the Destination tab, select the name of the secondary server. Select Use Windows Authentication.

  6. On the Logins tab, select Logins For Selected Databases and then select the database that is being log shipped.

  7. Click OK.

  8. In the DTS Package window, go to the Package menu and choose Save As.

  9. In the Package Name text box, type Copy DBTOLOGSHIP Users or another name that makes sense. In the Server drop-down list box, select the name of the primary server. Select Use Windows Authentication.

  10. Click OK.

  11. Close the DTS Package window.

  12. In the Local Packages window, right-click Copy DBTOLOGSHIP Users and click Schedule Package.

  13. Schedule the package to run as often as required.

  14. Modify the job Copy DBTOLOGSHIP Users by changing the owner to the name of the owner of the primary database.

Warning

If you are transferring large numbers of logins with the transfer logins task, it could be slow. Check Knowledge Base article 311351, "BUG: Transfer Login Task Is Slow with a Large Number of Logins," for updates.

Task 2: Set Up a Job to bcp out SYSLOGINS

In this task, you create a new job on the primary server to back up the syslogins table.

  1. In Enterprise Manager, expand the Management control tree on the primary server.

  2. Expand SQL Server Agent.

  3. Right-click the Jobs icon and select New Job.

  4. On the General tab, in the Name text box, type Backup SYSLOGINS or another name that makes sense to you. Set the owner to a user who has privileges to access the system tables.

  5. On the Steps tab, click New.

  6. In the New Job Step dialog box, click the General tab. In the Step Name text box, type BCP Out or something that makes sense.

  7. In the Type drop-down list box, select Operating System Command (CmdExec).

  8. In the Command text box, type the following command:

    BCP master..syslogins out pathforfile\syslogins.dat /N /S
    name_of_current_primary_server /U sa /P sa_password
    • Pathforfile is the path where the file will be created. This should be the same location where the transaction logs are copied because it is accessible by both servers. It also eliminates the problem of one server being a single point of failure.

    • Syslogins.dat is the file containing the login information. You can name this anything you want.

  9. Click OK.

  10. On the Schedules tab, create a new schedule called Transfer Syslogins. Schedule this job to run as often as required. It should also be synchronized to some degree with the job to DTS out the database logins.

  11. Click OK to close the New Job Properties dialog box.

Task 3: Configure the Role Change SQL Server Agent Jobs for Manual Execution

In this exercise, you create the jobs on the proper servers that will contain the stored procedures used in the role-change process.

Subtask A: Create a Job to Demote the Primary On the primary server, create a SQL Server Agent job named Change Primary Role. Set the owner to the owner of the database or one that has the appropriate privileges. This job executes the sp_change_primary_role stored procedure, which has the following options:

  • @db_name. This parameter is the name of the primary database currently being log shipped. The value is enclosed in single quotes.

  • @backup_log. This parameter tells SQL Server to back up the transaction log once more before changing the state of the database. This is set to either 0, which tells SQL Server not to make a final transaction log backup, or 1, which instructs SQL Server to make one. The default is 1.

  • @terminate. This parameter tells SQL Server to immediately roll back any active transactions and puts the database in single-user mode for the duration of this stored procedure execution. This is set to either 0, which tells SQL Server not to roll back any pending transactions, or 1, which instructs SQL Server to do an immediate rollback of transactions and put the database in single-user mode. The default is 0.

  • @final_state. This parameter sets the final state of the primary database after the stored procedure is run. The values that can be configured are 1, which leaves the database in recovery mode (available for writes and reads, but no longer the primary); 2, which leaves the database in no recovery mode (maps to the NORECOVERY function of the RESTORE statement) and allows the database to accept transaction logs but not be available for read-only access; and 3, which leaves the database in standby mode (maps to the STANDBY function of the RESTORE statement) and allows the database to accept transaction logs while also being available for read-only access. The default is 1, and if you want to eventually use the database again as a secondary in log shipping, you should set it to 2 or 3.

  • @access_level. This parameter sets the accessibility of the primary database after the stored procedure is run. The values that can be configured are 1, which leaves the database accessible for multiple users; 2, which allows access only by restricted users (maps to the RESTRICTED_USER function of the RESTORE statement) and is used only when @final_state is set to 1 and you want only members of db_owner, dbcreator, or sysadmin roles to have access; and 3, allowing only one user to access the database. The default value, if not specified, is 1.

The following is an example of this stored procedure with all of the parameters. If you do not run it in a job step, you must prefix sp_change_primary_role with msdb.dbo; within a job step, the prefix is not needed.

EXEC msdb.dbo.sp_change_primary_role @db_name = 'mylogshipdb',
@backup_log = 1,
@terminate = 0,
@final_state = 2,
@access_level = 1

Here are the full steps to create the job:

  1. In Enterprise Manager, expand the Management control tree on the primary server.

  2. Expand SQL Server Agent.

  3. Right-click the Jobs icon and select New Job.

  4. On the General tab, in the Name text box, type a name that is easily understood, such as Run sp_change_primary_role. Set the owner to the owner of the primary database.

  5. On the Steps tab, click New.

  6. In the New Job Step dialog box, click the General tab. In the Step Name text box, enter a name that makes sense to you.

  7. In the Type drop-down list box, select Transact-SQL Script (TSQL).

  8. Enter the syntax for the stored procedure (such as the example just shown).

  9. Click OK.

  10. Do not create a schedule for the job on the Schedules tab. You want to execute this job only on demand.

  11. Click OK to close the New Job Properties dialog box.

Subtask B: Create a Job to Promote the Secondary On each secondary server, create a SQL Server Agent job named Change Secondary Role. Set the owner of the job to the owner of the database or one who has the appropriate permissions. Further define the job to have one Transact-SQL step named Run sp_change_secondary_role and to use the database msdb. Do not schedule this, but allow it to be executed on demand. This step executes the sp_change_secondary_role stored procedure, which has the following options:

  • @db_name. This parameter is the name of the secondary database that will be promoted to the primary. The value is enclosed in single quotes.

  • @do_load. This parameter forces all remaining pending transaction logs to be copied and restored prior to recovering the database. This is set to either 0, which tells SQL Server not to copy and restore the remaining logs, or 1, which tells it to copy and restore them. The default is 1.

  • @force_load. This parameter forces SQL Server to restore the pending transaction logs. This only works if @do_load is set to 1. This is set to either 0, which tells SQL Server not to force a restore, or 1, which tells SQL Server to force the restore. The default is 1.

  • @final_state. This parameter sets the final state of the secondary database after the stored procedure is run. The values that can be configured are 1, which leaves the database in recovery mode (available for writes and reads); 2, which leaves the database in no recovery mode (maps to the NORECOVERY function of the RESTORE statement) and allows the database to accept transaction logs but not be available for read-only access; and 3, which leaves the database in standby mode (maps to the STANDBY function of the RESTORE statement) and allows the database to accept transaction logs and be available for read-only access. The default is 1, and it would make no sense to set it to 2 or 3 if this is to be the new active database.

  • @access_level. This parameter sets the accessibility of the secondary database after the stored procedure is run. The values that can be configured are 1, which leaves the database accessible for multiple users; 2, which allows access only by restricted users (maps to the RESTRICTED_USER function of the RESTORE statement) and is used only when @final_state is set to 1 and you want only members of the db_owner, dbcreator, or sysadmin roles to have access; and 3, allowing only one user to access the database. The default value if not specified is 1, and it would make no sense to set it to 2 or 3 because this will be the new primary.

  • @terminate. This parameter tells SQL Server to immediately roll back any active transactions and puts the database in single-user mode for the duration of this stored procedure execution. This is set to either 0, which tells SQL Server not to roll back any pending transactions, or 1, which makes it perform an immediate rollback of transactions and put the database in single-user mode. The default is 1.

  • @keep_replication. This parameter specifies that if replication was in use on the original primary, the settings will be preserved when restoring any pending transaction logs. This option is ignored if you do not set @do_load to 1. The default is 0.

Caution

If you do not set @do_load to 1 and @keep_replication to 1, you will lose all your replication settings when the database is recovered after the last transaction log loads.

  • @stopat. This parameter specifies a point in time; the pending transaction logs are restored only up to that date and time, leaving the database in its state as of that moment. This option is ignored if you do not set @do_load to 1. The default value is NULL.

The following is an example of this stored procedure with all of the parameters. If you do not run it in a job step, you must prefix sp_change_secondary_role with msdb.dbo; within a job step, the prefix is not needed.

EXEC msdb.dbo.sp_change_secondary_role @db_name = 'mylogshipdb',
@do_load = 1,
@force_load = 1,
@final_state = 1,
@access_level = 1,
@terminate = 1,
@keep_replication = 1,
@stopat = NULL

Follow the steps listed under Subtask A to create the SQL Server Agent job, renaming appropriately and inserting the proper syntax for this stored procedure.

Subtask C: Create a Job to Resolve the Logins On the secondary server, create a SQL Server Agent job, which should be executed by someone with sysadmin privileges to execute sp_resolve_logins and to use the database master. This stored procedure has the following variables:

  • @dest_db. This parameter is the name of the database where logins will be synchronized. The value is enclosed in single quotes.

  • @dest_path. This parameter is the location of the BCP file you have been creating. The value is enclosed in single quotes.

  • @filename. This parameter is the exact file name that you have been creating.

The following is an example of this stored procedure with all of the parameters. If you do not run it in a job step, you must prefix sp_resolve_logins with master.dbo; within a job step, the prefix is not needed.

EXEC master.dbo.sp_resolve_logins @dest_db = 'mylogshipdb',
@dest_path = 'h:\mydirectory',
@filename = 'syslogins.dat'

Follow the steps listed under Subtask A to create the SQL Server Agent job, renaming appropriately and inserting the proper syntax for this stored procedure.

Subtask D: Create a Job to Change Primaries at the Log Shipping Monitor On the server containing the Log Shipping Monitor, create a SQL Server Agent job, which should be executed by someone with sysadmin privileges to execute sp_change_monitor_role and to use the database msdb. This stored procedure has the following variables:

  • @primary_server. This parameter is the name of the original primary server. The value is enclosed in single quotes.

  • @secondary_server. This parameter is the name of the secondary server that was converted to the new primary server. The value is enclosed in single quotes.

  • @database. This parameter is the name of the database promoted to the new primary. The value is enclosed in single quotes.

  • @new_source. This parameter is the path where the new primary will be putting its transaction logs. The value is enclosed in single quotes.

The following is an example of this stored procedure with all of the parameters. If you do not run it in a job step, you must prefix sp_change_monitor_role with msdb.dbo; within a job step, the prefix is not needed.

EXEC sp_change_monitor_role @primary_server = 'primarysrv',
@secondary_server = 'secondarysrv',
@database = 'mylogshipdb',
@new_source = '\\newprisrv1\tlogs'

Follow the steps listed under Subtask A to create the SQL Server Agent job, renaming appropriately and inserting the proper syntax for this stored procedure.

Step 4: Verifying and Testing the Log Shipping Pair

The easiest way to verify that the process is working properly is to open the Log Shipping Monitor after the process has had some time to do some copies and restores on the secondary. It is located on the Management tab of Enterprise Manager on the server you designated during configuration.

More Info

For more information on the Log Shipping Monitor, see the section "Administering Log Shipping" later in this chapter.

Before this goes into production, you should also test the role-change process. This means you might have to reconfigure the log shipping pair (if you do not set the secondary to also become the primary at some point, and so on), but you can also have confidence that everything you have set up is correct. For a disaster recovery drill, it is absolutely crucial that you have this down pat.

On the CD

For a test plan to test the log shipping role-change process, see the file Log_Shipping_Test_Plan.xls. There is also a checklist to easily verify all postconfiguration tasks in the file Postconfiguration_Log_Shipping_Checklist.doc.

Task 4: Modify sp_resolve_logins

As documented in Knowledge Base article 310882, "BUG: sp_resolve_logins Stored Procedure Fails If Executed During Log Shipping Role Change," there is a known issue with the sp_resolve_logins stored procedure that requires a manual fix. This stored procedure exists in the master database. Here’s the problem. The code currently contains

SELECT *
INTO #sysloginstemp
FROM syslogins
WHERE sid = 0x00

which is incorrect because it does not fully qualify the syslogins table. The statement should be manually corrected to this:

SELECT *
 INTO #sysloginstemp
 FROM master.dbo.syslogins
 WHERE sid = 0x00

Tip

Back up your system databases before performing a modification to a system-stored procedure such as this one.
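For example, before editing sp_resolve_logins you might run something like the following; the backup paths are illustrative:

```sql
-- Back up master (which holds the stored procedure) and, for good
-- measure, msdb (which holds the log shipping tables).
BACKUP DATABASE master TO DISK = 'e:\backups\master_before_fix.bak'
BACKUP DATABASE msdb TO DISK = 'e:\backups\msdb_before_fix.bak'
```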

Troubleshooting Log Shipping

There are some steps that you can follow to troubleshoot an installation that is not behaving as expected:

  • Make sure the SQL Server Agent is started on the primary and the secondary.

  • Make sure you have no other transaction log backup jobs or processes configured or running for the database.

  • Make sure that no operations (such as changing the database back to simple recovery model) are breaking or have broken the LSN chain.

  • If you are only seeing first_file_000000000000.trn in the Log Shipping Monitor, it could mean one of a few things:

    • You have not given log shipping enough time to complete a full cycle.

    • The secondary has no rights or just cannot access the share that was configured during installation.

    • If last_file_loaded and last_file_copied still reflect first_file_000000000000.trn, even if the copy is occurring, the tables driving the GUI might not be getting updated. Last_file_loaded and last_file_copied are driven by msdb.dbo.log_shipping_secondaries, and last_backup_file is driven by msdb.dbo.log_shipping_primaries. This usually happens when you have configured log shipping to use Windows Authentication only and the SQL Server Agent service startup account of the primary or secondary does not have enough privileges to update the table on the monitor. It also could be an indication that the primary_server_name column of log_shipping_primaries or the secondary_server_name column of log_shipping_secondaries does not reflect the proper names. If it turns out to be a permissions problem, grant the UPDATE and SELECT rights for the appropriate account. If it is the server name issue, update the tables appropriately.

  • If you upgraded from SQL Server 2000 Standard Edition to Enterprise Edition and could not configure log shipping because the components seem to be missing (you will most likely see Errors 208 and 2812), run the file Instls.sql, which is found in the Install directory of the SQL Server 2000 Enterprise Edition installation CD.

Warning

If you have access to the Enterprise Edition CD, do not run this script on any other version of SQL Server. It is not supported.

  • Make sure that you applied the right point-in-time backup and that nothing could have broken the LSN chain. Fixes for those errors more than likely will involve a full reconfiguration of log shipping, which means you must remove the current install first. The error messages for these types of root causes were detailed earlier in this chapter.

Note

If you need to reconfigure log shipping, you will have to delete the current configuration and also possibly remove the Database Maintenance Plan that was created. This is documented later in the section "Removing Log Shipping." If for some reason you are still having problems, you might have to manually check each log shipping table in msdb and delete the offending rows.
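A sketch of such a manual check; inspect the rows before deleting anything by hand, and back up msdb first:

```sql
-- Run on the relevant servers to see what log shipping configuration
-- rows remain in msdb.
SELECT * FROM msdb.dbo.log_shipping_plans
SELECT * FROM msdb.dbo.log_shipping_plan_databases
SELECT * FROM msdb.dbo.log_shipping_primaries
SELECT * FROM msdb.dbo.log_shipping_secondaries
```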

Administering Log Shipping

After configuring log shipping, you need to understand how to administer log shipping, including how to monitor the process.

Removing Log Shipping

Use the following steps to remove log shipping from a Database Maintenance Plan:

  1. Open the Database Maintenance Plan for the database on the primary server.

  2. Select the Log Shipping tab and click Remove Log Shipping.

  3. You are prompted with the question, "Are you sure you want to remove log shipping?" as shown in Figure 7-5. Answer Yes or No, and SQL Server will do your bidding.

Removing log shipping from a Database Maintenance Plan.

Figure 7-5. Removing log shipping from a Database Maintenance Plan.

Caution

If you choose Yes, you remove all pairs participating in log shipping. If you have multiple secondaries and only want to remove one, follow the instructions in the next section, "Deleting a Secondary."

Deleting a Secondary

If you choose this option, you delete only one of the secondaries if multiple secondary servers are configured. The following are the steps to delete a single secondary from the log shipping definition:

  1. Open the Database Maintenance Plan for the database on the primary server.

  2. Click the Log Shipping tab and click Remove Log Shipping.

  3. You are prompted with the question "Are you sure you want to remove log shipping?" as shown in Figure 7-6. Answer Yes, and SQL Server will remove the secondary. The only confirmation you receive is that the secondary no longer appears in the Database Maintenance Plan.

    Removing a secondary from a Database Maintenance Plan.

    Figure 7-6. Removing a secondary from a Database Maintenance Plan.

Caution

If you have only one secondary, performing this operation would be the same as clicking Remove Log Shipping, as shown in the previous section. You would need to reconfigure log shipping from scratch.

Monitoring Log Shipping

To view information about log shipping, you have five options:

  • Log Shipping Monitor. The Log Shipping Monitor is the first place you should look to see the status of the log shipping process for the log shipping pair. It is located on the Management tab of Enterprise Manager on the server you designated during configuration. Figure 7-7 shows what the Log Shipping Monitor displays in Enterprise Manager.

    The basic Log Shipping Monitor.

    Figure 7-7. The basic Log Shipping Monitor.

    If you select a log shipping pair in the right-hand pane of the Log Shipping Monitor and double-click it, you bring up more information about the pair, as shown in Figure 7-8. The Status tab of the Log Shipping Monitor is the most important one from a monitoring standpoint—it tells you the last file backed up on the primary, the last file copied, and the last file applied, with the appropriate time deltas. The other two tabs are discussed in the section "Changing Log Shipping Parameters After Configuration" later in this chapter.

    Detailed status information from the Log Shipping Monitor.

    Figure 7-8. Detailed status information from the Log Shipping Monitor.

    Important

    The Log Shipping Monitor is not automatically refreshed. You must refresh it each time you want to see the updated status of log shipping, as shown in the left-hand pane of Figure 7-7.

  • SQL Server Agent jobs history. Each SQL Server Agent job related to log shipping contains a status history, like any other normal job. This is a good place to look when trying to determine why things are failing.

  • Event Viewer. Log shipping status is also logged to the Application Log of the Windows Event Viewer.

    Warning

    If you do frequent transaction log backups, this could fill up your Event Viewer quickly and cause alerts to go off unnecessarily in a monitoring center. Figure 7-9 shows the message that will pop up on your server. Please monitor your Event Viewer and maintain it as necessary.

    An error message indicating that the Event Viewer is full.

    Figure 7-9. An error message indicating that the Event Viewer is full.

  • Database Maintenance Plan on the primary. The Database Maintenance Plan contains all of the information about the log shipping configuration. Database Maintenance Plans are located under the Management control tree of Enterprise Manager. There are two tabs to be concerned with: Log Shipping and Transaction Log Backup. An example of the Log Shipping tab is shown in Figure 7-10.

    Log Shipping tab of a Database Maintenance Plan.

    Figure 7-10. Log Shipping tab of a Database Maintenance Plan.

  • Querying the log shipping tables. You can also code your own queries against the tables listed earlier (Table 7-1) to create your own monitoring scheme.
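As a starting point for such a query, the following sketch joins the monitor tables to show the last backed up, copied, and loaded files for each pair; verify the column names against your msdb before relying on it:

```sql
-- Run on the monitor server.
SELECT p.primary_server_name, p.primary_database_name,
       p.last_backup_filename,
       s.secondary_server_name, s.secondary_database_name,
       s.last_copied_filename, s.last_loaded_filename
FROM msdb.dbo.log_shipping_primaries AS p
JOIN msdb.dbo.log_shipping_secondaries AS s
    ON p.primary_id = s.primary_id
```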

Changing Log Shipping Parameters After Configuration

You can change the parameters for log shipping after installation; however, the parameters are not centralized in one place.

Database Maintenance Plan

The Database Maintenance Plan is generally the first place you would go to tweak parameters for log shipping.

Log Shipping Tab When you select the Log Shipping tab in the Destination Server Information window (see Figure 7-10), select the secondary you would like to modify and then click Edit. You are presented with three tabs: General, Initialize, and Thresholds.

  • General tab. The General tab, shown in Figure 7-11, allows you to tweak a few options: where the transaction logs are copied to on the secondary and if the secondary will be able to assume the role of the primary.

    General tab of a Log Shipping secondary in a Database Maintenance Plan.

    Figure 7-11. General tab of a Log Shipping secondary in a Database Maintenance Plan.

  • Initialize tab. The Initialize tab, shown in Figure 7-12, allows you to tweak the state of the secondary database after the transaction log loads, if the log shipping process will automatically terminate the users in the database, and the copy and load frequencies.

    Initialize tab of a Log Shipping secondary in a Database Maintenance Plan.

    Figure 7-12. Initialize tab of a Log Shipping secondary in a Database Maintenance Plan.

  • Thresholds tab. The Thresholds tab, shown in Figure 7-13, is important. It controls when SQL Server alerts you if you are too far out of sync as well as the load delay, file retention, and history retention. You have to update the number for the Out Of Sync Threshold if you change your transaction log frequency. If you feel the number is too high or too low for your environment, change it.

    Thresholds tab of a Log Shipping secondary in a Database Maintenance Plan.

    Figure 7-13. Thresholds tab of a Log Shipping secondary in a Database Maintenance Plan.

Transaction Log Backup Tab If you want to change anything relating to the transaction log backup scheme you must do it in the Transaction Log Backup tab, as shown in Figure 7-14. The parameters you can tweak are also shown in Figure 7-14.

Transaction Log Backup tab of a Database Maintenance Plan.

Figure 7-14. Transaction Log Backup tab of a Database Maintenance Plan.

Important

Changing any values here affects all secondary servers. Also, you might need to tweak the Out Of Sync Threshold value if you change the frequency of the transaction log backups.

Note

There is no way to change the backup network share using the Enterprise Manager GUI once you have log shipping configured. This must be done using one of the two following methods:

  1. If the new destination folder lives on the same server as the old folder, remove the old share and rename the new share to the old share’s name.

  2. If the new destination folder is on a different computer, which is the more likely scenario, run the query listed here to update the location. You should back up msdb prior to executing the query.

    UPDATE msdb.dbo.log_shipping_plans
    SET source_dir = '\\new_computer_name\new_sharename'
    WHERE source_dir = '\\old_computer_name\old_sharename'

Log Shipping Monitor

The Log Shipping Monitor is also used to change some parameters associated with log shipping. The two tabs you use are Source and Destination.

Source Tab The Source tab, shown in Figure 7-15, allows you to control when you are notified about transaction log backup failures on the primary.

Source tab of the Log Shipping Monitor.

Figure 7-15. Source tab of the Log Shipping Monitor.

Tip

If you are performing intrusive maintenance regularly, or even once, on the primary, configure the day and times under Suppress Alert Generation Between. For example, if you do a full index rebuild every Sunday night between midnight and 3 A.M., set that time. That way, those monitoring it will not see false errors.

Destination Tab The Destination tab, shown in Figure 7-16, allows you to control when you are notified that the secondary is too far out of sync.

Destination tab of the Log Shipping Monitor.

Figure 7-16. Destination tab of the Log Shipping Monitor.

Tip

Both the primary server and the secondary servers must be registered in the Enterprise Manager where the monitor instance is registered. If this is not the case, you will not be able to tweak the parameters just listed. See Figure 7-17 and Figure 7-18 for the error messages you would see.

Error message if the primary is not registered in Enterprise Manager.

Figure 7-17. Error message if the primary is not registered in Enterprise Manager.

Error message if the secondary is not registered in Enterprise Manager.

Figure 7-18. Error message if the secondary is not registered in Enterprise Manager.

Moving the Log Shipping Monitor

Moving the Log Shipping Monitor functionality to another SQL Server 2000 instance is not documented elsewhere. You would need to do this if the server containing the Log Shipping Monitor fails or you want to move it to another server.

Tip

Because you need to get information from the current Log Shipping Monitor to re-create another one, you should gather the information directly after configuring log shipping.

Step 1: Configure the New Log Shipping Monitor

You need to insert information into the database so that the Log Shipping Monitor knows about the primary. Repeat this for the primary in each log shipping pair by following these steps:

  1. Get the information needed to populate the Log Shipping Monitor. On the current monitor server, run the following query:

    SELECT primary_id, primary_server_name, primary_database_name,
    maintenance_plan_id, backup_threshold, threshold_alert,
    threshold_alert_enabled, planned_outage_start_time,
    planned_outage_end_time, planned_outage_weekday_mask
    FROM msdb.dbo.log_shipping_primaries
    WHERE primary_database_name = 'mydbname'

    On the CD

    This query, saved as a Transact-SQL script, is on the CD for you to use. The file name is Monitor_Primary_Info.sql.

  2. Run the stored procedure sp_add_log_shipping_primary on the new Monitor server, which uses the information from the preceding query. An example is shown here:

    EXEC msdb.dbo.sp_add_log_shipping_primary
     @primary_server_name = 'MyPrimaryServer',
     @primary_database_name = 'logshipdb',
     @maintenance_plan_id = '9B4E380E-11D2-41FC-9BA5-A8EB040A3DEF',
     @backup_threshold = 15,
     @threshold_alert = 14420,
     @threshold_alert_enabled = 1,
     @planned_outage_start_time = 0,
     @planned_outage_end_time = 0,
     @planned_outage_weekday_mask = 0

    Alternatively, you can insert the information directly into the msdb.dbo.log_shipping_primaries table on the Log Shipping Monitor.

You now need to insert information into the msdb database on the new monitor so that the Log Shipping Monitor knows about each secondary. Repeat this for each secondary in each log shipping pair using the following steps:

  1. Get the information needed to populate the Log Shipping Monitor. On the monitor server, run the following query:

    SELECT primary_id, secondary_server_name, secondary_database_name,
    secondary_plan_id, copy_enabled, load_enabled, out_of_sync_threshold,
    threshold_alert, threshold_alert_enabled, planned_outage_start_time,
    planned_outage_end_time, planned_outage_weekday_mask
    FROM msdb.dbo.log_shipping_secondaries
    WHERE secondary_database_name = 'mysecondarydbname'

    On the CD

    This query, saved as a Transact-SQL script, is on the CD for you to use. The file name is Monitor_Secondary_Info.sql.

  2. Run the stored procedure sp_add_log_shipping_secondary on the new monitor server, using the information from the preceding query. An example is shown here:

    EXEC msdb.dbo.sp_add_log_shipping_secondary
     @primary_id = 1,
     @secondary_server_name = 'MySecondaryServer',
     @secondary_database_name = 'logshipdb',
     @secondary_plan_id = 'B5C330FF-1081-4FCB-83D0-955DDFB56BA5',
     @copy_enabled = 1,
     @load_enabled = 1,
     @out_of_sync_threshold = 15,
     @threshold_alert = 14421,
     @threshold_alert_enabled = 1,
     @planned_outage_start_time = 0,
     @planned_outage_end_time = 0,
     @planned_outage_weekday_mask = 0,
     @allow_role_change = 0

    Important

    Make sure that the value for @primary_id matches the one inserted into the log_shipping_primaries table on the new monitor, as it is an automatically generated value. If it does not match, you will get a message similar to this:

    Server: Msg 14262, Level 16, State 1,
    Procedure sp_add_log_shipping_secondary, Line 20
    The specified primary_id ('msdb.dbo.log_shipping_primaries') does not exist.

    Example Transact-SQL to get the new primary_id from the inserted row on the new monitor is:

    SELECT primary_id
    FROM msdb.dbo.log_shipping_primaries
    WHERE maintenance_plan_id = 'CE6960C2-F51F-4585-B79B-172E35AF8B4B'

    Alternatively, you can insert the information directly into the msdb.dbo.log_shipping_secondaries table on the Log Shipping Monitor.

  3. Create the jobs and alerts that are necessary for the Log Shipping Monitor. The easiest way to do this is to script out the current alerts configured on the current monitor and then modify them as necessary.
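    The @threshold_alert values used earlier (14420 and 14421) are the standard log shipping alert errors: 14420 fires when the backup threshold is exceeded and 14421 when the out-of-sync threshold is exceeded. As a sketch (the alert names here are illustrative; match whatever the old monitor used), the alerts can be re-created on the new monitor with sp_add_alert:

    -- Re-create the standard log shipping alerts on the new monitor.
    EXEC msdb.dbo.sp_add_alert
     @name = 'Log shipping: backup threshold exceeded',
     @message_id = 14420,
     @severity = 0,
     @enabled = 1

    EXEC msdb.dbo.sp_add_alert
     @name = 'Log shipping: out of sync threshold exceeded',
     @message_id = 14421,
     @severity = 0,
     @enabled = 1

    If the old monitor notified operators, attach them to the new alerts with sp_add_notification.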

Step 2: Update the log_shipping_monitor Table

On the primary server and all secondary servers that will be using the new Log Shipping Monitor, execute the following Transact-SQL on each server to change the monitor defined in the log_shipping_monitor table:

EXEC msdb.dbo.sp_define_log_shipping_monitor
 @monitor_name = 'GRANDILLUSION',
 @logon_type = 1,
 -- Use @logon_type = 2 for SQL Server authentication with the
 -- log_shipping_monitor_probe login, and supply @password as well:
 -- @password = 'password',
 @delete_existing = 1

On the original monitor server, also execute the following. Do not do this on the new monitor server.

DELETE FROM msdb.dbo.log_shipping_primaries WHERE primary_id = n
DELETE FROM msdb.dbo.log_shipping_secondaries WHERE primary_id = n

where n is the original primary_id. You must also delete the alert jobs from the old Log Shipping Monitor.
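If the old monitor's alerts were created against the standard log shipping errors, they can be removed with sp_delete_alert. The names below are illustrative; check the Alerts folder under SQL Server Agent in Enterprise Manager for the actual names on your server:

EXEC msdb.dbo.sp_delete_alert
 @name = 'Log shipping: backup threshold exceeded'

EXEC msdb.dbo.sp_delete_alert
 @name = 'Log shipping: out of sync threshold exceeded'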

Note

In some cases the old Log Shipping Monitor will still appear to be configured, but it is no longer updated; all updates should now occur at the newly defined Log Shipping Monitor. Use Step 3 (below) to verify this.

Step 3: Verify the New Log Shipping Monitor

To verify that the newly configured Log Shipping Monitor has been set up properly, check for the following:

  • The Log Shipping Monitor should now display the information for all log shipping pairs configured in Step 1.

  • Wait through a few backup, copy, and restore cycles to ensure that all log shipping pairs are functioning properly and are in sync (or within the delta that you set).

  • Check the status of all the SQL Server Agent jobs on the server hosting the Log Shipping Monitor; no errors should be found.
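You can also confirm the move by querying the new monitor's tables directly. This sketch joins the two tables shown earlier on primary_id; one row should come back for every log shipping pair you moved:

SELECT p.primary_server_name, p.primary_database_name,
 s.secondary_server_name, s.secondary_database_name
FROM msdb.dbo.log_shipping_primaries AS p
JOIN msdb.dbo.log_shipping_secondaries AS s
 ON p.primary_id = s.primary_id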

Step 4: Delete Old Monitor History and Entries

After the move, some system tables contain orphaned or stale rows. You can delete the relevant rows from these tables if you want. The tables can include the following:

  • msdb.dbo.log_shipping_plan_history

  • msdb.dbo.sysdbmaintplan_databases

  • msdb.dbo.sysdbmaintplan_history

  • msdb.dbo.sysdbmaintplan_jobs

  • msdb.dbo.sysdbmaintplans

  • msdb.dbo.sysjobs

  • msdb.dbo.sysjobschedules

  • msdb.dbo.backupfile

  • msdb.dbo.backupmediafamily

  • msdb.dbo.restorefile

  • msdb.dbo.restorefilegroup

  • msdb.dbo.restorehistory

  • msdb.dbo.backupset

  • msdb.dbo.backupmediaset

Caution

Do not delete data from system tables without thinking it through first. In most cases you should qualify the DELETE with a WHERE clause so you do not remove more than you intend.
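For example, instead of clearing the restore history tables outright, you can scope the delete to the old log shipped database (the database name here is illustrative) and inspect the row count before committing:

BEGIN TRANSACTION
DELETE FROM msdb.dbo.restorehistory
WHERE destination_database_name = 'logshipdb'
-- Check @@ROWCOUNT here; COMMIT if it looks right, otherwise ROLLBACK.
COMMIT TRANSACTION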

Adding Additional Secondaries

Adding another secondary database to the Database Maintenance Plan is simple. Keep in mind that every secondary uses the same transaction log backup schedule; you cannot customize the schedule for each secondary. However, you can customize certain variables, such as how often you copy and load the transaction logs.

To add a new secondary, click Add on the Log Shipping tab of the Database Maintenance Plan. You are then presented with three tabs—General, Initialize, and Thresholds—that are basically the same as the ones you use when editing an existing secondary. These are shown in Figure 7-19, Figure 7-20, and Figure 7-21. The parameters are also basically the same as those you use when configuring a secondary in the wizard. When you are finished, click OK. The only confirmation you see is that the new secondary has been added to the Database Maintenance Plan.


Figure 7-19. General tab when adding a new secondary.


Figure 7-20. Initialize tab when adding a new secondary.


Figure 7-21. Thresholds tab when adding a new secondary.

To verify that things are working properly, refresh the Log Shipping Monitor. The new pair is then displayed. Wait until one cycle is finished and see that the monitor reflects that the secondary is receiving and applying the transaction logs.

Role Changes

A role change is the process of promoting the current secondary to the new primary. This process might or might not include the demotion of the current primary, depending on its availability and status. There are different types of role changes and drivers for each type that influence the process.

Types of Role Changes

The two types of role changes are simple: planned and unplanned.

  • A planned role change is exactly what it sounds like: you switch roles during planned downtime, such as maintenance on the primary. Because you know when it will happen, it is the easiest type to prepare for.

  • An unplanned role change is the most common scenario for log shipping. This is when a problem occurs on the primary and you need to switch to the secondary.

Performing a Role Change

The following steps show how to switch the roles of the primary and the secondary servers.

Tip

Before you proceed to the next step, check (where appropriate) the job history of each stored procedure run, which tells you whether it succeeded or failed.
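One way to check each run without leaving Query Analyzer is to read the job history tables in msdb; in sysjobhistory, a run_status of 1 means success and 0 means failure. The job name below is illustrative:

SELECT TOP 1 j.name, h.run_status, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
 ON h.job_id = j.job_id
WHERE j.name = 'Change Primary Role'
 AND h.step_id = 0 -- the job outcome row
ORDER BY h.instance_id DESC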

  1. Notify any users or anyone who needs to know that there will be an interruption of service. How you do this and who you notify should be defined in your SLA.

  2. If there is a catastrophic failure on the primary server (triggering an unplanned role change), skip directly to Step 5. In that case you will not be able to back up the tail of the log, so there will be a larger delta of data loss relative to the primary. If the primary server is still available and there are connections into the current primary database, allow them to complete but do not allow new transactions, and make sure a final transaction log backup is made and copied.

  3. Before proceeding, if the primary database is still available, consider putting it into single-user mode once activity winds down to ensure that no other transactions or connections interfere with the role change process. You can do this with Enterprise Manager or with the ALTER DATABASE statement in Transact-SQL. If the primary database is not available, or if you set the @terminate option of sp_change_primary_role to 1, this is not necessary.

  4. On the primary server, execute the SQL Server Agent job you configured to run sp_change_primary_role.

  5. On the secondary server, execute the SQL Server Agent job you configured to run sp_change_secondary_role.

  6. On the secondary server, execute the SQL Server Agent job you configured to run sp_resolve_logins.

    Warning

    sp_resolve_logins only resolves typical SQL Server logins. If you have remote logins configured on the primary, they need to be manually re-created on the secondary. Also, if you do not configure the Transfer Logins Task DTS package and the bcp of the syslogins table to be run, execution of sp_resolve_logins will fail.

    More Info

    Running sp_resolve_logins is not necessary if you use the methods provided in Knowledge Base article 303722.

  7. On the server containing the Log Shipping Monitor, execute the SQL Server Agent job you configured to run sp_change_monitor_role.

  8. If you need to rerun a service pack to upgrade the promoted database on the secondary, do it now.

  9. Test to ensure that the new primary server functions properly, and then redirect client applications to the new server using your technology of choice.

    • If it is ODBC, change the server name (and subsequent entries) for the proper ODBC data source name (DSN). The tool Data Sources (ODBC) is located in the Start menu under Administrative Tools. Figure 7-22 shows the screen where you would select the new server.


      Figure 7-22. Changing server names in ODBC.

    • If you are using Network Load Balancing under Windows 2000, execute the following in a command window. It ensures that all traffic will be redirected to the new primary server.

      WLBS drainstop NLB_cluster_name:primary_server_name /PASSW password_for_NLB_cluster
    • If you are using Windows Server 2003, use Network Load Balancing Manager to issue the drainstop.

    • Make sure Network Load Balancing is now started on the new Network Load Balancing cluster node using either wlbs on Windows 2000 or Network Load Balancing Manager under Windows Server 2003.

      More Info

      For more information on Network Load Balancing, see Chapter 5.

    • If you are using any other method, such as a custom switch in an application, make sure it is updated to point users and applications to the new primary.

  10. Notify end users and others that the database is now ready for use and provide them with contact information should any problems be encountered after the role change.

  11. If you selected the Allow Database To Assume Primary Role option during the configuration of log shipping, configure it properly and start the appropriate SQL Server Agent job to start log shipping again.

Important

The Allow Database To Assume Primary Role option only sets up the proper maintenance plan on the secondary to start performing the transaction log backups. You need to add all secondaries and enable the jobs to start the log shipping process again once you have promoted the database on the secondary.

Tip

As noted earlier, the SQL Server Agent jobs make life easier in a role change. If you have these scripted, run the scripts in the order listed previously. Do not under any circumstances allow your DBA staff to enter syntax at the time of a failure. This only raises tensions and leaves room for errors and other failures.
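If the jobs exist on each server, the scripted sequence amounts to a handful of sp_start_job calls like the following (job names are illustrative; use the names you created when configuring the jobs). Remember that sp_start_job returns as soon as the job starts, so check the job history before moving to the next server:

-- On the primary (skip if the primary is lost):
EXEC msdb.dbo.sp_start_job @job_name = 'Change Primary Role'

-- On the secondary:
EXEC msdb.dbo.sp_start_job @job_name = 'Change Secondary Role'
EXEC msdb.dbo.sp_start_job @job_name = 'Resolve Logins'

-- On the monitor:
EXEC msdb.dbo.sp_start_job @job_name = 'Change Monitor Role'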

Switching Back to the Original Primary

If you want to switch back to the original primary server, you have two options. First, though, confirm that you really need to go back. If you are up and running on the secondary with no problems, is it worth causing another interruption in service, especially if you have SLAs that will be affected? The options are as follows:

  • You can reconfigure log shipping from scratch, this time from the new primary to the old primary. Specifically, take a new full backup of the current primary, apply it to the old primary, configure log shipping from the new primary to what is now the new secondary, and finally pick a point in time to do a manual role change. Because you will probably want the old primary to assume the secondary role again, leave its database in NORECOVERY or STANDBY mode.

  • If you selected the Allow Database To Assume Primary Role option during the configuration of log shipping, the appropriate Database Maintenance Plans and SQL Server Agent jobs should have been created for you. You just need to add secondary databases and create the transfer login tasks and the role change procedures on the proper servers, with the syntax reflecting the new role of each server. If you left the primary database in NORECOVERY or STANDBY mode, the last transaction log from the original primary was applied to the secondary before it was brought online, and no other transactions occurred in the primary database, you should be able to reverse the process because the two databases were left in the same state.

Creating a Custom Coded Log Shipping Solution

If you have determined that coding your own log shipping solution is the best way to implement log shipping in your environment, all the considerations apply as they do to the built-in feature, but you need to create everything manually that is already included with SQL Server 2000 Enterprise Edition. This includes the following:

  • A process to restore the point-in-time full backup to the secondary server.

  • A process to back up, copy, and restore the transaction logs. Sometimes the easiest way to create transaction log backups is to use the Database Maintenance Plan Wizard because it creates unique file names for the transaction log backups without requiring you to code your own logic.

  • A process to change the roles of the servers. This will be custom for each solution.

  • A process to transfer users and logins. If your version of SQL Server has the stored procedure sp_resolve_logins in the master database, you should be able to use the same process as detailed for the built-in feature (using the DTS transfer logins task and bcping out the syslogins table) in the role change to synchronize logins. Otherwise, follow the information found in the section "Transferring Logins, Users, and Other Objects to the Standby" in Chapter 14.

  • Monitoring functionality (if you want to see status somewhere other than, say, SQL Server Agent job history).

  • Testing procedures to verify that everything is working properly.

  • Custom alerts and notifications.
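To illustrate the backup and copy bullets above, the copy step of a custom solution can be sketched in a few lines. The file name convention and directory layout here are assumptions, not the wizard's exact format:

```python
import shutil
from datetime import datetime
from pathlib import Path

def log_backup_name(database: str, when: datetime) -> str:
    # Unique, sortable file name for a transaction log backup.
    # The <db>_tlog_<timestamp>.trn convention is an assumption,
    # loosely modeled on what the Database Maintenance Plan Wizard produces.
    return f"{database}_tlog_{when.strftime('%Y%m%d%H%M%S')}.trn"

def copy_new_backups(backup_dir: Path, share_dir: Path) -> list[str]:
    # Copy any .trn files not yet present on the secondary's share.
    # Sorting by name preserves restore order because the timestamp
    # is embedded in each file name.
    copied = []
    for f in sorted(backup_dir.glob("*.trn")):
        target = share_dir / f.name
        if not target.exists():
            shutil.copy2(f, target)
            copied.append(f.name)
    return copied
```

A scheduled job on the secondary would call copy_new_backups against the primary's backup share, then restore each new file WITH NORECOVERY in name order.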

On the CD

For an example of a custom log shipping solution you can use or extend that also includes compression, see the file Custom_Log_Shipping.zip, which contains all the relevant Transact-SQL scripts and documentation. The scripts do not configure any additional functionality such as dealing with logins. The scripts do cover the backing up, copying, and applying of the transaction logs, as well as the monitoring of the process.

Log Shipping From SQL Server 7.0 to SQL Server 2000

It is possible to manually configure log shipping from SQL Server 7.0 to SQL Server 2000. The best use of this is to facilitate an upgrade from SQL Server 7.0 to SQL Server 2000 where you need to minimize the amount of downtime in the server switch. There are some constraints for doing this, though, namely the following:

  • SQL Server 7.0 must be configured with at least SQL Server 7.0 Service Pack 2.

  • The pending upgrade option must be set on SQL Server 7.0. This database-level option affects only the database it is set on. While pending upgrade is set, indexes cannot be created and statistics cannot be generated, so you should not let log shipping run in this mode for a lengthy period of time, as it affects the performance of the SQL Server 7.0 database.

  • Because recovery models are new to SQL Server 2000, truncate log on checkpoint and select into/bulk copy cannot be set as options on the SQL Server 7.0 databases.

  • When restoring the database under SQL Server 2000, you must use NORECOVERY only. STANDBY is not supported when an upgrade is required.

  • Replication cannot be configured between the server running SQL Server 7.0 and the SQL Server 2000 instance if log shipping is going to be used. If you are currently replicating using these servers, you have to script your configuration and unconfigure replication.

  • As with custom log shipping, there is no monitoring functionality for log shipping between SQL Server 7.0 and SQL Server 2000. You have to code your own.

Note

Transaction logs for SQL Server 2000 are not backward compatible. Therefore, it is not possible to apply transaction logs (or full backups, for that matter) generated on SQL Server 2000 to a SQL Server 7.0 installation.

Configuring Log Shipping from SQL Server 7.0 to SQL Server 2000

The following steps show how to configure log shipping between SQL Server 7.0 Service Pack 2 or later and SQL Server 2000.

  1. Install and configure an instance of SQL Server 2000.

  2. Perform full backups for all databases on SQL Server 7.0.

  3. Restore the database that will be log shipped on your SQL Server 2000 instance using NORECOVERY. Here is example syntax:

    RESTORE DATABASE mydb
    FROM DISK = 'C:\mydbbackup.bak'
    WITH NORECOVERY

    If the database files require a new location:

    RESTORE DATABASE mydb
    FROM DISK = 'C:\mydbbackup.bak'
    WITH NORECOVERY,
    MOVE 'data file' TO 'x:\newlocation\dbdatafile.mdf',
    MOVE 'log file' TO 'x:\newlocation\dblogfile.ldf'
  4. Make sure that truncate log on checkpoint and select into/bulk copy are not selected as options for the database being log shipped from SQL Server 7.0.

  5. Create a location for the transaction logs to be stored.

  6. Execute the following command using Transact-SQL: sp_dboption 'database name', 'pending upgrade', 'TRUE', where the database name is the name of the database that will be log shipped to SQL Server 2000.

  7. Using the Database Maintenance Plan Wizard, back up your transaction logs on a regularly scheduled basis. Use the directory created.

    Warning

    If this is an actual SQL Server 7.0 to SQL Server 2000 upgrade and not just a way of creating a test database or something similar, stop all traffic and users from accessing the database at this point to ensure that if you have a problem, the database is in the state it was prior to the switch to SQL Server 2000. You will then have no data loss should you need to go back to SQL Server 7.0.

  8. Manually apply each transaction log generated. The following is example syntax:

    RESTORE LOG mydb
    FROM mydb_log1
    WITH NORECOVERY
  9. When it is time to restore the final transaction log, the syntax is slightly different. You now bring the database online and make it available for use. Here is sample syntax:

    RESTORE LOG mydb
    FROM mydb_finallog
    WITH RECOVERY
  10. Ensure that all users, objects, and other items not brought over as part of the transaction logs exist under SQL Server 2000.

  11. If necessary, redirect any applications to the new SQL Server 2000 database. Test all applications against the new database on SQL Server 2000 to ensure that everything functions as it did under SQL Server 7.0. If this is an upgrade, do not allow end users to access the new server if it has not been verified.

  12. If you need to use the SQL Server 7.0 installation again, disable the pending upgrade option with the following Transact-SQL statement: sp_dboption 'database name', 'pending upgrade', 'FALSE'

Summary

Log shipping is an extremely flexible and compelling option as a primary or secondary method of availability with SQL Server. It is a proven solution with a good track record. From an implementation and cost perspective, log shipping provides one of the best methods of protection for your databases. It overcomes the distance limitations that are inherent in most clustered solutions, and it is based on standard SQL Server processes like SQL Server Agent jobs, DTS, and backup and restore. As with any high availability technology, it comes with its caveats, and it is not the best solution for all environments.
