FACTORS FOR MIGRATION TO CLIENT/SERVER

You should consider a number of factors when deciding whether to migrate to client/server applications from the file-server world, such as the amount of data that you have, the number of people using the database, and security.

Amount of Data

A common misconception is that the size of the database is the only factor to consider in deciding whether to migrate to client/server. The size of the database does play a role, certainly, especially when you consider network traffic. Keep in mind that knowing your database will eventually grow to upward of 100MB doesn't necessarily mean you must move to client/server. However, as the complexity of your database increases, the size factor becomes more and more important.

You should ask yourself these questions:

  • How much data do I plan for the application to handle?

  • Do I anticipate 500 records or 1 million records?

The structure of the database comes into play as well. There's a huge difference between one table with 1GB of data versus 1,000 tables that comprise that amount. In general, complex queries against multiple tables take much longer to execute than simple queries against one table. Having a 1GB table isn't necessarily cause for migrating to client/server—it might perform adequately in its current file-server environment.

When you initially develop an application, using Access as a back end might make a lot of sense. However, consider establishing a standard set of tests for your application to monitor its performance as the size of your database grows. For instance, you can search for a random record on an indexed field (of course) and track the response time. Use this as a gauge of performance degradation as your data file expands. Also consider running searches on various joins and aggregations as additional performance tests.

Listing 24.1 shows the LogTimes procedure, which processes a number of queries based on Northwind.mdb and records the elapsed runtime in a table. You can modify this procedure to work with your application by replacing the code inside the For..Next loop with code that executes queries against your database. To test LogTimes with your copy of Northwind, you need to copy tblLog and basLogTime from Chap24.mdb to your copy of Northwind.mdb, and then run LogTimes from the Immediate window. Chap24.mdb can be found in the Examples\Chap24 folder on the CD-ROM accompanying this book.

Listing 24.1. Chap24.mdb: Processing Queries and Recording a Table's Runtime
Sub LogTimes()
    Dim dbThis As Database
    Dim rstAny As Recordset
    Dim rstLog As Recordset
    Dim lngOrderID As Long
    Dim varStart As Variant
    Dim intCount As Integer

    ' Set the database reference
    Set dbThis = CurrentDb

    ' Get start time and seed randomizer
    varStart = Timer
    Randomize

    ' Set iteration count and process queries
    For intCount = 1 To 5

        ' Find a random record in the Order Details table by
        ' searching for an OrderID between 10248 and 11077
        lngOrderID = Int((11077 - 10248 + 1) * Rnd) + 10248
        Set rstAny = dbThis.OpenRecordset("Order Details", dbOpenDynaset)
        rstAny.FindFirst "[OrderID] = " & lngOrderID
        rstAny.Close

        ' Run an aggregate query and move to the last record
        Set rstAny = dbThis.OpenRecordset("Product Sales for 1997", _
          dbOpenSnapshot)
        rstAny.MoveLast
        rstAny.Close
    Next

    ' Write the elapsed time to the log table
    Set rstLog = dbThis.OpenRecordset("tblLog", dbOpenDynaset)
    rstLog.AddNew
    rstLog![TestDate] = Now
    rstLog![TestDuration] = Timer - varStart
    rstLog.Update
    rstLog.Close
End Sub

LogTimes first searches for a random record in a rather large table, Order Details. This provides you with an idea of how raw size affects search times on an indexed field. Next, LogTimes executes the Product Sales for 1997 query and uses the MoveLast method to move to the end of the recordset. Product Sales for 1997 is an aggregate query that joins the Orders, Order Details, Products, and Categories tables. Running this query shows how database size affects table joins and aggregations.

Note

LogTimes uses a For..Next loop to process each query five times. Repeating the queries averages out any bias introduced by caching.


If you choose to adapt LogTimes for use in your applications, you can see how the size of the database affects performance over time. Your times will be relative, of course, and subject to the hardware the database resides on. The code in LogTimes gives you data for making decisions regarding when a move to a client/server database might make sense.

Use and Purpose of Database

Is your application an OLTP (online transaction processing) system or a DSS (decision support system)? The answer can affect your decision on whether to migrate to client/server. For OLTP applications, speed is essential. If phone operators are taking orders, they can't afford to wait five minutes for the database to bring up an inventory list.

Database Design

A well-designed database with 1GB of data might run fine with Access as a back end, whereas a poorly designed database with 2MB of data might perform terribly. Refer to sources on database design if you're unsure of what a well-designed database really entails.

Concurrent Use and Number of Users

One of Access's great features is its capability to share an .mdb file on a network drive. Access handles most of the record-locking issues and allows multiple employees to access the company's data. However, having a shared .mdb file for more than 10 concurrent users isn't recommended. With more than 10 simultaneous users, a server database can handle the traffic demands much better.

Backup and Recovery

SQL Server and many other server databases can perform live backups. What's the benefit of this? Administrators can back up data without forcing users to log off first. In a file-server application, such as one using Access as the back-end storage format, the administrator would have to ensure that everyone first logged out of any application using the server data. You can't back up, repair, or compact an Access database unless everyone is logged out of it. If your application needs to run 24 hours a day, SQL Server is an obvious choice for a back-end database.

SQL Server also logs transactions automatically and can be restored if there's a system failure. Access has no built-in mechanism for doing this, although it's possible to write code to mimic this behavior. Access simply isn't as sophisticated as a true database server such as SQL Server or Oracle.

Security

Is security of your data an important factor for your organization or customer? Access and SQL Server have built-in security that you can implement, but SQL Server provides greater flexibility. With SQL Server, you can set permissions on tables, views, and even individual columns. Like Access security, SQL Server supports user- and group-level security. SQL Server also provides integrated security with Windows NT, which uses network logon IDs so users need to log in only once.
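As a concrete illustration of column-level permissions, the following sketch sends a Transact-SQL GRANT statement to the server through a temporary DAO pass-through query. The table, column, and group names (Customers, CompanyName, Phone, SalesGroup) and the ODBC connect string are assumptions; substitute your own.

Sub GrantColumnPermissions()
    Dim dbThis As Database
    Dim qdfPass As QueryDef

    Set dbThis = CurrentDb

    ' A zero-length name creates a temporary pass-through query
    Set qdfPass = dbThis.CreateQueryDef("")
    qdfPass.Connect = "ODBC;DSN=MyServer;DATABASE=Sales"   ' assumed DSN
    qdfPass.ReturnsRecords = False

    ' Grant SalesGroup read access to just two columns
    qdfPass.SQL = "GRANT SELECT (CompanyName, Phone) " & _
                  "ON Customers TO SalesGroup"
    qdfPass.Execute
End Sub

Nothing comparable is possible with Access security, which stops at the level of whole objects.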

Why is SQL Server better?

  • With SQL Server, a database is bound tightly to its server and no direct access to the data is available. An Access .mdb file, on the other hand, always has the potential of being deleted, moved, or copied off the network. You can, however, choose to use Access to test security for your client/server application and then re-create the security on SQL Server.

  • SQL Server security, although flexible, is straightforward and data-centric. Access security, in contrast, must handle the complex security needs of the myriad Access objects (tables, queries, forms, reports, and code), and their relationships to each other and to the application users.

Data Sharing Among Applications

Another factor that warrants consideration is whether your data needs to be shared across various applications. By placing your data in an Access .mdb file, you're limiting its usefulness to applications that can use Jet or ODBC. SQL Server, on the other hand, is accessible to a number of DOS and Windows applications through the DB-Library set of function calls. Obviously, if you have existing DB-Library applications accessing SQL Server and want to share data, your Access applications need to use SQL Server as the back end.

Network Traffic

Switching to a well-implemented client/server solution decreases the amount of network traffic because the processing is carried out on the server and only results are sent back across the network lines. If Access is the back end, it will send the much larger set of data across the network to be processed at the client machine.

Consider the query SELECT * FROM customers WHERE city = 'Richmond'. Now assume that 10,000 records are in the table and only 5 records are in the result set. This query executes much more quickly against a SQL Server database than an Access database located on the same network server. Why? Access databases don't process the query; they merely store the data in the .mdb format. In the case of an Access database, the entire recordset—yes, all 10,000 records—would be brought back across the network to your application in order to execute the query if the city column wasn't indexed. With SQL Server, on the other hand, the query would be processed on the back-end server, and then the result set of 5 records would cross the network to bring the answer to your application.
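To see the traffic difference firsthand, you can run the same query as a pass-through so that Jet hands the SQL statement directly to the server. The following sketch assumes an ODBC data source named MyServer pointing at the SQL Server database; adjust the connect string for your environment.

Sub GetRichmondCustomers()
    Dim dbServer As Database
    Dim rstPass As Recordset

    ' Open the ODBC source directly (DSN and database are assumptions)
    Set dbServer = OpenDatabase("", dbDriverNoPrompt, False, _
        "ODBC;DSN=MyServer;DATABASE=Sales")

    ' dbSQLPassThrough sends the statement to the server untouched;
    ' only the matching rows travel back across the network
    Set rstPass = dbServer.OpenRecordset( _
        "SELECT * FROM customers WHERE city = 'Richmond'", _
        dbOpenSnapshot, dbSQLPassThrough)

    rstPass.Close
    dbServer.Close
End Sub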

If your data request is based on indexed fields, the amount of data transmitted from back end to front end will be considerably less in the 10,000-record example because only the index for this table travels across the network. Access then uses the index information to create a request that will retrieve only the result set across the network. However, if your query is based on non-indexed fields, you're putting an incredible burden on your network.
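If profiling reveals that a frequently filtered field lacks an index, you can add one from code with Jet's DDL. The table and field names here (Customers, City) are only an example.

Sub IndexCityField()
    Dim dbThis As Database

    Set dbThis = CurrentDb

    ' Index the field the WHERE clause filters on so that only index
    ' pages, rather than the whole table, cross the network
    dbThis.Execute "CREATE INDEX idxCity ON Customers (City)"
End Sub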

Record Aggregation

If your application uses record aggregation (grouping, summing, and other operations that use the SQL GROUP BY clause), the performance differential between the file-server and client/server models becomes quite large.

Consider a query that groups data by one field and sums the values in a second field. A file-server Access application would need to retrieve every value needed for the sum (every record matching the SQL WHERE clause). Compare this scenario to a server-based model where the server processes data through its query engine or by using code placed in a stored procedure. The server aggregates or alters the data locally before sending only the results to the front-end application; the detail records used in the computation never leave the server. This approach yields far better performance than the file-server model.
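The server-side approach might look like the following sketch, which executes a hypothetical stored procedure through a pass-through recordset. The procedure name, connect string, and column layout are assumptions for illustration only.

Sub RunServerAggregation()
    Dim dbServer As Database
    Dim rstSums As Recordset

    ' Open the ODBC source directly (DSN and database are assumptions)
    Set dbServer = OpenDatabase("", dbDriverNoPrompt, False, _
        "ODBC;DSN=MyServer;DATABASE=Sales")

    ' The hypothetical procedure usp_SalesByCategory might contain:
    '   SELECT CategoryID, SUM(Quantity * UnitPrice)
    '   FROM [Order Details] GROUP BY CategoryID
    Set rstSums = dbServer.OpenRecordset("EXECUTE usp_SalesByCategory", _
        dbOpenSnapshot, dbSQLPassThrough)

    ' Only the grouped totals arrive; the detail rows stay on the server
    Do Until rstSums.EOF
        Debug.Print rstSums(0), rstSums(1)
        rstSums.MoveNext
    Loop
    rstSums.Close
    dbServer.Close
End Sub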

Bet Your Career on Choosing the Right System

If you're in charge of selecting a server database for a mission-critical application, wouldn't you prefer a robust, fault-tolerant system? If you had selected a shared Access .mdb file and one user's computer crashed, the data could be corrupted, locking all other users out of the system.

Using Access as a front end and back end doesn't protect you from potential data loss as SQL Server does. Remember, SQL Server logs transactions to enable recoveries and allows live backups; Access doesn't.

The old adage, “An ounce of prevention is worth a pound of cure,” applies to this scenario. Yes, it's more complex and likely more time-consuming to set up a SQL Server back-end database than to use a standard Access .mdb file, but you and your customer will be in a much better position in case of failures. If reliability, security, and safety are of the utmost importance, choose SQL Server.
