Code and database performance testing through code profiling

Performance testing should be done to validate the effects of newly developed code, to test the production environment as part of environment validation, or as a way to investigate a reported performance issue.

The premise here is that we start the data capture, perform the task or tasks we wish to test, and then review the results. This is done using the Trace Parser, with the following steps:

  1. Launch Microsoft Dynamics AX 2012 Trace Parser (if you're using Windows 8, search for Trace Parser).
  2. In the Register Database dialog, select the SQL Server instance that will host the trace results database.
  3. Enter a database name for the trace file, for example, MyTraceDatabase.
  4. Click on Yes when you are asked "The specified database does not exist. Would you like to create it?"
  5. Close the AX client if it is running and relaunch the client using Run as administrator (right-click on the client icon).
  6. Launch the development environment (the quick way is to press Ctrl + D).
  7. Click on Tools and Tracing Cockpit.
  8. The defaults are fine, but to make the results more useful, you should also check Xpp parameters and Bind parameters, as shown in the following screenshot:

    Note

    The reason Xpp parameters and Bind parameters are disabled by default is that they may reveal information (that is, actual data) as it is passed between the AX and SQL Servers. This could include sensitive information, for instance, salary details. You should therefore obtain approval for this before proceeding.

  9. Click on Start trace to start the trace.
  10. Enter a file in which to store the trace; this location must be accessible by the server, so use a UNC path such as \\server\share\trace20150703.etl.

    Note

    At this point, all database activity between the AX server and SQL Server is logged, and you may notice that the server load increases.

  11. Once the data has been captured, click on Stop trace.

    Note

    It is important to make sure that the trace is running only for the time needed, as this will add significant load to the system.

  12. To view the trace, click on Open trace, which imports the trace file into the registered database.
  13. The initial view of the trace is a brief summary. To view more details, select the user in question from the Session drop-down list.
  14. Use Call Tree to examine the call stack. The darker the red highlight, the larger the impact of the event on performance. You can use the code window to see the code that was executed. This is useful for developers, where custom code may be the cause of poor performance. It is shown in the following screenshot:
  15. Use the X++/RPC tab to view aggregate X++ and server RPC call information. You can also see the call stack and the code that was executed.
  16. Finally, the SQL section shows the aggregate or detailed SQL transactions. This will give you the Transact-SQL statement that was executed.

If the trace fails to start, check whether:

  • The AX Server service account has write access to the file
  • You have rights to create the file
  • There is sufficient disk space
  • The AX client was started with Run as Administrator

    Tip

    You should perform the trace after the system has been in use for a period of time, so as to simulate normal use and avoid results skewed by startup effects. The system behaves more slowly than normal just after startup, as it takes time to initialize and populate its caches. Follow this advice unless the issue under investigation is specifically that a task is slow the first time it is performed after system startup.

Interpreting the results

The trace is performed by the server, which monitors the X++ code and SQL activity and produces a file that you can import into Trace Parser.

The Event Trace Log (ETL) file can then be imported into Trace Parser by clicking on Open trace. Because we registered the database before starting the trace, the import can proceed straight away.

In undertaking a full performance test, we have analyzed the server counters for AX and SQL Server and also performed an AX trace with the tracing cockpit.

Each one in isolation will not always provide us with the reason for a performance problem, and even when all of these tools are run together, the reason will not always be obvious. Users can experience performance problems even when all servers seem to be idling.

The information we now have in hand is as follows:

  • AX Windows Server counters for the CPU, disk, memory, and network
  • SQL Windows Server counters for the CPU, memory, and network
  • Details of the tasks performed by the user during the trace window
  • Details of the SQL statements being executed and the ability to generate an execution plan

With this, we can correlate the information in order to determine the cause. The cause can be due to a hardware resource, indexing, statistics, code, or a combination of these.

Low resource utilization while you are experiencing a performance problem is also a clue; the resource in question might be waiting for something else.

The most likely causes of performance issues within SQL Server are as follows:

  • Disk subsystem: This is indicated by high disk queue length, low CPU usage, and high page life expectancy.
  • Memory: This is indicated by a high disk queue length on the system disk, a falling buffer cache hit ratio, and a falling page life expectancy.
  • CPU: This is indicated by high CPU usage, low disk queue length (less than one per spindle), and high page life expectancy.
  • Network: This is often misreported. If SQL Server is slow to respond and the SQL counters are low, network latency could be the cause. It could also mean that there is a deadlock chain and the process causing the deadlock is "sleeping"; for example, AX has locked the record while communicating with an unresponsive service.
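The memory-related counters listed above can also be read directly from SQL Server's own dynamic management views, rather than from Windows Performance Monitor. A minimal sketch, assuming you have the VIEW SERVER STATE permission on the instance:

```sql
-- Read the Buffer Manager counters referenced above.
-- Page life expectancy is reported in seconds; a low or falling
-- value suggests memory pressure.
-- Note: 'Buffer cache hit ratio' must be divided by its
-- 'Buffer cache hit ratio base' row to obtain a percentage.
SELECT [object_name],
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Buffer cache hit ratio',
                       N'Buffer cache hit ratio base')
  AND [object_name] LIKE N'%Buffer Manager%';
```

Sampling this query at intervals while the trace runs gives the same trend data as the Performance Monitor counters, which is convenient when you cannot access the server's desktop.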

For AX Server, if a counter indicates high usage, either it is a case of badly written code, or the server is underpowered. Adding a second AOS may not solve the issue, as the code is not load balanced. If there is not a dedicated batch server, a batch task could be the cause. We should follow best practice and install a dedicated AOS for batch tasks, kept outside the cluster to which users connect.

Enabling SQL execution plans

We have discussed execution plans previously, and one of the main reasons we enabled Bind parameters was so that we could take the SQL statement into SQL Server Management Studio in order to determine its execution plan.

Do not try to use the DELETE or UPDATE statements. Even though we don't need to execute a statement in order to obtain its execution plan, we would have to use the live database to obtain an accurate plan. Moreover, there is little we can do to improve most UPDATE statements, and there is the risk that we could execute the statement by mistake.

The SELECT statements are safer, and when we are checking the resultant execution plan, we are looking primarily for:

  • Table scans (unless the table has few records)
  • Clustered index scans

As a lower priority, we look at index usage and the corresponding bookmark lookups. Clustered index seeks are ideal, and index seeks slightly less so. Table scans and clustered index scans are expensive, and indicate either incorrect indexing on the table or out-of-date data statistics.
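You can obtain an estimated plan without executing the statement at all by wrapping it in SET SHOWPLAN_XML. A sketch follows; the table and columns stand in for a statement captured by the trace and are hypothetical:

```sql
-- SHOWPLAN_XML makes SQL Server return the estimated execution
-- plan instead of running the statement, so nothing is read or
-- modified. Each SET statement must be in its own batch.
SET SHOWPLAN_XML ON;
GO
-- Hypothetical example standing in for the traced statement.
SELECT ACCOUNTNUM, CUSTGROUP
FROM dbo.CUSTTABLE
WHERE DATAAREAID = N'dat';
GO
SET SHOWPLAN_XML OFF;
GO
```

In Management Studio, pressing Ctrl + L (Display Estimated Execution Plan) achieves the same result without the SET statements. If the plan shows a scan where you expect a seek, it is also worth refreshing statistics (for example, with UPDATE STATISTICS on the table) before concluding that the index design is at fault.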

Investigating the performance of views

Views are very powerful, and with the advent of computed columns, they offer large performance improvements, as calculations are performed by SQL Server. However, they can be a victim of their own success as we make them more and more complicated, which may result in them performing poorly. When a view performs slowly, the following points should be checked first:

  • Are ranges used by the view covered by an index?
  • Are relations correctly specified, and are these covered by an index?
  • Could the data calculated in a computed column be placed in the query instead?
  • For computed columns, are we joining correctly? For example, are DataAreaId and Partition in the join clause?

One of the best methods to test views is by using SQL Server Management Studio. Locate the view in the business data database, and from the view's context menu (right-click) navigate to Script view as | ALTER To | New Query Editor Window.

We don't actually want to alter the view, so we remove the lines prior to the ALTER VIEW statement, and the ALTER VIEW line itself, so that the SELECT statement comes first.

We can now use this as a normal SELECT statement, using SQL Server tools such as execution plans to determine whether the view could be improved. We can even target a different database that has more recent or appropriate data in order to test it before deploying it to the live environment.
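As a hypothetical illustration of the steps above, this is the sort of statement that remains after trimming a scripted view (the view's tables and columns here are invented for the example, but the join pattern matches the checklist earlier):

```sql
-- What remains after removing everything up to and including
-- the ALTER VIEW ... AS lines: just the view's SELECT, which
-- can now be run and tuned like any ad hoc query.
SELECT T1.SALESID,
       T1.CUSTACCOUNT,
       T2.ITEMID,
       T2.LINEAMOUNT
FROM dbo.SALESTABLE T1
JOIN dbo.SALESLINE T2
  ON  T2.SALESID    = T1.SALESID
  AND T2.DATAAREAID = T1.DATAAREAID   -- join on DataAreaId...
  AND T2.PARTITION  = T1.PARTITION;   -- ...and Partition, per the checklist
```

Running this with an actual execution plan enabled quickly shows whether the joins are covered by indexes, and whether DataAreaId and Partition are being applied as seek predicates or as residual filters.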

Tip

The SELECT statement will be a long, barely readable line of text. Use a SQL beautifier to reformat the text into formatted Transact-SQL.
