NAV development projects – general guidance

Now that we understand the basic workings of the NAV C/SIDE development environment and C/AL, we'll review the process of software design for NAV enhancements and modifications.

When we start a new project, the goals and constraints for the project must be defined. The degree to which we meet them will determine our success. Following are some examples:

  • What are the functional requirements and what flexibility exists within these?
  • What are the user interface standards?
  • What are the coding standards?
  • What are the calendar and financial budgets?
  • What existing capabilities within NAV will be used?

Knowledge is the key

Designing for NAV requires more forethought and knowledge of the operating details of the application than was needed with traditional ERP systems. As we have seen, NAV has unique data structure tools (SIFT and FlowFields), quite a number of NAV-specific functions that make it easier to program business applications, and a software data structure (journal, ledger, and so on) which is inherently an accounting data structure. The learning curve to become expert in the way NAV works is steep. NAV has a unique structure, and the primary documentation from Microsoft is limited to the embedded Help (which improves with every release of the product). The NAV books published by Packt Publishing are of great help, as are the NAV Development Team blogs, the blogs from various NAV experts around the world, and the NAV forums mentioned earlier.
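
As a small illustration of what makes NAV different, consider FlowFields. The following C/AL fragment is a minimal sketch (Customer is a Record variable for the standard Customer table; the customer number is hypothetical) of how a FlowField total is retrieved: the value is not stored in the record, but is computed on demand from SIFT totals when CALCFIELDS is called, constrained by whatever FlowFilters are set:

    Customer.GET('10000');                            // hypothetical customer no.
    Customer.SETRANGE("Date Filter", 0D, WORKDATE);   // FlowFilter constrains the total
    Customer.CALCFIELDS("Balance (LCY)");             // computed on demand from SIFT data
    MESSAGE('Balance: %1', Customer."Balance (LCY)");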

Data-focused design

Any new application design must begin with certain basic analysis and design tasks. That is just as applicable whether our design is for new functionality to be integrated into NAV or for an enhancement/expansion of existing NAV capabilities.

First, determine what underlying data is required. What will it take to construct the information the users need to see? What level of detail and in what structural format must the data be stored so that it may be quickly and completely retrieved? Once we have defined the inputs that are required, we must identify the sources of this material. Some may be input manually, some may be forwarded from other systems, some may be derived from historical accumulations of data, and some may be derived from combinations of all these, and more. In any case, every component of the information needed must have a clearly defined point of origin, schedule of arrival, and format.

Defining the needed data views

Define how the data should be presented. How does it need to be "sliced and diced"? What levels of detail and summary? What sequences and segmentations? What visual formats? What media will be used? Will the users be local or remote? Ultimately, many other issues also need to be considered in the full design, including user interface specifications, data and access security, accounting standards and controls, and so on. Because a wide variety of tools is available to extract and manipulate NAV data, we can start relatively simply and expand as appropriate later. The most important thing is to ensure that all the critical data elements are identified and then captured.

Designing the data tables

Data table definition includes the data fields, the keys to control the sequence of data access and to ensure rapid processing, frequently used totals (which are likely to be set up as SumIndexFields), references to lookup tables for allowed values, and relationships to other primary data tables. We need to do a good job of designing not only the primary tables, but also all the supporting tables containing the "lookup" and "setup" data. When integrating a customization, we must consider the effects of the new components on the existing processing, as well as how the existing processing ties into our new work. These connections are often the finishing touch that makes the new functionality operate as a truly seamless, integrated part of the original system.
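
As a sketch of how such key and SumIndexField decisions pay off at runtime, the following C/AL fragment assumes the standard G/L Entry key on "G/L Account No.","Posting Date" with Amount declared as a SumIndexField in the table designer (GLEntry is a Record variable for the G/L Entry table; the account number is hypothetical). CALCSUMS can then return the filtered total from the maintained SIFT data instead of reading every record:

    GLEntry.SETCURRENTKEY("G/L Account No.", "Posting Date");
    GLEntry.SETRANGE("G/L Account No.", '6110');    // hypothetical account no.
    GLEntry.SETRANGE("Posting Date", 0D, WORKDATE);
    GLEntry.CALCSUMS(Amount);                       // served from the SIFT totals
    MESSAGE('Total amount: %1', GLEntry.Amount);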

Designing the user data access interface

Design the pages and reports to be used to display or interrogate the data. Define what keys are to be used or are available to the users (though the SQL Server database supports sorting data without predefined NAV C/AL keys). Define which fields will be visible, which fields will be totaled, how the totaling will be accomplished (for example, FlowFields or on-the-fly processing), and what dynamic display options will be available. Define what type of filtering will be needed. Some filtering needs may be beyond the ability of the built-in filtering function and may require auxiliary code functions, as the sketch below illustrates. Determine whether external data analysis tools will be needed and will therefore need to be interfaced. Design considerations at this stage often result in returning to the previous data structure definition stage to add data fields, keys, SIFT fields, or references to other tables.
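
One example of filtering beyond the built-in function is a comparison between two fields of the same record, which the standard filter syntax cannot express. The following is a minimal sketch of such an auxiliary code function (Customer is a Record variable for the standard Customer table; the business rule itself is hypothetical):

    // keep only customers whose balance exceeds their credit limit
    IF Customer.FINDSET THEN
      REPEAT
        Customer.CALCFIELDS("Balance (LCY)");
        IF Customer."Balance (LCY)" > Customer."Credit Limit (LCY)" THEN
          Customer.MARK(TRUE);
      UNTIL Customer.NEXT = 0;
    Customer.MARKEDONLY(TRUE);   // the record set now contains only the marked records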

Designing the data validation

Define exactly how the data must be validated before it is accepted into a table. There are likely to be multiple levels of validation. There will be a minimum level, which defines the minimum set of information required before a new record is accepted.

Subsequent levels of validation may exist for particular subsets of data, which are in turn tied to specific optional uses of the table. For example, in the base NAV system, if the manufacturing functionality is not being used, the manufacturing-related fields in the Item Master table do not need to be filled in. But if they are filled in, they must satisfy certain validation criteria.
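
A sketch of how such conditional validation might look in C/AL, in the OnValidate trigger of a manufacturing-related Item table field. The rule shown (an in-house produced item must name a routing) is hypothetical; TESTFIELD raises an error if the field is empty:

    // in the OnValidate trigger of a manufacturing-related Item field
    IF "Replenishment System" = "Replenishment System"::"Prod. Order" THEN
      TESTFIELD("Routing No.");   // required only when the item is produced in-house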

As mentioned earlier, the sum total of all the validations that are applied to data when it is entered into a table may not be sufficient to completely validate the data. Depending on the use of the data, there may be additional validations performed during processing, reporting, or inquiries.

Data design review and revision

Perform the three preceding steps (table design, user access, and data validation) first for the permanent data (Masters and Ledgers) and then for the transactions (Journals). Once all the supporting tables and references have been defined for the permanent data tables, there are not likely to be many new definitions required for the Journal tables. If any significant new supporting tables or new table relationships are identified during the design of the Journal tables, we should go back and re-examine the earlier definitions. Why? Because there is a high likelihood that this new requirement should have been defined for the permanent data and was overlooked.

Designing the posting processes

First define the final data validations, then define and design all the ledger and auxiliary tables (for example, Registers, Posted Document tables, and so on). At this point, we are determining what the permanent content of the Posted data will be. If we identify any new supporting table or table reference requirements at this point, we should go back to the first step to make sure that this requirement didn't need to be in the design definition.

Whatever variations in data are permitted to be Posted must be acceptable in the final, permanent instance of the data. We must ensure that any information or relationships required in the final Posted data are present before Posting is allowed to proceed.

Part of the Posting design is to determine whether data records will be accepted or rejected individually or in complete batches. If the latter, we must define what constitutes a Batch; if the former, it is likely that the makeup of a Posting Batch will be flexible.
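
A minimal sketch of batch-level acceptance in C/AL follows: every line is checked before any line is posted, so the whole batch succeeds or fails as a unit. GenJnlLine is a Record variable for the Gen. Journal Line table; the template and batch names are hypothetical, and CheckLine/PostLine are stand-ins for the real posting routines (in the base application, the Gen. Jnl.-Check Line and Gen. Jnl.-Post Line codeunits):

    GenJnlLine.SETRANGE("Journal Template Name", 'GENERAL');   // hypothetical names
    GenJnlLine.SETRANGE("Journal Batch Name", 'DEFAULT');
    // first pass: validate every line; an ERROR here rejects the whole batch
    IF GenJnlLine.FINDSET THEN
      REPEAT
        CheckLine(GenJnlLine);
      UNTIL GenJnlLine.NEXT = 0;
    // second pass: all lines passed, so write the ledger entries, registers, and so on
    IF GenJnlLine.FINDSET THEN
      REPEAT
        PostLine(GenJnlLine);
      UNTIL GenJnlLine.NEXT = 0;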

Designing the supporting processes

Design the processes necessary to validate, process, extract, and format data for the desired output. In the earlier steps, these processes can be defined as "black boxes" with specified inputs and required outputs, but without undue regard for the details of the internal processes. This allows us to work on the several preceding definition and design steps without being sidetracked into the inner workings of the output-related processes.

These processes are the cogs and gears of the functional application. They are necessary, but often not pretty. By leaving the design of these processes as late as possible, we increase the likelihood that we will be able to create common routines and standardize how similar tasks are handled across a variety of parent processes. At this point, we may identify opportunities or requirements for improvement in material defined in one of the previous design steps. In that case, we should return to that step to address the newly identified issue. In turn, we should also review the effect of such changes on each subsequent step's area of focus.
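
A sketch of what such a common routine might look like once its black box is filled in. The function name is hypothetical, but the fields are from the standard General Ledger Setup table; the contract (a date in, a Boolean out) was all that the earlier design steps needed to know:

    PROCEDURE IsPostingDateAllowed(PostingDate : Date) : Boolean;
    VAR
      GLSetup : Record "General Ledger Setup";
    BEGIN
      GLSetup.GET;
      // an empty boundary date means "no restriction"
      EXIT(((GLSetup."Allow Posting From" = 0D) OR (PostingDate >= GLSetup."Allow Posting From")) AND
           ((GLSetup."Allow Posting To" = 0D) OR (PostingDate <= GLSetup."Allow Posting To")));
    END;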

Double-check everything

Do one last review of all the defined reference, setup, and other control tables to make sure that the primary tables and all defined processes have all the information available when needed. This is a final design quality control step.

It is important to realize that returning to a previous step to address a previously unidentified issue is not a failure of the process; it is a success. An appropriate quote, used in one form or another by construction people the world over, is "Measure twice, cut once." It is much cheaper and more efficient (and less painful) to find and fix design issues during the design phase rather than after the system is in testing or, worse yet, in production.
