Chapter 9

MODELS FOR SCIENCE AND BUSINESS

Scientific research is a broad term that encompasses data science and other related analytic research streams. It can be defined as a systematic, controlled, empirical and critical investigation of hypothetical propositions about the presumed relations among observed phenomena. Scientific research can be either pure or applied. Pure research explains the world around us and tries to help us understand how the universe operates; it is about finding out what is already there, with no greater purpose than the explanation itself. Applied research looks for answers to specific questions that help humanity, as in medical research or environmental studies. Such research generally takes a specific question and tries to find a definitive and comprehensive answer.

A scientific model is one that aims to make a particular part or feature of the world easier to understand, define or simulate by referencing it to existing or commonly accepted knowledge. For this process, we need to select the attributes, identify their relevance to a real-life situation and model them with strong mathematical or theoretical support. The limitations of scientific modelling are emphasized by the fact that models are generally not complete representations. To fully understand an object or system, multiple models, each representing a part of the object or system, are needed. Collectively, the models provide a more complete representation, or at least a more complete understanding, of the real object or system.

Business models are abstract representations of an organization and can be conceptual, textual and/or graphical. A business model describes the rationale of how an organization creates, delivers and captures value, in economic, social, cultural or other contexts. The process of business model construction is part of business strategy.
This chapter discusses the basic methods/strategies involved in scientific research modelling and business modelling.

9.1 ALGORITHMIC RESEARCH

An algorithm is a step-by-step procedure to solve a problem. A formal definition is a set of rules that precisely defines a sequence of operations. Starting from an initial state, the algorithm guides the user to a solution through a finite sequence of well-defined successive states. Representations of algorithms fall into three accepted levels: high-level description, implementation description and formal description.

A problem-solving methodology built on this basic concept of an algorithm is termed algorithmic research. It is a very straightforward method in which well-defined sequences of steps are provided to solve organizational problems. This type of research is applicable to government, business and any corporate industry. A variety of problems, whether polynomial or combinatorial, can be solved by algorithmic research. In the polynomial category, researchers develop a proper algorithm for the optimal solution, whereas a heuristic approach is chosen to solve the problem in other cases.

Algorithmic research is carried out when problems are precisely stated and are often generic rather than application specific. Various computational problems such as searching, sorting, shortest path, branch and bound techniques are solved using algorithmic methodology. Figure 9.1 shows the different types of algorithmic research problems.
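
As a concrete illustration of the searching category mentioned above, a minimal binary search might look like the following sketch (not tied to any particular study; names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    Each iteration halves the search interval, so the running time is
    O(log n) -- a classic result of algorithmic research on searching.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

Note that the precondition (the input list must already be sorted) is exactly the kind of precise problem statement that algorithmic research depends on.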

img

Fig. 9.1 Types of algorithmic research problem

Consider an example of algorithmic research: temperature, weight and time are usually well known and well defined, with only the exact scale used varying by definition. If a researcher measures abstract concepts, such as intelligence, emotions and subjective responses, then a system of measuring them numerically needs to be established, allowing statistical analysis and replication. There should therefore be an accurate method by which these abstract terms are mapped to physical concepts, so a well-defined and structured process must be explained and conceptualized. In the study of such problems, algorithmic research is the approach most commonly used.

There are various advantages to algorithmic research. One big reason algorithmic research has become so popular is the advantages it holds over manual decisions. These advantages fall in areas related to speed, accuracy and reduced costs. Since algorithms are written beforehand and are executed automatically, the main advantage is speed. In algorithmic trading, for example, the speed at which trades are made is measured in fractions of a second, faster than humans can perceive.

Trading with algorithms has the advantage of scanning and executing on multiple indicators at a speed no human could match. Since trades can be analyzed and executed faster, more opportunities are available at better prices. Another advantage of algorithmic trading is accuracy. If a computer automatically executes a trade, the pitfalls of accidentally entering the wrong trade by hand are avoided. With manual entries, it is much more likely that the wrong currency pair, or the wrong amount, will be bought, compared to a computer algorithm that has been double checked to make sure the correct order is entered.

Another advantage of algorithmic research is the ability to backtrack. As each and every step is explained and well defined, the points where an error occurred are easily found. The algorithms are reusable for designing new solutions to other sets of problems, and stronger rules can be set after a critical analysis of those problems.

Another advantage of automated trading is reduced transaction costs. With auto trading, traders do not have to spend as much time monitoring the markets, as trades can be executed without their continuous supervision. This dramatic time reduction lowers transaction costs because of the saved opportunity cost of constantly monitoring the markets.

9.1.1 Analysis of Algorithm

Analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity). But the real question is: how can these attributes be calculated? In the theoretical analysis of algorithms, complexities are determined in the asymptotic sense, i.e., by estimating the complexity function for large inputs. Big O notation, Big-omega notation and Big-theta notation are used for this purpose.
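
The idea of stating running time as a function of input length can be made concrete by instrumenting an algorithm to count its basic operations. A small sketch (function and variable names are illustrative):

```python
def linear_search_steps(items, target):
    """Linear search that also reports how many comparisons it made."""
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

# Worst case: the target is absent, so every element is examined.
# The step count grows linearly with n, i.e., the time complexity is O(n).
for n in (10, 100, 1000):
    _, steps = linear_search_steps(list(range(n)), -1)
    assert steps == n
```

Plotting such step counts against n is a simple empirical counterpart to the asymptotic analysis described above.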

Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity.

Let f and g be two functions defined on some subset of numbers,

f(n) = O(g(n)) as n → ∞; if and only if there is a positive constant M such that for all sufficiently large values of n, the absolute value of f(n) is at most M multiplied by the absolute value of g(n). That is, f(n) = O(g(n)), if and only if there exists a positive real number M and a real number n0 such that

 

| f(n) | ≤ M | g(n) |, for all n ≥ n0

 

Figure 9.2 shows the Big O function graph.

What is the importance of analysis? If we want to compare two different algorithms that perform the same task, there need to be some differentiators. The time and space complexities are the major differentiators a programmer uses to judge the quality of an algorithm. For example, if we are looking for an algorithm for sorting n numbers, there are many algorithms available – selection sort, bubble sort, quick sort, heap sort, etc. But which algorithm should we use? The programmer checks the best, average and worst-case time complexities of these algorithms, compares them, considers on which types of input a particular sorting algorithm works well and, depending on that, chooses the algorithm.
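
This kind of comparison can be carried out empirically by counting comparisons. The sketch below contrasts bubble sort (O(n²)) with merge sort (O(n log n)) on a reversed list, a worst case for bubble sort (all names are illustrative):

```python
def bubble_sort_comparisons(data):
    """Bubble sort; returns (sorted_list, number_of_comparisons)."""
    a = list(data)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort_comparisons(data):
    """Merge sort; returns (sorted_list, number_of_comparisons)."""
    if len(data) <= 1:
        return list(data), 0
    mid = len(data) // 2
    left, cl = merge_sort_comparisons(data[:mid])
    right, cr = merge_sort_comparisons(data[mid:])
    merged, c = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        c += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, c

# On reversed input, the quadratic algorithm does far more comparisons
# than the O(n log n) one, even though both produce the same sorted output.
data = list(range(200, 0, -1))
_, bubble = bubble_sort_comparisons(data)
_, merge = merge_sort_comparisons(data)
assert bubble > merge
```

For n = 200 reversed elements, bubble sort performs n(n−1)/2 = 19 900 comparisons, while merge sort needs only on the order of n log n.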

img

Fig. 9.2 Big O function

9.1.2 Design of Algorithms

Algorithm design is a specific method for creating a mathematical process for solving problems. It is identified and incorporated into many solution theories of operations research, such as dynamic programming and divide-and-conquer. In this section, let us look at how to design an algorithm.

Designing the right algorithm for a given application is a difficult job. It requires a major creative act: taking a problem and pulling a solution out of it. The key to algorithm design is to proceed by asking yourself a sequence of questions to guide your thought process. What if we do this? What if we do that? While working in a group or as a team, make sure you hold enough brainstorming sessions to come up with the best answers to all your questions. After this brainstorming, there will be enough direction to guide how to move forward. The strategy and tactics used for algorithm design need to be handled very carefully. Should we use a bottom-up or a top-down strategy? Should we incorporate dynamic programming? All these questions need answers during this brainstorming.

An algorithm developed for any application needs three major qualities. The prime need is that the algorithm should deliver accurate results. The next major feature the designer needs to focus on is the amount of time the algorithm takes to finish the process. Memory consumption is also a major focus; the best algorithm is the one that delivers accurate results in the least time while using the least memory.

The basic questions that a designer has to find answer before starting algorithm design are given as follows.

  1. Have you understood the correct goal?
    • What are the inputs?
    • What are the desired outputs?
    • Is there any intermediate result?
    • Is it generic or specific?
    • What are the time constraints?
    • Where is it applied?
  2. Is there a simpler or heuristic solution available?
    • Is there a recursive method that would make my solution simpler?
    • Can I make my logic modular?
  3. Spot the special cases.
    • What deviations are possible?
    • How well will the algorithm work on extreme inputs?
    • What is the maximum load possible?
  4. Are there standard methods available for the problem I am working on?
    • If I split the problem, are there methods already defined to solve the subproblems?
    • What are the general rules for solving generic algorithms?

If you start analyzing such a set of questions before designing an algorithm, the process will be smooth and an efficient algorithm will be obtained.

9.2 METHODS OF SCIENTIFIC RESEARCH

The scientific method is a way to ask and answer scientific questions by making observations and doing experiments. The following are the steps of the scientific method:

  • Ask a question
  • Do background research
  • Construct a hypothesis
  • Test the hypothesis by doing an experiment
  • Analyze the data and draw a conclusion
  • Communicate the results

It is important for the experiment to be a fair test. A “fair test” occurs when you change only one factor (variable) and keep all other conditions the same. The scientific method is a process for experimentation that is used to explore observations and answer questions. Scientists use the scientific method to search for cause and effect relationships in nature. In other words, they design an experiment so that changes to one item cause something else to vary in a predictable way. Just as it does for a professional scientist, the scientific method will help you focus on a project question, construct a hypothesis, and design, execute and evaluate the experiment.
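
As a toy illustration of a fair test, the following sketch varies a single factor (fertilizer) while holding everything else fixed inside the model, and repeats the trial many times. All quantities (growth rates, effect size, noise level) are invented purely for illustration:

```python
import random

def run_trial(fertilizer, rng):
    """Hypothetical experiment: plant growth in cm after 30 days.

    Only one factor (fertilizer yes/no) is varied; everything else is
    held fixed, which is what makes this a fair test. The effect size
    and noise here are invented numbers, not real data.
    """
    base_growth = 10.0
    effect = 2.0 if fertilizer else 0.0
    return base_growth + effect + rng.gauss(0, 0.5)

rng = random.Random(42)
# Repeat the experiment several times so one lucky trial cannot mislead us.
control = [run_trial(False, rng) for _ in range(30)]
treated = [run_trial(True, rng) for _ in range(30)]

mean_control = sum(control) / len(control)
mean_treated = sum(treated) / len(treated)
# Analyze the data: does the evidence support the hypothesis
# "fertilizer increases growth"?
assert mean_treated > mean_control
```

The final comparison corresponds to the “analyze the data and draw a conclusion” step; a real study would add a proper statistical test rather than a bare comparison of means.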

Figure 9.3 shows the steps of scientific research.

Ask a question

The scientific method starts when a question is asked about an observation: How, What, When, Who, Which, Why or Where? For the scientific method to answer a question, the question must be about something that is measurable, preferably with a number.

Do background research

Rather than starting from scratch in putting together a plan for answering your question, be a savvy scientist and use library and Internet research to help find the best way to do things and to ensure that you do not repeat mistakes from the past.

Construct a hypothesis

A hypothesis is an educated guess about how things work:

img

Fig. 9.3 Steps of scientific research

“If …. [I do this] …., then …. [this] …. will happen.”

You must state the hypothesis in a way that can easily be measured, and, of course, the hypothesis should be constructed in a way that helps answer the original question.

Test the hypothesis by doing an experiment

The experiment tests whether the hypothesis is supported or not. It is important for the experiment to be a fair test. Conduct a fair test by making sure that you change only one factor at a time while keeping all other conditions the same. You should also repeat the experiments several times to make sure that the first results were not just an accident.

Analyze the data and draw a conclusion

Once the experiment is complete, collect the measurements and analyze them to see if they support the hypothesis or not. Scientists often find that their hypothesis was not supported, and in such cases they will construct a new hypothesis based on the information they learned during their experiment. This starts the entire process of the scientific method over again. Even if they find that their hypothesis was supported, they may want to test it again in a new way.

Communicate the results

To complete the research process, it is essential to communicate the results to others in a final report and/or a display board. Professional scientists do almost exactly the same thing by publishing their final report in a scientific journal or by presenting their results on a poster in a scientific meeting.

The whole process is collaborative and is conducted in a clearly documented manner to help other scientists who are doing research in the same field. Throughout history, there are instances where scientists have stopped their research before completing all the steps of the scientific method, only to have the enquiry taken up and solved by another scientist interested in answering the same question.

The basic correlation of a real-life data collection to scientific method is shown in Fig. 9.4.

img

Fig. 9.4 Hourglass

9.3 MODELLING

Scientific modelling is a scientific activity, the aim of which is to make a particular part or feature of the world easier to understand, define, quantify, visualize or simulate by referencing it to existing and usually commonly accepted knowledge. It requires selecting and identifying relevant aspects of a situation in the real world and then using different types of models for different aims, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify and graphical models to visualize the subject. Modelling is an essential and inseparable part of scientific activity, and many scientific disciplines have their own ideas about specific types of modelling.

Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions will always be more reliable than modelled estimates of outcomes.

  1. Simulation: A simulation is the implementation of a model. A steady-state simulation provides information about the system at a specific instant in time. A dynamic simulation provides information over time. A simulation brings a model to life and shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analyzing or training in those cases where real-world systems or concepts can be represented by models.
  2. Structure: Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child’s verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of enquiry and discovery in science, philosophy and art.
  3. Systems: A system is a set of interacting or interdependent entities, real or abstract, forming an integrated representation. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an “integrated whole” can also be stated in terms of system embodying a set of relationships which are differentiated from relationships of the set to other elements and from the relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: (1) discrete in which the variables change instantaneously at separate points in time and (2) continuous where the state variables change continuously with respect to time.
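
The distinction in item 3 between discrete system models (state changes at separate points in time) and continuous ones (state changes continuously) can be illustrated with a simple decay process. All constants below are chosen arbitrarily for the sketch:

```python
import math

# Continuous model: dN/dt = -k * N, with exact solution N(t) = N0 * exp(-k*t).
# Discrete model: the state changes only at separate points in time,
# via the simple Euler step N[i+1] = N[i] * (1 - k * dt).
k, n0, dt, steps = 0.5, 100.0, 0.01, 200

n_discrete = n0
for _ in range(steps):
    n_discrete = n_discrete * (1 - k * dt)

t = steps * dt  # 2.0 time units simulated
n_continuous = n0 * math.exp(-k * t)

# With a small step size the discrete trajectory tracks the continuous one.
assert abs(n_discrete - n_continuous) < 1.0
```

Shrinking dt makes the discrete model converge on the continuous solution, which is why continuous simulators can often approximate discrete systems and vice versa when the granularity is chosen appropriately.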

9.3.1 Steps in Modelling

The basic steps of the model-building process are as follows:

  1. Model selection
  2. Model fitting
  3. Model validation

These three basic steps are used iteratively until an appropriate model for the data has been developed. In the model selection step, plots of the data, process knowledge and assumptions about the process are used to determine the form of the model to be fit to the data. Then, using the selected model and possibly information about the data, an appropriate model-fitting method is used to estimate the unknown parameters in the model. When the parameter estimates have been made, the model is then carefully assessed to see if the underlying assumptions of the analysis appear plausible. If the assumptions seem valid, the model can be used to answer the scientific or engineering questions that prompted the modelling effort. If the model validation identifies problems with the current model, however, the modelling process is repeated using information from the model validation step to select and/or fit an improved model.

The three basic steps of process modelling described above assume that the data have already been collected and that the same dataset can be used to fit all of the candidate models. Although this is often the case in model-building situations, one variation on the basic model-building sequence comes up when additional data are needed to fit a newly hypothesized model based on a model fit to the initial data. In this case, two additional steps, experimental design and data collection, can be added to the basic sequence between model selection and model fitting. The flow chart shown in Fig. 9.5 gives basic model-fitting sequence with the integration of the related data collection steps into the model-building process.
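
The selection–fitting–validation loop can be sketched in miniature. Here the data are invented, a straight line is the selected model, least squares is the fitting method, and a residual check stands in for validation:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x (the 'model fitting' step)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def max_residual(xs, ys, a, b):
    """Largest absolute residual (a crude 'model validation' check)."""
    return max(abs(y - (a + b * x)) for x, y in zip(xs, ys))

# Model selection: a plot of these (invented) data suggests a straight line.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 3.0, 4.9, 7.2, 9.0, 11.1]

a, b = fit_line(xs, ys)             # model fitting
worst = max_residual(xs, ys, a, b)  # model validation
# Residuals are small, so the linear assumption looks plausible
# and the iterative loop can stop here.
assert worst < 0.5
```

If the validation step had exposed large or patterned residuals, the loop would repeat with a different model form, exactly as in the sequence described above.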

img

Fig. 9.5 Model-fitting sequence

9.3.2 Research Models

Various business research companies use different types of tools, techniques and methods for analysis and building models. These research methods and types vastly depend on the particular requirements of their research project. However, the following are the basic types of industry analysis that are commonly used by various study firms across the globe.

  1. Quantitative analysis: This method deals with collecting objective, numerical data from various resources. Various statistical models and formulae help the experts collect market data about product features. The questionnaire is the basic tool; it provides adequate information about customer behaviour and customers’ approach towards a particular product or company. Compiling a complete statistical investigation is the basic aim of quantitative analysis. Hence, the questions are also of an objective sort that draws yes or no responses from the customers chosen for the tests.
  2. Qualitative analysis: Qualitative analysis is the opposite of quantitative study. It is thoroughly subjective and deals with market data that can be stored in the form of words and visual presentations. In this study, experts observe customers, then record and analyze their responses in the form of answers and queries. These responses include answers to open-ended questions, overall behaviour and the results of various tests performed on the customers. Qualitative examination mainly depends on case studies, which help the experts collect the required market data.
  3. Observations: This is the basic weapon at the disposal of the experts undertaking the projects. Observational research plays a vital role in collecting crucial market data, which all the other methods cannot do. It is the method of collecting valuable data without any interference or inputs from the experts. They plainly observe the customers and their behaviour and then carry out reports based on these observations. They record the feedback and complaints of the customers based on their preferences and suggest improvements.
  4. Experiments: Business analysis based on experiments helps the researchers to change the set parameters and observe the results according to these changes. These projects generally take place in dedicated laboratories, but can also be performed at other places. This technique helps the experts understand various aspects and conditions that affect the behaviour of target customers.
  5. Basic research: Basic study concentrates on collecting the fundamental facts that are crucial yet unknown for the business or product.
  6. Applied research: This study helps in understanding the answers to the crucial issues and problems troubling the business.
  7. Developmental research: This study is very similar to applied analysis. However, it is mainly focused on using known solutions for product improvements and new business ventures.

9.4 SIMULATIONS

Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires the development of a model. This model represents the key characteristics or behaviour/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.

Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.

Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviour, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development in simulations technology or practice, particularly in the field of computer simulation.

A computer simulation (or “sim”) is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system. It is a tool to virtually investigate the behaviour of the system under study.

Traditionally, the formal modelling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions. Computer simulation is often used as an adjunct to, or substitution for, modelling systems in which simple closed form analytic solutions are not possible. There are many different types of computer simulation; the common feature they share is to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible.
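
A classic instance of “generating a sample of representative scenarios” is Monte Carlo estimation. The minimal sketch below estimates π by random sampling, where enumerating all possible points would be impossible:

```python
import random

def estimate_pi(samples, rng):
    """Monte Carlo simulation: sample random points in the unit square and
    count the fraction falling inside the quarter circle of radius 1.
    A random sample of scenarios stands in for the full (uncountable)
    state space -- the essence of this family of computer simulations."""
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

rng = random.Random(0)
estimate = estimate_pi(100_000, rng)
assert abs(estimate - 3.14159) < 0.05
```

Here the analytic answer is known, which makes it easy to check the method; in practice Monte Carlo simulation is used precisely when no closed-form analytic solution is available.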

Medical simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures as well as medical concepts and decision making to personnel in the health professions. Simulators have been developed for training procedures ranging from the basics such as blood draw, to laparoscopic surgery and trauma care. They are also important to help on prototyping new devices for biomedical engineering problems. Currently, simulators are applied to research and develop tools for new therapies, treatments and early diagnosis in medicine.

9.4.1 Types of Simulation Models

Depending on the application and development approach, there are many types of simulation models.

  1. Active models: Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous “Harvey” mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation and electrocardiography.
  2. Interactive model: More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two-dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgements and also to make errors. The process of iterative learning through assessment, evaluation, decision making and error correction creates a much stronger learning environment than passive instruction.
  3. Computer simulators: In 3DiTeams, for example, a learner percusses a patient’s chest in a virtual field hospital. Simulators have been proposed as an ideal tool for the assessment of students’ clinical skills. For patients, “cyber therapy” is used for sessions simulating traumatic experiences, from fear of heights to social anxiety.

Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These “lifelike” simulations are expensive and lack reproducibility. A fully functional “3Di” simulator would be the most specific tool available for teaching and measuring clinical skills. Gaming platforms have been applied to create these virtual medical environments and to provide an interactive method for learning and applying information in a clinical context.

Immersive disease state simulations allow a doctor or other health care professional to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient’s disease state.

Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use “standard patients” because it permits the quantitative measurement of competence, as well as reproducing the same objective findings.

9.4.2 Tools for Simulations

Because simulation is such a powerful tool for understanding complex systems and supporting decision making, a wide variety of approaches and tools exist. Many special-purpose simulators exist to simulate very specific types of systems. For example, tools exist for simulating the movement of water (and contaminants) in an estuary, the evolution of a galaxy, or the exchange rates for a set of currencies. The key attribute of these tools is that they are highly specialized to solve a particular type of problem. In many cases, they require great subject-matter expertise to use. In other cases, the system being simulated may be so highly specified that using the tool is quite simple; that is, the user is presented with a very limited number of options. Other tools are not specialized to a particular type of problem. Rather, they are “tool kits” or general-purpose frameworks for simulating a wide variety of systems. There are a variety of such tools, each with its own emphasis. What they all have in common is that they allow the user to model how a system might evolve or change over time. Such frameworks can be thought of as high-level programming languages that allow the user to simulate many different kinds of systems in a flexible way.

Perhaps the simplest and most broadly used general-purpose simulator is the spread sheet. Although spread sheets are inherently limited by their structure in many ways (e.g., representing complex dynamic processes is difficult, they cannot display the model structure graphically, and they require special add-ins to represent uncertainty), because of their ubiquity they are very widely used for simple simulation projects, particularly in the business world.

Other general-purpose tools exist that are able to represent complex dynamics, as well as provide a graphical mechanism for viewing the model structure (e.g., an influence diagram or flow chart of some type). Although these tools are generally harder to learn to use than spread sheets (and are typically more expensive), these advantages allow them to realistically simulate larger and more complex systems than can be done in a spread sheet. The general-purpose tools can be broadly categorized as follows:

  1. Discrete event simulators: These tools rely on a transaction-flow approach to modelling systems. Models consist of entities (units of traffic), resources (elements that service entities) and control elements (elements that determine the states of the entities and resources). Discrete simulators are generally designed for simulating processes such as call centres, factory operations and shipping facilities in which the material or information that is being simulated can be described as moving in discrete steps or packets. They are not meant to model the movement of continuous material (e.g., water) or represent continuous systems that are represented by differential equations.
  2. Agent-based simulators: This is a special class of discrete event simulator in which the mobile entities are known as agents. Whereas in a traditional discrete event model the entities have only attributes (properties that may control how they interact with various resources or control elements), agents have both attributes and methods (e.g., rules for interacting with other agents). For example, an agent-based model could simulate the behaviour of a population of animals that interact with each other.
  3. Continuous simulators: This class of tools solves differential equations that describe the evolution of a system using continuous equations. These types of simulators are most appropriate if the material or information that is being simulated can be described as evolving or moving smoothly and continuously, rather than in infrequent discrete steps or packets. For example, simulation of the movement of water through a series of reservoirs and pipes can most appropriately be represented by a continuous simulator. Continuous simulators can also be used to simulate systems consisting of discrete entities if the number of entities is large so that the movement can be treated as a flow. A common class of continuous simulators are system dynamics tools, based on the standard stock and flow approach developed by Professor Jay W. Forrester at MIT in the early 1960s.
  4. Hybrid simulators: These tools combine the features of continuous simulators and discrete simulators. That is, they solve differential equations, but can superimpose discrete events on the continuously varying system. GoldSim is an example of a hybrid simulator.
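
The transaction-flow idea behind discrete event simulators (item 1) can be sketched as a single-server queue, in which customers (entities) wait for a server (resource). All parameter values below are invented for the sketch:

```python
import random

def simulate_queue(n_customers, arrival_gap, service_time, rng):
    """Minimal discrete event simulation of a single-server queue.

    Entities (customers) change state at discrete points in time:
    arrival, start of service, departure. Inter-arrival and service
    times are drawn from exponential distributions; the parameters
    are illustrative, not taken from a real system.
    Returns the average waiting time before service begins.
    """
    arrivals = []
    t = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(1.0 / arrival_gap)
        arrivals.append(t)

    server_free_at = 0.0
    total_wait = 0.0
    for arrival in arrivals:
        start = max(arrival, server_free_at)   # wait if the server is busy
        total_wait += start - arrival
        server_free_at = start + rng.expovariate(1.0 / service_time)
    return total_wait / n_customers

rng = random.Random(1)
avg_wait = simulate_queue(1000, arrival_gap=2.0, service_time=1.0, rng=rng)
assert avg_wait >= 0.0
```

A continuous simulator, by contrast, would describe such a system only in aggregate, e.g., as a differential equation for the expected queue length, which is appropriate when the number of entities is large enough to treat as a flow.
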
9.5 INDUSTRIAL RESEARCH

Industrial research means the planned research or critical investigation aimed at the acquisition of new knowledge and skills for developing new products, processes or services, or for bringing about a significant improvement in existing products, processes or services. It comprises the creation of components of complex systems necessary for industrial research, notably for generic technology validation, to the exclusion of prototypes. Industrial research led to a semantic innovation, the addition of “development” to “research”, thereby coining the new term R&D. Before the beginning of the present century, people spoke of science, investigation and enquiry. “Research” became the general term only after being used regularly by industry, where “science” was often a contested label; once applied to industry, industrial research made the boundary between pure and applied research almost invisible. R&D is a component of innovation and is situated at the front end of the innovation lifecycle. Innovation builds on R&D and includes the commercialization phases. The activities that are classified as R&D differ from company to company, but there are two primary models: an R&D department is either staffed by engineers and tasked with directly developing new products, or staffed with industrial scientists and tasked with applied research in scientific or technological fields which may facilitate future product development. In either case, R&D differs from the vast majority of corporate activities in that it is not intended to yield immediate profit, and generally carries greater risk and an uncertain return on investment. A system driven by marketing is one that puts customer needs first and only produces goods that are known to sell: market research is carried out to establish what is required, and R&D is then directed towards developing products that the research indicates will meet an unmet need. If the development is technology driven, R&D instead starts from a new technical capability and then seeks a market for it.
In general, R&D activities are conducted by specialized units or centres belonging to a company, or can be outsourced to a contract research organization, universities or state agencies. In the context of commerce, “research and development” normally refers to future-oriented, longer-term activities in science or technology, using similar techniques to scientific research but directed towards desired outcomes and with broad forecasts of commercial yield. Industrial research and operational research are often used side by side, because every new industrial product, be it a model or a finished product, needs strong mathematical support before the public will accept it. Operational research therefore has a major impact on industrial research activities.

9.5.1 Operational Research

According to the Operations Research Society of America, “Operations research is concerned with scientifically deciding how to best design and operate man-machine systems, usually under conditions requiring the allocation of scarce resources.” No matter how operations research is defined, the construction and use of models are at its core. Models are representations of real systems. They can be iconic (made to look like the real system), abstract or somewhere in between. Iconic models can be full-scale, scaled-down or scaled-up in size. Sawmill heading control simulators are full-scale models. A model of the solar system is a scaled-down model, and a teaching model of a wood cell or a water molecule is a scaled-up model. Regardless of the type of model used, the operations research approach comprises the following seven sequential steps: (1) Orientation, (2) Problem definition, (3) Data collection, (4) Model formulation, (5) Solution, (6) Model validation and output analysis, and (7) Implementation and monitoring. Figure 9.6 shows this schematically.

img

Fig. 9.6 Steps of operations research

  1. Orientation: The first step in the operations research approach is referred to as problem orientation. The primary objective of this step is to constitute the team that will address the problem at hand and ensure that all its members have a clear picture of the relevant issues. Typically, the team will have a leader and be constituted of members from various functional areas or departments that will be affected by or have an effect upon the problem at hand. In the orientation phase, the team typically meets several times to discuss all of the issues involved and to arrive at a focus on the critical ones. This phase also involves a study of documents and literature relevant to the problem in order to determine if others have encountered the same (or similar) problem in the past, and if so, to determine and evaluate what was done to address the problem. The aim of the orientation phase is to obtain a clear understanding of the problem and its relationship to different operational aspects of the system, and to arrive at a consensus on what should be the primary focus of the project.
  2. Problem definition: This is the second, and in a significant number of cases, the most difficult step of the operations research process. The objective here is to further refine the deliberations from the orientation phase to the point where there is a clear definition of the problem in terms of its scope and the results desired. This phase should not be confused with the previous one since it is much more focussed and goal oriented; however, a clear orientation aids immeasurably in obtaining this focus.
  3. Data collection: In the third phase of the operations research process, data are collected with the objective of translating the problem defined in the second phase into a model that can then be objectively analyzed. Data typically come from two sources – observation and standards. The first corresponds to the case where data are actually collected by observing the system in operation; typically, these data tend to derive from the technology of the system. Other data are obtained by using standards; a lot of cost-related information tends to fall into this category. For instance, most companies have standard values for cost items such as hourly wage rates, inventory holding charges, selling prices, etc. These standards must be consolidated appropriately to compute the costs of various activities.
  4. Model formulation: This is the fourth phase of the operations research process. It deserves particular attention, since modelling is a defining characteristic of all operations research projects. The term “model” is misunderstood by many, and is therefore explained in some detail here. A model is defined formally as a selective abstraction of reality. This definition implies that modelling is the process of capturing selected characteristics of a system or a process and then combining these into an abstract representation of the original. The main idea is that it is usually far easier to analyze a simplified model than the original system, and as long as the model is a reasonably accurate representation, conclusions drawn from such an analysis may be validly extrapolated back to the original system. Models may be broadly classified into four categories: physical models, analogic models, computer simulation models and mathematical models. Amongst these, which is the best model – a simple one or a complex one? A simple model is better than a complex one as long as it works as well; a model only needs to perform its intended function to be valid, and it should be easy to understand. It is important to use the most relevant operations research tool when constructing a model. A modeller should not try to shape the problem to fit a particular operations research method. For example, a linear programming (LP) expert may try to use LP on a problem where there is no optimal solution. Instead, modellers should study the problem and choose the most appropriate operations research tool. For complicated systems, users need to remember that models are only simplified representations. If a user mistakenly considers a complicated model to be correct, he or she may disregard further study of the real system. Modellers and users of models should never rely only on a model’s output and ignore the real system being modelled.
A good model should be easy to modify and update. New information from the real system can be incorporated easily into a well-planned model. A good model usually starts out simple and becomes more complex as the modeller attempts to expand it enough to give meaningful answers.
  5. Model solution: The fifth phase of the operations research process is the solution of the problem represented by the model. This is the area on which a huge amount of research and development in operations research has been focussed.
  6. Validation and analysis: Once a solution has been obtained, two things need to be done before one even considers developing a final policy or course of action for implementation. The first is to verify that the solution itself makes sense; the second is to conduct a detailed output analysis to check that the model remains valid across the range of solutions it produces.
  7. Implementation and monitoring: The last step in the operations research process is to implement the final recommendation and establish control over it. Implementation entails the constitution of a team whose leadership will consist of some of the members on the original operations research team. This team is typically responsible for the development of operating procedures or manuals and a time table for putting the plan into effect. Once implementation is complete, responsibility for monitoring the system is usually turned over to an operating team. From an operations research perspective, the primary responsibility of the latter is to recognize that the implemented results are valid only as long as the operating environment is unchanged and the assumptions made by the study remain valid.
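The consolidation of standard cost figures mentioned in the data-collection step can be sketched as follows; the wage rate, holding charge and activity figures are illustrative assumptions, not values from the text:

```python
# Data-collection sketch: combining standard cost values (wage rates,
# inventory holding charges) into the cost of one activity.
standards = {
    "hourly_wage": 25.0,    # currency units per labour hour
    "holding_rate": 0.02,   # holding charge per unit per week
}

def activity_cost(labour_hours, units_held, weeks):
    labour = labour_hours * standards["hourly_wage"]
    holding = units_held * weeks * standards["holding_rate"]
    return labour + holding

print(activity_cost(labour_hours=40, units_held=500, weeks=4))  # -> 1040.0
```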
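The model-formulation step mentions linear programming as a typical mathematical model. A two-variable LP can be solved by enumerating the corner points of its feasible region; the product-mix objective and constraints below are illustrative assumptions, and for realistic problems one would use a dedicated solver:

```python
from itertools import combinations

# Model-formulation sketch: maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [
    (1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0),
]

def intersect(c1, c2):
    """Intersection of the boundary lines of two constraints, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# An optimum of a feasible bounded LP occurs at a corner point, so it is
# enough to evaluate the objective at every feasible intersection.
corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # -> (2.0, 6.0) 36.0
```

Corner-point enumeration scales poorly beyond a handful of variables, which is why practical model solution (step 5) relies on algorithms such as the simplex method.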
EXERCISES
  1. Explain the steps in business modelling.
  2. How can we simulate a product which has a potential influence in society? Explain the various modes.
  3. How is OR different from BR?
  4. Explain any five advantages of simulation.
  5. What are the various methods employed in scientific research?
  6. Justify the statement, “research happens more in industries than in laboratories”.
  7. How can the business head motivate his team?
  8. What are the various simulation tools?
  9. Explain the various research tools.
  10. What are the various steps in designing an algorithm?