14.4. The Go/No Go Problem Revisited

The go/no go example of Section 14.2, with only two available alternatives, is simplistic. There is usually a larger set of decision alternatives. The company may try to out-licence the drug. It may postpone the decision so that more information regarding the drug's likely effect and market potential can be collected. Even if the decision is to start a Phase III programme immediately, there are different possible designs for the programme. Important design factors are which dose(s) to use and the sample size. All these alternatives may be modelled by including more decision nodes in the decision tree. In this section we illustrate how decision nodes can be added, using a problem involving the option to purchase information regarding an important AE.

Recall from Section 14.2 that there is good hope that the new drug in the example has an advantage over the existing therapy with respect to a certain adverse event. Assume that a preliminary test (pretest) can be run to investigate this further. If the pretest is positive, then it is highly likely that an AE advantage can be demonstrated in Phase III development. A negative result, on the other hand, predicts no advantage. The question is whether it is worth the relatively modest cost of performing the pretest. We will assume that this investigation can be run in parallel with other necessary activities before the decision on whether to proceed to a Phase III programme (if this were not the case, and the pretest would delay the project, the payoff data set should reflect that the reward depends on the time of marketing).

One can think of a number of different investigations that could, depending on the situation, possibly serve as pretests, e.g.:

  • Use of an AE animal model.

  • Studies of the binding between drug and receptor.

  • A clinical trial of limited size and duration, perhaps focussing on a surrogate marker in a selected high-risk population.

Often, the results of such investigations are not dichotomous but may be more or less positive or negative. For simplicity, however, we assume that only two outcomes (positive/negative) are possible.

Program 14.4 analyses the same problem as in Program 14.3 but with the addition of the pretest option. It is assumed that the cost of the pretest is 20 MUSD and that a positive or negative pretest predicts that superiority with respect to the AE can be shown in Phase III with probability 0.9 or 0.15, respectively. In order to be consistent with the problem considered in Program 14.3, the probability of a positive pretest must be 0.2, since

    0.9 × 0.2 + 0.15 × 0.8 = 0.18 + 0.12 = 0.30,

which is the unconditional probability of an AE advantage assumed when no pretest is run.

Program 14.4. Evaluated decision tree in the simple go/no go problem with two outcome variables (efficacy and safety) and a pretest
data stage4;
   length _stname_ _outcome_ _success_ $10.;
   input _stname_ $ _sttype_ $ _outcome_ $ _reward_ _success_ $;
   datalines;
   Pretest   D   No_test       0   Phase3
   .         .   Test        -20   AEtest
   AEtest    C   AEpos         .   Phase3
   .         .   AEneg         .   Phase3
   Phase3    D   No_go         0   .
   .         .   Go         -250   Develop
   Develop   C   Eff_super     .   AE
   .         .   Eff_noninf    .   AE
   .         .   Eff_inf       .   .
   AE        C   AE_super      .   .
   .         .   AE_equal      .   .
   ;
data prob4;
   length _given1_ _given2_ _event1_ _event2_ _event3_ $10.;
   input _given1_ $ _given2_ $ _event1_ $ _prob1_
         _event2_ $ _prob2_ _event3_ $ _prob3_;
   datalines;
   .        .      AEpos     0.2   AEneg      0.8   .        .
   .        .      Eff_super 0.2   Eff_noninf 0.5   Eff_inf 0.3
   No_test  .      AE_super  0.30  AE_equal   0.70  .        .
   Test     AEpos  AE_super  0.90  AE_equal   0.10  .        .
   Test     AEneg  AE_super  0.15  AE_equal   0.85  .        .
   ;
data payoff4;
    length _state1_ _state2_ $10.;
    input _state1_ $ _state2_ $ _value_;
    datalines;
    Eff_super  AE_super 1200
    Eff_super  AE_equal  550
    Eff_noninf AE_super  450
    Eff_noninf AE_equal  100
    Eff_inf    .           0
    ;
* Trial's outcome;
symbol1 value=triangle height=10 color=black width=3 line=1;
* Decision point;
symbol2 value=square height=10 color=black width=3 line=1;
* End nodes;
symbol3 value=none height=10 color=black width=3 line=1;
proc dtree stagein=stage4 probin=prob4 payoffs=payoff4;
    ods select parameters policy;
    treeplot/graphics norc nolegend compress
    linka=1 linkb=2 symbold=2 symbolc=1 symbole=3 display=(link);
    evaluate/summary;
    run;
    quit;
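
As a quick check of the consistency condition stated before Program 14.4, the conditional probabilities used in the prob4 data set can be combined by the law of total probability. The following DATA step is a small sketch of ours (not part of the original program); it should print a value of 0.3.

* Law of total probability: P(AE_super) = P(AE_super|AEpos)P(AEpos) + P(AE_super|AEneg)P(AEneg);
data _null_;
    p_pos = 0.2;                            /* probability of a positive pretest */
    p_adv = 0.9*p_pos + 0.15*(1 - p_pos);   /* unconditional P(AE advantage)     */
    put 'Unconditional probability of an AE advantage: ' p_adv;
run;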

Output 14.4 shows that the optimal decision path is to run the pretest and then go into Phase III development only if the pretest is positive. Comparing the optimal values in Output 14.3 and Output 14.4, we see that running the pretest increases the expected value of the project from 1.5 MUSD to 16.9 MUSD. The decision tree produced by Program 14.4 is not shown in order to save space.

Output 14.4. Decision parameters and optimal policy from Program 14.4
Decision Parameters

               Decision Criterion:    Maximize Expected Value (MAXEV)
          Optimal Decision Yields:    16.9

                            Optimal Decision Policy

                              Up to Stage Pretest

                    Alternatives    Cumulative    Evaluating
                    or Outcomes         Reward         Value
                    ----------------------------------------
                    No_test                  0          1.5
                    Test                   −20         36.9*

                            Optimal Decision Policy

                               Up to Stage Phase3

                                              Cumulative    Evaluating
              Alternatives or Outcomes            Reward         Value
          ------------------------------------------------------------
          No_test                 No_go                0          0.0
          No_test                 Go                −250        251.5*
          Test        AEpos       No_go              −20          0.0
          Test        AEpos       Go                −270        434.5*
          Test        AEneg       No_go              −20          0.0*
          Test        AEneg       Go                −270        205.8
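
As a sanity check, the evaluating values in Output 14.4 can be reproduced by hand with a short backward-induction calculation. The DATA step below is a minimal sketch of ours (the variable names are not taken from the book's programs); it should print 251.5, 434.5 and 205.75 for the three Go alternatives, and 1.5 and 16.9 for the net values of the No_test and Test strategies.

data _null_;
    /* probabilities for the efficacy outcome (same in all scenarios) */
    p_super = 0.2;  p_noninf = 0.5;
    /* Phase III cost and pretest cost, in MUSD */
    cost_ph3 = 250; cost_test = 20;

    /* expected payoff of a Go decision for each value of P(AE_super) */
    ev_go_notest = p_super*(0.30*1200 + 0.70*550) + p_noninf*(0.30*450 + 0.70*100);
    ev_go_pos    = p_super*(0.90*1200 + 0.10*550) + p_noninf*(0.90*450 + 0.10*100);
    ev_go_neg    = p_super*(0.15*1200 + 0.85*550) + p_noninf*(0.15*450 + 0.85*100);

    /* backward induction: take the better of Go and No_go at each Phase3 node */
    ev_notest = max(ev_go_notest - cost_ph3, 0);
    ev_test   = -cost_test + 0.2*max(ev_go_pos - cost_ph3, 0)
                           + 0.8*max(ev_go_neg - cost_ph3, 0);

    put ev_go_notest= ev_go_pos= ev_go_neg= / ev_notest= ev_test=;
run;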

It is often crucial to investigate the robustness of the conclusions. The model parameters, especially rewards and probabilities, are typically uncertain. It is good practice to vary the most important parameters and look at how these variations affect the analysis. PROC DTREE is an interactive procedure and allows the user to modify the rewards with the MODIFY statement (see Program 14.17).

However, the most powerful way of investigating robustness properties is via macro programming. A simple SAS macro that evaluates the robustness of the decision analysis model is given in Program 14.5. In this macro, all payoffs are scaled by a common factor f. The output of the program (not displayed) shows that the optimal decision pattern changes considerably when the payoff factor f is varied:

  • For sufficiently small payoffs, say, f = 0.5, the expected value of a Phase III trial is negative even if the drug has an AE advantage.

  • When f = 0.8, the AE advantage would motivate a Phase III trial. However, the cost of the pretest is too high in comparison with the information it will provide.

  • In the standard scenario, f = 1.0, it is optimal to run the pretest and run a Phase III trial if and only if the pretest result is positive.

  • For f = 1.2, the pretest still gives valuable information but the optimal strategy is to go to Phase III development directly, avoiding the pretest cost.

  • Finally, for really high payoffs (e.g., f = 1.5), it is worthwhile to run a Phase III trial even when the pretest is negative. Since the Phase III trial will be conducted irrespective of the result of the pretest, the pretest is obviously redundant in this model and with these parameters. Thus, the analysis shows that the optimal decision is to conduct a Phase III trial without a preceding pretest. Note, however, that a pretest may help optimise the design of the Phase III trial. If this is the case, an extended model may still show that the pretest has value.

Program 14.5. Robustness check when varying payoffs
/* Refers to data sets from the previous program */
%macro robust(payoff_factor);
    data payoff_changed;
        set payoff4;
        _value_=&payoff_factor*_value_;
    proc dtree stagein=stage4 probin=prob4 payoffs=payoff_changed criterion=maxev;
        evaluate/summary;
        run;
        quit;
%mend robust;

%robust(0.5);
%robust(0.8);
%robust(1.0);
%robust(1.2);
%robust(1.5);
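
If a finer grid of payoff factors is needed, the calls to %robust can be generated in a macro loop. The wrapper below (robust_grid is our own hypothetical macro, with an assumed step of 0.1) simply re-invokes %robust for a range of factors:

/* Hypothetical wrapper around %robust: evaluates f = 0.5, 0.6, ..., 1.5 */
%macro robust_grid(from=5, to=15);
    %local i f;
    %do i = &from %to &to;
        %let f = %sysevalf(&i/10);   /* payoff factor for this iteration */
        %put NOTE: Evaluating payoff factor &f;
        %robust(&f);
    %end;
%mend robust_grid;

%robust_grid();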
