Index

  • Page numbers followed by f and t refer to figures and tables, respectively.
  • Absurdity test, 166, 170
  • Accumulating snapshots, 273–274, 274f
  • Adobe security breach, 14, 147–150, 149t, 319
  • Adult Friend Finder, 319
  • Advanced data stealing threats (ADST), 269–272, 269f, 270t–272t, 272f
  • Advisen Ltd., 52, 223, 224
  • AEL, see Annual expected losses
  • AIR Worldwide, 323–325
  • Algorithms:
    • to average expert estimates, 217
    • expert judgments vs., 79–84
    • performance of, 179
    • tolerance of errors from, 118–119
  • Algorithm Aversion, 118–119
  • AllClear ID, 201–202
  • Alpha parameter, 195–197, 196f, 197f, 200
  • Amazon, 263
  • American Statistical Association, 33
  • The American Statistician, 33
  • Amplifying effects, with risk matrix, 114–117, 116f
  • Analysis placebo, 75–77
  • Analytics technology, 258, 280
  • Anchoring, 90, 166, 170
  • Andreassen, Paul, 76
  • Anecdotal evidence, 126–127
  • Annual expected losses (AEL):
  • Anonymous, 143
  • Anthem security breach, 11, 147, 319
  • Antivirus definitions, 253
  • Aon, 125
  • Apache Superset, 263
  • Application‐based decomposition, 138
  • Armstrong, J. Scott, 94–95, 140
  • Arrival baselines, 240–245, 241f–244f
  • Arrival rates, 240
  • Ashley Madison, 319
  • Assets, 44, 46
  • Assumptions, 169
  • Attack surface, 10–12
  • Attitudes Toward Quantitative Methods (ATQM) survey, 102–103, 102t
  • Audits, 281–284. See also Rapid risk audit
  • Baselines. See also BOOM (baseline objectives and optimization measurements) metrics
    • arrival, 240–245, 241f–244f
    • burndown, 238–239
    • defined, 38
    • estimating, 51–52
    • ransomware, baselines and insurance case example, 225–229, 227f
    • size as baseline source, 223
    • survival, 239–240
    • wait‐time, 245–250, 246f–249f
  • Bayes, Thomas, 27, 183
  • Bayesian interpretation, 171
  • Bayesian methods, 26–28, 183–191, 193–230
    • advanced modeling considerations, 223–229
    • advantages of, 183–184
    • applications of, 190–191
    • beta distribution, 194–203 (See also Beta distribution)
    • comparing LOR and lens method, 212–217, 216t
    • cybersecurity professionals' use of, 104
    • decomposing probabilities with many conditions, 203–219
    • empirical, 249–252, 250f–252f
    • estimating value of information for cybersecurity, 219–220, 220t
    • gamma distribution, 225
    • introduction to, 184–187
    • Laplace's contributions to, 37
    • lens method, 210–212, 256
    • log odds ratio approach, 205–210
    • LOR for aggregating experts, 217–219
    • multifactor authentication example, 187–190
    • Poisson distribution, 224–225
    • proof of Bayes's rule, 186–187
    • ransomware, baselines and insurance case example, 225–229, 227f
    • size as baseline source, 223
    • using data to derive conditional probabilities, 220–222, 221t
  • Bayesian Networks, 309–313, 312f
  • BayesPhishArrive(), 242–245, 242f–244f
  • Bayes's rule, 184
    • applications of, 190–191
    • in beta distribution, 194, 197
    • proof of, 186–187
  • Bayes's theorem, 27
  • “Beat the bear” fallacy, see Exsupero Ursus fallacy
  • Behavioral aggregation methods, 179
  • Beliefs:
    • about quantitative methods, 119–122, 120f, 120t
    • about ransomware attacks, 225–227
    • updating, 179
  • Beta distribution, 194–203, 293–294, 293f
    • AllClear ID case example, 201–202
    • applied to breaches, 198–200, 199f
    • calculations with, 195–198, 196f, 197f
    • computing mean of, 203
    • models affected by, 200–201, 200f
    • updating, 200
  • BI, see Business intelligence
  • Biases, 158, 176, 178
  • Bickel, J. Eric, 115–117
  • Big data solutions, 263
  • Binary distribution, 156, 157f, 291, 291f
  • Binomial distribution, 197–198
  • BOGSAT method, 93
  • Bohn, Christopher “Kip,” 125
  • BOOM (baseline objectives and optimization measurements) metrics, 237–252
    • arrival baselines, 240–245, 241f–244f
    • burndown baselines, 238–239
    • escape rates, 250–252, 250f–252f
    • survival baselines, 239–240
    • wait‐time baselines, 245–250, 246f–249f
  • Booz Allen Hamilton, 319
  • Botnets, 314–318, 315f–317f
  • Box, George, 118
  • Bratvold, Reidar, 115–117
  • Breach “cascade,” 11
  • Brown, Robert D., III, 300–308
  • Brunswik, Egon, 210, 211
  • Budescu, David, 106–109, 127–128, 283
  • Budgets for cybersecurity, 7
  • Burndown baselines, 238–239
  • Business disruption, impact of, 51
  • Business intelligence (BI), 257–258
  • Calibrated estimates, 53, 155–180
    • aggregating experts, 179–180
    • comparing SMEs' estimates, 176–178
    • conceptual obstacles in, 167–172
    • controlling overconfidence, 165–167, 167t
    • effects of calibration, 172–176, 174f
    • exercise for, 159–165, 160t–161t, 162f, 164f
    • methods for improving, 166–167, 167t
    • reducing inconsistency, 178
    • subjective probability, 156–159, 157t
  • CapitalOne cyberattack, 148, 149t, 150
  • Catastrophe modeling, 323–325, 324f
  • CFO (chief financial officers), 86–88
  • CFO Magazine, 13
  • Chain rule, 186
  • Chief financial officers (CFO), 86–88
  • Chief information officers (CIO), 227
  • Chief information security officers (CISO), 13, 53–54, 189, 226
  • Chockalingam, Sabarathinam, 311
  • Chubb, 8–9, 148
  • CI, see Confidence intervals; Credible interval
  • “C, I, and A” (confidentiality, integrity, availability), 136–138, 137t
  • CICD (continuous integration and continuous development), 273
  • CIO (chief information officers), 227
  • CISO, see Chief information security officers
  • Clarification chain, 29–31
  • Clear, observable, and useful test, 141–142, 146
  • Clinical versus Statistical Prediction (Meehl), 80
  • Clinton, Jim, 122
  • Cloud services, 252, 253, 263
  • Collaboration:
    • among experts in risk analysis, 93–94
    • in estimating, 76
    • personalities in, 179
  • Colonial pipeline attack, 288
  • Common Vulnerability Scoring System (CVSS), 14
  • Communication:
    • illusion of, 109, 127, 128
    • of quantitative methods, 127–128
  • Complementary cumulative probability function, 64
  • Complement Rule, 185
  • Complexity of problems, 124–125
  • Compliance audits, 281
  • Component testing, 78–79
  • Conditional probability:
  • Confidence:
    • overconfidence, 87–88, 158, 165–167, 167t
    • and probability, 170–172
    • underconfidence, 158
  • Confidence intervals (CI). See also calibrated estimates
    • in CFO study, 86–87
    • credible intervals vs., 159
    • frequentist interpretation of, 171
    • and lognormal distributions, 59–60
    • in rapid risk model, 45–46
    • for reasonable range, 48
  • Confidentiality, integrity, availability (“C, I, and A”), 136–138, 137t
  • Configuration metrics, 236–237
  • Consensus:
    • as consistency measure, 89
    • open group discussion to build, 179
    • on quantitative methods, 127–128
  • Consistency:
    • data, 264
    • of experts' judgments, 88–93, 91f, 92f, 210–212
    • reduced, with calibrated estimates, 178
  • Continuous distribution, 156, 157f
  • Continuous integration and continuous development (CICD), 273
  • Continuous performance tracking, 176–178
  • Controls:
    • assessing performance of, 83–84
    • minimal, 65–66
    • return on, 53–55
  • Correlation neglect, 213
  • Costs of data breaches, 194
  • Countifs(), 67, 221
  • Coverage metrics, 236–237
  • COVID pandemic, 11, 12
  • Cox, Tony, 111–113, 115, 283
  • Credible interval (CI), 156
    • 90%, 170–172
    • confidence interval vs., 45–46
    • in rapid risk model, 45–46
  • CRQ (cyber risk quantification), 16
  • CSRM, see Cybersecurity risk management
  • Cumulative probability function (cpf), 195, 196
  • CVSS (Common Vulnerability Scoring System), 14
  • Cyberattacks. See also individual companies and types of attacks
    • applying beta to breaches, 198–200, 199f
    • avoiding the “Big One,” 287–288
    • chance of multiple major breaches, 201
    • data on, 194–195
    • insurance for, 8–9
    • malware used in, 9
    • and multifactor authentication, 187–190
    • multiple, within a time period, 224–225
    • perpetrators' goals for, 10
    • relationship between size and breach risk, 223
    • reputation damage from, 145–151, 149f
    • in “The Year of the Mega Data Breach,” 7
    • threat response, 12–16
    • types of, in rapid risk model, 44, 45t
  • Cyber risk quantification (CRQ), 16
  • Cybersecurity. See also specific topics
    • allocating resources for, 7, 13–14
    • changes needed in ecosystem of, 284–285
    • chief information security officers, 13
    • growing attention to, 12
    • terms used in, 30–32
    • total workers in, 13
  • Cybersecurity experts, see Experts; Subject matter experts (SMEs)
  • Cybersecurity risk:
    • catastrophe modeling applied to, 323–325, 324f
    • executive‐level attention on, 12–13
    • increase in (2014 through 2021), 7
    • insurance for, 8–9
    • measuring, 16–17 (See also Measurement)
    • obstacles in measuring (see Risk measurement obstacles)
    • popular assessment approaches to, 68
    • prioritizing resources for, 16
    • response to, 12–16
    • scoring methods for, 13–16
    • systemic, 7–9
  • Cybersecurity risk management (CSRM), 277–288
    • analytics technology in, 280
    • audits for, 282–284
    • to avoid the “Big One,” 287–288
    • budgets for, 7
    • decision analysis to support ransomware CSRM (see Ransomware CSRM decision analysis)
    • decision making for, 277
    • enterprise integration of, 285–287
    • establishing strategic charter for, 277–279
    • estimating value of information for, 219–220, 220t
    • and global attack surface, 10–11
    • new methods for, 16–17 (See also specific methods)
    • organizational roles and responsibilities for, 279–281, 279f
    • program management in, 280–281
    • proposal for, 15–18
    • quantitative risk analysis team for, 279–280, 279f
    • support needed for, 284–285
    • training and development in, 280
  • Cyberseek.org, 13
  • Cyber warfare, 9. See also Cyberattacks
  • Cyentia Institute, 52
  • Data:
    • for assigning probabilities, 27
    • in deriving conditional probability, 220–222, 221t
    • encrypting, 217
    • for evaluating risk analysis methods, 77–79
    • lack of, 124, 193
    • little, computing frequencies with (see Beta distribution)
    • sparse data analytics, 235, 257
  • Data compromise, impact of, 51
  • Data consistency, 264
  • Data marts, 252, 253, 262–264
  • Data scarcity, 309–310
  • Data sources:
    • integrated for measuring historical performance, 257
    • querying against, 264
    • for rapid risk audit, 50–52
  • Data tables, 61–63, 62t, 316, 316f
  • DataVault 2.0, 263
  • Data warehouses, 263
  • Dawes, Robyn, 80, 81
  • DBIR, see Verizon Data Breach Investigations Report
  • Decision analysis:
  • Decision making:
    • audits of models/methods in, 282–284
    • estimating value of information in, 219–220, 220t
    • integrated decision management for, 286–287
    • objections to probabilistic analysis in, 176
    • by subjectivists, 171
    • uncertainty reduction for, 24, 26
  • Decision psychology, 157–158, 210
  • Decision support:
    • measurements for, 30
    • rapid risk audit in, 53–55
  • Decomposition, 135–152
    • avoiding over‐decomposition, 142–144
    • in business intelligence, 268
    • comparing LOR and lens method, 212–217, 216t
    • guidelines for, 140–145
    • of impacts, 71, 135–138, 137t
    • informative, 143
    • joint probability, 185–186
    • lens method, 210–212
    • log odds ratio approach, 205–210
    • LOR for aggregating experts, 217–219
    • of one‐for‐one substitution model, 136–140, 137t, 139t
    • “overdecomposing,” 95
    • of probabilities with many conditions, 203–219 (See also Probability(‐ies) with many conditions)
    • reputation damage example of, 145–151, 149f
    • of risk analysis models built by experts, 94–95
    • rules for, 144–145
    • strategies for, 138–140, 139t
    • uninformative, 95, 215
  • Delphi technique, 179
  • Descriptive analytics, 254
  • De Wilde, Lisa, 311
  • Dieharder tests, 61
  • Dietvorst, Berkeley, 118
  • Dimensions, 262–268, 263f, 265f, 266f, 268t, 269t
  • Dimensional modeling:
  • Discrete distribution, 156, 157f
  • Distributions:
  • Drill‐across cases, 265
  • Dunning‐Kruger effect, 121
  • Duplicate pair method, 90–92, 92f, 212
  • Einstein, Albert, 22
  • Empirical Bayes, 249–252, 250f–252f
  • Encrypting data, 217
  • Endpoint security effectiveness, 253
  • Enterprise risk management (ERM), 285–286
  • EOL (expected opportunity loss), 219
  • Equifax cyberattack, 147–150, 149t
  • Equivalent bet test, 163
  • ERM (enterprise risk management), 285–286
  • Escape rates, 250–252, 250f–252f
  • Estimation methods. See also specific methods, e.g.: Bayesian methods
    • algorithms vs. experts in synthesizing, 79–84
    • experts' subjective judgments, 52–53
    • Laplace's contributions to, 37
    • simple, building on, 39–40
  • Evans, Dylan, 114–115
  • Evidence, 77
    • anecdotal, 126–127
    • proof vs., 187
  • EVPI (expected value of perfect information), 219
  • Excel, 71
    • data tables in, 61–63, 62t, 316, 316f
    • deriving conditional probabilities in, 221
    • distributions in, 289, 290, 292, 293
    • generating random events and impacts in, 57–61, 59f
    • inverse probability function in, 195, 224–225
    • larger datasets and simulations with, 263
    • and Poisson distribution, 224–225
    • simulating distributions in, 224–225
    • simulation‐to‐simulation in, 63–64
    • templates in, 43, 56–57
  • Expected losses:
    • annual (see Annual expected losses (AEL))
    • controlling overconfidence in estimating, 165–166
    • from cyberattacks, 323–325
    • in rapid risk model, 47, 50
    • using ranges and probabilities to represent, 55–64
  • Expected opportunity loss (EOL), 219
  • Expected value of perfect information (EVPI), 219
  • Experts, 79–95. See also Subject matter experts (SMEs)
    • aggregating estimates of, 93–94, 179–180
    • biases of, 158
    • collaboration among, 93–94
    • comparing estimates of, 176–178
    • consistency of, 88–93, 91f, 92f, 210–212
    • decomposition of models based on, 94–95
    • improving performance of, 85–86
    • judgments of, in rapid risk audit, 52–53
    • misconceptions about statistics among, 34
    • mistakes made by, 60
    • overconfidence of, 87–88
    • performance of algorithms vs., 79–84
    • proclaimed expertise of, 77
    • on risk analysis methods, 73
    • subjective probability judgments of, 86–88
  • Expert Political Judgment (Tetlock), 81, 82
  • Exsupero Ursus fallacy, 117–118
    • forms of, 123–125
    • Target data breach as counter to, 126–127
  • Extremizing the average, 179
  • Facts (measures), 264
  • Factor Analysis of Information Risk (FAIR) framework, 70, 129
  • Fact table, 264, 267, 267f
  • The Failure of Risk Management (Hubbard), 105, 126, 285–286
  • FAIR (Factor Analysis of Information Risk) framework, 70
  • Feedback, 82–83
  • Feller, William, 170
  • Feynman, Richard P., 75
  • First American Financial Corporation cyberattack, 147–150, 149t
  • “Flaw of Averages in Cyber Security, The” (Savage), 314–318
  • Forbes Magazine, 7, 146
  • Fox, Craig, 109–110
  • Freund, Jack, 70
  • Functional security metrics (FSMs), 235–237. See also BOOM metrics
  • Gamma distribution, 225, 227, 227f, 241
  • GDPR (General Data Protection Regulation), 150
  • Geer, Daniel E., Jr., xiii–xiv
  • Gelman, Andrew, 235
  • General Data Protection Regulation (GDPR), 150
  • Giga Information Group, 173–175, 174f
  • Girnius, Tomas, 323–325
  • Gladwell, Malcolm, 21
  • Global attack surface, defined, 10–11
  • Google, 263
  • Graves‐Morris, Peter, 115
  • Hacking passwords, 319–322, 321f
  • Hacktivists, goals of, 10
  • Hardening systems, 12
  • Hazards, 239
  • HBGary, 319
  • HDR, see Hubbard Decision Research
  • HDR PRNG, 60–61
  • Heuer, Richards J., Jr., 106, 111, 283
  • High validity environment, 84
  • Holland, Bo, 202
  • Home Depot security breach, 147, 150, 319
  • Howard, Ron, 27, 28, 135, 141–142, 146, 176
  • “How Catastrophe Modeling Can Be Applied to Cyber Risk” (Stransky), 323–325, 324f
  • How to Measure Anything (Hubbard), 1, 2, 21, 27, 33–35, 70, 105, 155–156, 159, 198, 215, 219
  • Hubbard, Douglas W.:
    • on analysis placebo, 75–76
    • calibration comments heard by, 167–168, 170
    • calibration experiments by, 172–176, 174f
    • Department of Veterans Affairs IT metrics, 29
    • on equivalent bet test, 163
    • on estimate reality checks, 178
    • The Failure of Risk Management, 105, 126, 285–286
    • on HDR PRNG, 61
    • How to Measure Anything, 1, 2, 21, 27, 33–35, 70, 105, 155–156, 159, 198, 215, 219
    • lens models used by, 210, 215
    • on measurement scales, 109
    • on range estimating, 166
    • on risk management methods, 110
    • risk matrix studies by, 114–115
    • on risk tolerance curves, 69, 70
    • on subjective expert judgments, 87, 88, 90
  • Hubbard Decision Research (HDR), 13, 103, 167–172, 202, 218, 223, 286
  • IDM (integrated decision management), 286–287
  • Illusion of communication, 109, 127, 128
  • Illusion of learning, 81
  • Impact(s):
    • assessing, 47
    • of business disruption, 51
    • of data compromise, 51
    • decomposing, 71, 135–138, 137t
    • generated in Excel, 57–61, 59f
    • of ransomware extortion, 50–51
    • in rapid risk model, 45
    • refining presentation of (see Decomposition)
  • Information:
    • confidentiality, integrity, and availability of, 136–137, 137t
    • estimating value of, 219–220, 220t
    • lack of, 124
    • mathematical definition of, 24
    • synthesized, for making estimates, 79
  • Information systems, attack surface of, 10
  • Informative decomposition, 143
  • Inherent risk curve, 65–68, 66t
  • Insider threats, 10
  • Insurance:
    • claims paid from, 8
    • for cybersecurity risk, 8–9
    • premiums paid for, 50
    • ransomware, baselines and insurance case example, 225–229, 227f
    • risk limitation by companies, 10
  • Integrated decision management (IDM), 286–287
  • Integrity of information, 136–137, 137t
  • Intelligence analysis, 106
  • Interarrival time, 245–250, 246f–249f
  • Intergovernmental Panel on Climate Change (IPCC), 106
  • International Organization for Standardization (ISO), 14, 104
  • Internet, 10–11
  • Interval measurement scales, 25, 110
  • Inverse probability functions, 195, 224–225
  • Investments probabilities, 76
  • IPCC (Intergovernmental Panel on Climate Change), 106
  • IRIS report, 224
  • ISO (International Organization for Standardization), 14, 104
  • Jaquith, Andrew, 236, 258
  • Jaynes, Edwin T., 159, 183, 193
  • JC Penney security breach, 147
  • Jeffreys, Harold, 159
  • Jones, Jack, ix–xi, 70, 129–131
  • Kahneman, Daniel, 32, 38, 82–84, 87, 157
  • Kent, Sherman, 106, 109
  • Kettering, Charles, 28
  • Key performance indicators (KPIs), 236, 247
  • Klein, Gary, 82–83
  • Kuypers, Marshall, 147
  • Laplace, Pierre‐Simon, 37–39, 155
  • Laplace's rule of succession (LRS), 37–39, 110, 203
  • Law of total probability, 186, 187
  • LB (lower bound), 48, 166
  • Learning:
    • experience resulting in, 82–83
    • high validity environment for, 84
  • Learning, illusion of, 81
  • LEC, see Loss exceedance curve
  • Lens method, 210–217, 216t, 256
  • Lie detection, probability example of, 76
  • Lie factor, 115
  • Likelihood:
  • Lindley, Dennis V., 184
  • Lloyd's of London, 7, 9
  • Logical modeling, 258. See also Dimensional modeling
  • Lognormal distribution, 58–60, 59f, 292–293, 292f
  • Log odds ratio (LOR), 205–210
    • for aggregating experts, 217–219
    • caveats on use of, 208–210
    • comparing lens method and, 212–217, 216t
  • Loss exceedance curve (LEC), 64–70
    • elements of, 64–66, 65f, 66f
    • inherent and residual, generating, 67–68, 67t
    • risk matrix compared to, 117
    • risk tolerance curve from, 69–70
  • Lower bound (LB), 48, 166
  • LRS, see Laplace's rule of succession
  • NASA, 81, 116
  • National Association of Insurance Commissioners (NAIC), 8, 223
  • National Institute of Standards and Technology (NIST), 14, 284–285
  • National Vulnerability Database, 284–285
  • Nation‐states, goals for cyberattacks by, 10
  • NIST (National Institute of Standards and Technology), 14, 284–285
  • Node probability table (NPT), 204–205, 204t
  • Nominal measurement scales, 25
  • Normal distribution, 58, 59f, 291–292, 291f
  • NotPetya malware, 9, 11
  • NPT (node probability table), 204–205, 204t
  • Nuclear Regulatory Commission (NRC), 85
  • One‐for‐one substitution:
    • decomposing model for, 136–140, 137t, 139t
    • probability assignments used in, 156, 157t
    • with random scenarios, 61, 62t
  • “On the Theory of Scales and Measurement” (Stevens), 25
  • Open group discussions, to build consensus, 179
  • Open Web Application Security Project (OWASP), 14, 15, 104, 113–114
  • Operational security metrics maturity model, 234–235, 234f
  • Oracle, 14
  • Ordinal measurement scales, 25, 26, 28, 104–105
    • beliefs about, 119, 122–123
    • in developing consensus, 127–128
    • with highly uncertain events, 193
    • with Open Web Application Security Project, 113–114
    • psychology of, 105–111
    • and range compression, 111–113, 112t
  • Organizational CSRM roles and responsibilities, 279–281, 279f
    • analytics technology, 280
    • in CSRM strategic charter, 277–279
    • program management, 280–281
    • quantitative risk analysis, 279–280
    • training and development, 280
  • Organized crime, goals for cyberattacks by, 10
  • Orion network management system, 9
  • Overconfidence, 158, 178
    • controlling, 165–167, 167t
    • of experts, 87–88
  • Over‐decomposition, avoiding, 142–144
  • Over‐dispersion, 248
  • OWASP, see Open Web Application Security Project
  • Palo Alto Networks 2022 Unit 42 Ransomware Threat Report, 50–51
  • Partition dependence, 132n7
  • “Password Hacking” (Mobley), 319–322, 321f
  • Pearl, Judea, 310
  • Penance projects, 150–151
  • People, calibrating, 172–176, 174f
  • People processes, modeling, 273–274, 274f
  • Personalities, in team collaboration, 179
  • Petya code, 9
  • PGM (probabilistic graphical models), 310–311
  • Phishing, 242–245, 242f–244f, 319
  • Pivot tables, 221
  • Poisson distribution, 224–225
  • Population proportion, 195
  • Power law distribution, 295, 295f
  • Practical estimates, 176–178
  • Practice estimates, 176–178
  • Prediction markets, 93
  • Predictive analytics, 234, 235, 254–255
  • Prescriptive analytics, 254–256, 287
  • Priors, 183, 189–190, 196, 200
  • PRNGs (pseudorandom number generators), 60–61, 225
  • Probabilistic graphical models (PGM), 310–311
  • Probabilistic risk assessment, 64, 103
  • Probability(‐ies):
  • Probability density function (pdf), 196, 197, 197f
  • Probability of exceedance, 64
  • Probability(‐ies) with many conditions, 203–219, 204f
    • comparing LOR and lens method, 212–217, 216t
    • lens method, 210–212
    • log odds ratio approach, 205–210
    • LOR for aggregating experts, 217–219
  • Program management, 280–281
  • Pseudorandom number generators (PRNGs), 60–61, 225
  • Psychological diagnosis probabilities, 76
  • Psychology of Intelligence Analysis (Heuer), 106
  • Psychology of scales, 105–111, 107f, 108t
  • PWC, 12
  • Rand() function, 57–58, 60–61, 63
  • Random events generation, 57–61, 59f
  • Ranges:
  • Range compression, 111–113, 112t
  • Rank reversal, 113
  • Ransomware:
    • impact of payments to, 50–51
    • Petya used in, 9
    • ransomware, baselines and insurance case example, 225–229, 227f
  • Ransomware CSRM decision analysis, 300–308
    • business model insights, 304–308, 305f, 306f, 308f
    • decision model function, 304
    • framing the decision, 301–302, 302f
    • further considerations in, 308
    • setting up, 303–304, 303f
  • Rapid risk audit, 43–70
    • adding detail to (see Decomposition)
    • basic threats in, 44, 45t
    • in decision support, 53–55
    • example of, 48t
    • expert judgment in, 52–53
    • initial sources of data for, 50–52
    • loss exceedance curve in, 64–70, 65f, 66f
    • Monte Carlo simulation in, 55–64, 59f, 62t
    • and other models, 70–71
    • risk tolerance curve in, 69–70
    • setup and terminology, 44–46
    • steps in, 46–50, 48t, 49t
  • Ratio measurement scales, 25, 104
  • Reference class, 38–39, 78
  • Regression methods:
    • compared to softer methods, 116
    • in lens method, 210–212
  • Reid, Thomas, 28
  • Reputation damage, 145–151, 149f
    • claiming lack of data with, 194
    • and penance projects, 150–151
    • and stock price, 147–150, 149f
  • Residual risk, measuring, 254
  • Residual risk curve, 65–68, 66t
  • Resilience, 7, 226, 227
  • Return on control (ROC), 53–55
  • Risk(s). See also Cybersecurity risk
    • conflation of computed risks and risk tolerance with risk matrix, 113
    • covered by insurance, 10
    • defined, 31
    • measurement of, defined, 31
    • moving across a barrier, 250
    • prioritizing spending on, 13
    • reports and surveys on, 12
    • residual, 253, 254
    • systemic, 7–9
  • Risk analysis:
  • Risk appetite, 66
  • Risk map, 104
  • Risk matrix, 14–16, 14f
    • as “best practice,” 103
    • concerns with use of, 104–117, 107f, 108t, 112t, 116f
    • conflation of computed risks and risk tolerance with, 113
    • model to replace (see Rapid risk audit)
    • substituting quantitative model for, 49t
  • Risk measurement obstacles, 7, 101–131
    • Algorithm Aversion fallacy, 118–119
    • and beliefs about feasibility of quantitative methods, 119–122, 120f, 120t
    • communication and consensus objections, 127–128
    • cybersecurity professionals' perspectives on, 101–103, 102t
    • Exsupero Ursus fallacy, 117–118
    • forms of Exsupero Ursus fallacy, 123–125
    • of risk matrix, 104–117, 107f, 108t, 112t, 116f
    • Target data breach as counter to Exsupero Ursus fallacy, 126–127
  • Risk tolerance, 65, 66, 66t, 69–70, 113
  • ROC (return on control), 53–55
  • RockYou, 320–321
  • Roenigk, Dale, 175–176
  • “Rule of five,” 35–36
  • Russell, Bertrand, 22
  • Russia, cyberattacks by, 9
  • Sample size, 35–39, 196–199
  • Savage, L. J., 159, 170
  • Savage, Sam, 221, 314–318
  • Scoring methods/systems, 13–16, 113–114
  • SDA (sparse data analytics), 235, 257
  • SDMs, see Security data marts
  • Security business intelligence:
    • addressing concerns about, 263–264
    • advanced data stealing threats use case, 269–272, 269f, 270t–272t, 272f
    • dimensional modeling for, 262–263, 263f
    • dimensional modeling overview, 264–269, 265f, 266f, 267t–269t
    • modeling people processes, 273–274, 274f
    • with modern data stack, 258–262, 259f–261f
  • Security data marts (SDMs):
  • Security event management/security management, 13
  • Security information and event management (SIEM), 253, 255
  • Security Metrics (Jaquith), 236, 258
  • Security metrics maturity, 233–256
    • BOOM metrics, 237–252 (See also BOOM metrics)
    • functional security metrics, 235–237 (See also BOOM metrics)
    • operational security metrics maturity model, 234–235, 234f
    • prescriptive analytics, 254–256
    • security data marts concept, 252–254
    • sparse data analytics, 235
  • Seiersen, Richard, 1
    • background of, 7
    • Cyber Superforecasting training session of, 225–226
    • on estimate reality checks, 178
    • lens models used by, 210, 215
    • The Metrics Manifesto, 233, 236, 237, 257
    • probability statement assessments by, 106, 107f
    • and Target security breach, 126
  • Shannon, Claude, 23–24
  • SIEM (security information and event management), 253, 255
  • Simmons, Joseph P., 118
  • Singer, Omer, 258
  • Size:
    • as baseline source, 223
    • sample, 35–39, 196–199
  • SMEs, see Subject matter experts
  • Snowflake, 258, 263
  • SolarWinds security breach, 9, 11, 147–150, 149t, 288
  • Sony cyberattack, 11
  • Sparse data analytics (SDA), 235, 257
  • Sports picks probabilities, 75, 76
  • Stability, 89, 93
  • Standards organizations, 14, 74, 284–285
  • Statistical literacy, 119–122, 120f, 120t
  • Statistical significance, 33–35
  • Stevens, Stanley Smith, 25
  • Stransky, Scott, 323–325
  • Subjective judgments/estimates:
    • experts' skill in, 52–53, 86–88 (See also Experts)
    • methods for improving, 176–180, 177t
    • of quantitative probabilities, 155
    • repeating, 178
  • Subjective probability, 52–53
  • Subjectivist interpretation, 27, 171–172
  • Subject matter experts (SMEs). See also Experts
    • aggregating estimates of, 217–219
    • comparing estimates of, 176–178
    • model improving on performance of, 210–212, 211f
    • reducing inconsistency of, 178
  • Surowiecki, James, 93
  • Survival analysis, 239, 254, 269, 270
  • Survival baselines, 239–240
  • System availability risk, 143–144
  • Systemic risk, 7–9
  • UB, see Upper bound
  • Ukrainian electrical grid attack, 288
  • Ulam, Stanislaw, 56
  • Uncertainty:
    • in decision making, 287
    • defined, 31
    • measurement of, 31
    • personal, 27, 28
    • possibility range of, 123–124
    • probabilities reflecting, 26–28 (See also Probability(‐ies))
    • quantifying, 24–25
  • Uncertainty reduction, 26
    • with Bayesian methods (see Bayesian methods)
    • decomposing ranges for (see Decomposition)
    • in mathematical definition of information, 24
    • measurement as, 26–28
    • statistical significance vs., 34
  • Underconfidence, 158
  • Uninformative decompositions, 95
  • US Department of Health and Human Services Breach Portal, 223
  • US Department of Veterans Affairs, 29, 168–169
  • US Office of Personnel Management, 319
  • Upper bound (UB):
    • estimating, 166
    • in expert estimates, 60
    • in rapid risk model, 48
  • Urn of mystery, 37–39, 198–201, 203, 223–225
  • Value of information, estimating, 219–220, 220t
  • Verbal scales, 106–108, 110
  • VERIS database, 224
  • Verizon Data Breach Investigations Report (DBIR), 61, 194–195, 198, 199, 202
  • Vivosecurity, Inc., 223
  • Voltaire, 7
  • Von Neumann, John, 56
  • Vulnerability:
  • Vulnerability management, 13–14
  • Vulnerability management systems, 253–254
  • Wait‐time, 245
  • Wait‐time baselines, 245–250, 246f–249f
  • The Wisdom of Crowds (Surowiecki), 93
  • “The Year of the Mega Data Breach,” 7