Index

Note: Page numbers followed by “b”, “f”, and “t” refer to boxes, figures, and tables, respectively

0-9

2-to-1 design, 59–61
80-20 specifications, 16–17
twelve principles, 15b

A

Agile, 6, 325–326
Agile data warehousing, 325–328
challenge of breadth, 19–22
challenge of depth, 22–26
cycles within, See Cycles
testing requirements, 288–292
warnings, 30–31
Agile methods, 4–8, 14
advantages, 18, 22t
Agile modeling, 24
Analytics, 4, 171
Application wireframes, 136–137
Architectural layers, 3–4, 19–20, 244
Architectural reserve, 64
Automated and continuous integration testing, See Testing
Automated testing, 294–297
Automation, need for, 292–293

B

Basis-of-estimate (BOE) cards, 53–54, 54t, 215–216, 298–299
Big bang approach, 8
Big design up-front strategy, 8, 13, 305
Bullpen, See Commons
Burn-up charts, 228, 331–332, 333f
Burndown charts, 81–82, 87–92
developers’ overtime work, 91–92
diagnosing problems with, 97–102
early hill to climb pattern, 98–99, 99f
measuring velocity, 92–93
perfect line, 88–89, 88f
persistent inflation pattern, 100–101, 101f
problems with, 89–91, 89f
shallow glide pattern, 99–100
team aggregate progress, 87–92
variations on, 94–96
Business-centric, 15–16
Business intelligence (BI), 3, 19–20, 40, 47, 176
Business modeling, 233
Business partner, 15–16
Business rules, 23
Business target model, 152f, 158, 234

C

Capability Maturity Model (CMM), 311–312
Categorized services model, 235–238
Caves and commons, 42
Chaos Reports, 104, 208, 210
Coders, 45, 87
Colocation of team members, 16, 72–73
Command and control approach, 8, 311–312
Commit line, 50–51
Commons, 42
Community demonstration, 67
Compliance, architectural, 63–65
Component testing, 330–331
Conceptual model, 37–39
Controlled chaos, 273
Corporate strategy, 155–157
Cross-method comparison projects, 333–334
Current estimates, 219, 230–231, 230f, 306–307
Cycle time, 334, 339
Cycles, in ADW
daily, 40
development, 39–40
release, 36–39, 38t
sprints, See Sprints
Cycles, in waterfall projects
disappointment, 8–12

D

Daily cycle, 39–40
Daily stand-up meetings, 76, 85–86
Dashboarding, 19
Data architect, 182, 186, 262–264
Data architect, responsibilities, 262–264
Data architecture, 186, 233–234
Data churn, avoiding, 269–272, 270f
Data cubes, 186
Data integration, 19–20, 180
Data schemas, 24–25, 186
Data topology chart, 316f
Data warehousing/business intelligence (DWBI), 3
Data warehousing reference architecture, 185–186
Database administrators (DBA), 267–268
Decision support, 4
Defects by iteration, 330–331, 331f
Developer stories for data integration, 178–180
agile practitioners and, 181–182
defining work units, 183f
format, 179–180
forming backlogs, 187–190
load revenue fact, 198
load sales channel, 196–197
need for, 176–178
in requirements management, 180
sample user story, 188f
secondary techniques, 195–205
column types, 200–201, 200f
columns, 198–200, 199f
sets of rows, 196–198, 197f
tables, 201–203, 202f
workshop, 182–185
Developers, 44–45
Development cycle, 39–40
Development iteration, 39
Development phase, 6–7, 55–65, See also Sprints
DILBERT’S test, 190–195, 191f
Dimensional model, 234
Disappointment cycle, waterfall method, 8–12
Discovery phase, See Release cycle
Domain model, 37–39
Done, definition of, 40–41
DWBI reference data architecture, 185–187, 187t

E

Earned-value reporting, 319–325
Elaboration phase, See Release cycle
Epics, See User stories
Estimating bash, 210
Estimation
accuracy, 227–228
agile estimation, 215–219
causes of inaccurate forecasts, 208–215
criteria for better approach, 213–215
ideal time, 223–227
labor-hour, 104–106
remaining labor estimates, 87–88
size-based, 207, 213–214, 226
story points, 223–227
traditional approaches, 209–213
twelve objectives, 214t
value points, 228–229
Estimation poker, 51, 219–222, 220f
Evolving target data model, 297–300
Extract, transform, and load (ETL), 19–20, 84
Extreme Programming (XP), 34

F

Fact-qualifier matrix, 23
“Fail fast and fix quickly” strategy, 17–18, 255
Financial analysis, 156–157

G

Groupware, 112

H

Hypernormalization, 264, 298

I

Ideal time, See Estimation
Increments, 15
Initial project backlogs, 144–145
enterprise and project architects, 148–154, 153t
interview, 157–163
release cycle, 146–148, 147f
sample project, 145–146, 159t–160t
user role modeling, 154–155, 155f
Integrated quality assurance, 18
Integration testing, 286
INVEST criteria, See User stories
IT Infrastructure Library (ITIL), 312
Iteration backlog, 48
Iteration length, 73–74
Iterations, See Sprints
Iterative and incremental development, 14–19

J

Just in time, 16, 101
Just-in-time requirements, 122

K

Kanban, 335–336
advantages, 340–341

L

Lead time, 339
Leadership subteam, 266–267
Lightweight modeling, 137
Load modules, 176–177, 198
Logical data modeling, 234
Logical models, 23

M

Meta data, 315–316
Meta scrums, 315–316
Milestones, 318–319
Minimally marketable feature, 171

N

Net promoter score (NPS), 329

O

Online transaction processing (OLTP), 143
Organizational change management, 313

P

Perfect line, See Burndown charts
Personas, 135
Physical data modeling, 234
Physical models, 23
Pipelined delivery, 273–285
as buffer-based process, 283–284
delaying defect correction, 279–280, 279f, 281t
iterations –1 and 0, 276–278
resolving task board issues, 280–282
technique, 275, 275f
two-step user demos, 278–279
Plan-driven approach, 8, 13, 311–312
Prime directive, for sprint retrospectives, 68b
Product board, 137, 138f
Product owners, 43–44
Programming, accelerated, 59–62
Project architect, responsibilities, 256–262
Project backlog, 46, 48, 66–67, 73, 81, 101, 117, 123, 126, 130, 143–174, 175, 182, 183, 184, 207, 228, 229, 230, 232, 254, 318, 325, 332, 334
Project manager
improved role for, 45–46
as scrum master, 46–47
Project plans, 229–232
Project segmentation, 232–234
Pull-based approach, 335–339, 341–343

Q

Quality of estimates, 330

R

Refactoring, 298
Refactoring databases, 299t
categories for, 300t
converting Type 1 to Type 2 dimension, 301t
sample, 299t
verbs and nouns for, 299t
Regression testing, 286
Release backlog, 48
Release cycle
construction phase, 36–37
discovery and elaboration phase, 36–37
inception phase, 36–37
transition phase, 36–37
Release planning, 218
Remaining labor estimates, See Estimation
Remote teammates, managing, 106–112
Requirements
gathering, 118–122
identifying, 124–125
management, 124, 125f
nonfunctional, 262, 263t
Requirements bash, 118
Requirements traceability matrix (RTM), 119
Resident resources, 267–269, 268f
Retrospectives, See Sprints
Return on investment (ROI), 228–229
Rework, 246–247
Role modeling, 134–135
Roles and responsibilities, 257t–260t
data architect, 262–264
product owner, 43–44
project architect, 256–262, 263t
scrum master, 44
systems analyst, 264–265
systems tester, 265–266
team leaders, 267–269, 268t

S

Scaling agile, 309–325
application complexity, 310–311
compliance requirements, 311–312
geographical distribution, 311
IT governance, 312
organization distribution, 313–314
organizational culture, 312–313
team size, 311
Scaling factors, 310t
Scope creep, 96–97
Scrum, generic, 5–8, 10, 30–31, 34, 40
adaptations, 20
advantages, 5–6
history, 77–78
plain vanilla, 33–34, 143
time boxes, 41–42
Scrum (daily meeting), 57–59
Scrum masters, 31, 44
responsibilities, 44
Scrum of scrums, 315–318
Scrum team, 303–309
automatic and continuous integration testing, 307–309
developer stories and current estimates, 306–307
managed development data and test-driven development, 307
pipelined delivery, 306
pull-based collaboration, 309
time box and story points, 305–306
Scrumban, 283–284, 335–336, 343
Scrummerfall, 273, 273f
Segmentation techniques
categorized services model, 243–245
star schema, 238–240
tiered integration model, 240–243
Self-organizing teams, 16, 56–57
Service level agreement (SLA), 339
Shippable code, 40–41, 286
Single-pass efforts, See Waterfall methods
Source-to-target mappings, 265
Sprint, extending, 102–104
Sprint 0, See Sprints, nonstandard
Sprints, 39
development phase, 55–65
retrospectives, 67–72
story conference, 50–52
task planning, 52–55
time boxes, 41–42, 73–74
user demos, 65–67
Sprints, nonstandard, 74–77
architectural, 75
hardening, 76
implementation, 75–76
spikes, 76
sprint 0, 74–77
Standish Group, 9, 78, 104, 208
Star schema, 235
Story cards, 51, 123
Story conference, See Sprints
Story point distribution, 334, 335f
Story points, 6, 48, 224
estimating, 48–50, 49f
vs. ideal time, 219, 223, 224t
Stretch goals, 51–52
Summary goals, 138
Systems analyst, responsibilities, 264–265
Systems tester, responsibilities, 265–266

T

Task boards, 82–87, 83f
integrating team efforts, 85–86
monitoring, 86–87
quality assurance, 84–85
Task cards, 82, 96
Task planning, See Sprints
Tech debt, 63–65, 95–96
Test-led development, 62–63, 62t
Themes, See User stories
Tiered data model, 236f, 243
Tiered integration model, 235
Time-boxed development, 41–42
Time boxes, See Sprints

U

Unit testing, 195
Use cases, 120–121
User demos, See Sprints
User goals, 138–139, 139f
User requirements, See User stories
User roles, 135
User stories, 6, 117
acceptance criteria, 140–141
advantages of, 123–124
and backlogs, 6, 47–48
epics and themes, 128–130, 131t
focus on understanding “who, what, and why”, 134–140
INVEST criteria for quality, 126–127
keeping simple, 132–133
prioritization, 172–173
reverse story components, 134
techniques for writing, 130–141
uncertainty, 133–134

V

Vision box, 136, 136f
Visiting resources, 267–268, 268f

W

War room, See Commons
Warehouse test engines, requirements for, 293–294
Waterfall methods, 35–36
as a mistake, 12–14
single-pass efforts, 211
Waterscrum, 273f, 274
“Why isn’t somebody coding yet?!” (WISCY), 120
Work breakdown structure (WBS), 36, 211
Work-in-progress limits, 309