Chapter 22

Ten (Or So) Modules You Can Add to SPSS

In This Chapter

  • Understanding what modules bring to SPSS
  • Deciding which modules to add to SPSS

IBM SPSS Statistics comes in the form of a base system, but you can acquire additional modules to add on to that system. If you’ve installed a full system, you may already have some of these add-ons. Most are integrated and look like integral parts of the base system. Some may be of no interest to you; others could become indispensable. Note that if you have a trial copy of SPSS, it likely has all the modules, including those that you might lose access to when you acquire your own copy. This chapter introduces you to the modules that can be added to SPSS and what they do; refer to the documentation that comes with each one for a full tutorial.

The Advanced Statistics Module

The following is a list of the statistical techniques that are part of the Advanced Statistics module.

  • General Linear Models (GLM)
  • Generalized Linear Models (GENLIN)
  • Linear Mixed Models
  • Generalized Estimating Equations (GEE) Procedures
  • Generalized Linear Mixed Models (GLMM)
  • Survival Analysis Procedures

These procedures are among the most advanced in SPSS, but some of them are quite popular. For instance, Hierarchical Linear Modeling (HLM), part of Linear Mixed Models, is popular in education research. HLM models are statistical models where parameters vary at more than one level. For instance, students vary and schools vary, and in an HLM model you have information at both levels.
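The multilevel idea can be shown with a toy example. The sketch below uses simplified, empirical-Bayes-style partial pooling, not the estimator SPSS actually uses, and the school names, scores, and prior-strength constant k are all made up:

```python
from statistics import mean

# Hypothetical test scores: students (level 1) nested within schools (level 2).
scores = {
    "North":  [72, 75, 71],
    "South":  [88, 90, 86, 89, 91, 87],
    "Harbor": [80],
}

grand = mean(s for school in scores.values() for s in school)

# Partial pooling: shrink each school's mean toward the grand mean.
# The weight n / (n + k) grows with sample size, so a school with few
# students borrows more strength from the overall average. k is a
# made-up prior-strength constant, not an SPSS parameter.
k = 2
for school, vals in scores.items():
    n = len(vals)
    w = n / (n + k)
    pooled = w * mean(vals) + (1 - w) * grand
    print(f"{school}: raw mean {mean(vals):.1f}, pooled estimate {pooled:.1f}")
```

Notice that Harbor, with a single student, is pulled strongly toward the overall average, which is exactly the "borrowing information across levels" that makes multilevel models attractive.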

The key point is that this whole module is about specialized techniques that you need to use if you don’t meet the assumptions of plain vanilla regression and analysis of variance (ANOVA). These techniques are more of an ANOVA flavor. Survival Analysis is so-called “time-to-event” modeling, like estimating time to death after diagnosis.

The Custom Tables Module

This has been the most popular module for years, and for good reason. If you need to squeeze a lot of information into a report, you need this module. For instance, if you do survey research and you want to report on the entire survey in tabular form, this module comes to your rescue. Picture your entire dataset summarized in an appendix. It isn’t merely a convenience. If you need this kind of summary, get this module.

The Regression Module

The following is a list of the statistical techniques that are part of the Regression module.

  • Multinomial and Binary Logistic Regression
  • Nonlinear Regression (NLR) and Constrained Nonlinear Regression (CNLR)
  • Weighted Least Squares Regression and Two-Stage Least Squares Regression
  • Probit Analysis

In some ways, this module is like the Advanced Statistics module: you use these techniques when you don’t meet the standard assumptions. The difference is that the techniques here are fancy variants of regression for situations where you can’t do Ordinary Least Squares regression. Binary Logistic Regression is especially popular. It’s used when your dependent variable has two categories — for example, stay or go (churn), buy or not buy, or get a disease or not get a disease.
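To make the churn example concrete, here is a minimal sketch of binary logistic regression fit by plain gradient ascent on a made-up dataset. SPSS uses a more sophisticated Newton-type algorithm; this only illustrates the model being fit:

```python
import math

# Made-up churn data: x = months since last purchase,
# y = 1 if the customer churned, 0 if the customer stayed.
data = [(1, 0), (2, 0), (3, 0), (4, 0), (8, 1), (9, 1), (10, 1), (12, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood.
b0 = b1 = 0.0
rate = 0.01
for _ in range(20000):
    g0 = g1 = 0.0
    for x, y in data:
        err = y - sigmoid(b0 + b1 * x)   # observed minus predicted probability
        g0 += err
        g1 += err * x
    b0 += rate * g0
    b1 += rate * g1

p = sigmoid(b0 + b1 * 11)   # predicted churn probability at 11 months
print(f"P(churn at 11 months) = {p:.2f}")
```

The fitted slope is positive, so the longer since the last purchase, the higher the predicted probability of churn.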

The Categories Module

The Categories module is designed to enable you to reveal relationships among your categorical data. To help you understand your data, the Categories module uses perceptual mapping, optimal scaling, preference scaling, and dimension reduction. Using these techniques, you can visually interpret the relationships among your rows and columns.

Categories performs its analysis and displays results so you can understand ordinal and nominal data. It uses procedures similar to conventional regression, principal components, and canonical correlation. It performs regression using nominal or ordinal categorical predictor or outcome variables.

The procedures of the Categories module make it possible to perform statistical operations on categorical data:

  • Using the scaling procedures, you can assign units of measurement and zero-points to your categorical data, which gives you access to new groups of statistical functions because you can analyze variables using mixed measurement levels.
  • Using correspondence analysis, you can numerically evaluate similarities among nominal variables and summarize your data according to components you select.
  • Using nonlinear canonical correlation analysis, you can collect variables of different measurement levels into sets of their own, and then analyze the sets.

You can use this module to produce a couple of very useful tools:

  • Perceptual maps: High-resolution summary charts that serve as graphic displays of similar variables or categories. They give you insights into relationships among more than two categorical variables.
  • Biplots: Summary charts that make it possible to look at the relationships among products, customers, and demographic characteristics.

The Data Preparation Module

Let’s face it: Data preparation is no fun. We’ll take all the help we can get with it. No module will eliminate all the work for the human in this human–computer partnership, but the Data Preparation module is designed to eliminate some of the routine, predictable aspects. It helps you process your rows and columns of data. For your rows of data, it helps you identify outliers that might distort your data. As for your variables, it helps you identify the best ones, and lets you know that you could improve some by transforming them. It also allows you to create special validation rules to speed up your data checks and avoid a lot of manual work. Finally, it helps you identify patterns in your missing data.
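A validation rule is, at heart, just a named check applied to every row. This sketch, using made-up survey rows and an IQR-based outlier definition (one common choice among several), illustrates the kind of checking the module automates:

```python
from statistics import quantiles

# Made-up survey rows: (respondent id, age, satisfaction on a 1-5 scale).
rows = [(1, 34, 4), (2, 29, 5), (3, 41, 3), (4, 299, 2), (5, 38, 9)]

# A validation rule is just a named predicate applied to every row.
rules = {
    "age in 18-99":        lambda r: 18 <= r[1] <= 99,
    "satisfaction in 1-5": lambda r: 1 <= r[2] <= 5,
}
violations = {name: [r[0] for r in rows if not ok(r)] for name, ok in rules.items()}
for name, ids in violations.items():
    print(f"rule '{name}' flagged respondent ids: {ids}")

# One common outlier definition: outside 1.5 IQRs of the quartiles.
ages = sorted(r[1] for r in rows)
q1, _, q3 = quantiles(ages, n=4, method="inclusive")
fence = 1.5 * (q3 - q1)
outliers = [a for a in ages if a < q1 - fence or a > q3 + fence]
print("age outliers:", outliers)
```

The payoff is the same as with the module: the obviously impossible values (an age of 299, a satisfaction of 9) get flagged mechanically instead of by eyeball.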

The Decision Trees Module

Decision trees are, by far, the most popular and well known of the data mining techniques. In fact, there are entire software products dedicated to this approach. If you aren’t sure if you need to do data mining, but you want to try it out, this would be just about the best way because you already know your way around SPSS Statistics. The Decision Trees module doesn’t quite have all the features of the decision trees in SPSS Modeler (which is a whole software package dedicated to data mining), but there is plenty here to give you a good start.

What are decision trees? Well, the whole idea is that you have something that you want to predict (the target variable), and lots of variables that could possibly help you do that, but you don’t know which ones are most important. SPSS indicates which variables are most important and how the variables interact, and helps you predict the target variable in the future.

SPSS supports four of the most popular algorithms for doing this: CHAID, Exhaustive CHAID, C&RT, and QUEST.
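To see what a tree algorithm does at each step, here is a sketch of a single split chosen by Gini impurity, the C&RT-style criterion (CHAID instead relies on chi-square tests). The records are invented:

```python
from collections import Counter

# Invented records: (predictors, target). Target: response to an offer.
records = [
    ({"region": "east", "owns_car": "yes"}, "respond"),
    ({"region": "east", "owns_car": "no"},  "respond"),
    ({"region": "east", "owns_car": "yes"}, "respond"),
    ({"region": "west", "owns_car": "no"},  "ignore"),
    ({"region": "west", "owns_car": "yes"}, "ignore"),
    ({"region": "west", "owns_car": "no"},  "respond"),
]

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(records, var):
    """Weighted impurity of the child nodes after splitting on var."""
    groups = {}
    for preds, target in records:
        groups.setdefault(preds[var], []).append(target)
    n = len(records)
    return sum(len(g) / n * gini(g) for g in groups.values())

# Choose the predictor whose split leaves the purest child nodes.
best = min(["region", "owns_car"], key=lambda v: split_impurity(records, v))
print("best first split:", best)
```

A full tree algorithm simply repeats this choice inside each child node until a stopping rule kicks in, which is how the tree reveals which variables matter most and how they interact.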

The Forecasting Module

You can use the Forecasting module to rapidly construct expert time-series forecasts. This module includes statistical algorithms you can use to analyze historical data and predict trends. You can set it up to analyze hundreds of different time series at once instead of running a separate procedure for each one.

The software is designed to handle the special situations that arise in trend analysis. It automatically determines the best-fitting autoregressive integrated moving average (ARIMA) or smoothing model. It automatically tests data for seasonality, intermittency, and missing values. The software detects outliers and prevents them from unduly influencing the results. The graphs generated include confidence intervals and indicate the model’s goodness of fit.
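The smoothing family can be illustrated with its simplest member, simple exponential smoothing. This hand-rolled sketch uses made-up sales figures and an arbitrary smoothing constant; it shows only the mechanism, whereas the module selects and tunes the model for you:

```python
# Made-up monthly sales figures.
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]

def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast: the level is repeatedly updated as a
    weighted blend of the newest observation and the previous level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # the forecast for the next period

forecast = simple_exponential_smoothing(series, alpha=0.3)
print(f"next-period forecast: {forecast:.1f}")
```

The smoothing constant alpha controls how much weight recent observations get: alpha near 1 just echoes the latest value, while alpha near 0 barely reacts to new data.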

As you gain experience at forecasting, the Forecasting module gives you more control over every parameter when you’re building your data model. You can use the Expert Modeler in the Forecasting module to recommend starting points or to check calculations you’ve done by hand.

Version 23 has an exciting new algorithm that is part of this module. It’s called Temporal Causal Modeling (TCM). This new algorithm attempts to discover key causal relationships in time series data by including only those inputs that have a causal relationship with the target. This differs from traditional time series modeling where you must explicitly specify the predictors for a target series.

The Missing Values Module

The Data Preparation module seems to have missing values covered, but the two modules are actually quite different. The Data Preparation module is really about finding data errors; its validation rules will tell you that a data point just isn’t right. On the other hand, the Missing Values module is focused on when there is no data value at all. It attempts to estimate the missing piece of information using other data that you do have. This process is called imputation, also known as replacing with an educated guess. All kinds of researchers, data miners, and statisticians can benefit, but if you’re a survey researcher, this is really bound to come in handy.
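Here is the idea of imputation in miniature: group-mean imputation on made-up income data. The module's actual methods (such as multiple imputation) are far richer, but the principle of estimating a gap from the data you do have is the same:

```python
from statistics import mean

# Made-up incomes, with None marking a missing value, grouped by region.
records = [
    ("east", 52000), ("east", None), ("east", 48000),
    ("west", 71000), ("west", 69000), ("west", None),
]

# Group-mean imputation: fill each gap with the mean of the observed
# values from the same group.
observed = {}
for region, income in records:
    if income is not None:
        observed.setdefault(region, []).append(income)
group_means = {g: mean(v) for g, v in observed.items()}

imputed = [(g, inc if inc is not None else group_means[g]) for g, inc in records]
print(imputed)
```

Each educated guess borrows from the group the missing case belongs to, so the filled-in east income looks like an east income, not a global average.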

The Bootstrapping Module

Hang on tight, we’re going to get a little technical. Bootstrapping is a technique that involves “resampling” with replacement. What that means is that the Bootstrapping module picks a case at random, makes note about it, replaces it, and picks another. In this way, it’s possible to pick a case more than once or not at all. The net result is another “version” of your data that is similar, but not identical. If you do this 1,000 times (which is the default), you can do some very powerful things indeed.
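The resampling loop just described can be sketched in a few lines. The data, seed, and percentile method here are illustrative; SPSS offers several ways to turn the 1,000 resampled statistics into a confidence interval:

```python
import random
from statistics import mean

random.seed(7)  # fixed seed so this sketch is reproducible

# A small made-up sample, e.g. customer satisfaction scores.
sample = [6.1, 7.4, 5.8, 8.0, 6.6, 7.1, 9.2, 5.5, 6.9, 7.7]

# Resample with replacement 1,000 times (the module's default) and
# record the mean of each resampled "version" of the data.
boot_means = []
for _ in range(1000):
    resample = random.choices(sample, k=len(sample))  # with replacement
    boot_means.append(mean(resample))

boot_means.sort()
low, high = boot_means[24], boot_means[974]  # middle 95% of the 1,000 means
print(f"sample mean {mean(sample):.2f}, 95% bootstrap CI ({low:.2f}, {high:.2f})")
```

Because the interval comes from the resampled means themselves, no distributional assumption about the population is needed.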

The Bootstrapping module allows you to build more stable models using your data by overcoming the effect of outliers and other problems in your data. Traditional statistics assumes that your data has a particular distribution, but this technique avoids that kind of assumption. The result is a more accurate sense of what’s going on in the population. It is, in a sense, a simple idea, but because it takes a lot of computer horsepower, it’s more popular now than when computers were slower.

Bootstrapping is a popular technique outside of SPSS, as well, so you can find articles on the web about the concept. The Bootstrapping module just lets you apply this powerful concept to your data in SPSS Statistics.

The Complex Samples Module

Sampling is a big part of statistics. A simple random sample is what we usually think of as a sample — like picking names out of a hat. The hat is our population, and the scraps of paper we pick belong to our sample. Each slip of paper has an equal chance of being picked. Research is often more complicated than that. The Complex Samples module is about more complicated forms of sampling: two stage, stratified, and so on.
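The contrast with a simple random sample is easy to sketch. In a stratified design, you sample each stratum separately, so every stratum is represented in exact proportion (a simple random sample might under- or over-represent a small stratum by chance). The population and strata below are invented:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Invented population of 1,000 people with a known stratum for each.
population = [("urban", i) for i in range(800)] + [("rural", i) for i in range(200)]

def stratified_sample(population, fraction):
    """Draw the same fraction from every stratum, without replacement."""
    strata = {}
    for stratum, unit in population:
        strata.setdefault(stratum, []).append((stratum, unit))
    picked = []
    for members in strata.values():
        picked.extend(random.sample(members, int(len(members) * fraction)))
    return picked

s = stratified_sample(population, 0.10)
rural = sum(1 for stratum, _ in s if stratum == "rural")
print(f"sample of {len(s)}: {rural} rural, {len(s) - rural} urban")
```

Designs like this change the standard errors of your estimates, which is exactly why the module has to take the design into account when calculating your statistics.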

Most often, survey researchers need this module, although many kinds of experimental researchers may benefit from it, too. It helps you design the data collection, and then takes the design into account when calculating your statistics. Nearly all statistics in SPSS are calculated with the assumption that the data is a simple random sample. Your calculations can be distorted when this assumption is not met.

The Conjoint Module

The Conjoint module provides a way for you to determine how each of your product’s attributes affects consumer preference. When you combine conjoint analysis with competitive market product research, it’s easier to zero in on product characteristics that are important to your customers.

With this research, you can determine which product attributes your customers care about, which ones they care about most, and how you can do useful studies of pricing and brand equity. And you can do all this before incurring the expense of bringing new products to market.

The Direct Marketing Module

This module is a little different from the others. It’s a bundle of related features in a wizardlike environment. It’s designed to be one-stop shopping for marketers. The main features are recency frequency monetary (RFM) analysis, cluster analysis, and profiling:

  • RFM analysis: RFM analysis reports how recently, how often, and how much your customers spend with your business. Obviously, customers who are currently active, spend a lot, and spend often are your best customers.
  • Cluster analysis: Cluster analysis is a way of segmenting your customers into different customer segments. Typically, you use this approach to match different marketing campaigns to different customers. For example, a cruise line may try different covers on the travel catalog going out to customers, with the adventurous types getting Alaska or Norway on the cover, and the umbrella-drink crowd getting pictures of the Caribbean.
  • Profiling: Helps you see which customer characteristics are associated with specific outcomes, so you can calculate the propensity that a particular customer will respond to a specific campaign. Virtually all these features can be found in other areas of SPSS, but the wizardlike environment of the Direct Marketing module makes it easy for marketing analysts who don’t happen to have extensive training in the underlying statistics to produce useful results.
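The RFM rollup itself is simple to sketch: reduce a transaction log to one recency, frequency, monetary triple per customer. The customers and transactions below are made up, and a real RFM analysis goes on to bin these triples into scores:

```python
from datetime import date

# Made-up transactions: (customer, purchase date, amount).
txns = [
    ("ann", date(2024, 6, 1), 120.0), ("ann", date(2024, 6, 20), 80.0),
    ("bob", date(2024, 1, 5), 40.0),
    ("cal", date(2024, 5, 15), 300.0), ("cal", date(2024, 6, 25), 250.0),
    ("cal", date(2024, 4, 2), 100.0),
]
today = date(2024, 7, 1)

# Roll transactions up to one (recency, frequency, monetary) row per customer.
rfm = {}
for cust, when, amount in txns:
    r, f, m = rfm.get(cust, (None, 0, 0.0))
    days = (today - when).days
    rfm[cust] = (days if r is None else min(r, days), f + 1, m + amount)

for cust, (r, f, m) in sorted(rfm.items()):
    print(f"{cust}: last purchase {r} days ago, {f} orders, ${m:.0f} total")
```

Even in this tiny log, the triples make the best customer obvious: cal bought recently, often, and spent the most.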

The Exact Tests Module

The Exact Tests module makes it possible to be more accurate in your analysis of small datasets and datasets that contain rare occurrences. It gives you the tools you need for analyzing such data conditions with more accuracy than would otherwise be possible.

When only a small sample size is available, you can use the Exact Tests module to analyze that smaller sample and have more confidence in the results. The idea is to perform more analyses in a shorter period of time: you can conduct more surveys rather than spend time gathering additional cases to enlarge the base of the surveys you already have.

The processes you use, and the forms of the results, are the same as those in the base SPSS system, but the internal algorithms are tuned to work with smaller datasets. The Exact Tests module provides more than 30 tests covering all the nonparametric and categorical tests you normally use for larger datasets. Included are one-sample, two-sample, and K-sample tests with independent or related samples, goodness-of-fit tests, tests of independence, and measures of association.
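As a taste of what "exact" means, here is Fisher's exact test for a 2x2 table, one of the classic exact procedures: instead of relying on a large-sample approximation, it sums the exact probabilities of every table as extreme as the observed one. The patient counts are invented:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(x):  # P(a table with x in the top-left cell)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    observed = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= observed + 1e-12)

# Tiny made-up study: 8 patients, treated vs. control, improved or not.
print(f"p = {fisher_exact_p(4, 0, 1, 3):.3f}")
```

With only eight patients, a chi-square approximation would be unreliable; the exact enumeration above is trustworthy at any sample size, which is the module's whole selling point.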

The Neural Networks Module

A neural net is a latticelike network of neuronlike nodes, set up within SPSS to act something like the neurons in a living brain. The connections between these nodes have associated weights (degrees of relative effect), which are adjustable. When you adjust the weight of a connection, the network is said to learn.

In the Neural Network module, a training algorithm iteratively adjusts the weights to closely match the actual relationships among the data. The idea is to minimize errors and maximize accurate predictions. The computational neural network has one layer of neurons for input, another for output, with one or more hidden layers between them. The neural network is combined with other statistical procedures to provide clearer insight.
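The weight-adjustment idea can be shown with the smallest possible network: a single neuron (a perceptron) learning the AND function. Real networks in the module have hidden layers and more sophisticated gradient-based training, but the "nudge the weights to reduce the error" loop is the same in spirit:

```python
# Training data for the AND function: ((inputs), target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]   # connection weights (integers keep the arithmetic exact)
bias = 0
rate = 1

for _ in range(20):                       # several passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out                # the learning signal
        w[0] += rate * err * x1           # nudge each weight in the
        w[1] += rate * err * x2           # direction that reduces error
        bias += rate * err

preds = [1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]
```

After a handful of passes the weights stop changing — the network has "learned" AND, in exactly the sense the paragraph above describes.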

Using the familiar SPSS interface, you can mine your data for relationships. After selecting a procedure, you specify the dependent variables, which may be any combination of scale and categorical types. To prepare for processing, you lay out the neural network architecture, including the computational resources you want to apply. To complete preparation, you choose what to do with the output:

  • List the results in tables.
  • Graphically display the results in charts.
  • Place the results in temporary variables in the dataset.
  • Export models in XML-formatted files.

Amos

Amos is an interactive interface you can use to build structural equation models. Not a true “module,” it’s standalone software with its own graphical user interface (GUI). Using the diagrams you create with Amos, you can uncover otherwise-hidden relationships and observe graphically how changes in certain values affect other values. You can create a model on nonnumeric data without having to assign numerical scores to the data. You can analyze censored data without having to make assumptions beyond normality.

Amos provides a more intuitive interface than plain SPSS for a certain family of problems. Its structural modeling software is controlled through a drag-and-drop interface, which makes it practical to build models that come closer to the real world than the standard multivariate statistical methods of SPSS allow. You set up your variables, and then you can perform analyses using hypothetical relationships.

Amos enables you to build models that more realistically reflect complex relationships, because observed variables (such as survey responses) and latent variables (such as “satisfaction”) can both be used to predict any other numeric variable. Structural equation modeling helps you gain additional insight into causal models and the strength of variable relationships.

The Sample Power Module

The Sample Power module was developed in conjunction with the late Jacob Cohen. Cohen was a contemporary statistics powerhouse, deservedly famous for his books and articles, and largely responsible for drawing more attention to Type II error. The idea is that our university training emphasizes avoiding Type I error to such a degree that we forget about the other kind of risk. Type I error is the risk of “crying wolf,” just like the old fable of “The Boy Who Cried Wolf.” When we commit Type I error, we claim that the effect of a variable is important, but it turns out that it isn’t a finding that will generalize to the population.

Type II error is the risk that there is an amazing finding awaiting us in the population, but our analysis of the sample data doesn’t reveal it. That’s pretty bad, too — claiming that there is no effect when there really is one. The Sample Power module allows us to accurately calculate that risk, and it may prompt us either to collect more data to avoid the risk, or maybe, just maybe, we figure out that we can get by with a little less data and we can save our organization money during the data collection phase.
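The trade-off behind that calculation can be sketched with the usual normal approximation for comparing two group means. The alpha, power, and effect-size values below are the textbook defaults, and dedicated software such as Sample Power refines this with the t distribution:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of
    means, using the normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # guards against Type I error
    z_beta = z(power)            # guards against Type II error
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) at the usual alpha and 80% power:
print(n_per_group(0.5))  # → 63
```

Halving the effect size you hope to detect roughly quadruples the sample you need, which is why running the numbers before collecting data can save real money.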

If you do survey research and have to go out into the world to collect your data, this module is one that you’ll want to consider. Even if you get your data through other means — the day-to-day running of the business, for instance — look up Jacob Cohen on the web and seek out his writing. One of our favorites is Things I Have Learned (So Far). Don’t obsess over 0.05 or forget about Type II error.

The Visualization Designer Module

The Visualization Designer module doesn’t get as much attention as it deserves. Even veteran SPSS users don’t seem to know that much about it. Graphboard Template Chooser is one of the graphing methods in SPSS, and this module is actually a sibling product to Graphboard in a sense.

If you want to create really fancy graphs in SPSS, you have two choices: Learn how to program Graphics Production Language (GPL) or use the Visualization Designer module. GPL isn’t really that bad, but for some folks, writing code just isn’t their thing. The Visualization Designer module allows you to create all kinds of graphics that aren’t possible otherwise. When you’re done, you can add the new “templates” to your copy of SPSS (and to your colleagues’ copies, too), and they show up as new chart types in the Graphboard Template Chooser.
