List of Tables

Chapter 1. Understanding collective intelligence

Table 1.1. Some of the ways to harness collective intelligence in your application

Table 1.2. Seven principles of Web 2.0 applications

Chapter 2. Learning from user interactions

Table 2.1. Summary of services that a typical application embedding intelligence contains

Table 2.2. Examples of user-profile attributes

Table 2.3. The many ways users provide valuable information through their interactions

Table 2.4. Dataset with small number of attributes

Table 2.5. Dataset with large number of attributes

Table 2.6. Sparsely populated dataset corresponding to term vectors

Table 2.7. Ratings data used in the example

Table 2.8. Dataset to describe photos

Table 2.9. Normalized dataset for the photos using raw ratings

Table 2.10. Item-to-item similarity using raw ratings

Table 2.11. Normalized rating vectors for each user

Table 2.12. User-to-user similarity table

Table 2.13. Normalized matrix for the correlation computation

Table 2.14. Correlation matrix for the items

Table 2.15. Normalized rating vectors for each user

Table 2.16. Correlation matrix for the users

Table 2.17. Normalized matrix for the adjusted cosine-based computation

Table 2.18. Similarity between items using correlation similarity

Table 2.19. Normalized rating vectors for each user

Table 2.20. Normalizing the vectors to unit length

Table 2.21. Adjusted cosine similarity matrix for the users

Table 2.22. Bookmarking data for analysis

Table 2.23. Adjusted cosine similarity matrix for the users

Table 2.24. Normalized dataset for finding related articles

Table 2.25. Related articles based on bookmarking

Chapter 3. Extracting intelligence from tags

Table 3.1. Raw data used in the example

Table 3.2. Normalized vector for the items

Table 3.3. Similarity matrix between the items

Table 3.4. Raw data for users

Table 3.5. The normalized metadata vector for the two users

Table 3.6. Similarity matrix between users and items

Table 3.7. Data used for the bookmarking example

Table 3.8. The result of the query to find other tags used by user 1

Table 3.9. Result of the query for other items that share a tag with a given item

Table 3.10. Data for the tag cloud in our example

Table 3.11. Bookmarking data for analysis

Chapter 4. Extracting intelligence from content

Table 4.1. The different content types

Table 4.2. Description of the tables used for persistence

Table 4.3. Uses of wikis

Table 4.4. Entities for message boards and groups

Table 4.5. Content type categorization

Chapter 5. Searching the blogosphere

Table 5.1. Description of the QueryParameters

Table 5.2. The different date formats returned by the different providers

Table 5.3. Query URLs for some blog-tracking providers

Table 5.4. Decomposing the query parameters across providers

Chapter 7. Data mining: process, toolkits, and standards

Table 7.1. Common terms used to describe attributes

Table 7.2. Summary of different kinds of data mining algorithms

Table 7.3. The key packages in WEKA

Table 7.4. The data associated with the WEKA API tutorial

Table 7.5. Key JDM packages

Table 7.6. Key subclasses for Model

Chapter 8. Building a text analysis toolkit

Table 8.1. Common terms used to describe attributes

Table 8.2. Available tokenizers from Lucene

Table 8.3. Available filters from Lucene

Table 8.4. Common Analyzer classes that are available in Lucene

Table 8.5. Some use cases for text analysis infrastructure

Chapter 10. Making predictions

Table 10.1. Raw data used in the example

Table 10.2. Data available when the user isn’t a high-net-worth individual

Table 10.3. Data available when the user is a high-net-worth individual

Table 10.4. Data when the user is a high-net-worth individual but isn't interested in watches

Table 10.5. Data when the user is a high-net-worth individual and is interested in watches

Table 10.6. Computing the probabilities

Table 10.7. Shortening the attribute descriptions

Table 10.8. The prediction table for our example

Table 10.9. The data used for regression

Table 10.10. The raw and the predicted values using linear regression

Chapter 11. Intelligent search

Table 11.1. Explanation of terms used for computing the relevance of a query to a document

Table 11.2. Description of the query classes

Table 11.3. Description of the filter classes

Table 11.4. Description of the HitCollector-related classes

Chapter 12. Building a recommendation engine

Table 12.1. Representing the user as an N-dimensional vector

Table 12.2. Ratings data used in the example

Table 12.3. Correlation matrix for the users

Table 12.4. NearestNeighborSearch classes in WEKA

Table 12.5. Classifiers in WEKA based on nearest-neighbor search

Table 12.6. Term-document matrix

Table 12.7. Sample data for iterative item-to-item algorithm

Table 12.8. Item-to-item matrix
