A project – discovering association rules in software project tags

In 1997, the website Freshmeat was created as a directory that tracked free, libre, and open source software (FLOSS) projects. In 2011, the site was renamed Freecode. After sales, acquisitions, and several site redesigns, all updates to the Freecode site were discontinued in 2014. The site remains online, but it is no longer being updated and no new projects are being added to the directory. Freecode now serves as a snapshot of facts about FLOSS projects during the late 1990s and 2000s. These facts about each software project include its name, its description, the URL to download the software, tags that describe its features, a numeric representation of its popularity, and so on.

As part of my FLOSSmole project, I have catalogued data from Freshmeat/Freecode since 2005. Freshmeat/Freecode provided periodic RDF downloads describing each project on the site. I downloaded these, parsed out the project data, organized it into database tables, and provided basic visualizations of the data. For our purposes here, we can use this data to answer a question about which project tags are most frequently found together on FLOSS projects. To do this, we will find frequent itemsets from the project tags and then generate association rules from them. A sample frequent itemset will take the form {GPL, Linux, C}. A sample association rule might be GPL, Linux -> C [s=.60, c=.90, av=.15].
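Before diving into the real dataset, here is a minimal, self-contained sketch of how the three measures in that sample rule (support, confidence, and added value) are computed. The five mini-baskets below are invented for illustration only; they are not Freecode data:

```python
# Toy illustration of support, confidence, and added value.
# These five mini-baskets are made up for demonstration purposes.
baskets = [
    {"GPL", "Linux", "C"},
    {"GPL", "Linux", "C"},
    {"GPL", "Linux"},
    {"GPL", "C"},
    {"Linux", "C"},
]

def support(itemset):
    """Fraction of baskets that contain every item in itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

# Rule: GPL, Linux -> C
lhs = {"GPL", "Linux"}
rhs = {"C"}
s = support(lhs | rhs)                 # support of the whole rule
c = support(lhs | rhs) / support(lhs)  # confidence
av = c - support(rhs)                  # added value

print(round(s, 2), round(c, 2), round(av, 2))
```

Note that added value can be negative, as it is here: C appears in so many baskets on its own that the left-hand side actually predicts it slightly worse than chance.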

To get started, log in to your MySQL server, select a database to use for this project (mine is called test), and create a database table to hold the master list of projects and their tags:

CREATE TABLE IF NOT EXISTS fc_project_tags (
  project_id int(11) NOT NULL DEFAULT '0',
  tag_name varchar(50) NOT NULL DEFAULT '0',
  PRIMARY KEY (project_id, tag_name)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

In this dataset, each project is identified by a number, assigned by the Freecode site itself, and a list of tags, given by the person who added the project to the directory. For example, project number 8 has been given the tags GPL, multimedia, and Sound/Audio.

To populate this table, use the data file available on the GitHub site for this book, which is https://github.com/megansquire/masteringDM. This specific file is located in the chapter 2 directory at https://github.com/megansquire/masteringDM/blob/master/ch2/fc_project_tags.sql.gz.

To load this into your MySQL database from the command line, unzip the file into your working directory, then log in to your MySQL server, use the correct database, and issue a source command to run all the INSERT statements in that file. The process is as follows:

mysql> use test;
Database changed
mysql> source fc_project_tags.sql;

Note

Each project is only identified by its number in this chapter project. However, if you want to find out more detail about the individual projects, or use this data for another project, all the Freshmeat/Freecode data is freely available on the FLOSSmole website in the following directory: http://flossdata.syr.edu/data/fc/. The data dump that we are using for this chapter is from March 2014, and in the FLOSSmole system that dataset has been given the number 8079. To keep things simple, you will not see that number in the examples for this chapter.

To get started in answering our question (Which tags are most frequently found together?), we first need to explore the data a little bit. First, we can discover the total number of project-tag combinations, keeping in mind that a project can have multiple tags:

SELECT COUNT(*)
FROM fc_project_tags;

353400

Next, we can calculate the total number of projects. In terms of association rule terminology, we can think of a Freecode project as a shopping basket or a transaction, and each project tag is equivalent to an item in a shopping basket:

SELECT count(DISTINCT project_id)
FROM fc_project_tags;
46510

How many unique items are in our dataset?

SELECT count(DISTINCT tag_name)
FROM fc_project_tags;
11006

So there are 46,510 baskets and 11,006 items. To reduce the number of possible association rules, we can count how many projects have each tag (how many baskets include each item), and prune the tags that are very rare. The following table shows the number of projects required to reach each possible support threshold:

Support rate for tag    Number of projects needed
50%                     23,255
40%                     18,604
30%                     13,953
10%                     4,651
5%                      2,325

For example, by using a 5% threshold, we are able to reduce the possible set of items to 29, down from 11,006. This reduced set of tags will become our singletons. All the frequent doubletons will be based on these singletons, and the tripletons will in turn be built from those doubletons. Here is the SQL to generate the list of singletons, keeping a 5% minimum support threshold:

SELECT tag_name, COUNT(project_id)
FROM fc_project_tags
GROUP BY 1 
HAVING COUNT(project_id) >= 2325
ORDER BY 2 DESC;

The first few results are shown in the following table:

Tag name          Number of projects
GPL               21,182
POSIX             16,875
Linux             16,288
C                 10,292
OS Independent    10,180

Our program, the code for which can be found in the GitHub repository for this book at https://github.com/megansquire/masteringDM/tree/master/ch2, calculates the number of baskets and then uses the minimum support threshold percentage to find the singletons, as shown in the following code. MINSUPPORTPCT is a constant that you can set to whatever you like. It is set to 5 at the beginning:

import itertools
import pymysql

# set threshold as a percent 
# (for example, 5% of Freecode baskets is about 2325)
MINSUPPORTPCT = 5

allSingletonTags = []
allDoubletonTags = set()
doubletonSet = set()

# Open local database connection
db = pymysql.connect(host='localhost',
                     db='test',
                     user='megan',
                     passwd='',
                     port=3306,
                     charset='utf8mb4')
cursor = db.cursor()

Next we calculate the number of baskets as the number of projects in the database table:

queryBaskets = "SELECT count(DISTINCT project_id) FROM fc_project_tags;"
cursor.execute(queryBaskets)
baskets = cursor.fetchone()[0]

Using that number of baskets and our minimum support threshold set earlier, we can calculate the minimum number of baskets:

minsupport = baskets*(MINSUPPORTPCT/100)
print("Minimum support count:",minsupport,"(",MINSUPPORTPCT,"% of",baskets,")")

Now we can get a set of tags that meets our minimum support threshold:

cursor.execute("""SELECT DISTINCT tag_name
                  FROM fc_project_tags
                  GROUP BY 1
                  HAVING COUNT(project_id) >= %s
                  ORDER BY tag_name""", (minsupport,))
singletons = cursor.fetchall()

for singleton in singletons:
    allSingletonTags.append(singleton[0])

Next we use these frequent singletons to create our candidate doubletons. We encapsulate this task into a function called findDoubletons(). We will discuss the findDoubletons(), findTripletons(), and generateRules() functions later. The final line of our program closes the database connection when we are done with it:

findDoubletons()
findTripletons()
generateRules()
db.close()

As we discussed when outlining the Apriori strategy earlier, it is not practical to pre-populate the database with all possible candidate doubletons and then count them there, since there are so many possible pairs. Instead, we will generate candidate doubletons in memory, count their support, and only keep the ones that pass our support threshold. Just as with the preceding singleton counting, the support threshold remains at 5% (2,325 projects) for both doubletons and tripletons; the minsupport value calculated earlier holds this count. Additionally, we rely on the itertools.combinations() function to generate all possible combinations of size=2 from our allSingletonTags list. Finally, we add these frequent tags to a new set called allDoubletonTags, which we will use in our findTripletons() function, shown here:
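The candidate-generation step rests entirely on itertools.combinations(), so it is worth seeing in isolation. Here is a quick sketch using a few of the frequent singleton tags from the table above:

```python
import itertools

# A few of the frequent singleton tags from the table above
singletons = ["C", "GPL", "Linux", "POSIX"]

# Every unordered pair of singletons becomes one candidate doubleton
candidates = list(itertools.combinations(singletons, 2))
print(len(candidates))   # n*(n-1)/2 pairs: 4*3/2 = 6
print(candidates[:3])
```

With the real list of 29 singletons, this produces 29*28/2 = 406 candidate pairs, a far more manageable number than the roughly 60 million pairs possible from all 11,006 tags.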

def findDoubletons():
    print("======")
    print("Frequent doubletons found:")
    print("======")
    # use the list of allSingletonTags to make the doubleton candidates
    doubletonCandidates = list(itertools.combinations(allSingletonTags, 2))
    for (index, candidate) in enumerate(doubletonCandidates):
        # figure out if this doubleton candidate is frequent
        tag1 = candidate[0]
        tag2 = candidate[1]
        cursor.execute("""SELECT count(fpt1.project_id)
                          FROM fc_project_tags fpt1
                          INNER JOIN fc_project_tags fpt2
                          ON fpt1.project_id = fpt2.project_id
                          WHERE fpt1.tag_name = %s
                          AND fpt2.tag_name = %s""", (tag1, tag2))
        count = cursor.fetchone()[0]

        # add frequent doubleton to database                
        if count > minsupport:
            print (tag1,tag2,"[",count,"]")
            
            cursor.execute("""INSERT INTO fc_project_tag_pairs
                              (tag1, tag2, num_projs)
                              VALUES (%s,%s,%s)""", (tag1, tag2, count))
            
            # save the frequent doubleton to our final list
            doubletonSet.add(candidate)         
            # add terms to a set of all doubleton terms (no duplicates)
            allDoubletonTags.add(tag1)
            allDoubletonTags.add(tag2)

Our program writes the doubletons (and later, the tripletons) to two new database tables, but if you do not want to do this, you can remove the INSERT statements. The CREATE statements for these two tables are shown in the following code. These SQL statements can be found in the additionalQueries.sql file, downloadable from the GitHub site for this book, as referenced earlier:

CREATE TABLE IF NOT EXISTS fc_project_tag_pairs (
 tag1 varchar(255) NOT NULL,
 tag2 varchar(255) NOT NULL,
 num_projs int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS fc_project_tag_triples (
 tag1 varchar(255) NOT NULL,
 tag2 varchar(255) NOT NULL,
 tag3 varchar(255) NOT NULL,
 num_projs int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Once we have the list of doubletons, the program uses those to find the candidate tripletons. The findTripletons() function is similar to findDoubletons(), except that we must take into account the closure property. By this I mean that we cannot generate any candidate tripletons containing doubletons that are not themselves frequent. Just before we ended the findDoubletons() function, we saved every frequent doubleton into a set (called doubletonSet). Now we use the itertools.combinations() function to get all possible doubletons inside the candidate tripleton, and if any of those doubletons is not already in our set of frequent doubletons, we reject the tripleton.

This may seem a little bit confusing, so an example is in order. Suppose we have generated frequent doubletons as follows:

foo, bar
bar, baz

If we simply used all the items inside and created a candidate tripleton of foo, bar, baz, that tripleton would be invalid, since it contains a doubleton of {foo, baz}, which is not a frequent doubleton. Therefore, we need to only generate tripletons for which every possible doubleton inside is also frequent. The code to find tripletons is shown here:
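This pruning check can be sketched in isolation. The following is a minimal version of the same test that findTripletons() performs, applied to the foo/bar/baz example above:

```python
import itertools

# Frequent doubletons from the example, stored as sorted tuples
doubletonSet = {("bar", "foo"), ("bar", "baz")}

# Candidate tripleton built from the items in those doubletons
candidate = sorted(["foo", "bar", "baz"])

# The candidate survives only if every doubleton inside it is frequent
inside = itertools.combinations(candidate, 2)
rejected = any(tuple(sorted(pair)) not in doubletonSet for pair in inside)
print(rejected)   # ('baz', 'foo') is not a frequent doubleton, so: True
```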

def findTripletons():
    print("======")
    print("Frequent tripletons found:")
    print("======")
    # use the list of allDoubletonTags to make the tripleton candidates
    tripletonCandidates = list(itertools.combinations(allDoubletonTags,3))

    # sort each candidate tuple and add these to a new sorted candidate list    
    tripletonCandidatesSorted = []
    for tc in tripletonCandidates:
        tripletonCandidatesSorted.append(sorted(tc))
    
    # figure out if this tripleton candidate is frequent
    for (index, candidate) in enumerate(tripletonCandidatesSorted):          
        # all doubletons inside this 
        # tripleton candidate MUST also be frequent
        doubletonsInsideTripleton = list(itertools.combinations(candidate,2))
        tripletonCandidateRejected = 0
        for (index, doubleton) in enumerate(doubletonsInsideTripleton):
            if doubleton not in doubletonSet:
                tripletonCandidateRejected = 1
                break
        # add frequent tripleton to database
        if tripletonCandidateRejected == 0:
            cursor.execute("""SELECT count(fpt1.project_id)
                FROM fc_project_tags fpt1
                INNER JOIN fc_project_tags fpt2
                ON fpt1.project_id = fpt2.project_id
                INNER JOIN fc_project_tags fpt3
                ON fpt2.project_id = fpt3.project_id
                WHERE (fpt1.tag_name = %s
                AND fpt2.tag_name = %s
                AND fpt3.tag_name = %s)""", (candidate[0],
                                             candidate[1],
                                             candidate[2]))
            count = cursor.fetchone()[0]
            if count > minsupport:
                print (candidate[0],",",
                       candidate[1],",",
                       candidate[2],
                       "[",count,"]")
                cursor.execute("""INSERT INTO fc_project_tag_triples
                                  (tag1, tag2, tag3, num_projs)
                                  VALUES (%s,%s,%s,%s)""",
                               (candidate[0],
                                candidate[1],
                                candidate[2],
                                count))

When run against the Freecode dataset, our program produces 37 doubletons, which are shown in the following table, ordered alphabetically by tag name:

tag1               tag2                    num_projs
C                  GPL                     5543
C                  Linux                   5653
C                  POSIX                   6956
C++                GPL                     2914
C++                Linux                   3428
C++                POSIX                   3502
Communications     GPL                     2578
Dynamic Content    Internet                3173
Dynamic Content    Web                     3171
English            Linux                   2662
GPL                Internet                4038
GPL                Linux                   8038
GPL                Multimedia              2883
GPL                OS Independent          4405
GPL                PHP                     2376
GPL                POSIX                   10069
GPL                Software Development    3319
GPL                Web                     2901
GPL                Windows                 2605
Internet           OS Independent          3007
Internet           POSIX                   2832
Internet           Web                     5978
Java               OS Independent          3436
Java               Software Development    2360
Libraries          Software Development    5638
Linux              Mac OS X                2974
Linux              POSIX                   11903
Linux              Software Development    2336
Linux              Unix                    2494
Linux              Windows                 5281
Mac OS X           Windows                 3131
Multimedia         POSIX                   2539
OS Independent     Software Development    3566
OS Independent     Web                     2605
POSIX              Software Development    3503
POSIX              Unix                    2326
POSIX              Windows                 4467

The program also produces nine tripletons, as shown in the following table:

tag1               tag2               tag3       num_projs
Internet           OS Independent     Web        2519
GPL                Linux              POSIX      7384
C                  GPL                Linux      3299
C                  GPL                POSIX      4364
GPL                Internet           Web        2878
Dynamic Content    Internet           Web        3166
Linux              POSIX              Windows    3315
C++                Linux              POSIX      2622
C                  Linux              POSIX      4629

Once we have the frequent itemsets, we can begin to build association rules from them, assigning support and confidence to each. Here is the code for a rule generation routine that generates rules from tripletons. We start by generating rules with a single item on the right-hand side, due to the same closure properties as with generating frequent itemsets. In other words, if a rule like {vanilla wafers, bananas -> marshmallows} is not interesting, then there is no sense in measuring the other options that have marshmallows on the right-hand side, such as {vanilla wafers -> bananas, marshmallows}.

Finally, this code also prints the added value score for each rule, calculated by subtracting the support of the right-hand side from the confidence of the whole rule:
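As a sanity check on these formulas, the numbers for one rule can be reproduced by hand. Using counts from the tables above (46,510 baskets; 7,384 projects tagged with all of GPL, Linux, and POSIX; 8,038 tagged with both GPL and Linux; 16,875 tagged POSIX), the rule GPL, Linux -> POSIX works out as follows:

```python
baskets = 46510        # distinct Freecode projects
ruleSupport = 7384     # projects with all three tags: GPL, Linux, POSIX
pairSupport = 8038     # projects with both left-hand tags: GPL, Linux
supportPOSIX = 16875   # projects with the right-hand tag: POSIX

s = round(ruleSupport / baskets, 2)         # support of the whole rule
c = round(ruleSupport / pairSupport, 2)     # confidence of GPL, Linux -> POSIX
av = round(c - supportPOSIX / baskets, 2)   # added value

print("GPL , Linux -> POSIX [S=", s, ", C=", c, ", AV=", av, "]")
```

These values match the [S= 0.16 , C= 0.92 , AV= 0.56 ] line in the program output shown later.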

def generateRules():
    print("======")    
    print("Association Rules:")
    print("======")

    # pull final list of tripletons to make the rules
    cursor.execute("""SELECT tag1, tag2, tag3, num_projs
                      FROM fc_project_tag_triples""")
    triples = cursor.fetchall()
    for triple in triples:
        tag1 = triple[0]
        tag2 = triple[1]
        tag3 = triple[2]
        ruleSupport = triple[3]
        
        calcSCAV(tag1, tag2, tag3, ruleSupport)
        calcSCAV(tag1, tag3, tag2, ruleSupport)
        calcSCAV(tag2, tag3, tag1, ruleSupport)
        print("*")

def calcSCAV(tagA, tagB, tagC, ruleSupport):
    # Support
    ruleSupportPct = round((ruleSupport/baskets),2)

    # Confidence    
    query1 = """SELECT num_projs
                FROM fc_project_tag_pairs
                WHERE (tag1 = %s AND tag2 = %s)
                OR (tag2 = %s AND tag1 = %s)"""
    cursor.execute(query1, (tagA, tagB, tagA, tagB))
    pairSupport = cursor.fetchone()[0]
    confidence = round((ruleSupport / pairSupport),2)
    
    # Added Value
    query2 = """SELECT count(*)
                FROM fc_project_tags
                WHERE tag_name = %s"""
    cursor.execute(query2, (tagC,))
    supportTagC = cursor.fetchone()[0]
    supportTagCPct = supportTagC/baskets
    addedValue = round((confidence - supportTagCPct),2)
    
    # Result
    print(tagA,",",tagB,"->",tagC,
          "[S=",ruleSupportPct,
          ", C=",confidence,
          ", AV=",addedValue,
          "]")

The Freecode rules that are generated from this code are shown next. Since each tripleton can generate three rules, each having a single item on the right-hand side, we have divided these into groups of three lines each for display purposes:

C++ , Linux -> POSIX [S= 0.06 , C= 0.76 , AV= 0.4 ]
C++ , POSIX -> Linux [S= 0.06 , C= 0.75 , AV= 0.4 ]
Linux , POSIX -> C++ [S= 0.06 , C= 0.22 , AV= 0.09 ]

C , Linux -> POSIX [S= 0.1 , C= 0.82 , AV= 0.46 ]
C , POSIX -> Linux [S= 0.1 , C= 0.67 , AV= 0.32 ]
Linux , POSIX -> C [S= 0.1 , C= 0.39 , AV= 0.17 ]

GPL , Linux -> POSIX [S= 0.16 , C= 0.92 , AV= 0.56 ]
GPL , POSIX -> Linux [S= 0.16 , C= 0.73 , AV= 0.38 ]
Linux , POSIX -> GPL [S= 0.16 , C= 0.62 , AV= 0.16 ]

Linux , POSIX -> Windows [S= 0.07 , C= 0.28 , AV= 0.12 ]
Linux , Windows -> POSIX [S= 0.07 , C= 0.63 , AV= 0.27 ]
POSIX , Windows -> Linux [S= 0.07 , C= 0.74 , AV= 0.39 ]

C , GPL -> POSIX [S= 0.09 , C= 0.79 , AV= 0.43 ]
C , POSIX -> GPL [S= 0.09 , C= 0.63 , AV= 0.17 ]
GPL , POSIX -> C [S= 0.09 , C= 0.43 , AV= 0.21 ]

Dynamic Content , Internet -> Web [S= 0.07 , C= 1.0 , AV= 0.87 ]
Dynamic Content , Web -> Internet [S= 0.07 , C= 1.0 , AV= 0.83 ]
Internet , Web -> Dynamic Content [S= 0.07 , C= 0.53 , AV= 0.46 ]

Internet , OS Independent -> Web [S= 0.05 , C= 0.84 , AV= 0.71 ]
Internet , Web -> OS Independent [S= 0.05 , C= 0.42 , AV= 0.2 ]
OS Independent , Web -> Internet [S= 0.05 , C= 0.97 , AV= 0.8 ]

GPL , Internet -> Web [S= 0.06 , C= 0.71 , AV= 0.58 ]
GPL , Web -> Internet [S= 0.06 , C= 0.99 , AV= 0.82 ]
Internet , Web -> GPL [S= 0.06 , C= 0.48 , AV= 0.02 ]

C , GPL -> Linux [S= 0.07 , C= 0.6 , AV= 0.25 ]
C , Linux -> GPL [S= 0.07 , C= 0.58 , AV= 0.12 ]
GPL , Linux -> C [S= 0.07 , C= 0.41 , AV= 0.19 ]

Based on these results, how do we know which rules are interesting? Just looking at the support does not yield particularly interesting clues, since we specified that every rule had to have a 5% support in order to even be considered.

Confidence, combined with support, could be an interesting measure. For example, the rule {GPL , Linux -> POSIX} has the highest support (16%) and a confidence over 90%. At the other extreme, the rule {Linux , POSIX -> C++} has a support barely above the threshold (6%) and the lowest confidence in the list (22%).

Added value tells us how much better the association rule predicts the right-hand side of the equation than simply looking at the right-hand side by itself. This collection of rules does not have any outright negatively-correlated items, but it does have several rules with scores extremely close to zero, indicating that the right-hand side might do just as well on its own as it does as part of the shown rules. One example of a very low added value score is {Internet , Web -> GPL}, indicating that GPL would probably do just as well on its own, as it is a very high-scoring item even as a singleton. The rule {Linux , POSIX -> C++} also has a very low added value score, the second lowest in the list. This, along with its very low support and confidence scores, makes it one of the least valuable rules on the list.

Rules with high added value scores include {Dynamic Content , Internet -> Web} and {Dynamic Content , Web -> Internet}. These two rules are especially interesting because the third rule in the group, {Internet , Web -> Dynamic Content}, has an unremarkable added value score (.46). Next, we notice that all of the highest-scoring added value rules in this list have either Web or Internet on the right-hand side, with the other term appearing somewhere on the left. This shows that Web and Internet are very tightly coupled as terms in this dataset, but are not themselves as predictive of other terms as they are predictive of each other.

Finding this relationship means that we can probe more deeply into the relationship between Web and Internet. Specifically, we should look at the rules Web -> Internet and Internet -> Web. Since we helpfully stored the support counts in the database, we can use a query in SQL to figure out the support, confidence, and added value for these two rules as well:

SELECT
  round(num_projs /
        (SELECT count(DISTINCT project_id) FROM fc_project_tags), 2)
    AS 'support',
  round((num_projs / (SELECT count(DISTINCT project_id) FROM fc_project_tags))
      / ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Internet')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)), 2)
    AS 'conf I->W',
  round((num_projs / (SELECT count(DISTINCT project_id) FROM fc_project_tags))
      / ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Web')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)), 2)
    AS 'conf W->I',
  round(((num_projs / (SELECT count(DISTINCT project_id) FROM fc_project_tags))
      / ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Internet')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)))
      - ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Web')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)), 2)
    AS 'AV I->W',
  round(((num_projs / (SELECT count(DISTINCT project_id) FROM fc_project_tags))
      / ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Web')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)))
      - ((SELECT count(*) FROM fc_project_tags WHERE tag_name = 'Internet')
       / (SELECT count(DISTINCT project_id) FROM fc_project_tags)), 2)
    AS 'AV W->I'
FROM fc_project_tag_pairs
WHERE tag1 = 'Internet' AND tag2 = 'Web';

This yields the following result from SQL:

support  conf I->W  conf W->I  AV I->W  AV W->I
0.13     0.74       0.95       0.60     0.78

That SQL code is pretty hairy, so here is a little Python script that runs each individual query against the database and uses those numbers to calculate support, confidence, and added value. As before, fill in your database connection details, and set the variables X and Y to the two terms you are interested in comparing:

import pymysql

X = 'Internet'
Y = 'Web'

# Open local database connection
db = pymysql.connect(host='',
    db='',
    user='', 
    passwd='', 
    port=3306,
    charset='utf8mb4')
cursor = db.cursor()

# grab basic counts from the database that we need
numBasketsQuery = """SELECT count(DISTINCT project_id)
                     FROM fc_project_tags"""
cursor.execute(numBasketsQuery)
numBaskets = cursor.fetchone()[0]

supportForXQuery = """SELECT count(*)
                      FROM fc_project_tags
                      WHERE tag_name = %s"""
cursor.execute(supportForXQuery, (X,))
supportForX = cursor.fetchone()[0]

supportForYQuery = """SELECT count(*)
                      FROM fc_project_tags
                      WHERE tag_name = %s"""
cursor.execute(supportForYQuery, (Y,))
supportForY = cursor.fetchone()[0]

pairSupportQuery = """SELECT num_projs
                      FROM fc_project_tag_pairs
                      WHERE tag1 = %s AND tag2 = %s"""
cursor.execute(pairSupportQuery, (X, Y))
pairSupport = cursor.fetchone()[0]
 
# calculate support : support of pair, divided by num baskets
pairSupportAsPct = pairSupport / numBaskets

# calculate confidence of X->Y
supportForXAsPct = supportForX / numBaskets
confidenceXY = pairSupportAsPct / supportForXAsPct

# calculate confidence of Y->X
supportForYAsPct = supportForY / numBaskets
confidenceYX = pairSupportAsPct/ supportForYAsPct

# calculate added value X->Y
AVXY = confidenceXY - supportForYAsPct
AVYX = confidenceYX - supportForXAsPct

print("Support for ",X,"U",Y,":", round(pairSupportAsPct, 2))
print("Conf.",X,"->",Y,":", round(confidenceXY, 2))
print("Conf.",Y,"->",X,":", round(confidenceYX, 2))
print("AV",X,"->",Y,":", round(AVXY, 2))
print("AV",Y,"->",X,":", round(AVYX, 2))

db.close()

The results are the same for the pair of terms Internet and Web as they were for the longer SQL query, as shown here:

Support for Internet U Web : 0.13
Conf. Internet -> Web : 0.74
Conf. Web -> Internet : 0.95
AV Internet -> Web : 0.6
AV Web -> Internet : 0.78

Even if these results seem somewhat underwhelming - after all, it is not particularly shocking that Internet and Web are such closely related terms - there are a few important lessons we can learn from this process. First, the results can be used to make suggestions. If a person tags their project Web, maybe we would also like to suggest Internet as a related tag. Alternatively, perhaps we want to cross-promote Web projects to people looking at Internet projects, and vice versa. Unlike in a store, where we have to physically co-locate items next to each other, the cost of making recommendations or suggestions in a digital environment is not as high. In any case, finding frequent itemsets and generating association rules are useful tasks that can either confirm what we already suspect about our data, or help us understand underlying patterns in the data that we would not necessarily know about otherwise.
