Chapter 5. Collection Types

You can’t go very far in Python without encountering collection types. Collection types store a grouping of data, such as a list of users or a lookup from restaurant to address. Whereas other types (ints, floats, bools) focus on a single value, collections may store any arbitrary amount of data. In Python, you will encounter common collection types such as dictionaries, lists, and sets (oh, my!). Even a string is a type of collection; it contains a sequence of characters. However, collections can be difficult to reason about when reading new code. Different collection types have different behaviors.

Back in Chapter 1, I went over some of the differences between the collections, where I talked about mutability, iterability, and indexing requirements. However, picking the right collection is just the first step. You must understand the implications of your collection choice and ensure that users can reason about it. You also need to recognize when the standard collection types aren’t cutting it and you need to roll your own. But the first step is knowing how to communicate your collection choices to the future. For that, I’ll turn to my old friend: type annotations.

Annotating Collections

I’ve covered type annotations for non-collection types, and now you need to know how to annotate collection types. Fortunately, these annotations don’t differ too much from the annotations you’ve already learned.

To illustrate this, suppose I’m building a digital cookbook app. I want to organize all my cookbooks digitally so I can search them by cuisine, ingredient, or author. One of the questions I might have about a cookbook collection is how many books from each author I have:

from collections import defaultdict

def count_authors(cookbooks: list) -> dict:
    counter = defaultdict(lambda: 0)
    for book in cookbooks:
        counter[book.author] += 1
    return counter

This function has been annotated; it takes in a list of cookbooks and returns a dictionary. Unfortunately, while this tells me what collections to expect, it doesn’t tell me how to use the collections at all. There is nothing telling me what the elements inside the collection are. For instance, how do I know what type each cookbook is? If you were reviewing this code, how would you know that the use of book.author is legitimate? Even if you do the digging to make sure book.author is right, this code is not future-proof. If the underlying type changes, such as removing the author field, this code will break. I need a way to catch this with my typechecker.

I’ll do this by encoding more information with my types by using bracket syntax to indicate information about the types inside the collection.

AuthorToCountMapping = dict[str, int]
def count_authors(cookbooks: list[Cookbook]) -> AuthorToCountMapping:
    counter = defaultdict(lambda: 0)
    for book in cookbooks:
        counter[book.author] += 1
    return counter
Warning

In Python 3.8 and earlier, built-in collection types such as list, dict, and set did not support this bracket syntax, such as list[Cookbook] or dict[str, int]. Instead, you needed to use type annotations from the typing module:

from typing import Dict, List
AuthorToCountMapping = Dict[str, int]
def count_authors(cookbooks: List[Cookbook]) -> AuthorToCountMapping:
    # ...

I can indicate the exact types expected in the collection. The cookbooks list contains Cookbook objects, and the function returns a dictionary mapping strings (keys) to integers (values). Note that I’m using a type alias to give more meaning to my return value. Mapping from a string to an int does not tell the user the context of the type. Instead, I create a type alias named AuthorToCountMapping to make it clear how this dictionary relates to the problem domain.
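To see why this matters, suppose Cookbook is defined elsewhere in the codebase (a hypothetical definition, just for illustration):

from dataclasses import dataclass

@dataclass
class Cookbook:
    author: str
    title: str

If the author field were ever removed or renamed, every annotated use of book.author would now fail typechecking (mypy reports an error along the lines of "Cookbook" has no attribute "author") instead of breaking at runtime.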

You need to think through what types are contained in a collection in order to type hint it effectively. In order to do that, you need to think about homogeneous and heterogeneous collections.

Homogeneous vs. Heterogeneous Collections

Homogeneous collections are collections where every value has the same type. In contrast, heterogeneous collections have values that may be of different types. From a usability standpoint, your lists, sets, and dictionaries should nearly always be homogeneous. Users need a way to reason about your collections, and they can’t if they don’t have the guarantee that every value is the same type. If you make a list, set, or dictionary a heterogeneous collection, you are indicating to the user that they need to take care to handle special cases. Suppose I want to resurrect an example from Chapter 1 for adjusting recipes for my cookbook app.

# Take a meal recipe and change the number of servings
# by adjusting each ingredient
# A recipe's first element is the number of servings, and the remainder
# of elements is (name, amount, unit), such as ("flour", 1.5, "cup")
def adjust_recipe(recipe, servings):
    new_recipe = [servings]
    old_servings = recipe[0]
    factor = servings / old_servings
    recipe.pop(0)
    while recipe:
        ingredient, amount, unit = recipe.pop(0)
        # please only use numbers that will be easily measurable
        new_recipe.append((ingredient, amount * factor, unit))
    return new_recipe

At the time, I mentioned how parts of this code were ugly; one of the things that made it tough to work with was the fact that the first element of the recipe list was a special case: an integer representing the servings. This contrasts with the rest of the list elements, which are tuples representing actual ingredients, such as ("flour", 1.5, "cup"). This highlights the trouble with a heterogeneous collection. For every use of your collection, the user needs to remember to handle the special case. And that is predicated on the assumption that the developer even knew about the special case in the first place. There’s no way in the type system to represent that a specific element needs to be handled differently, so a typechecker will not catch it when a developer forgets. This leads to brittle code down the road.

When talking about homogeneity, it’s important to talk about what a single type means. When I mention single type, I’m not necessarily referring to a concrete type in Python; rather, I’m referring to a set of behaviors that define that type. A single type indicates that a consumer must operate on every value of that type in the exact same way. For the cookbook list, the single type is a Cookbook. For the dictionary example, the key’s single type is a string and the value’s single type is an integer. For heterogeneous collections, this will not always be the case. What do you do if you must have different types in your collection and there is no relation between them?

Consider what my ugly code from Chapter 1 communicates:

# Take a meal recipe and change the number of servings
# by adjusting each ingredient
# A recipe's first element is the number of servings, and the remainder
# of elements is (name, amount, unit), such as ("flour", 1.5, "cup")
def adjust_recipe(recipe, servings):
    # ...

There is a lot of information in the comment, but comments have no guarantee of being correct. They also won’t protect developers who accidentally break assumptions. This code does not communicate intention adequately to future collaborators, who won’t be able to reason about your code. The last thing you want is to burden them with having to go through the codebase, looking for invocations and implementations to work out how to use your collection. Ultimately, you need a way to reconcile the first element (an integer) with the remainder of the elements (which are tuples). To do this, I’ll use a Union (and some type aliases to make the code more readable).

from typing import Union

Ingredient = tuple[str, float, str]  # (name, quantity, units)
Recipe = list[Union[int, Ingredient]]  # the list can contain servings or ingredients
def adjust_recipe(recipe: Recipe, servings):
    # ...

This takes a heterogeneous collection (items could be an integer or an ingredient) and allows developers to reason about the collection as if it were homogeneous. The developer needs to treat every single value the same: it is either an integer or an Ingredient, and they must check which one before operating on it. While this requires more code to handle the typechecks, you can rest easier knowing that your typechecker will catch users who don’t check for special cases. Bear in mind, this is not perfect by any means; it’d be better if there were no special case in the first place and servings were passed to the function another way. But for the cases where you absolutely must handle special cases, represent them as a type so that the typechecker benefits you.
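For example, here is a minimal sketch (assuming the Recipe alias above) of what a consumer looks like; the typechecker will not let elements be used until they are narrowed with isinstance:

def adjust_recipe(recipe: Recipe, servings: int) -> Recipe:
    old_servings = recipe[0]
    # Narrow the Union before doing arithmetic; without this check,
    # a typechecker flags the division below
    assert isinstance(old_servings, int)
    factor = servings / old_servings
    new_recipe: Recipe = [servings]
    for item in recipe[1:]:
        # Every remaining element must also be narrowed before unpacking
        assert isinstance(item, tuple)
        name, amount, units = item
        new_recipe.append((name, amount * factor, units))
    return new_recipe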

This can go too far, though. The more special cases of types you handle, the more code a developer has to write every time they use that type, and the more unwieldy the codebase becomes.

At the far end of the spectrum lies the Any type. Any can be used to indicate that all types are valid in this context. This sounds appealing to get around special cases, but it also means that the consumers of your collection have no clue what to do with the values in the collection, defeating the purpose of type annotations in the first place.

Warning

Developers working in a statically typed language don’t need to put in as much care to ensure collections are homogeneous; the static type system does that for them already. The challenge in Python is due to Python’s dynamically typed nature. It is much easier for a developer to create a heterogeneous collection without any warnings from the language itself.

Heterogeneous collection types still have a lot of uses; don’t assume that you should use homogeneity for every collection type because it is easier to reason about. Tuples, for example, are often heterogeneous.

Suppose that I represent a Cookbook as a tuple.

Cookbook = tuple[str, int] # name, page count

I am describing specific fields for this tuple: name and page count. This is a prime example of a heterogeneous collection:

  • Each field (name and page count) will always be in the same order.

  • All names are strings; all page counts are integers.

  • Iterating over the tuple is rare, since I won’t treat both types the same way.

  • Name and page count are fundamentally different types and should not be treated as equivalent.

When accessing a tuple, you will typically index to the specific field you want:

food_lab: Cookbook = ("The Food Lab", 958)
odd_bits: Cookbook = ("Odd Bits", 248)

print(food_lab[0])
>>> The Food Lab

print(odd_bits[1])
>>> 248

However, in many codebases, tuples like these soon become burdensome. Developers tire of writing cookbook[0] whenever they want a name. A better thing to do would be to find some way to name these fields. A first choice might be a dictionary.

food_lab = {
    "name": "The Food Lab",
    "page_count": 958
}

Now, they can refer to fields as food_lab['name'] and food_lab['page_count']. The problem is, dictionaries are typically meant to be a homogeneous mapping from keys to values. When dictionaries are instead used to represent heterogeneous data, you run into problems similar to those above when writing a valid type annotation. If I try to use the type system to represent this dictionary, I end up with the following:

def print_cookbook(cookbook: dict[str, Union[str, int]]):
    # ...

This approach has the following problems:

  • Large dictionaries may have many different types of values. Writing a Union is quite cumbersome.

  • It is tedious for a user to handle every case on every dictionary access. (Since I indicate that the dictionary is homogeneous, I convey to developers that they need to treat every value as the same type, meaning typechecks on every value access; see the sketch after this list.) I know that the name is always a string and the page_count is always an int, but a consumer of this type would not know that.

  • Developers do not have any indication what keys are available in the dictionary. They must search all the code from dictionary creation time to the current access to see what fields have been added.

  • As the dictionary grows, developers have a tendency to use Any as the type of the value. Using Any defeats the purpose of the typechecker in this case.
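To make the second point concrete, here is a minimal sketch of the narrowing that every consumer is forced to write:

def print_cookbook(cookbook: dict[str, Union[str, int]]):
    for key, value in cookbook.items():
        # Every access must check the Union, even though I know that
        # "name" is always a string and "page_count" is always an int
        if isinstance(value, str):
            print(f"{key}: {value}")
        else:
            print(f"{key}: {value} pages")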

Note

Any can be used for valid type annotations; it merely indicates that you are making zero assumptions about what the type is. For instance, if you wanted to copy a list, the type signature would be def copy(coll: list[Any]) -> list[Any]. Of course, you could also write def copy(coll: list) -> list, and it means the same thing.

These problems all stem from storing heterogeneous data in homogeneous data collections. You either pass the burden onto the caller or abandon type annotations completely. In some cases, you want the caller to explicitly check each type on each value access, but in other cases this is overcomplicated and tedious. So, how can you explain your reasoning with heterogeneous types, especially in cases where keeping data in a dictionary is natural, such as API interactions or user-configurable data? For these cases, you should use a TypedDict.

TypedDict

TypedDict, introduced in Python 3.8, is for the scenarios where you absolutely must store heterogeneous data in a dictionary. These are typically scenarios where you can’t avoid heterogeneous data: JSON APIs, YAML, TOML, XML, and CSVs all have easy-to-use Python modules that convert these data formats into dictionaries, and those dictionaries are naturally heterogeneous. This means the data that gets returned has all the same problems listed in the previous section. Your typechecker won’t help out much, and users won’t know what keys and values are available.

Tip

If you have full control of the dictionary, meaning you create it in code you own and handle it in code you own, you should consider using a dataclass or a class instead.

For example, suppose I want to augment my digital cookbook app to provide nutritional information for the recipes listed. I decide to use the Spoonacular API1 and write some code to get nutritional information:

nutrition_information = get_nutrition_from_spoonacular(recipe_name)
# print grams of fat in recipe
print(nutrition_information["fat"]["value"])

If you were reviewing the code, how would you know that this code is right? If you wanted to also print out the calories, how do you access the data? What guarantees do you have about the fields inside of this dictionary? To answer these questions, you have two options:

  • Look up the API documentation (if any) and confirm that the right fields are being used. In this option, you hope that the documentation is actually complete and correct.

  • Run the code and print out the returned dictionary. In this option, you hope that test responses are nearly identical to production responses.

The problem is that you are requiring every reader, reviewer and maintainer to do one of these two steps in order to understand the code. If they don’t, you will not get good code review feedback and developers will run the risk of using the response incorrectly. This leads to incorrect assumptions and brittle code. TypedDict allows you to encode what you’ve learned about that API directly into your type system.

from typing import TypedDict
class Range(TypedDict):
    min: float
    max: float

class NutritionInformation(TypedDict):
    value: int
    unit: str
    confidenceRange95Percent: Range
    standardDeviation: float

class RecipeNutritionInformation(TypedDict):
    recipes_used: int
    calories: NutritionInformation
    fat: NutritionInformation
    protein: NutritionInformation
    carbs: NutritionInformation

nutrition_information: RecipeNutritionInformation = get_nutrition_from_spoonacular(recipe_name)

Now it is incredibly apparent exactly what data types you can rely upon. If the API ever changes, a developer can update all the TypedDict classes and let the typechecker catch any incongruities. Your typechecker now completely understands your dictionary, and readers of your code can reason about responses without having to do any external searching. Even better, these TypedDict collections can be arbitrarily complex. You’ll see that I nested TypedDict instances for reusability purposes, but you can also embed your own custom types, Unions, and Optionals to reflect the possibilities that an API can return. And while I’ve mostly been talking about APIs, remember that these benefits apply to any heterogeneous dictionary, such as when reading JSON or YAML.
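For instance, if the API can also return an error payload, that possibility can be expressed with a Union (a hypothetical sketch; the real Spoonacular error shape may differ):

from typing import TypedDict, Union

class APIError(TypedDict):
    status: int
    message: str

# Consumers must now handle both shapes, and the typechecker enforces it
SpoonacularResponse = Union[RecipeNutritionInformation, APIError]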

Note

TypedDict is only for the typechecker’s benefit. There is no runtime validation at all; the runtime type is just a dictionary.
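A quick sketch to drive that home: the following assignment is flagged by a typechecker, but it runs without complaint because the value is an ordinary dict at runtime.

bad_range: Range = {"min": "zero", "max": 8.0}  # typechecker error, but runs fine
print(type(bad_range))  # <class 'dict'>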

So far, I’ve been teaching you how to deal with built-in collection types: lists, sets, and dictionaries for homogeneous collections, and tuples and TypedDict for heterogeneous collections. What if these types don’t do everything that you want? What if you want to create new collections for your own use? To do that, you’ll need a new set of tools.

Creating New Collections

When writing a new collection, you should ask yourself: Are you trying to write a new collection that isn’t representable by another collection, or are you trying to modify an existing collection to provide some new behavior? Depending on the answer, you may need to employ different techniques to achieve your goal.

If you write a collection type that isn’t representable by another collection type, you are bound to come across generics at some point.

Generics

A generic type indicates that you don’t care what type you are using. However, it helps restrict users from mixing types where inappropriate.

Consider the innocuous reverse list function:

def reverse(coll: list) -> list:
    return coll[::-1]

This annotation tells me that lists go in and lists come out, but nothing relates the two: nothing stops a reversed list of integers from being treated as a list of strings. I want to communicate that the returned list contains elements of the same type as the list passed in. To achieve this, I use a generic, which is done with a TypeVar in Python.

from typing import TypeVar
T = TypeVar('T')
def reverse(coll: list[T]) -> list[T]:
    return coll[::-1]

This says that for a type T, reverse takes in a list of elements of type T and returns a list of elements of type T. I can’t mix types: the input and output lists are guaranteed to share an element type, so a list of integers can never be treated as a list of strings.
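A small sketch of what the typechecker now enforces:

numbers: list[int] = [1, 2, 3]
reversed_numbers = reverse(numbers)   # inferred as list[int]
words: list[str] = reverse(numbers)   # typechecker error: list[int] is not list[str]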

I can use this sort of pattern to define entire classes. Suppose I want to integrate a cookbook recommender service into the cookbook collection app. I want to be able to recommend cookbooks or recipes based on a user’s ratings. To do this, I want to store this information in a graph. A graph is a data structure that contains a series of entities known as nodes and tracks relationships between those nodes, known as edges. However, I don’t want to write separate code for a cookbook graph and a recipe graph, so I define a graph class that can be used with generic types. In my example, I’ll use T for my node type and W for my edges.

from collections import defaultdict
from typing import Generic, TypeVar

T = TypeVar("T")
W = TypeVar("W")

# directed graph
class Graph(Generic[T, W]):
    def __init__(self):
        self.edges: dict[T, list[W]] = defaultdict(list)

    def add_relation(self, node: T, to: W):
        self.edges[node].append(to)

    def get_relations(self, node: T) -> list[W]:
        return self.edges[node]

With this code, I can define all sorts of graphs and still have them typecheck successfully.

cookbooks: Graph[Cookbook, Cookbook] = Graph()
recipes: Graph[Recipe, Recipe] = Graph()

cookbook_recipes: Graph[Cookbook, Recipe] = Graph()

recipes.add_relation(Recipe('Pasta Bolognese'),
                     Recipe('Pasta with Sausage and Basil'))

cookbook_recipes.add_relation(Cookbook('The Food Lab'),
                              Recipe('Pasta Bolognese'))

While this code does not typecheck:

cookbooks.add_relation(Recipe('Cheeseburger'), Recipe('Hamburger'))
code_examples/chapter5/invalid/graph.py:25: error: Argument 1 to "add_relation" of "Graph" has incompatible type "Recipe"; expected "Cookbook"

Using generics can help you write collections that use types consistently throughout their lifetime. This reduces the amount of duplication in your codebase, which minimizes the chances of bugs and reduces cognitive burden.

Modifying Existing Types

Generics are nice for creating your own collection types, but what if you just want to tweak some behavior of an existing collection, such as a list or dictionary? Having to completely rewrite all the semantics of a collection would be tedious and error-prone. Thankfully, methods exist to make this a snap. Let’s go back to the cookbook app. I’ve written code earlier that grabs nutrition information, but now I want to store all that nutrition information in a dictionary. However, I hit a problem: the same ingredient has very different names depending on where you’re from. Take a dark leafy green common in salads: while a U.S. chef might call it “arugula,” a European might call it “rocket.” This doesn’t even begin to cover the names in languages other than English. To combat this, I want to create a dictionary-like object that automatically handles these aliases:

nutrition = NutritionalInformation()
nutrition["arugula"] = get_nutrition_information("arugula")
print(nutrition["rocket"]) # arugula == rocket

So how can I write NutritionalInformation to act like a dict?

A lot of developers’ first instinct is to subclass dict. No worries if you aren’t familiar with subclassing; I’ll be going much more in depth in a later chapter. For now, just treat subclassing as a way of saying “I want my subclass to behave exactly like the parent class.”

class NutritionalInformation(dict): 1
    def __getitem__(self, key): 2
        try:
            return super().__getitem__(key) 3
        except KeyError:
            pass
        for alias in get_aliases(key):
            try: 4
                return super().__getitem__(alias)
            except KeyError:
                pass
        raise KeyError(f"Could not find {key} or any of its aliases") 5
1. The (dict) syntax indicates that we are subclassing from dict.

2. __getitem__ is what gets called when you use brackets to look up a key in a dictionary: nutrition["rocket"] calls __getitem__(nutrition, "rocket").

3. First, try the parent dictionary’s key lookup.

4. For every alias, check whether it is in the dictionary.

5. Raise a KeyError if no key is found, either with what’s passed in or any of its aliases.

We are overriding the __getitem__ function, and this works!

If I try to access nutrition["rocket"] in that snippet above, I get the same nutritional information as nutrition["arugula"]. Huzzah! So you deploy it in production and call it a day.

But (and there’s always a but), as time goes on, a developer comes to you and complains that sometimes, the dictionary doesn’t work. You spend some time debugging, and it always works for you. You look for race conditions, threading, API tomfoolery or any other nondeterminism, and come up with absolutely zero potential bugs. Finally, you get some time where you can sit with the other developer and see what they are doing.

And sitting at their terminal is the following line:

# arugula == rocket
nutrition = NutritionalInformation()
nutrition["arugula"] = get_nutrition_information("arugula")
print(nutrition.get("rocket", "No Ingredient Found"))

The get function on a dictionary tries to look up the key and, if it’s not found, returns the second argument (in this case, “No Ingredient Found”). And whenever this line is executed, you see just that: “No Ingredient Found”. Herein lies the problem: when subclassing dict and overriding methods, you have no guarantee that those methods are called by every other method of the dictionary. Built-in collection types are built with performance in mind; many of their methods use inlined code to go fast. This means that an overridden method such as __getitem__ will not be called by most other dictionary methods, including get. This certainly violates the Law of Least Surprise, which we talked about in Chapter 1.
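You can see this for yourself with a toy subclass (a minimal demonstration of CPython’s behavior):

class LoudDict(dict):
    def __getitem__(self, key):
        print(f"Looking up {key}")
        return super().__getitem__(key)

d = LoudDict(a=1)
d["a"]       # prints "Looking up a"
d.get("a")   # prints nothing; get() never calls our __getitem__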

Note

It is okay to subclass from the built-in collection if you are only adding methods, but because future modifications may make this same mistake, I still prefer to use one of the other methods of building custom collections.

So overriding dict is out. Instead, I’ll use types from the collections module. For this case, there is a handy type called UserDict. UserDict fits the exact use case that I need: I can subclass from UserDict, override key methods, and get the behavior I expect.

from collections import UserDict
class NutritionalInformation(UserDict):
    def __getitem__(self, key):
        try:
            return self.data[key]
        except KeyError:
            pass
        for alias in get_aliases(key):
            try:
                return self.data[alias]
            except KeyError:
                pass
        raise KeyError(f"Could not find {key} or any of its aliases")

This fits your use case exactly. You subclass from UserDict instead of dict, and then use self.data to access the underlying dictionary.

You go run your teammate’s code again:

# arugula == rocket
print(nutrition.get("rocket", "No Ingredient Found"))

And you get the nutrition information for arugula.

UserDict isn’t the only collection type that you can override in this way. There is also a UserString and a UserList in the collections module. Anytime you want to tweak a dictionary, string, or list, these are the collections you want to use.

Warning

Inheriting from these classes does incur a performance cost. Built-in collections make some assumptions in order to achieve performance. With UserDict, UserString, and UserList, methods can’t be inlined, since you might override them. If you need to use these constructs in performance-critical code, make sure you benchmark and measure your code to find potential problems.
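If you are unsure whether the cost matters for your workload, a quick timeit comparison is a reasonable first measurement (a sketch; absolute numbers will vary by machine):

import timeit
from collections import UserDict

class WrappedDict(UserDict):
    pass

plain = dict(a=1)
wrapped = WrappedDict(a=1)

# Compare raw lookup speed of a built-in dict against a UserDict
print(timeit.timeit(lambda: plain["a"]))
print(timeit.timeit(lambda: wrapped["a"]))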

You’ll notice that I talked about dictionaries, lists, and strings above but left out one big built-in: sets. There is no UserSet in the collections module, so I’ll have to select a different abstraction. More specifically, I need abstract base classes, found in collections.abc.

As Easy as ABC

Abstract base classes in the collections.abc module provide another grouping of classes that you can subclass to create your own collections. An abstract base class (ABC) is a class that is intended to be subclassed and requires the subclass to implement very specific functions. The ABCs in collections.abc are all centered on custom collections. In order to create a custom collection, you must override specific functions, depending on the type you want to emulate. You can find a full list of the required functions in the collections.abc module documentation.2 Once you implement these required functions, the ABC fills in the other functions automatically.

Note

In contrast to the User* classes, there is no built-in storage, such as self.data, inside these classes. You must provide your own storage.

Let’s look at collections.abc.Set, since there is no UserSet in collections. I want to create a custom set that automatically handles aliases of ingredients (such as rocket and arugula). In order to create this custom set, I need to implement three methods:

  • __contains__: This is for membership checks: "arugula" in ingredients.

  • __iter__: This is for iterating: for ingredient in ingredients.

  • __len__: This is for checking the length: len(ingredients).

Once these three methods are defined, methods like relational operations, equality operations, and set operations (union, intersection, difference, disjoint) will just work. That’s the beauty of collections.abc: once you define a select few methods, the rest come for free. Here it is in action:

import collections.abc

class AliasedIngredients(collections.abc.Set):
    def __init__(self, ingredients: set[str]):
        self.ingredients = ingredients

    def __contains__(self, value: str):
        return (value in self.ingredients or
                any(alias in self.ingredients
                    for alias in get_aliases(value)))

    def __iter__(self):
        return iter(self.ingredients)

    def __len__(self):
        return len(self.ingredients)

ingredients = AliasedIngredients({'arugula', 'eggplant', 'pepper'})
for ingredient in ingredients:
    print(ingredient)
>>> arugula
eggplant
pepper

print(len(ingredients))
>>> 3

print('arugula' in ingredients)
>>> True

print('rocket' in ingredients)
>>> True

list(ingredients | AliasedIngredients({'garlic'}))
>>> ['pepper', 'arugula', 'eggplant', 'garlic']

That’s not the only cool thing about collections.abc, though. Using collections.abc in type annotations can help you write more generic code. Take this code from all the way back in Chapter 2:

def print_items(items):
    for item in items:
        print(item)

print_items([1, 2, 3])
print_items({4, 5, 6})
print_items({"A": 1, "B": 2, "C": 3})

I talked about how duck typing can be both a boon and a curse for robust code. It’s great that I can write a single function that can take so many different types, but communicating intent through type annotations becomes challenging. Fortunately, I can use the collections.abc classes to provide type hints:

def print_items(items: collections.abc.Iterable):
    for item in items:
        print(item)

In this case, I am indicating that items just needs to be iterable, through the Iterable ABC. As long as the argument supports an __iter__ method (and most collections do), this code will typecheck.
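A brief sketch of what the typechecker now accepts and rejects:

print_items([1, 2, 3])    # OK: lists are Iterable
print_items("hello")      # OK: strings are Iterable too
print_items(5)            # typechecker error: int is not Iterable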

As of Python 3.9, there are 25 different ABCs for you to use. Check them all out in the Python documentation.3

Wrap-up

You can’t go far in Python without running into collections. Lists, dictionaries, and sets are commonplace, and it’s imperative that you provide hints to the future about what collection types you’re working with. Consider whether your collections are homogeneous or heterogeneous, and what that tells future readers. For the cases where you do use heterogeneous collections, provide enough information for other developers to reason about them, such as a TypedDict. Once you learn the techniques that allow other developers to reason about your collections, your codebase becomes much more understandable.

Always think through your options when creating new collections:

  • If you are just extending a type, such as adding new methods, you can subclass directly from a built-in collection such as list or dict. However, beware the rough edges, as there is some surprising Python behavior if a user ever overrides a built-in method.

  • If you are looking to change out a small part of a list, dictionary, or string, use collections.UserList, collections.UserDict, or collections.UserString, respectively. Remember to reference self.data to access the storage of the respective type.

  • If you need to write a more complicated class with the interface of another collection type, use collections.abc. You will need to provide your own storage for the data inside the class and implement all required methods, but once you do, you can customize that collection to your heart’s content.

Discussion Topic

Look through your use of collections and generics in your codebase, and assess how much information is conveyed to future developers. How many custom collection types are in your codebase? What can a new developer tell about the collection types by just looking at type signatures and names? Are there collections you could be defining more generically? What about other types using generics?

Now, type annotations don’t reach their full potential without the aid of a typechecker. In the next chapter, I’m going to focus on the typechecker itself. You’ll learn how to effectively configure a typechecker, generate reports, and evaluate different checkers. The more you know about a tool, the more effectively you can wield it. This is especially true for your typechecker.
