Chapter 1. World Of The Mainframe

In 1985, Stewart Alsop started the P.C. Letter, which quickly became a must-read for the rapidly growing tech industry. He would go on to create several conferences and would eventually become an editor for InfoWorld.

For the most part, Alsop had a knack for anticipating the next big trends. But he was not infallible. In 1991, he wrote the following: “I predict that the last mainframe will be unplugged on 15 March 1996.”

At the time, this prediction was not necessarily controversial. The king of the mainframes – IBM – was struggling against the onslaught of fast-growing companies like Dell, Compaq and Sun Microsystems. There was even buzz that the company could go bust.

But to paraphrase Mark Twain, the death of the mainframe was greatly exaggerated. This technology proved quite durable. In fact, in 2002 Alsop admitted his mistake and wrote: “It’s clear that corporate customers still like to have centrally controlled, very predictable, reliable computing systems – exactly the kind of systems that IBM specializes in.”

And this is the case today. Keep in mind that the mainframe is a growth business for IBM and is likely to be important for key trends like the hybrid cloud, ecommerce and even fintech. Since 2010, more than 250 companies have migrated their workloads to IBM Z systems.

The mainframe is also pervasive across the world. Consider that this technology is used by:

  • 92 of the world’s top 100 banks

  • All ten of the world’s top ten insurers

  • 18 of the top 25 retailers

  • 70% of Fortune 500 Companies

In this chapter, we’ll take a look at the mainframe: its history, its pros and cons, its capabilities, and its future.

What Does “Mainframe” Mean Anyway?

The first known use of the word “mainframe” was in 1964, though it is not clear who coined the term. It appeared in a glossary from Honeywell as well as in a paper for a company journal, authored by IBM engineer Gene Amdahl.

The concept came from the telecommunications industry, where “main frame” described the central frame of a telephone exchange where lines were interconnected.

In terms of a mainframe computer, the word originally described the CPU, or central processing unit, which connected to the peripherals. But it would also become synonymous with a large computer system that could handle huge amounts of data processing.

OK then, how is a mainframe different from a supercomputer? Well, a supercomputer is focused on scientific applications. These machines are also the most expensive in the world and process huge amounts of data. For example, the Fugaku supercomputer has over 7.6 million cores and can operate at 442 petaflops (a petaflop is one quadrillion floating point operations per second).

Mainframes, on the other hand, are usually designed for business purposes and are ideal for managing transactions at scale. In fact, a supercomputer has only a fraction of a mainframe’s I/O capabilities.

A Brief History

The earliest computers were mainframes. These machines were housed in large rooms, which could be over 10,000 square feet, and had many vacuum tubes and cables. Because of the size, the mainframe would often be referred to as “Big Iron.”

A major catalyst for the development of mainframes was World War II. The U.S. government saw this technology as a superior way to calculate ballistics of weapons, logistics and for cracking enemy codes.

The first mainframe is considered to be the Harvard Mark I. The inventor was Harvard mathematics professor Howard Aiken, who wanted to build a system to go beyond using paper and pencil. He proposed his idea to IBM, who agreed to back the effort.

The development of the Harvard Mark I began in 1939, and the machine would not be launched until February 1944. One of the first uses of the Mark I was to do calculations for the Manhattan Project, which was the U.S. effort in World War II to build a nuclear bomb.

This electromechanical computer was 51 feet in length and eight feet high. The weight? About five tons. There were actually 500 miles of wire and three million connections, along with 2,225 counters and 1,464 switches. The system could calculate three additions per second, one multiplication per six seconds, and a logarithm within a minute. To input instructions, there was a paper tape drive as well as stored memory. The output was provided through an electric typewriter.

Note that the machine proved quite durable, operating for roughly 15 years.

What Are Punch Cards?

A punch card – or punched card – is a piece of stiff paper that has perforations on it that represent information. These were not used just by early computers, though. Punch cards have actually been employed as early as the 1700s. They helped to provide the patterns for textile mills.

Then by the 1830s, Charles Babbage used punch cards for his Analytical Engine, which was a mechanical general-purpose computer. Oh, and then for the 1890 U.S. Census, Herman Hollerith used them for counting the population. The result was that it only took two and a half years to complete the massive project, versus the typical seven years.

Herman Hollerith’s business would eventually morph into IBM and punch cards would remain a lucrative business for decades. They would also become essential for programming mainframe computers.

Here’s how it worked: A person entered code on a keypunch, an electric typewriter-like device that punched holes in the cards. This could easily result in a large stack. The programmer would then hand these off to a computer operator, who would place them in a card reader. This system began the processing at the top left and read down the first column, then moved to the top of the next column, until all the code on the card was read. Through this process, the information was converted into machine language that the computer could understand. Interestingly enough, it could take hours or even days to get the output back!
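The column-by-column decoding described above can be sketched in a few lines of Python. This is a toy model, not a faithful card reader: the punch-to-character table below covers only a few characters (the zone-plus-digit pairs loosely follow the Hollerith scheme, but the table is illustrative, not complete).

```python
# Toy model of reading a punch card. Each column holds one character,
# encoded by which of the card's 12 rows are punched. The reader scans
# columns left to right, examining each column's punches top to bottom.
PUNCH_TO_CHAR = {
    frozenset({12, 1}): "A",  # zone punch in row 12 + digit punch in row 1
    frozenset({12, 2}): "B",
    frozenset({12, 3}): "C",
}

def read_card(columns):
    """Decode a card given as a list of per-column punch sets."""
    return "".join(PUNCH_TO_CHAR[frozenset(col)] for col in columns)

print(read_card([{12, 3}, {12, 1}, {12, 2}]))  # prints "CAB"
```

A real 80-column card would yield up to 80 characters per card, which is why a program of any size became a sizable stack.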

Growth of the Mainframe

No doubt, mainframe technology got more powerful, and the systems found more usage within the business world. At the time, the main automation systems for the office were typewriters, file cabinets and tabulating machines.

A critical breakthrough in mainframes for business came in 1959 with IBM’s launch of the 1401 system. It used transistors rather than vacuum tubes and could be mass produced. It became a huge seller for IBM.

The result was that the mainframe started to transform the business world. Yet there were some problems emerging. A mainframe was often a custom device for a particular use case, such as for inventory or payroll. There would also be a unique operating system for each. As a result, software would have to be rewritten when a new mainframe system was deployed, which was expensive and time consuming.

But IBM CEO Thomas J. Watson Jr. realized that this could not last. The complexity of managing a myriad of systems was just too much. The company had to deal with six disparate divisions, and each had its own departments for R&D, sales and support. This is why Watson set out to rethink his computer business.

At the heart of this was the development of the System/360 (the name referred to the 360 degrees of a compass, an indication that the mainframe was a complete solution). The original budget was roughly $2 million, but this quickly proved to be far off the mark. IBM would ultimately invest a staggering $5 billion in the System/360 (in today’s dollars, this would be over $40 billion). It was one of the largest development projects of the 1960s, behind only the U.S. space program.

The investment was not just a huge financial risk. IBM was also going to essentially make its existing machines obsolete. The company also would need to come up with innovations to allow for a new type of computing architecture.

It was Gene Amdahl who led this ambitious effort. A critical goal was to ensure that a customer could upgrade from a smaller machine to a larger one without having to rewrite the software and buy new peripherals. In other words, there would be backward compatibility.

And yes, this would ultimately be one of the most important advantages for the System/360. For example, if you wrote a program for the computer in the 1960s, it would still be able to run on today’s IBM mainframe.

But to allow this kind of continuity, there needed to be a way to standardize the instruction code. At the time, the main approach was to embed the instructions within the hardware, which often proved to be inflexible.

IBM’s innovation was to develop a software layer, called microcode, to mediate between the instruction set and the hardware, and to standardize memory on eight-bit bytes (before this, memory was addressed with varying bit sizes). This made it possible to allow for changes in the instruction set, while not replacing the whole computer system!

Another key goal for the System/360 was that it needed to be accessible by a large number of users simultaneously. This would lead to the business of time-sharing, in which companies could rent a mainframe. This innovation would also be leveraged in the creation of the Internet at the end of the 1960s.

Well, in the end, Watson’s bet would pay off in a big way. When the System/360 was launched on April 7, 1964, the demand was staggering. Within a month, IBM received more than one thousand orders.

Initially, the company built five computers and 44 peripherals. Here are some of the machines:

Model 20

This was the most basic system. It could handle binary numbers but not floating point numbers, and the memory was up to 32KB. The Model 20 would become the most popular in terms of units sold.

Model 65

The maximum memory was one megabyte and the machine could handle floating point numbers and decimals. Time sharing was available from IBM’s TSO (Time Sharing Option).

Model 75

This was built specifically for NASA (five units were built). Note that the machine was instrumental in helping with the Apollo space program. For example, the Model 75 helped with the calculations for the space vehicles and was even critical in making the go/no-go decisions for flights. According to Gene Kranz, who was the flight director for the Apollo missions: “Without IBM and the systems they provided, we would not have landed on the Moon.”

The mainframes were certainly not cheap. They could easily cost over $2 million apiece. But many companies saw this technology as a must-have for being competitive. They would even showcase their mainframes by placing them in glass rooms at their headquarters.

They would also become part of the entertainment culture. The System/360 would have cameo appearances in various films like The Doll Squad and The Girl Most Likely To…

Now there was certainly competition from other mainframe companies. The main rivals included Sperry Rand, Burroughs, NCR, RCA, Honeywell, General Electric and Control Data Corporation. But they were often referred to as the “seven dwarfs” because of the dominance of IBM. The company has remained the No. 1 player in the market, uninterrupted, until today, and this was due primarily to the impact of the System/360.

Mainframe Innovation

IBM did not rest on its laurels. The company continued to invest heavily in its mainframe business.

One breakthrough innovation was virtualization. IBM launched this in 1972 with its System/370 mainframe. With virtualization, it was possible to get much more from existing machines. This was accomplished by using sophisticated software called a hypervisor, which made it possible to turn one mainframe into what behaved like multiple machines. Each was treated as its own system, called a VM or virtual machine, with its own operating system and applications.

Virtualization would be a game changer, with advantages like:

Cost Savings

A company could greatly reduce its physical footprint since there was not much of a need to buy new computers. But there were other benefits, such as lower energy expenses.


Easier Management

It was fairly easy to spin up and manage a VM.

Lower Downtime

If a machine went down, you could move a VM to another physical machine quickly.

Another innovation, commercialized in the mid-1970s, was the Universal Product Code (UPC). IBM researcher George Laurer led a program to use a mainframe to connect with a supermarket scanner for labels. He used bar codes to create unique product identifiers. The result was a significant improvement in automation for retailers.

The Terminal

From the 1960s through the 1990s, the terminal was a common way for non-technical people to access a mainframe. For example, this could be a travel agent who booked a flight or an insurance agent who processed a claim.

The terminal was often called a “green screen” because the characters were green (there were also no graphics). These devices were based on cathode ray tube (CRT) technology, and the screen displayed 80×24 characters.

But these machines were also known as “dumb terminals.” Why? The reason was that they were not computers; rather, these machines just transmitted data.

But as PCs grew in popularity, they would become the norm for accessing mainframes. During the 1980s, PCs running DOS could connect to mainframes, typically through 3270 terminal emulation software. Then in the 1990s, the Windows platform became a common way to gain access to these machines.

Mainframe Challenges

By the 1980s, IBM’s mainframe business was starting to come under pressure. One of the reasons for this was the growth in minicomputers, which were much cheaper but still were quite powerful. Digital Equipment was the pioneer of this category and would become a juggernaut.

Then there was the PC revolution. With applications like spreadsheets, databases and word processors, this technology became pervasive in businesses.

However, IBM was still able to navigate the changes. Mainframes still served important needs, especially for large scale data processing.

Fast forward to today: IBM’s mainframe business remains a key source of cash flows and is even seeing a resurgence in growth. The latest version is the z15, which has memory of up to 40 terabytes, over 100 processors and compute power of up to 9,215 MIPS (Million Instructions Per Second).

Figure 1-1 shows the z15 model.

Figure 1-1. The latest mainframe from IBM: the z15 model.

Why Have A Mainframe?

A big reason why mainframes have lasted so long is that it would be incredibly expensive to get rid of them! It would also be risky. What if the migration did not work? This could be a huge problem since mainframes often handle mission-critical operations.

“While there are reasons to complain about mainframe processing – large single line item costs compared to more dispersed spending on distributed or cloud, the increasing attrition of seasoned mainframe staff and the ‘uncool’ factor of the mainframe – for many specific use cases and many industries, it still represents the best value for IT spend,” said Jeff Cherrington, who is the Vice President of Product Management and Systems at ASG Technologies.

So then, let’s take a deeper look at the advantages of using a mainframe:


Processing Power

Mainframes have hundreds of processors that can pull terabytes of data from storage systems and efficiently generate output. This is certainly critical for handling such things as customer records, invoices, inventory and other business applications. Mainframes also scale vertically; that is, resources can be upgraded or downgraded depending on the volumes.

Flexible Compute

It’s a mistake to think that mainframes are only for large companies. The fact is that IBM has programs to allow startups to access the technology, such as through the cloud.


Reliability

Mainframes are built to run continuously (the uptime is 99.999%). The “z” in “z15” is short for “zero downtime.” To this end, a mainframe has systems that monitor for errors, which are built into both the hardware and the OS. There is also the ability to quickly recover from mishaps, thanks to redundancy in the mainframe. 24/7 reliability is definitely essential for many business applications, such as ATMs, credit card systems at retailers and the processing of insurance claims. In fact, a z15 mainframe is capable of withstanding an 8.0 magnitude earthquake.
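It is worth making the 99.999% (“five nines”) uptime figure concrete. The arithmetic is simple enough to check in a couple of lines of Python:

```python
# "Five nines" (99.999%) uptime, expressed as allowed downtime per year.
uptime = 0.99999
minutes_per_year = 365 * 24 * 60           # 525,600 minutes
downtime_minutes = (1 - uptime) * minutes_per_year
print(round(downtime_minutes, 2))          # about 5.26 minutes per year
```

In other words, five nines permits only a little over five minutes of downtime in an entire year.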


Upgradability

Mainframes are built to make it easy to change the systems, such as by swapping out processors, without interrupting the running workloads. Consider that mainframes are built with a modular design based on “books.” These can be easily configured to customize the processors, memory and I/O.

Security and Encryption

Both the z14 and z15 have encryption built into the hardware. Moreover, IBM Z is the only server line to have achieved Common Criteria Evaluation Assurance Level 5 (EAL5), one of the highest degrees of security certification. This is certainly a key selling point for companies in highly regulated industries, such as banking, healthcare, insurance and utilities.


Cost Effectiveness

It’s true that mainframes are not cheap. But they may ultimately be more cost-effective than the alternatives, such as when you look at the cost per transaction. This may be much lower than, say, having to manage a large number of smaller servers. Mainframes also have the advantage of lower energy costs, because the processing is centralized and there are conservation systems built in (this includes an energy meter). The average watts per MIPS is about 0.91, and this is declining every year. Note that energy costs can, over time, be the biggest expense for an IT system.


Continued Innovation

IBM has continued to invest heavily in innovating the system. A big part of this has been the adoption of open source software, such as Linux, Git and Python. IBM also bought the biggest player in the open source market, Red Hat, for $34 billion. There have also been innovations in cutting-edge areas like AI (Artificial Intelligence), cloud native computing and DevOps. Interestingly enough, there have even been breakthroughs with the design of the IBM mainframe door. The IBM z15’s door is made of aluminum and acoustic shapes. Why so? This allows for a low level of noise and has also helped to cool down the system. There is even a patent on the design. As Watson Jr. once said: “Good design is good business.”

The OS

As a developer, you will usually not spend much time with the mainframe’s OS. This will instead be the focus for systems programmers. Regardless, it is still important to understand some of the basic concepts.

So what is the OS for the IBM mainframe? No doubt, there have been considerable changes over the years. The OS has seen a myriad of names like OS/360, MVT, OS/VS2, OS/390 and so on.

The most current version is z/OS. This 64-bit platform got its start in 2000 and has seen major upgrades. But again, it has still maintained backward compatibility, as the core of z/OS still has much of the same functionality as the original System/360.


64-bit means that a system can address up to 16 exabytes of data, the equivalent of about 16 million terabytes. To put this into human terms, it would be enough to store the entire Library of Congress thousands of times over.
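The arithmetic here is easy to check. Note that 16 exabytes works out to roughly 16 million terabytes, since one exabyte is about one million terabytes:

```python
# A 64-bit address space spans 2**64 bytes.
address_space = 2 ** 64
exabyte = 2 ** 60    # using binary units
terabyte = 2 ** 40
print(address_space // exabyte)    # 16 exabytes
print(address_space // terabyte)   # 16,777,216 -- about 16 million TB
```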

While z/OS is similar to typical operating systems, there are still some notable differences. For example, the memory management does not use the familiar heap or stack. Instead, z/OS allocates memory to programs in one or more large chunks.

Here are some of the other capabilities of the OS:

  • Concurrency: This allows more than one program to be executed at the same time. This is possible because a single program usually leaves much of the CPU idle or lightly used.

  • Spooling: Certain slow functions, like printing, would otherwise hold up processing. This is why there is spooling, which manages a queue of files stored on disk.

  • Languages: z/OS supports a myriad of languages like COBOL, Java, C, C++, Python and Swift.

  • POSIX Compatibility: This provides Unix-style interfaces, including Unix file access.
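The spooling idea above can be sketched with an ordinary queue. This is a minimal in-memory model, just to show the flow; on z/OS the spool actually lives on disk and is managed by the job entry subsystem:

```python
from collections import deque

# Minimal spooling sketch: programs enqueue their output immediately
# and move on, while the slow device (a printer here) drains the
# queue later, in first-in, first-out order.
spool = deque()

def submit(job_name, output):
    spool.append((job_name, output))   # returns right away

def drain():
    """Simulate the printer working through the spool."""
    printed = []
    while spool:
        job, out = spool.popleft()
        printed.append(f"{job}: {out}")
    return printed

submit("PAYROLL", "1,200 checks")
submit("INVOICE", "88 pages")
print(drain())
```

The key point is that `submit` never waits on the printer, which is exactly why spooling keeps slow devices from holding up processing.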

Yet z/OS is not the only OS supported on the IBM Z. There are five others, which include z/VSE (Virtual Storage Extended), z/TPF (Transaction Processing Facility), z/VM (Virtual Machine), Linux, and KVM (Kernel-based Virtual Machine). Let’s look at each.


z/VSE

This was part of the original System/360 architecture. But the focus for this OS has been on smaller companies.

The original name for z/VSE was DOS or Disk Operating System. But this is not to be confused with the OS that Microsoft developed in the 1980s. IBM’s DOS was used to describe how the system would use the disk drive to handle processing.

Even though z/VSE was a slimmed down version of z/OS, the OS was still powerful. It allows for secure transactions and batch workloads and integrates with CICS and DB2. z/VSE has also proven effective with hybrid IT environments.

It’s also common that, as a company grows, it will eventually migrate to z/OS, and the process is relatively smooth.


z/TPF

This was developed to handle the airline reservations for SABRE, which was launched in the early 1960s. The project was one of the first examples of using transactional operations with a mainframe.

The language for the system was based on assembler to allow for high speed and efficiency. But this proved complicated and unwieldy. This is why developers would move over to using the C language.

z/TPF is an expensive system and can be leveraged across various mainframes. But it can be cost-effective for customers that have enormous transactional workloads.


z/VM

This was introduced in 1972 when IBM developed virtualization. z/VM allowed for the use of a type 1 hypervisor (also known as a bare-metal hypervisor). This is where the software layer is installed directly on top of the physical machine or server. As a result, there is generally higher performance and stability, since there is no need to run inside another OS (z/VM can host thousands of instances of operating systems). Essentially, a type 1 hypervisor is a form of an OS.

A type 2 hypervisor, on the other hand, runs within an OS. This is usually used with environments with a small number of machines.


Linux

Linus Torvalds created this OS in 1991 while he was a student at the University of Helsinki. He did this primarily because he did not want to pay the licensing fees for existing operating systems. Torvalds made Linux open source, which led to significant adoption. Another factor in its success was the emergence of the Internet as a means of software distribution.

Linux has proven to be robust and adaptable. It has also become pervasive within enterprise environments.

Regarding IBM, it adopted Linux for its mainframes in 2000, and this was key in the company’s modernization efforts. Then in 2015, IBM launched LinuxONE, which was a Linux-only mainframe system.

When using Linux on an IBM mainframe, there are some factors to note:

  • Access: You do not use a 3270 display terminal. Instead, Linux uses X Window terminals or emulators on PCs. This is the standard interface.

  • ASCII: This is the character set Linux uses. A traditional mainframe system, by contrast, will use an IBM alternative called EBCDIC (Extended Binary Coded Decimal Interchange Code).

  • Virtualization: You can use Linux with z/VM to clone different Linux images.
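The ASCII/EBCDIC difference is easy to see in practice. Python happens to ship a codec for one common EBCDIC code page, cp037 (EBCDIC US/Canada); mainframe shops use a variety of code pages, so cp037 is just a representative example:

```python
# Compare the byte values of the same text in ASCII and in one
# EBCDIC code page (cp037). The two encodings disagree on every letter.
text = "HELLO"
print(text.encode("ascii").hex())   # 48454c4c4f
print(text.encode("cp037").hex())   # c8c5d3d3d6
# Round-tripping through the EBCDIC codec recovers the original text.
print(text.encode("cp037").decode("cp037"))  # HELLO
```

This byte-level mismatch is why data moving between Linux and traditional mainframe applications must be converted between character sets.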


KVM

This is an open source virtualization module for the Linux kernel. It essentially makes the kernel function as a type 1 hypervisor.

IBM has adopted KVM for its mainframes to allow for better deployment of Linux workloads and consolidation of x86 server environments. The software has an easy installation process and uses the typical Linux administration console (it’s possible to operate 8,000 Linux VMs at the same time).

By using KVM, a mainframe can leverage technologies like Docker and Kubernetes.

Processor Architecture

The processor architecture for the modern IBM Z mainframe looks similar to the original developed in 1964. There are three main components: the CPU (Central Processing Unit), main storage and channels. The CPU processes the instructions that are stored in the main memory. To speed this up, there is a cache system built into the processor. The virtualization capabilities for the main memory rely on paging, which means offloading data to the disk when memory runs short.

Regarding the channels, these connect the input/output devices, like terminals and printers, over high-speed fiber optic lines, which boosts performance.

A typical IBM Z system will have multiple processors as well. This is another way to help enhance the speed of the machine. But a multiprocessor can also help with reliability: if one of the processors fails, another one can take over its tasks.


LPARs

LPAR is short for “logical partition.” This is a form of virtualization in which a machine can be divided into separate mainframes (it’s based on a type 1 hypervisor). The current z15 system allows for up to 40 LPARs.

Each LPAR has its own OS and software. An OS will not know that another one is running on the machine; there is complete independence. That said, to allow for seamless operation across the machine, z/OS uses Cross-Memory Services to handle the tasks for the various LPARs.

There is also much flexibility in the allocation of resources. For example, it is possible to use one or more processors per LPAR or to spread them across multiple LPARs. It’s even possible to assign weightings for the resources; say, LPAR1 could have two times as much processor time as LPAR2.
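The weighting example (LPAR1 getting twice LPAR2's processor time) can be modeled as each partition's weight divided by the sum of all weights. This is a deliberately simplified sketch; real PR/SM dispatching is far more sophisticated:

```python
# Simplified model of weight-based CPU sharing across LPARs:
# each LPAR's share of processor time is its weight over the total.
def cpu_shares(weights):
    total = sum(weights.values())
    return {lpar: w / total for lpar, w in weights.items()}

# LPAR1 is weighted twice as heavily as LPAR2, as in the example above.
shares = cpu_shares({"LPAR1": 200, "LPAR2": 100, "LPAR3": 100})
print(shares["LPAR1"])  # 0.5 -- half the machine's processor time
```

Because only the ratios matter, an administrator can rebalance the machine by adjusting weights rather than reassigning physical processors.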

Part of the advantage of the LPAR architecture is reliability. If one goes down, then another one can take over.

But when a developer uses an LPAR, they will notice nothing different. It acts just like any other mainframe system.

Consider that the LPAR technology relies on PR/SM (this is pronounced as “priz-em”) or Processor Resource/Systems Manager. This is based on firmware, which is software that is embedded into the hardware. With PR/SM, a mainframe has built-in virtualization that allows for the efficient use of CPU resources and storage for the LPARs.


Disk Storage

Disk storage is extremely important for mainframe computers, as it is used to manage enormous amounts of data. This is why it is critical to have a general idea of how this technology works.

A disk drive is made up of a stack of circular disks that consist of magnetic material to store data. In the middle is a hole, which is where it can be placed on a spindle. This allows the disk to be spun at a high rate.

The surface of a disk is divided into tracks and each one of these has various sectors. This is the case whether for a PC or a mainframe.

To access the data, a disk drive uses an actuator that moves a head to the location of a particular sector (all of the heads are moved in unison). This is possible because each sector has its own address.

When it comes to an IBM mainframe disk drive, there are some differences in the jargon. The drive is called a DASD (direct access storage device), pronounced “Dazz-Dee,” a term that goes back to the original IBM System/360 architecture. What’s more, locations are described in terms of cylinders and tracks; a cylinder is the set of tracks that sit at the same position on each disk in the stack.

Mainframe disk drives are definitely fast. But of course, since they are mechanical, the speed is much slower than working with memory or the CPU. As a result, mainframe developers will look for ways to minimize accessing the disk drive.

Note that DASDs are connected to the mainframe in arrays. Caching is also used to help speed things up, along with controllers to manage the processing and provide for sharing of the system.

Batch and Online Transaction Processing

In the early days of mainframes, the primary approach for handling data was batch processing. An example of this would be to input data during a period of time – such as for business hours – and then process everything at night, when there was less activity. Or another use case was to process payroll. The information would be collected for a couple weeks and then processed at the end of the period.

Batch processing may seem unusual for developers who have experience with modern languages like Java or Python. The reason is that there is usually no or minimal user input on the mainframe. Rather, a program is run by using Job Control Language (JCL), and a job is scheduled to process the data.
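The batch pattern itself, collecting records during a period and processing them all in one scheduled run, can be sketched in a few lines. On a real mainframe the job would typically be written in COBOL and submitted via JCL; Python stands in here just to show the flow:

```python
# Sketch of the batch pattern: records accumulate during the period,
# then a single scheduled job processes everything at once.
timecards = []

def record_hours(employee, hours):
    timecards.append((employee, hours))   # collected as work happens

def run_payroll(rate=25.0):
    """The scheduled batch job: process all accumulated records."""
    totals = {}
    for employee, hours in timecards:
        totals[employee] = totals.get(employee, 0.0) + hours * rate
    timecards.clear()                     # the batch is now consumed
    return totals

record_hours("ada", 40)
record_hours("ada", 42)
record_hours("grace", 38)
print(run_payroll())  # {'ada': 2050.0, 'grace': 950.0}
```

Notice that nothing is computed until `run_payroll` executes, which is exactly why batch work can be scheduled for off-peak hours.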

It’s important to keep in mind that batch processing can be a cost-efficient way to manage large amounts of data and it remains a common use case for mainframes. But there are still some notable limitations. Let’s face it, there are certain types of activities that need to be processed in real-time. For a mainframe, this is known as online transaction processing (OLTP).

A classic case of this is the SABRE platform for handling airline reservations. It was able to handle millions of transactions across the U.S. Then OLTP would be used for other real-time processing areas like credit cards and banking.

Nowadays, a mainframe will typically use a system like Customer Information Control System (CICS) for real-time transactions. Consider that it can process up to 100 million transactions and connect to databases like DB2.

Mainframe Trends

For 15 consecutive years, BMC Software has published an annual survey of the mainframe industry. The latest one included more than 1,000 respondents.

The good news is that the prospects for the industry look bright. About 54% of the respondents indicated that their organizations had higher transaction volumes and 47% reported higher data volumes.

Here are some of the other interesting findings from the survey:

  • 90% believe that the mainframe will be a key platform for new growth and long-term applications.

  • Roughly two-thirds of extra-large organizations had over half of their data in mainframe environments – indicating the critical importance of the technology.

  • While cost has usually been the highest priority, this changed in 2020. The respondents now look at compliance and security as the most important. Data recovery was another area that saw a rise in priority.

  • About 56% of the respondents were using some form of DevOps on the mainframe. But this was seen as part of a journey, as there is usually a need for cultural change.

  • The survey showed that some of the reasons for the adoption of modern DevOps were for benefits like stability, better application quality and performance, security and improved deployment. The efforts have also led to the use of AI, such as with AIOps to help automate processes.

The Mainframe “Shop”

The IT organization that manages the mainframe systems is known as a shop. Each will have its own approaches, strategies, standards and requirements. Still, certain types of roles are quite common across many shops.

Here’s a look at the main ones:

Systems Programmer: This person will provide engineering and administration for the mainframe and z/OS. Some of the duties include: installation, configuration, training and maintenance. But a systems programmer will also help provide analysis and documentation for the hardware and software.

Systems Administrator: Depending on the organization, this person may essentially serve the same role as a systems programmer. But at larger companies, there will be clear differences; that is, there will be more specialization. A systems administrator will usually spend more time on the management of data and applications, while the systems programmer will be more focused on the maintenance of the system.

This separation in duties may also be due to the importance of security and audit purposes. For the most part, you do not want a person to have too much access to certain parts of the mainframe.

The systems administrator may have specialties as well. Examples include the database administrator and the security administrator.

Application Programmer or Designer: This person will develop, test, deploy and maintain applications. This may involve using a language like COBOL, PL/I, Java, C, C++ and so on.

The specifications for the program will often come from a business analyst or manager.

Systems Operator: This person will monitor the operation of the mainframe. If there is a problem, he or she can take action, say to stop and restart a system, or notify the right person.

Production Control Analyst: This person will manage the batch workloads, ensuring that jobs run without errors.

Vendor Support: Usually, this means calling someone at IBM for assistance! The company has a long history of world-class support.

Granted, it seems like there is a need for many people to maintain a mainframe installation. But given that a system has significant scale, the headcount is actually fairly small. It also helps that a mainframe will have a variety of automation systems.


Conclusion

As seen in this chapter, the mainframe is alive and well. This type of machine can handle certain workloads at scale that would not be practical or economical for other types of systems. And as for the growth prospects of the industry, they continue to look bright.
