
Introduction to Quantum Computing and Blockchain

It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness,
it was the epoch of belief, it was the epoch of incredulity,
it was the season of Light, it was the season of Darkness,
it was the spring of hope, it was the winter of despair.

I am sure Charles Dickens did not foresee quantum computing or Blockchain. His words from 160 years ago, however, still apply to the ebbs and flows we have seen with these two technologies. The principles behind quantum computing have been around for a good part of a century. In contrast, Blockchain was first introduced to the world in 2008.

Unlike the Blockchain wave that has hit us in recent years, quantum principles have been around for several decades. Quantum physics, which is fundamental to quantum computing, has been a much-debated field. However, quantum computing itself has gained momentum only in recent times.

Despite the difference in the ages of the two technologies, both have had interesting histories. For instance, most people who understand Blockchain agree that the framework is robust. However, the technology is still far from perfect, and that is true of quantum computing too.

The momentum behind quantum computing in the past decade has been largely due to advancements in algorithms and infrastructure. However, in my opinion, it is also because of the data age we live in, as some of the use cases for quantum computers are becoming clearer and more relevant. In this chapter, I will cover the history of both these technologies, which have had controversial pasts. Their place in modern society as transformational technologies is hard to dispute.

What this book does

The purpose of this book is to explore the overlaps between quantum computing and Blockchain. The two technologies are fundamentally based on cryptography. As a result, there is a possibility that they are on a collision course. However, when we look at the real-world applications of these technologies, they are quite complementary to one another.

In this chapter, we will discuss technical concepts that are fundamental to quantum computing and Blockchain. We will delve into quantum computing and its history, and then touch upon some of the key concepts of Blockchain that are relevant to the thesis of the book.

One of the key themes that I would like to establish in this book is that technology is just a means to an end. While it is important to understand a technology and feel excited about its possibilities, it can only be special if it makes a difference to people's lives.

There is a lot of hype on social media that quantum computing will kill Blockchain. In a data age, both these technologies have a place. Quantum computing can vastly improve our problem-solving abilities. In a social media age, we will need our technologies to cope with big data volumes and to understand the interdependencies between the variables that we analyze. Quantum computing, when it goes mainstream, should address those areas.

On the other hand, a simple way to describe Blockchain's application is Decentralized Data Integrity. An immutable record of every transaction gets maintained and managed by the network. That is the fundamental advantage of Blockchain over data storage mechanisms we have used in the past.

Through industry-specific chapters and interviews with thought leaders in quantum computing, AI, and machine learning, I will try to establish the business relevance of these two technologies. In doing so, I will establish that these two technologies have vertical synergies in the data-centric world we live in.

In the next section, I will go through the history of quantum computing. In the process of doing that, I will also touch upon several key concepts of the technology.

An introduction to quantum computing

We are living through a data era, with several technologies sharing symbiotic relationships with each other. Of all the exciting technology paradigms, quantum computing has the potential to create disruption at scale. The principles of quantum physics, which are the bedrock of quantum computing, have been around for over a century.

An understanding of the evolution of quantum physics is interesting because of the personalities involved and their contradicting philosophical views. The history of this field also gives us an insight into the counterintuitive nature of concepts that challenged even the brightest minds. This chapter focuses on the story of quantum computing, and touches upon some of the basic principles of this technology.

The history of quantum mechanics

In a conversation between an investor and a professor in academia, the investor is often left thinking, "Wow, that is great, but so what?", and the academic is wondering, "Does the investor get it?". The exploration of quantum computing has been one such experience for me, where the nerd in me wanted to delve deep into the physics, math, and the technical aspects of the discipline. However, the investor in me kept on asking, "So what's of value? What's in it for the world? What's in it for businesses?".

As a result of this tug of war, I have come up with a simplified explanation of quantum principles that lays the foundations of quantum mechanics. For a better understanding of quantum computing, we need to first study the basics of quantum information processing with respect to the flow of (quantum) bits, and how they process data and interact with each other. Therefore, let us begin with the tenets of quantum physics as the basis of quantum information processing.

Quantum physics provides the foundational principles that explain the behavior of particles such as atoms, electrons, photons, and positrons. A microscopic particle is defined as a small piece of matter invisible to the naked eye.

In the process of describing the history of quantum mechanics, I will touch upon several of its fundamental concepts. The discovery and the evolution in scientists' understanding of these concepts has helped shape more modern thinking around quantum computing. The relevance of these concepts to quantum computing will become clear as this chapter unravels. However, at this stage the focus is on how this complex field has continued to perplex great minds for almost 100 years.

Quantum mechanics deals with nature at the smallest scales; exploring interactions between atoms and subatomic particles. Throughout a good part of the 19th century and the early part of the 20th century, scientists were trying to solve the puzzling behavior of particles, matter, light, and color. An electron revolves around the nucleus of an atom, and when it absorbs a photon (a particle of light), it jumps into a different energy level. Ultraviolet rays could provide enough energy to knock out electrons from an atom, producing positive electrical charge due to the removal of the negatively charged electron. Source: https://www.nobelprize.org/prizes/physics/1905/lenard/facts/

Scientists observed that an electron could often absorb photons only of specific frequencies, and this absorption at specific frequencies resulted in the colors associated with heated gases. This behavior was explained in 1913 by the Danish scientist Niels Bohr. Further research in this field led to the emergence of the basic principles of quantum mechanics. Source: https://www.nobelprize.org/prizes/physics/1922/bohr/biographical/

Bohr postulated that electrons were only allowed to revolve in certain orbits, and that the colors they absorbed depended on the difference between the orbits they revolved in. For this discovery, he was awarded the Nobel Prize in 1922. More importantly, this helped to cement the idea that the behavior of electrons and atoms was different from that of objects visible to the human eye (macroscopic objects). Unlike classical physics, which defined the behavior of macroscopic objects, quantum mechanics involved instantaneous transitions based on probabilistic rules rather than exact mechanistic laws.

This formed the basis of further studies focused on the behavior and interaction of subatomic particles such as electrons. As research identified more differences between classical physics and quantum physics, it was broadly accepted that quantum principles could be used to define the idiosyncrasies of nature (for example, black holes). Two great minds, Albert Einstein and Stephen Hawking, contributed to this field through their work on relativity and quantum gravity. Let us now look into how Albert Einstein viewed quantum physics and its concepts. Source: https://www.nobelprize.org/prizes/physics/1921/einstein/facts/

Einstein's quantum troubles

We may have to go back some years in history to understand how Einstein got entangled (pun intended) in the world of quantum mechanics. For a layman, space is just vast emptiness, yet when combined with time, space becomes a four-dimensional puzzle that has proven to be a tremendous challenge to the greatest minds of the 19th and 20th centuries. There were principles of quantum mechanics that Einstein did not agree with, and he was vocal about it.

One of the key principles of quantum mechanics was the Copenhagen interpretation. It holds that the state of a particle is influenced by the fact that the state was observed; the observer thus influences the state of the particle. Einstein did not agree with this indeterminate aspect of quantum mechanics that Niels Bohr postulated.

In 1927, Einstein began his debates with Bohr at the Solvay Conference in Brussels. He believed in objective reality that existed independent of observation. As per the principles of quantum theory, the experimenters' choice of methods affected whether certain parameters had definitive values or were fuzzy. Einstein couldn't accept that the moon was not there when no one looked at it and felt that the principles of quantum theory were incomplete. Source: https://cp3.irmp.ucl.ac.be/~maltoni/PHY1222/mermin_moon.pdf

One interesting aspect of this indeterministic nature of objects is that, as babies, we tend to appreciate these principles better. This is illustrated by the peek-a-boo game that babies often love. Babies behave as if something exists only while they observe it; they have not yet developed the cognitive ability called object permanence. As we grow older, however, we base our actions on the assumption of object permanence.

Niels Bohr believed that it was meaningless to assign reality to the universe in the absence of observation. In the intervals between measurements, quantum systems existed as a fuzzy mixture of all possible properties – commonly known as superposition states. The mathematical function that described the states that particles took is called the wave function, which collapses to one state at the point of observation.

This philosophical battle between the two scientists (Einstein and Bohr) intensified in 1935 with the emergence of the property of entanglement. It meant that the states of two entangled particles were dependent on each other (or correlated) irrespective of how far apart the particles were. Einstein mockingly called it "spooky action at a distance."

As a response to Bohr's findings, the famous EPR (Einstein, Podolsky, Rosen) paper was published in 1935 by Albert Einstein, Boris Podolsky, and Nathan Rosen. The purpose of the paper was to argue that quantum mechanics fails to provide a complete description of physical reality. Podolsky was tasked with translating it into English, and Einstein was not happy with the translation. Podolsky also leaked an advance report of the EPR paper to The New York Times, and Einstein was so upset that he never spoke to Podolsky again. Source: https://www.aps.org/publications/apsnews/200511/history.cfm

The EPR paradox identified two possible explanations for the entanglement property. The state of one particle affecting another could potentially be due to shared, embedded properties within both particles, like a gene. Alternatively, the two particles could be making instantaneous communication with each other about their states. The second explanation was thought to be impossible, as this violated the theory of special relativity (if the particles were making instantaneous communication at faster than the speed of light) and the principle of locality.

The principle of locality states that an object is influenced by only its immediate surroundings.

The theory of special relativity states that the laws of physics are the same for all non-accelerating observers, and Einstein showed that the speed of light within a vacuum is the same no matter the speed at which an observer travels.

If entanglement existed, and if particles could influence each other's states at a great distance, then the principle of locality would be breached too. Hence, the EPR paper challenged the assumption that particles could communicate their states instantaneously over great distances.

Hence, the EPR paper concluded that the two entangled particles had hidden variables embedded within them, which gave them the information needed to choose correlated states when observed. Albert Einstein continued to challenge the principles of quantum mechanics.

"Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He does not throw dice."

Albert Einstein

Einstein and Bohr could not come to an agreement, even in the presence of an arbitrator. This arbitrator came in the form of John Wheeler. In 1939, Bohr and Wheeler started working at Princeton University and shared a good working relationship. Wheeler had a pleasant persona and could speak German. Einstein – who was the Professor in Exile at Princeton – became Wheeler's neighbor, and there arose a possibility for these great minds to come together. Wheeler saw merit in Bohr's view of complementarity – the idea that particles have mutually exclusive properties, such as wave and particle natures, that cannot be observed at the same time. He also agreed with Einstein's challenge to the theory that, when we view particles, we unavoidably alter them. Despite several attempts, John Wheeler did not manage to come up with a theory that convinced both Bohr and Einstein.

Bell's inequality

Following on from the likes of Einstein and Bohr, John Bell entered the quantum arena in the latter half of the 20th century. Born in Belfast in 1928, Bell spent several years flirting with the theories of quantum mechanics before finally taking the plunge in 1963, when he took leave at Stanford University. He explained entanglement through the behavior of identical twins separated at birth: if, after a lifetime, they were brought together, they would have surprising things in common. He had come across this in a study by the Institute for the Study of Twins. This led to the thought that perhaps electrons behaved as if they had genes. At the minimum, it helped a layman understand what entanglement of quantum particles meant.

In 1964, Bell came up with what is now known as Bell's inequality. Through a set of thought experiments on electron–positron pairs, and probability theory, Bell showed that the conclusion of the EPR paper was wrong: the assumption that particles had to have properties embedded in them to explain entanglement was not the right way forward after all. Several subsequent experiments confirmed that entangled particles violate Bell's inequality, ruling out such hidden-variable explanations. The probability explanation of Bell's inequality through Venn diagrams is simple.
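
To see the counting logic behind that Venn-diagram argument, here is a minimal sketch in Python (my own illustration, using Wigner's counting form of the inequality). It checks every possible assignment of definite hidden values to three two-valued properties; entangled particles violate this bound in experiments, which is what rules such hidden-variable explanations out:

```python
from itertools import product

# Wigner's counting form of Bell's inequality for three two-valued
# properties A, B, C:  N(A=1, B=0) + N(B=1, C=0) >= N(A=1, C=0).
# If particles carried definite hidden values, every assignment below
# would have to obey it -- and, as the loop shows, every one does.
for a, b, c in product([0, 1], repeat=3):
    lhs = int(a == 1 and b == 0) + int(b == 1 and c == 0)
    rhs = int(a == 1 and c == 0)
    assert lhs >= rhs   # holds for all 8 definite-value assignments
print("Bell's inequality holds for every hidden-variable assignment.")
```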

There is also a simple home experiment that demonstrates the spooky nature of quantum mechanics, using polarizing lenses and photons. You can check out a YouTube video of the experiment here, https://www.youtube.com/watch?v=zcqZHYo7ONs&t=887s, and it does get quite counterintuitive.

The video shows the following:

  • Look at a white background through a polarized lens (call it A). The background looks gray, indicating that a lot of light is being blocked by the lens.
  • Add another polarized lens, B, and you will observe less light coming through – indicated by the background getting even darker.
  • Now, by adding a third polarized lens, C, on top of A and B, you would expect the white background to look darker still. Surprisingly, it looks brighter than with just A and B.

The results of the experiment can perhaps be explained by one possibility: the nature of a photon changes when it passes through a filter, which means that the way the changed photon interacts with subsequent filters changes too.
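
Classically, Malus's law for ideal polarizers reproduces the brightening effect; the quantum surprise is that it persists photon by photon. Here is a minimal numeric sketch (the 0°, 45°, and 90° orientations are my assumption for the demonstration; the video's filters may be oriented differently):

```python
import numpy as np

def transmit(angles_deg):
    """Fraction of unpolarized light passing through ideal polarizers
    at the given angles (Malus's law: each stage scales by cos^2(delta))."""
    intensity = 0.5                          # the first polarizer passes half
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        delta = np.radians(cur - prev)
        intensity *= np.cos(delta) ** 2
    return intensity

print(transmit([0, 90]))      # crossed pair: ~0 (dark)
print(transmit([0, 45, 90]))  # third filter in between: 0.125 (brighter!)
```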

I will explain another weird behavior of light particles (photons) using the Quantum Slit experiment later in this chapter. Currently, the behavior of subatomic particles is most clearly explained through the principles of quantum mechanics. If any new alternative is to be offered, it must be more convincing than the existing principles.

Quantum computers – a fancy idea

Whilst the theories underlying the behavior of particles in nature were being postulated, a few individuals were starting to think about the implications of simulating these behaviors using classical computers. In 1965, the Nobel Prize in Physics was awarded jointly to Sin-Itiro Tomonaga, Julian Schwinger, and Richard P. Feynman for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles. In the 1980s, Richard Feynman first discussed the question "Can a classical computer simulate any physical system?" He is considered to have laid the foundations of quantum computing through his lecture titled "Simulating Physics with Computers."

In 1985, the British physicist David Deutsch highlighted the fact that Alan Turing's theoretical version of a universal computer could not be extended to quantum mechanics. You may ask what Turing's computer was.

In 1936, Alan Turing came up with a simple model of a computer, called the Turing machine. It had a tape with several boxes, with bits coded into each of them as "0"s and "1"s. His idea was that the machine would run above the tape, looking at one square at a time. The machine had a code book containing a set of rules, and, based on those rules, the states ("0"s and "1"s) of each of the boxes would be set. At the end of the process, the states of the boxes would provide the answer to the problem the machine had solved. Many consider this to have laid the foundation for the computers we use today.
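
To make the code book idea concrete, here is a minimal Turing-style machine in Python. The unary increment task and the rule names are illustrative assumptions of mine, not anything from Turing's paper:

```python
# The code book maps (state, symbol) to (symbol to write, head move, new state).
rules = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "0"): ("1", +1, "done"),   # write one more 1, then halt
}

tape = list("1110000")                  # three 1s in unary, then blanks (0s)
head, state = 0, "scan"

while state != "done":
    symbol = tape[head]
    write, move, state = rules[(state, symbol)]
    tape[head] = write                  # set the state of the current box
    head += move                        # move to the next square

print("".join(tape))                    # 1111000 -> the unary count incremented
```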

However, David Deutsch highlighted that Turing's theories were based on classical physics (0s and 1s), and that a computer based on quantum physics would be more powerful than a classical computer.

Richard Feynman's idea started to gain traction when, in 1994, Peter Shor of Bell Laboratories invented an algorithm to factor large numbers on a quantum computer. Using this algorithm, a sufficiently powerful quantum computer would be able to crack the public-key cryptography techniques in widespread use today.

In 1996, this was followed by Grover's search algorithm. In a classical computer, when an item has to be found in an unsorted list of N items, it takes, on average, N/2 checks to recover the item. With Grover's algorithm, the number of checks can be brought down to roughly √N. For a database search, this offers a quadratic improvement in search performance, and it is considered a key milestone in the field of quantum computing.
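
To give a feel for where the √N figure comes from, here is a minimal statevector simulation of Grover's algorithm in Python with NumPy (the search space of 8 items and the marked index are illustrative assumptions of mine):

```python
import numpy as np

N = 8              # search-space size (3 qubits)
marked = 5         # index of the item we are searching for (assumption)

state = np.ones(N) / np.sqrt(N)          # uniform superposition over N states

oracle = np.eye(N)                       # oracle: flip the sign of the
oracle[marked, marked] = -1              # marked item's amplitude

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

iterations = int(np.pi / 4 * np.sqrt(N))             # ~ sqrt(N) iterations
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

probs = state ** 2
print(f"After {iterations} iterations, P(marked) = {probs[marked]:.3f}")
```

After roughly (π/4)√N iterations – just 2 for N = 8 – nearly all of the probability mass sits on the marked item.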

Déjà vu

Grover's algorithm and subsequent work in this space have accelerated the excitement and hype around quantum computing. More recently, the tech giants IBM, Google, Intel, Microsoft, and a few others have ramped up their work in quantum computing. At CES 2019, IBM showed off its prowess through the launch of an integrated system for quantum computing aimed at scientists and businesses. IBM also has a cloud-based quantum computing infrastructure that programmers can use. More on what the tech giants are up to will be revealed in Chapter 16, Nation States and Cyberwars.

When I first looked at the picture of IBM's quantum computer replica as revealed at CES 2019, my immediate thought was Déjà vu. The previous generation witnessed the rise of the classical computing revolution, with its far-reaching impacts upon all aspects of society. We stand on the brink of another revolution; we will be fortunate enough to see the evolution of quantum computing first-hand.

The weirdness of quantum

Before we explore quantum computing, it would be good to understand the behavior of particles as described by quantum mechanics. Below, I describe an experiment that helps us to understand the counter-intuitive nature of quantum theory.

A scary experiment

The famous Quantum Slit experiment reveals how photons/particles interact with each other and with themselves. As we will see, this behavior posed a challenge to the physicists attempting to describe it.

In the 19th century, a British scientist, Thomas Young, set out to show that light traveled in waves rather than as particles. He set up a simple experiment in which he cut two slits in a piece of metal and placed it as a blocker between a light source and a screen. He knew that if light traveled as particles, then the particles that passed through the slits would hit the screen, while those blocked by the metal would bounce off its surface and never reach the screen. Effectively, if light was made of particles, the screen should look like a spray of paint over a stencil. Figure 1 shows the experiment and the slit formation.

He hypothesized (before the experiment) that light was formed of waves, and that the waves, when they passed through the slits, would interfere with one another and form patterns on the screen. The pattern would be defined by how the waves passing through the two slits interacted.

Where peaks interfered with peaks (called constructive interference), the screen would display bright spots, and where peaks interfered with troughs (called destructive interference), it would display dark spots. Hence, the pattern would be a bright band at the center, followed by progressively dimmer bands to the left and the right. Young successfully proved that light traveled in waves.

Figure 1: Young's double slit experiment

Einstein's photons – weirder now

Albert Einstein once more proved to be of great influence in the field of quantum mechanics. He proposed that light was made of photons – discrete quanta of light that behaved like particles. As a result, the experiment was repeated and, this time, photons were passed through the slits one by one – and the patterns still appeared. This could only happen if:

  • Photons traveled in waveforms.
  • All possible paths of these waveforms interfered with each other, even though only one of these paths could happen.

This supports the theory that all realities exist until the result is observed, and that subatomic particles can exist in superposition. As detectors were placed to observe photons passing through the slits, the patterns disappeared. This act of observation of particles collapses the realities into one.

We have discussed the three principles of quantum mechanics: superposition, entanglement, and interference. These principles are fundamental to the way in which particles are managed within a quantum computer.

Figure 2: A quantum computing timeline

The history of quantum computing and its key milestones are captured in Figure 2. The key takeaway is the breadth of contributions to the field that have brought this technology to the brink of achieving impact at scale.

Inside a quantum computer

Quantum computing has quantum bits, called qubits (pronounced cue-bits), as its fundamental unit. In the classical computing world, bits take the states 0 and 1. Qubits can exist in these two states, but also in a linear combination of both, called a superposition.

Algorithms that exploit superposition can solve some problems faster than the deterministic and probabilistic algorithms we commonly use today. A key technical difference is that while probabilities must be positive (or zero), the weights (amplitudes) in a superposition can be positive, negative, or even complex numbers.
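
As a minimal sketch of this point (the specific amplitudes are my own example), a qubit's state can be written as a vector of complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
import numpy as np

# A single-qubit state |psi> = a|0> + b|1>. Unlike classical probabilities,
# the amplitudes a and b may be negative or complex; what must be positive
# and sum to 1 are the measurement probabilities |a|^2 and |b|^2.
a, b = 1 / np.sqrt(2), -1j / np.sqrt(2)    # example amplitudes (assumption)
psi = np.array([a, b])

assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)   # normalization check
print("P(0) =", abs(psi[0]) ** 2, " P(1) =", abs(psi[1]) ** 2)
```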

The other quantum mechanical principle that is fundamental to understanding quantum computers is entanglement. Two particles are said to be entangled if, when one of them is observed to behave randomly, the outcome tells the observer how the other particle would act were a similar observation made on it.

This property can be detected only when the two observers compare notes. Entanglement gives quantum computers extra processing power and allows them to perform some computations much faster than classical computers.

Qubits have similarities to, and differences from, the transistors that classical computers use. Research in quantum computing is moving forward to find new forms of qubits and new algorithms. For example, optical quantum computers that use photonic qubits have seen significant progress in the research world since 2017, and they work at room temperature.

A quantum computer should satisfy the following requirements:

  • Qubits need to be put into a superposition
  • Qubits should be able to interact with each other
  • Qubits should be able to store data and allow readout of the data

Quantum computers also typically demonstrate the following features:

  • They tend to operate at very low temperatures and are very sensitive to environmental noise
  • They tend to have short lifetimes – the reasons are explained below

We encode qubit states into subatomic particles – electrons, in the case of semiconductor quantum computers. There are several methods of creating qubits, and each method has advantages and disadvantages. The most common and stable type of qubit is created using a superconducting loop. A superconductor differs from a normal conductor in that there is no energy dissipation (no resistance) as current passes through it. Superconducting circuits operate close to absolute zero (that is, 0 Kelvin, or -273 degrees Celsius) in order to maintain the states of their electrons.

Another qubit architecture, in which transistor-based classical circuits are used, is the SQUID. SQUID stands for Superconducting QUantum Interference Device. SQUIDs are used to track and measure very weak signals – signals that create changes in energy as much as 100 billion times weaker than the energy needed to move a compass needle. They are made of Josephson junctions. One of the key application areas for SQUIDs is measuring magnetic fields in human brain imaging. Source: https://whatis.techtarget.com/definition/superconducting-quantum-interference-device

Superconducting qubits (in the form of SQUIDs) have pairs of electrons, called Cooper pairs, as their charge carriers. In this architecture, transistor-based classical circuits use voltage to manage electron behavior, and the quantum electrical circuit is defined by a wave function. SQUIDs are termed artificial atoms, and lasers are used to change the state of these atoms. As described earlier in this chapter, based on the principles of quantum mechanics, only light of a specific frequency can change the state of a subatomic particle. Therefore, the lasers used to change the state of qubits have to be tuned to the transition frequency of the qubits.

A superconducting qubit can be constructed from a simple circuit consisting of a capacitor, an inductor, and a microwave source to set the qubit in superposition. However, there are several improvements to this simple design, and the replacement of the common inductor with a Josephson junction is a major upgrade. Josephson junctions are non-linear inductors, which makes the energy spectrum non-equally spaced and allows the two lowest energy levels to be singled out. These two levels form a qubit for quantum information processing. This selection of the two lowest energy levels is an important criterion in the design of qubit circuits: without the Josephson junction, the energy levels are equally spaced, which is not practical for qubits. Source: https://web.physics.ucsb.edu/~martinisgroup/classnotes/finland/LesHouchesJunctionPhysics.pdf

Like the gates in classical computers, quantum computers also have gates. However, a quantum gate is reversible. A common quantum gate is the Hadamard (H) gate, which acts on a single qubit and triggers the transition from its base state to a superposition.
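
As a minimal sketch using the standard 2×2 matrix form of the gate, here is the Hadamard gate applied to the |0⟩ state; applying it twice also demonstrates reversibility:

```python
import numpy as np

# Hadamard gate: maps |0> to (|0> + |1>)/sqrt(2), an equal superposition.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])          # the |0> basis state
psi = H @ ket0                   # put the qubit into superposition
print(psi)                       # [0.7071, 0.7071] -> equal amplitudes

# Reversibility: applying H a second time returns the original state.
assert np.allclose(H @ psi, ket0)
```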

Qubit types and properties

There are several variations of qubit circuits, distinguished by the properties below. The key properties that need consideration in the design of these circuits are:

  • Pulse time: This is the time taken to put a qubit into superposition. The lower the pulse time, the better.
  • Dephasing time: This is the time for which a qubit retains its phase information in the presence of unwanted noise. The higher the dephasing time, the better; a shorter dephasing time means faster dissipation of information.
  • Error per gate: Gates are used to create transitions in the states of qubits; when a gate is faulty, the error can propagate onto qubits that were originally correct. Hence, error per gate needs to be measured regularly.
  • Decoherence time: This is the duration for which the state of the qubit can be maintained. Ionic qubits have the best coherence times, as they are known to hold their state for several minutes.
  • Sensitivity to environment: Although semiconductor qubits operate at very low temperatures, the sensitivity to the environment of the particles involved in the construction of the circuit is important. If the circuit is sensitive to the environment, the information stored in the qubit is easily corrupted.

Figure 3: Qubit circuits

IBM recently launched a 50-qubit machine and also provides cloud-hosted quantum infrastructure that programmers can code against. There are also several advances in quantum assembly languages, which act as the interface between these machines and the code that developers write. Figure 3 shows different qubit circuit types.

We've now covered the fundamentals of quantum computing, so let's move on to look at the other technology in focus for this book: Blockchain.

Blockchain and cryptography

Unlike quantum computing, Blockchain has had a relatively short history. If quantum computing is the Mo Farah of emerging technologies, Blockchain is the Usain Bolt. Several Blockchain properties have their roots in cryptography, and it is essential to understand some of the terminology in order to enjoy the rest of the chapter.

It is important to understand how Blockchain depends on cryptography. This will help us in subsequent chapters to understand how Blockchain and quantum computing could potentially collide in the future. A detailed, yet simplified, description of some key terms of Blockchain and cryptography follows.

Hashing

Hashing is a process where a collection of data is input into a function to get a fixed-length string – called a hash value – as an output. We use hashes every day. When you create an email ID with a password, the password goes through a hash function, a unique string is created, and this string is stored in the database of the email provider. When you try to log in again, the password you enter is put through the hashing algorithm, and the resulting string is matched against the string stored in the database. If they match, you get access to your email.
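
A minimal sketch of that login flow, using Python's standard hashlib module, follows (the password string is just an example, and real systems also add a per-user salt before hashing):

```python
import hashlib

# Signing up: hash the password and store only the fixed-length digest.
password = "correct horse battery staple"               # example input
stored = hashlib.sha256(password.encode()).hexdigest()  # 64 hex characters

# Logging in: hash the attempt and compare it with the stored digest.
attempt = "correct horse battery staple"
if hashlib.sha256(attempt.encode()).hexdigest() == stored:
    print("Access granted")   # the same input always yields the same hash
```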

Figure 4: An illustration of the transaction process for Bitcoin. Source: https://bitcoin.org/bitcoin.pdf

The bitcoin hash

The bitcoin system uses a function called Hashcash, a proof-of-work algorithm invented in 1997 by Adam Back. The bitcoin hash uses two additional parameters: a nonce and a counter. The nonce is just a random number that is added to the collection of data before it is fed into the hashing function. The hash created is thus a combination of the previous hash, the new transactions, and a nonce. The bitcoin system requires the hash value to start with a certain number of zeros, and the challenge of identifying the right hash value increases exponentially with the number of zeros. The counter parameter of the Hashcash function records the increments until the right hash value is arrived at.

Mining a bitcoin

The nodes in a bitcoin network work hard to find a hash value with the required number of leading zeros. They try different nonces to generate hashes until the right hash is found. This exercise takes a lot of computing power, and the node that finds the right nonce is rewarded with bitcoins.
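
The following toy sketch illustrates the nonce search (real bitcoin mining double-hashes a structured block header with SHA-256; the strings and difficulty here are simplifying assumptions of mine):

```python
import hashlib

def mine(previous_hash: str, transactions: str, difficulty: int):
    """Try successive nonces until the hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block = f"{previous_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("00000abc", "alice->bob:1BTC", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
# Each extra required (hex) zero multiplies the expected work by 16,
# which is why the difficulty grows so steeply.
```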

Determining the nonce that, when put through the hash function, results in a hash value meeting the difficulty level is called mining. The difficulty increases with the number of leading zeros required, and mining bitcoins has become harder over the years as more computing power is needed to determine the nonce. Only 21 million bitcoins will ever be produced, and at the time of writing this book, about 17.5 million bitcoins have been mined. The reward for mining a block stands at 12.5 bitcoins, and about 144 blocks are mined per day. There are about 65,000 more blocks to be mined before the mining reward halves again to 6.25 bitcoins – at 144 blocks per day, roughly 15 months away.

A block

A block is just a group of transactions validated together. If a bunch of transactions do not make it into a block in time, they are moved into the next block. The number of bitcoins rewarded for mining a block started at 50 and halves with every 210,000 blocks mined.
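
This halving schedule is easy to express in code. Here is a minimal sketch (the helper name is my own):

```python
def block_reward(height: int) -> float:
    """Bitcoin block subsidy: 50 BTC at launch, halving every 210,000 blocks."""
    halvings = height // 210_000
    return 50.0 / (2 ** halvings)

print(block_reward(0))        # 50.0 (blocks 0 - 209,999)
print(block_reward(210_000))  # 25.0
print(block_reward(420_000))  # 12.5 (the reward at the time of writing)
```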

Proof of work

The term proof of work was coined by Markus Jakobsson and Ari Juels in a paper published in 1999. Proof of work is used in the bitcoin system to ensure that transactions are validated through sheer computing power. Once a chain of blocks has been established through this method, hacking a block would also require an immense amount of computing power.

Also, in a proof-of-work system, the processing power a node has determines the control it has over the network. For example, in the bitcoin network, one CPU is equivalent to one vote, which can be exercised at the time of decision making.

Transactions

New transactions are broadcast to all nodes for validation. Transactions are collected into blocks, and nodes work on finding a proof of work for their blocks. When a node finds the proof of work, it broadcasts the block to all nodes, which accept the block only if all transactions in it are valid. The acceptance of a block results in the network starting to work on the next block.

Hacking a block would mean identifying a new nonce that redoes the work of not just one miner, but of all subsequent miners too. Also, when there are multiple chains of blocks, the network accepts the longest chain – the one representing the greatest amount of computing power required to create it.

Several of these concepts are quite fundamental to understanding how Blockchain networks work, and you should now be able to approach the topic of Blockchain with greater confidence. With that said, we'll now discuss another key concept of Blockchain: utility and security tokens. Understanding the differences between a security and a utility token has recently proven to be a conundrum for the global Blockchain community.

Utility versus security token

As solutions based on Blockchain started raising capital, the tokens they issued were broadly classified into two buckets: utility tokens and security tokens. A utility token is like loyalty points or digital coupons – it is needed to use an application, rather than being designed to distribute profits (or dividends) when a firm makes money.

On the other hand, a security token derives its value from an underlying asset. For example, a real estate fund could be tokenized, and the token can be traded. The value of the token is derived from the value of the real estate fund. In the same way, firms raising capital can issue tokens and investors would get a share of the company. This is effectively owning equity in the company and is classified as a security token.

While I have made it sound as though utility and security tokens are mutually exclusive, they are often not. For instance, Ether (Ethereum's token) is more of a utility than a security, as the token is used across the ecosystem in applications and largely derives its value from demand for it. The SEC has a simple methodology to identify whether a token is a security, since security tokens fall under its regulatory umbrella. It is called the Howey test.

The Howey test gets its name from a Supreme Court decision in 1946: SEC v. W.J. Howey Co. Howey Co. offered service contracts for producing, harvesting, and marketing orange crops in Lake County, Florida. The company sold land and these service contracts to tourists who stayed at a hotel it owned. The court was asked whether the land purchase plus the service contract created an investment contract. The court agreed that it did, and the Howey test was born.

As per the Howey test, a transaction would be an investment contract (and therefore a security) if:

  1. It is an investment of money
  2. There is an expectation of profits from the investment
  3. The investment of money is in a common enterprise
  4. Any profit comes from the efforts of a promoter or third party

Let's take the Ethereum crowdsale of 2014 as an example. Money was invested (albeit in bitcoins), and the investment was made – at least by some – with the view that the tokens would increase in value over time and could be cashed out at a profit. The capital was pooled by investors in a scheme, which the SEC views as a common enterprise. And the increase in Ether's value was expected to come from the work of Vitalik and company. Therefore, Ether should be a security, as per the Howey test.

Given the way the Ethereum crowdsale happened in 2014, it is easy to categorize Ether as a security token. However, Ethereum is now the oxygen of a big community of applications. As a result, we can say that Ether is an example of a token that initially raised capital like a security but, due to the way the firm and the technology have evolved, is more of a utility today. Ethereum is decentralized due to its community and no longer thrives just on the initial founders of the firm.

Recently, I was part of a round table discussing the challenge of categorizing tokens as utility or security. I would describe it as a progress bar: at one end is the security token, and at the other end is the utility token. Depending on how a token derives its value and how it is used by the community, it moves closer to one end of the bar or the other. Security versus utility should not be seen as binary states of tokens.

We have discussed the background of quantum computers and touched upon some interesting Blockchain concepts too. The idea is to use these basics as building blocks before moving on to real-world applications across industries in future chapters. Cryptography is fundamental to both of these technologies. Does that mean quantum computing makes Blockchain obsolete? We'll touch upon that question in future chapters.

Conclusion

The journey of Bohr, Einstein, Alan Turing, and several others almost a century ago has now led to the invention of quantum computers. The hype and the headlines in this field are getting bigger every day. However, mass industry adoption of this technology is several years (if not decades) away. In this chapter, I wanted to take the reader through that journey and introduce the key people, principles, events, and technology components of quantum computing.

It is important to understand why qubits are different from the bits that today's computing world largely relies on. This chapter has provided the grounding in quantum methods for the real-world applications that we will touch upon in future chapters. Applications of optical quantum computers that use photons will also be discussed in a subsequent chapter.

We briefly touched upon Blockchain and the use of cryptography. This is also critical, so that we can see the technological overlap between the two technologies. It is essential that the Blockchain community views this overlap as an opportunity rather than a major roadblock. I firmly believe that both these technologies are here to stay, and definitely here to enrich our lives by complementing each other across industries.

There are several practical applications of quantum computers across industries, including healthcare, logistics, finance, and cybersecurity in general. We will cover these in detail in this book.
