21

Future Trends, Forecasting, the Age of Adaptation and More Transformative Transforms

21.1 Future Forecasts

The ongoing narrative of the past twenty chapters has been that forecasting is essentially a process of looking behind and side to side. Behind means analysing past technical and commercial success and failure, but also whether past projections and predictions were right or wrong and, in either case, why. Side to side means analysing and comparing all device options, all delivery options (including guided and unguided media) and all network options in terms of technical and commercial efficiency, including energy and environmental efficiency.

Our thesis throughout is that if a process or product is technically inefficient it is less likely to be commercially efficient, although I have had some wry comments about Microsoft having broken this rule (and yes, this is the second time this morning that my computer has crashed). In network terms this means that bandwidth functionality, and the value realisable from that functionality, must always exceed network cost amortised over the life of the network.

We have drawn on our own past work in terms of prior published books and articles and research undertaken over the past twenty five years and the advice and guidance and suggestions of many friends and colleagues in the industry.

In this last chapter we take a final look at our research over the last five years and, dangerously and presumptuously, use this to make some predictions for the next five and fifty years forward.

In February 2006 we wrote a piece about competitive networks. We suggested that future networks would need to develop new mechanisms for capturing user bandwidth and user value and that the most successful networks would be those that most successfully competed for billable revenue – aggressively competitive networks.

Competitive networks were defined as intelligent networks with ambition. They could also be described as ‘smart networks’ using ‘smart’ in the contemporary sense of a network having a developed ability to make money. Competitive networks may have other obligations, which may include social and political gain. Smart competitive networks are efficient at translating social and political gain into economic advantage. Economic gain is the product of a composite of cost and functional efficiency and added value.

Added value is delivered through service provision and billing, but is dependent on a broad mix of enabling technologies and enabling techniques that evolve over time. This process of technology evolution is managed within a standards-making process that in itself determines the rate at which new technologies are deployed.

The standards-making process also introduces a cyclical pattern of technology maturation that is not naturally present in the evolutionary process, which is largely linear. This produces market distortions that reduce rather than enhance added value.

An understanding of the evolutionary process is therefore a useful precondition for identifying technologies that deliver long-term competitive advantage and provides an insight into how standards making could become, and probably will become, more productive over time.

21.2 The Contribution of Charles Darwin to the Theory of Network Evolution

To study the evolutionary process, where better to start than Charles Darwin. Inspired by, amongst others, the famous botanist, geologist, geographer and ‘scientific traveller’ the Baron von Humboldt (after whom the Humboldt Current is named), Darwin spent five years (1831–1836) on HMS Beagle observing the flora and fauna of South America.

From these observations Darwin developed his theories of natural selection.

In parallel, similar studies on similar expeditions to the Amazon were leading Alfred Russel Wallace to develop and promote similar theories, which in turn prompted Darwin (in 1859) to publish ‘On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life’.

A summation of Darwin's theory:

Evolution exists.

Evolution is gradual.

The primary mechanism for evolution is natural selection.

Evolution and natural selection occur through a process of ongoing specialisation.

So, what relevance do Darwin and his fellow botanist and naturalist colleagues have to network design and network evolution?

Well, to paraphrase,

The survival (or success) of each organism (network) is determined by that organism's (network's) ability to adapt to its environment.

The fittest (most power and bandwidth-efficient networks) win out at the expense of their rivals (other networks) because they succeed in adapting themselves best to their environment.

21.3 Famous Mostly Bearded Botanists and Their Role in Network Design – The Dynamics of Adaptation

Darwin suggested that the process of evolution and adaptation occurs over millions of years. One way or another humans seem to have used those millions of years to achieve a competitive advantage over (most) other species. This competitive advantage is based on energy efficiency (output available for a given calorific intake), observational abilities (sight, sound, smell) and the ability to use observed information intelligently to avoid danger or to exploit locally available opportunities.

To us, the process of long-term adaptation is unnoticeable due to the time scales involved. The process appears to us to be static. However, part of the process of successful adaptation over time is to become more adaptive, particularly in the way in which we conserve energy, the way in which we manage to have energy available to support short concentrated bursts of activity and the way in which we couple these processes to observed information. Short-term adaptation functions are noticeable. We can experience them and measure their extent and effect. We can recognise and observe the dynamic nature of the process.

In network and device design, adaptation is an already important mechanism for achieving power and bandwidth efficiency and is becoming more important over time. As such, it is useful to study how adaptation works in an engineering context but using several million years of biological and botanical experience and several hundred years of biological and botanical study and analysis as a reference point.

21.4 Adaptation, Scaling and Context

We can describe almost any process in an end-to-end delivery channel in terms of three interrelated functions – adaptation, scaling and context:

i. Adaptation is the ability of a system, or part of a system, to respond to a changed requirement or changed condition.

ii. Scale (scaling) is the expression of the order of magnitude over which the adaptation takes place (the range) and may also encompass the rate and resolution of the response.

iii. Context is the amount of observable information available in order for the system itself or some external function to take a decision on the adaptation process. The accuracy of the observation process and the ability of the system to interpret and act on the observed information will have a direct impact on overall system efficiency.

The function of the human heart provides a biological example. Our heart rate and blood pressure increase when we run. Our heart rate and blood pressure also increase when we use our observation system (sight, sound, smell, touch and sense of vibration) to perceive potential danger and/or local opportunity. As such, the process of adaptation is both reactive and proactive.

The scale in this example ranges from the lowest to highest heart rate. The context is either actual (we have started to run) or predictive (we know we might have to run). Human observation systems in themselves are adaptive. For example, our eyes can adapt to light intensities ranging from small fractions of a lux to over 100 000 lux. This is the scale of the adaptation process. The context, in this example, is the ambient light level.
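As an illustration only, the three functions can be captured in a few lines of code. The sketch below uses our own notation rather than anything drawn from a particular system: it models an adaptive function in terms of its scale (range and resolution) and the context value it responds to, with the eye's response to ambient light as the worked example.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveFunction:
    """Illustrative model of the adaptation/scaling/context framework."""
    name: str
    scale_min: float      # lower bound of the adaptation range
    scale_max: float      # upper bound of the adaptation range
    resolution: float     # smallest step the response can take

    def adapt(self, context: float) -> float:
        """Quantise the observed context to the resolution of the response,
        then clamp it to the supported range (the scale)."""
        steps = round((context - self.scale_min) / self.resolution)
        value = self.scale_min + steps * self.resolution
        return min(max(value, self.scale_min), self.scale_max)

# The eye example from the text: ambient light (the context) spans a
# small fraction of a lux to over 100 000 lux (the scale).
eye = AdaptiveFunction("human eye", scale_min=0.01, scale_max=100_000.0, resolution=1.0)
print(eye.adapt(250_000.0))   # saturates at the top of the scale
```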

21.5 Examples of Adaptation in Existing Semiconductor Solutions

Many present semiconductor solutions used in cellular phones, consumer and professional electronics products and network devices use adaptation to decrease power drain. In this context, adaptation is based on the ability to change the clock speed (heart rate) and/or voltage (blood pressure) of the system when presented with specific processing tasks (the context).

The scale is the highest to lowest clock rate and highest to lowest voltage. The rate of response (milliseconds or microseconds) and accuracy of response will directly impact overall system efficiency. There may also be additional ‘policy’ issues to consider, for example the charge state of the battery and external or internal operating temperatures.
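A minimal sketch of such a policy, assuming a purely hypothetical table of clock/voltage operating points, shows how the offered load (the context) and policy inputs such as battery state and temperature might combine to select an operating point.

```python
# Hypothetical operating points: (clock MHz, core voltage V) - illustrative only.
OPERATING_POINTS = [(104, 0.9), (208, 1.0), (416, 1.1), (624, 1.2)]

def select_operating_point(load: float, battery_pct: float, temp_c: float):
    """Pick the lowest clock/voltage pair that can carry the offered load
    (0.0-1.0 of peak), then apply simple policy caps for a low battery
    or a hot die."""
    max_clock = OPERATING_POINTS[-1][0]
    required = load * max_clock
    candidates = [op for op in OPERATING_POINTS if op[0] >= required] or [OPERATING_POINTS[-1]]
    clock, volts = candidates[0]

    # Policy overrides: cap the operating point rather than track load.
    if battery_pct < 10 or temp_c > 85:
        clock, volts = min((clock, volts), OPERATING_POINTS[1])
    return clock, volts

print(select_operating_point(load=0.3, battery_pct=80, temp_c=40))   # (208, 1.0)
print(select_operating_point(load=0.9, battery_pct=5, temp_c=40))    # capped to (208, 1.0)
```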

Peripheral devices such as displays (one of the largest power-consuming items in most present appliances), also use adaptation by responding to changes in ambient light level or by decreasing frame rates, resolution and colour depth in response to a changed operational requirement.

21.6 Examples of Adaptation in Present Mobile Broadband Systems

Adaptation has always been an inherent part of cellular network design. First-generation analogue cellular networks used receive signal strength measurements to determine power and edge of cell handover thresholds (an intelligent use of observed information).

Second-generation networks made the handset work harder: the handset compiles a measurement report that is a composite of received signal level and received signal quality (bit error rate and bit error probability). This is sent to the base station controller, which in turn makes power control and handover decisions. The handset is instructed to look at up to six base stations, its own present serving base station and five ‘neighbour’ handover candidates, though there is support for more extended measurement reporting (up to ten base stations) in more recent releases. Power control is implemented within a relatively relaxed duty cycle of 480 milliseconds, though optional enhanced (120 millisecond) and fast (20 millisecond) power control is now supported.

So in these examples, the adaptation process is the combined function of power control and handover. The scale of power control is typically about 25 dB (first generation) or 30 to 35 dB in second-generation systems, the rate is either 480 milliseconds, 120 milliseconds or 20 milliseconds and the resolution is typically half a dB. The context is provided by the measurement report. Note that overall system efficiency is determined directly by the accuracy of the measurement report and the ability of the power control and handover algorithms to interpret and act on the observed information – this is algorithmic value.
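The closed-loop part of this process can be sketched in a few lines; the step size, power limits and target figure below are illustrative rather than taken from any specific standard.

```python
def power_control_step(current_dbm: float, report_db: float, target_db: float,
                       step_db: float = 0.5, p_min: float = 5.0, p_max: float = 33.0) -> float:
    """One iteration of a closed power-control loop: raise transmit power by one
    resolution step if the reported link quality is below target, lower it if
    above, and stay within the dynamic range (the scale)."""
    if report_db < target_db:
        current_dbm += step_db
    elif report_db > target_db:
        current_dbm -= step_db
    return min(max(current_dbm, p_min), p_max)

# Run the loop once per reporting period (for example every 480, 120 or 20 ms).
tx = 20.0
for measured in (6.0, 7.5, 9.0, 10.5):      # hypothetical reported link quality values
    tx = power_control_step(tx, measured, target_db=9.0)
print(tx)   # 20.5 dBm after four reporting periods
```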

Release 99 WCDMA introduced more aggressive power control (an outer loop power control every 10 milliseconds and an inner power control loop running at 1500 Hz), a substantially greater dynamic range of power control (80 dB) and the ability to change data rate and channel coding at ten-millisecond intervals.

HSDPA simplifies power control but allows data rates and coding and modulation to be changed initially at 2-millisecond intervals and in the longer term at half-millisecond intervals. LTE takes this process a step further and can opportunistically map symbols to OFDM subcarriers enjoying favourable propagation conditions.

The context in which the data rate and channel coding and symbol mapping decisions are taken is based on a set of channel quality indicators (CQIs). This is a composite value (one of 30 possible values) that indicates the maximum amount of data the handset thinks it should be able to receive and send, taking into consideration current channel conditions and its design capabilities (how many uplink and downlink multicodes and/or OFDM subcarriers and modulation schemes it supports).
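The way a scheduler might act on a CQI report can be sketched as a simple lookup. The mapping below is invented for illustration only and is not the table defined in the 3GPP specifications, which varies with handset category.

```python
# Hypothetical CQI-to-transport-format mapping, for illustration only.
CQI_TABLE = {
    range(1, 6):   ("QPSK",  1, 0.33),   # modulation, multicodes, coding rate
    range(6, 16):  ("QPSK",  3, 0.50),
    range(16, 26): ("16QAM", 5, 0.66),
    range(26, 31): ("16QAM", 10, 0.75),
}

def transport_format(cqi: int):
    """Return the modulation, number of multicodes and coding rate a scheduler
    might grant for a reported channel quality indicator."""
    for cqi_range, fmt in CQI_TABLE.items():
        if cqi in cqi_range:
            return fmt
    return ("QPSK", 1, 0.33)   # fall back to the most robust format

print(transport_format(22))   # ('16QAM', 5, 0.66)
```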

There are, however, many additional contextual conditions that determine admission policy. These include fairly obviously the level of service to which the user has subscribed but also the local loading on the network. Admission policy may also be determined by whether sufficient storage, buffer or server bandwidth is available to support the application.

So, we have a 30-year example of evolution in cellular network design in which step-function changes have been made that deliver more adaptability over a wide dynamic range (scale) based on increasingly complex contextual information. It is this process of increasingly aggressive adaptation that has realised a progressive increase in power and bandwidth efficiency that in turn has translated into lower costs that in turn have supported lower tariffs that in turn have driven traffic volume and value.

21.7 Examples of Adaptation in Future Semiconductor Solutions

There is still substantial optimisation potential in voltage and frequency scaling. Present frequency scaling has a latency of a few microseconds and voltage scaling a latency of a few tens of microseconds but the extraction of efficiency benefits from scaling algorithms is dependent on policy management.

Scaling policy has to take into account instantaneous processor load and preferably be capable of predicting future load (analogous to our heart rate increasing in anticipation of a potential need for more energy). Proactive rather than reactive algorithms can and will deliver significant gains particularly when used with multiprocessor cores that can perform load balancing. These are sometimes known as ‘balanced bandwidth’ processor platforms. Note this is an example of algorithmic value.
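A proactive policy can be sketched, for illustration only, as a predictor that scales for where the load is heading rather than where it has been. The smoothing constant and trend term below are assumptions, not a reference design.

```python
class ProactiveGovernor:
    """Sketch of a proactive scaling policy: predict the next load sample with an
    exponential moving average plus a trend term, then scale for the prediction
    rather than the last observation."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha
        self.ema = 0.0
        self.prev = 0.0

    def predict(self, load: float) -> float:
        trend = load - self.prev
        self.prev = load
        self.ema = self.alpha * load + (1 - self.alpha) * self.ema
        return min(1.0, max(0.0, self.ema + trend))

gov = ProactiveGovernor()
for sample in (0.2, 0.3, 0.5, 0.8):         # rising load ramp
    print(round(gov.predict(sample), 2))    # predictions track ahead of the ramp
```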

21.8 Examples of Adaptation in Future Cellular Networks

Developments for near-future deployment include adaptive source coding, ‘blended bandwidth’ radio schemes that use multiple simultaneous radio bearers to multiplex multimedia traffic, and ‘balanced bandwidth’ IP RAN and IP core network architectures that adaptively manage buffer bandwidth, storage bandwidth and server bandwidth to support bandwidth efficient multiservice network platforms.

Adaptive source coding includes adaptive multirate vocoders, AAC/AAC Plus- and MP3Pro-based audio encoders, JPEG image encoders and MPEG video encoders, wavelet-based JPEG2000 encoders and MPEG-4 Part 10 scalable video coders. The realisation of the power and bandwidth efficiency potential of these encoding techniques is dependent on an aggressively accurate analysis of complex contextual information.

‘Blended bandwidth’ includes the potential efficiency benefits achievable from actively sharing bandwidth between wide-area, local-area and personal-area radio systems. Admission control algorithms can be based on channel availability, channel quality and channel ‘cost’ that in an ideal world could be reflected in a ‘blended tariff’ structure to maximise revenues and margin in combination with a more consistent (and hence higher value) user proposition.
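An admission control decision of this kind might, schematically, look like the sketch below; the bearer names, quality scores and ‘costs’ are placeholders for whatever metrics a real blended-bandwidth scheme would use.

```python
def select_bearer(bearers, min_quality: float):
    """Admission control sketch for a 'blended bandwidth' scheme: from the radio
    bearers currently available, admit the cheapest one whose reported channel
    quality meets the application's minimum requirement."""
    usable = [b for b in bearers if b["available"] and b["quality"] >= min_quality]
    return min(usable, key=lambda b: b["cost"]) if usable else None

# Hypothetical wide-area / local-area / personal-area candidates.
candidates = [
    {"name": "wide-area HSPA", "available": True, "quality": 0.7, "cost": 1.0},
    {"name": "local-area WiFi", "available": True, "quality": 0.9, "cost": 0.2},
    {"name": "personal-area Bluetooth", "available": False, "quality": 0.9, "cost": 0.1},
]
print(select_bearer(candidates, min_quality=0.8)["name"])   # local-area WiFi
```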

When combined with other downlink delivery options such as DAB and DVB, these ‘blended bandwidth’ offerings are sometimes described in the technical literature as ‘cooperative networks’.

The technical and potential cost benefits of such schemes are compelling but adoption is dependent on the resolution of conflicting commercial objectives. The fiscal aims of cooperation and competition rarely coincide unless a certain amount of coercion is used.

‘Balanced bandwidth’ essentially involves achieving a network balance between radio access bandwidth, buffer bandwidth in the end-to-end channel, (including IP RAN and IP core memory bandwidth distribution), persistent storage bandwidth and server bandwidth and being able to manage this bandwidth mix proactively to match rapidly changing traffic loading and user-application requirements.

21.8.1 How Super Phones Contribute to Competitive Network Value

We have already stated that existing phones and mobile broadband devices play a key role in collecting the contextual information needed to support power control and handover and admission control algorithms.

Power control, handover and admission control algorithms are needed to deliver power efficiency (which translates into more offered traffic per battery charge) and spectral efficiency (which translates into a higher return on investment per MHz of allocated or auctioned spectrum).

21.8.2 Phones are the Eyes and Ears of the Network

However, we also suggested that there are three categories of phone: standard phones, smart phones and super phones. Standard phones are voice and text dominant and change the way we relate to one another. Smart phones aspire to change the way we organise our work and social lives. Super phones change the way we relate to the physical world around us.

We studied the extended image and audio capture capabilities of super phones and the ways in which these capabilities could be combined with ever more accurate macro- and micropositioning information.

Macropositioning information is available from existing satellite systems, for example GPS with its recently upgraded higher-power L2C signal and, at some stage, Galileo with its optimised European coverage footprint. Macropositioning is also available from terrestrial systems (observed time difference) and from hybrid satellite and terrestrial systems.

Micropositioning is available from a new generation of multiaxis low-G accelerometers and digital compass devices that together can be used to identify how a phone is being held and the direction in which it is pointed.
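As a rough sketch (ignoring the tilt compensation and sensor calibration a real implementation would need), pitch and roll can be derived from the accelerometer's gravity vector and a compass heading from the horizontal magnetometer components.

```python
import math

def orientation(ax, ay, az, mx, my):
    """Micropositioning sketch: derive pitch and roll from the gravity vector
    reported by a 3-axis accelerometer, and a compass heading from the horizontal
    magnetometer components (assumes the device is held roughly level, so no
    tilt compensation is applied)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    heading = (math.degrees(math.atan2(my, mx)) + 360.0) % 360.0
    return pitch, roll, heading

# Device lying flat and pointing magnetic north (illustrative sensor values).
print(orientation(ax=0.0, ay=0.0, az=9.81, mx=30.0, my=0.0))   # (0.0, 0.0, 0.0)
```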

This moves competitive networks onto new territory. The eyes and ears of the network see more and hear more than ever before. The network has a precise knowledge of where users are, what they are doing and where they are going (direction and speed).

In simple terms, this knowledge can be used to optimise handover and admission control algorithms. More fundamentally, the additional contextual information becoming available potentially transforms both the adaptability and scalability of the network and service proposition. This has to be the basis of future mobility value in which handsets help us to relate to the physical world around us and networks help us to move through the physical world around us.

21.8.3 And Finally Back to Botany and Biology

Biological evolution may from time to time appear as nonlinear. The Cambrian Explosion, 543 to 490 million years ago (when most of the major groups of animals first appear in the fossil record), is often cited as an example of nonlinear evolution.

Technology evolution may also appear from time to time to be nonlinear. The invention of the steam engine or transistor for example might be considered as inflection points that changed the rate of progress in specific areas of applied technology (the industrial revolution, the birth of modern electronics). This apparent nonlinearity is, however, a product of scaling and disappears if a longer time frame is used as a benchmark of continuing progress.

The essence of competition, however, remains relatively constant over time.

Have humans become more adaptive over time, have they adapted by becoming more adaptive? Possibly. Certainly we have become more adept at exploiting context partly due to our ability to accumulate and record and analyse the knowledge and experience of prior and present generations. And this is the basis upon which competitive networks will capture and deliver value to future users.

Network value is increasingly based on knowledge value, but knowledge value can only be realised if access efficiency can be improved over time. Access efficiency in mobile broadband networks is a composite of power and bandwidth efficiency (the blended bandwidth proposition) and the efficiency of the access, admission, storage and server algorithms that anticipate rather than respond to our needs.

The asset value of cellular and mobile broadband networks has traditionally been denominated in terms of number of cell sites or MHz of allocated or auctioned bandwidth and number of subscribers. This still remains valid but these are not inherently competitive networks.

Competitive networks are networks that exploit accumulated knowledge and experience from present and past subscribers to build new value propositions that deliver future competitive advantage. This in turn increases network asset value. Efficient competitive networks are networks that combine access efficiency with power and bandwidth efficiency. An intelligent network with ambition and ability, a network that adapts over time by becoming more adaptive.

21.9 Specialisation

But in addition, networks need to find some way of differentiating themselves from one another, creating ‘distance’ in the service proposition. This is where Darwin's (and Wallace's) theories of specialisation begin to have relevance. The services required from a network in the Amazon, the Malay archipelago and the Galapagos Islands are different from the services required in Manhattan, Maidenhead and Manchuria. Radio and TV stations are increasingly specialised in terms of regionalised and localised content provision. Cellular and mobile broadband networks will need to develop similar techniques in terms of their approach to specialist regionalised and localised network service platforms. As we said earlier, smart competitive networks are networks that are efficient at translating social and political gain into economic advantage. This implies an ability to respond to extremely parochial geographic and demographic interests. A bee in Mongolia might be outwardly similar to a bee in Biggleswade but will have different local interests and requirements. A ‘one size fits all’ network proposition becomes increasingly less attractive over time. Chapter 19 used the Nokia OVI platform as an example of the competitive advantage achievable from localisation.

21.10 The Role of Standards Making

We argued the case that technology evolution is to all intents and purposes a linear process. It may appear from time to time to be nonlinear, but this is either due to an issue of scaling (not studying the evolutionary process over a sufficiently long time scale) or due to distortions introduced by the standards-making process.

The answer is of course to make the standards process more adaptive and to avoid artificially managed step-function generational changes. Of course the counterargument here is that artificially managed change can be exploited to deliver selective competitive advantage. This is, however, a manipulative process and manipulative processes tend to yield relatively short-term gains, the danger of a Pyrrhic victory. The destruction of the potential market for ultrawideband devices through a standards dispute is one example.

21.11 The Need for a Common Language

Value can also be destroyed through misunderstanding and poor advocacy. Poor advocacy can be particularly expensive if the result is decision making based on inaccurate or incomplete or poorly presented technical facts. As an antidote to this we recommend a short consideration of the work of Dr Ludwig Zamenhof, founder of the Esperanto movement.

Esperanto is often dismissed as a marginal curiosity with little relevance to the modern world. Dr Zamenhof had a minor planet named after him by Yrjö Väisälä, the Finnish astronomer and physicist, and is considered a god by the Omoto religion. Aside from these distractions, his work on a universal language has real relevance to engineers and engineering. For example, the Chinese Academy of Science conducts a biannual international conference on Science and Technology in Esperanto and publishes a quarterly journal, ‘Tutmondaj Sciencoj kaj Teknikoj’.

However, we are not on a mission to preach the virtues of Esperanto to engineers. More specifically, Esperanto has practical relevance to our own area of interest, user equipment and network design.

Esperanto is an example of a language that is both efficient and precise in the way that it describes the world around us. A language optimised for logical thought and analysis. For most of us, learning Esperanto is never going to be a high priority, even if it should be. We can, however, learn lessons from Esperanto in terms of how we approach decision making in network design.

Decision making is dependent on the accuracy with which a specific problem or set of choices is described. We suggest a number of ‘descriptive domains’ that can be applied to help clarify some of these choices. We show how ‘descriptive domains’ can be used to validate R and D resource allocation and partitioning and integration decisions.

21.11.1 Esperanto for Devices

The principles of Esperanto are applicable for human-to-human and device-to-device communication. New functionality in user equipment is often introduced on discrete components. Audio integrated circuits, voice and speech recognition, micropositioning MEMS or GPS integrated circuits are recent examples of new ‘real estate’ introduced as additional add-in components.

This places new demands on packaging and interconnect technology and creates a need for commonality (a common language) both in terms of physical (compatible pin count and connector) and logical (bus architecture) connectivity. The reason for the discrete approach is usually performance and/or risk related. The new function can initially be made to work more efficiently on a discrete IC and engineers responsible for other functional areas feel safer if the new functionality is ring fenced within its own dedicated physical and logical space.

Standards initiatives such as the Mobile Industry Processor Interface (MIPI) initiative1 are then needed to ensure at least a basic level of compatibility between different vendor solutions. This process is neither particularly efficient nor effective and is generally a consequence of interdiscipline communication issues.

21.11.2 Esperanto for Engineers

This in turn is a language problem. Partly this has been solved by the use of English as a common language, though English is neither precise nor efficient. The problem is, however, not just one of spoken language but the descriptive language used.

Software engineers speak a different ‘language’ from hardware engineers, DSP engineers speak a different ‘language’ from RF engineers, imaging system engineers speak a different ‘language’ from audio engineers who speak a different ‘language’ from radio system engineers, who speak a different ‘language’ from micro- and macropositioning engineers, who speak a different ‘language’ from mechanical design engineers.

Silicon design engineers speak a different ‘language’ from handset design engineers who speak a different ‘language’ from network design engineers who speak a different ‘language’ from IT engineers. Engineers speak a different ‘language’ from product marketing and sales who speak a different ‘language’ from business modelling specialists who speak a different ‘language’ from lawyers and accountants who make the world go round. This is frustrating as, in reality, all these ‘communities of interest and specialist expertise’ have more in common than seems initially apparent. ‘Descriptive domains’ provide a route to resolving these interdisciplinary communication issues.

21.12 A Definition of Descriptive Domains

A descriptive domain is simply a mechanism for describing form and function. The ‘analogue domain’ and the ‘digital domain’ provide two examples. The analogue domain can be widely understood in a modern context as a set of continuously variable values. In a linear system, the output should be directly proportional to the input.

This is the ‘form’ of the domain; its defining characteristic. The ‘function’ of the domain is to provide a generic method for describing the physical world around us. Light, sound and gravity and of course radio waves are all in the analogue domain. The ‘form’ of the digital domain is defined as the process of describing data sets as a series of distinct and discrete values. The ‘function’ of the digital domain is to provide a generic method for describing analogue signals as discrete and distinct values.
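The ‘function’ of the digital domain can be made concrete in a few lines of code: a uniform quantiser that describes a continuously variable input as one of a set of distinct, discrete values. The converter resolution and full-scale figures below are illustrative only.

```python
import math

def digitise(analogue_value: float, full_scale: float, bits: int) -> int:
    """Describe a continuously variable (analogue) value as one of a set of
    distinct, discrete (digital) values: uniform quantisation to 'bits' of
    resolution over a 0..full_scale range."""
    levels = 2 ** bits
    code = math.floor(analogue_value / full_scale * levels)
    return min(max(code, 0), levels - 1)

# A 1.65 V signal on a 3.3 V, 8-bit converter (illustrative figures).
print(digitise(1.65, full_scale=3.3, bits=8))   # mid-scale maps to code 128
```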

Most people are comfortable with these descriptions but do not necessarily understand some of the practical implications of working in either domain. For example, an engineer might need to describe the fiscal merits of digital processing in terms that are accessible to an accounting discipline. This suggests the need to add ‘cost’ and ‘value’ to the descriptive domain. Let's test the validity of this on some practical examples.

21.12.1 Cost, Value and Descriptive Domains – A Description of Silicon Value

As with many technology-based industries, the telecommunications industry is built on a foundation of sand, also known as silicon. Silicon-based devices provide us with the capability to capture, process, filter, amplify, transmit, receive and reproduce complex analogue real-world signals. Silicon-based devices allow us to speak, send (data, audio, image and video), spend and store. Silicon value translates into software value that translates into system value that translates into spectral and network bandwidth value.

Decreasing device geometry delivers a bandwidth gain both in terms of volume and value. ‘Bandwidth value’ at the device level determines bandwidth value at the system and network level that, for cellular and mobile broadband networks, implies a return on spectral investment. By studying silicon value we determine how future software, system and spectral value will be realised.

As suggested in previous chapters, we can consider at least five value domains – radio systems, audio systems, positioning systems, imaging systems and data systems. These are illustrated in Table 21.1. Silicon vendors and silicon design teams who successfully integrate these five domains will be at a competitive advantage.

Table 21.1 Value domains


Earlier in this chapter we described the concept of blended bandwidth and balanced bandwidth as interrelated concepts that could be used to qualify radio access and network transport functionality and radio access and network transport value. Integration is the process of blending and balancing.

The same principle applies at the silicon level. First, define the domains and then qualify how these domains add individual and/or overall value. Use this to qualify partitioning and integration decisions and R and D resource decisions.

‘Blended bandwidth’ (the horizontal axis in Table 21.1) implies a need to integrate each of the five domains and within each of the five domains, to integrate individual subsystem functions. The amount of crossintegration determines the ‘breadth’ of the blended bandwidth proposition.

For example, ‘blended bandwidth’ in the ‘radio system domain’ means getting wide-area systems (for example, HSPA or LTE or EVDO) to work with broadcast wide-area, local-area WiFi and personal-area Bluetooth and/or NFC local-area connectivity.

In the ‘audio system domain’, ‘blended bandwidth’ involves a successful integration of voice encoder/decoder functionality with audio encoder/decoder functionality (AAC, AAC Plus, MP3Pro and Windows Media Audio). This includes functions such as voice and speech recognition.

In the ‘positioning system domain’, ‘blended bandwidth’ requires the integration of micropositioning systems (for example, MEMS-based low-G accelerometer devices) with macropositioning (GPS, A-GPS or observed time difference systems). Micro- and macropositioning systems have the capability of adding value to all other domains and should be considered as critical to the overall silicon value proposition.

In the ‘imaging system domain’, ‘blended bandwidth’ means the integration of image and video and the integration of imaging systems with all other system domains. Imaging systems include sensor arrays, image processing and display subsystems. MMS is an imaging domain function.

In the ‘data system domain’, ‘blended bandwidth’ involves making personal and corporate information systems transparent to all other domains. SMS Text is inherently a data domain function.

‘Balanced bandwidth’ is how much functionality you support in any one domain and in any one function within each domain – the ‘depth’ of the domain.

For example, in the radio system domain, the choice for wide-area system functionality will depend on the mix of technologies deployed and the bands into which they are deployed.

For local-area radio system functionality, the choice would be between 802.11a, b and g and related functional extensions (802.11n and MIMO).

In the personal area, the decision would be whether to support Bluetooth EDR and/or NFC.

In the audio system domain, the choice for voice would be which of the higher-rate voice codecs to support, the choice for audio would be which audio codec to support, other choices would be functional such as voice or speech recognition support, advanced noise cancellation, extended audio capture or advanced search and playback capabilities.

In the positioning domain, the choice is essentially rate, resolution and accuracy both in micro- and macropositioning. An improvement in any one of these metrics will imply additional processor loading.

In the imaging domain, the decision would be whether to support newer coding schemes such as JPEG2000 (imaging) or H.264/MPEG-4 Part 10 AVC/SVC or, in general, any of the emerging wavelet-based progressive rendering schemes.

In the data domain, decisions revolve around the amount of local and remote storage dedicated to personal and corporate data management systems and related data set management capabilities.
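One way, purely illustrative, to make the breadth/depth distinction concrete is to represent a design as a map of domains to supported functions and score it along both axes; the functionality lists below are invented examples.

```python
# Illustrative functionality map: domain -> functions supported in a design.
DESIGN = {
    "radio":       {"HSPA", "WiFi", "Bluetooth"},
    "audio":       {"AMR voice", "AAC Plus"},
    "positioning": {"GPS"},
    "imaging":     {"JPEG", "H.264"},
    "data":        {"SMS", "personal data sync"},
}

def breadth(design: dict) -> int:
    """'Blended bandwidth' breadth: how many of the five domains are integrated."""
    return sum(1 for functions in design.values() if functions)

def depth(design: dict, domain: str) -> int:
    """'Balanced bandwidth' depth: how much functionality one domain supports."""
    return len(design.get(domain, set()))

print(breadth(DESIGN))            # 5 domains integrated
print(depth(DESIGN, "radio"))     # 3 radio functions supported
```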

21.13 Testing the Model on Specific Applications

To be valid, we now need to show that the model has relevance when applied to specific applications. For example, a gaming application may be a composite of radio layer functionality, audio functionality, micro- and macropositioning functionality, imaging functionality and personal profiling (personal data set management). A camera phone application will already have wide-area radio access, may have enhanced audio, should probably have integrated positioning, will certainly have imaging and should have data functionality.

21.13.1 The Concept of a ‘Dominant Domain’

This leads us towards defining future handsets in terms of their ‘dominant domain’. The ‘dominant domain’ of an audio phone is the audio system domain with potentially all other domains adding complementary domain value. The dominant domain of a location device is the positioning domain with potentially all other domains adding domain value. The ‘dominant domain’ of a camera phone is the imaging system domain, with potentially all other domains adding complementary domain value. Within the imaging domain, the dominant functionality is image capture (the optical subsystem and sensor array).

The ‘dominant domain’ of a games phone is also the imaging domain but the dominant functionality within the domain is, arguably, the display subsystem and associated 2D and 3D rendering engines. The ‘dominant domain’ of a ‘business phone’ is the data domain, with the emphasis within the data domain on corporate information management. All other domains are, however, potential value added contributors to the overall system value of the dominant domain.

21.13.2 Dual Dominant-Domain Phones

The above examples are reasonably clear cut, but let's take for example an ultralow-cost phone.

The dominant domain of an ultralow-cost phone is the wide-area part of the radio system domain and the voice part of the audio domain plus some parts of other domains (text from the data domain function, possibly basic camera functionality from the imaging domain). Being pragmatic, it is sensible to describe a ULC phone as a ‘dual dominant domain device’.
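The dominant-domain idea lends itself to a simple classification sketch; the functional weightings and the ten per cent margin used below are assumptions chosen only to show how a dual dominant domain device would be reported.

```python
def dominant_domains(weights: dict, margin: float = 0.1):
    """Classify a device by its dominant domain(s): any domain whose share of
    total functional weight comes within 'margin' of the largest share."""
    total = sum(weights.values())
    shares = {d: w / total for d, w in weights.items()}
    top = max(shares.values())
    return sorted(d for d, s in shares.items() if top - s <= margin)

# Hypothetical functional weightings for an ultralow-cost phone.
ulc_phone = {"radio": 0.35, "audio": 0.35, "positioning": 0.0,
             "imaging": 0.1, "data": 0.2}
print(dominant_domains(ulc_phone))   # ['audio', 'radio'], a dual dominant domain device
```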

In Chapter 19 we described how Nokia is working on introducing richer functionality to these entry-level devices. Note that they still retain a market volume advantage so can add functionality more aggressively without necessarily compromising product margins.

21.14 Domain Value

Each individual domain has a value and cost. The cost is functional and physical and is a composite of processing load and occupied silicon real estate. Value is a composite of the realised price of the product plus incremental through-life revenue contribution as functionality is increased in any one domain.
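Expressed as a sketch, with entirely illustrative unit costs and prices, domain cost and domain value might be computed as follows.

```python
def domain_cost(processing_load_mips: float, silicon_area_mm2: float,
                cost_per_mips: float, cost_per_mm2: float) -> float:
    """Functional plus physical cost of one domain, as described in the text."""
    return processing_load_mips * cost_per_mips + silicon_area_mm2 * cost_per_mm2

def domain_value(realised_price: float, through_life_revenue: float) -> float:
    """Realised product price plus incremental through-life revenue contribution."""
    return realised_price + through_life_revenue

# Illustrative figures only.
cost = domain_cost(200, 4.0, cost_per_mips=0.002, cost_per_mm2=0.15)
print(round(domain_value(3.50, 1.20) / cost, 2))   # value/cost ratio for the domain
```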

Domain value is independent of partitioning or integration. Presently, individual domains may be on separate interconnected devices. For example, in the radio domain, Bluetooth has been historically separate from the mobile broadband RF and baseband and/or from embedded WiFi functionality. In the audio domain, speech codecs and audio codecs may be and often are, separate entities. Micropositioning is separate from macropositioning. Imaging is separate from video. For example, most camera phones have two cameras, one for still-image capture, one for lower-resolution video. The data domain may be distributed across several devices, for example, the application processor, the SIM card and so on.

This does not mean that domain value is not a valid approach for qualifying partitioning or integration decisions. Adding functionality to phones has increased the complexity of the decision process and implies a need for a level of interdomain understanding that is hard to achieve. Radio system specialists have not historically needed to know much about audio systems or positioning systems or imaging systems or data systems.

So the domain-value approach can be useful at engineering level either to work through the mechanics of partitioning or integration or to decide how much functionality to support in any one domain and/or to decide how to allocate R and D effort to achieve a maximum return. Similarly, the domain-value approach can help in developing future market models. Forecasting the future sales volume and value of voice phones, audio phones, location devices, camera phones and business phones and/or personal organisers and smart phones and tablets and dongles can become muddled by a lack of descriptive clarity.

Given that function determines form factor, domain-based functional descriptions provide a valid alternative approach allowing product families to be developed at the silicon and handset level with clearly differentiated functionally based cost and value metrics.

The same applies in market research. The telecoms industry, in our case, the wireless telecoms industry, is a classic example of a technology-driven rather than customer-driven industry.

This is altogether a good thing. However, in a technology-driven industry, ‘listening to your customer’ is probably the worst thing you can possibly do. Understanding your customer is, however, completely crucial. ‘Understanding’ in this context implies a quantitative understanding of the economic and emotional value of each of the domains and the perceived quality metrics of each domain.

21.15 Quantifying Domain-Specific Economic and Emotional Value

Such an approach is not particularly difficult and can be objectively based. For example, it is possible to quantify the economic and emotional value of wide-area radio system mobility and ubiquity and build models of how the value/cost metric changes as bandwidth and perceived quality increases over time.

It is possible to quantify audio system value on the basis of voice and audio value using well-defined and calibrated mean opinion score methodologies and to model how the value/cost metric changes as bandwidth and perceived quality increases over time. It is possible to quantify micro- and macropositioning value and to model how the value/cost metric changes as accuracy and fix speed increases over time.

It is possible to quantify imaging system value and to model how the value/cost metric changes as resolution and colour depth and perceived quality increase over time using well-defined and calibrated mean opinion score methodologies.

It is possible to quantify data system value both in terms of personal efficiency and corporate efficiency metrics. It might be argued that emotional value is hard to quantify. However, emotional value is part of the ‘engagement cycle’ that determines the ‘soft value’ (or ‘fondness’) that users feel towards particular service offerings. Intuitively, as emotional value increases, session length increases; a directly measurable metric.

Is domain value relevant to user-equipment design?

Yes. We have chosen silicon value as an example but it is equally applicable to handset research and design. It is a valid approach to developing handset technology policy over a forward three–five-year time scale.

Is domain value relevant to infrastructure and network design?

Certainly, all five value domains can be used to quantify cost metrics and value metrics in a radio network proposition. For example, radio-system costs and radio-system value, audio-system costs and audio-system value, positioning-system costs and positioning-system value, imaging-system costs and imaging-system value and data-system costs and data-system value can be and should be separately identified.

Is domain value relevant to content management?

Absolutely. Content has a direct impact on radio-system cost and radio-system value, audio value is an integral part of the content proposition but benefits from being separately identified as a cost and value component, positioning adds value to content, imaging is an integral part of the content proposition but benefits from being separately identified as a cost and value component, data systems are an integral part of the content cost and value proposition.

Note that audio, image and data costs are a composite of delivery and storage cost and delivery and storage value, both are usefully described in their individual domains.

Is domain value relevant to application value?

Yes. Chapter 20 provided examples of this.

Is it a universal model?

Yes. It is certainly universally useful as a mechanism or framework for getting engineering and marketing teams to work together on product definition projects. It provides an objective basis upon which R and D resource allocation can be judged and an objective basis for deciding on ‘hard to call’ partitioning and integration decisions.

The definition and development of user equipment with integrated radio system, audio system, positioning system, imaging and data system functionality demands particular interdiscipline design skills. Convergence increases rather than decreases the need for interprocess communication.

Making multimedia handsets work with multiservice networks introduces additional descriptive complexity and requires a closer coupling between traditionally separate engineering and marketing and business modelling disciplines. Developing consistent descriptive methodologies that capture engineering and business value metrics helps the interdiscipline communication process.

The challenge is to combine the language of engineering with the language of fiscal risk and opportunity. Descriptive domains provide a mechanism for promoting a more efficient and effective crossdiscipline dialogue – engineering Esperanto.

21.16 Differentiating Communications and Connectivity Value

Tom Standage in his book, The Victorian Internet,2 explains how the electric telegraph delivered ‘network value’ to the rapidly industrialising nations of the nineteenth century. In other work3 he identifies the role of coffee shops in the seventeenth and eighteenth centuries as centres for information distribution and social exchange. Customers were happy to pay for the coffee but expected information for free: free speech and free information as the basis for social, political and economic progress.

In the twentieth and twenty-first centuries, the same principles apply but the delivery options have, literally, broadened. Present economic growth can be at least partially ascribed to the parallel growth of connectivity bandwidth. Connectivity implies the ability to access, contribute and exchange information. Connectivity is a combination of voice-, audio-, text-, image-, video- and data-enabled communication (the internet) but crucially includes access to storage bandwidth (the World Wide Web and the cloud).

21.17 Defining Next-Generation Networks

So, noncontentiously we can say that next-generation networks are likely to be an evolution from present-generation networks and present-generation networks are, or will be, based on access to the internet and access via the internet to the World Wide Web.

This is straightforward. The problem is the user's expectation that access to this commodity (the sharing of human knowledge, experience, insight and opinion) is a basic human right analogous to the provision of sanitation and healthcare. There may, of course, be a valid argument that governments should provide internet access for free on the basis that the added tax income from the associated additional growth will provide a net economic, social and political gain.

Selective free access is of course already available, for example internet access in schools and internet access in libraries (free to pensioners). The provision of such services is analogous to the right of citizens to have access to free-to-air broadcast content and the related public service remit to ‘inform, educate and entertain’. Interestingly, most of us in the UK are happy to pay a licence fee (to the BBC though not perhaps to Mr Murdoch) on the basis that this makes it more likely that the content delivered in return is ‘free’ from political bias or at least overt political interference.

Of course, governments do not need to invest in communications networks because other people do it for them. However, the contention is that network access, broadly defined to include all forms of present connectivity, directly contributes to an increase in GDP. There are exceptions. Watching The Simpsons4 or Neighbours5 does not necessarily add significantly to the economic well-being of the nation but the general principle prevails. Therefore, governments have a duty to ensure connectivity is provided at an affordable cost.

Most of us are willing, if not necessarily happy, to pay 40 or 50 dollars per month for the right of access to the various forms of communications connectivity available to us. Price perceptions are influenced by what we pay for other forms of connectivity – water, gas, electricity.

Emerging nations are, however, different. You cannot expect someone on a dollar a day to pay 50 dollars per month for a broadband connection. What's needed is an ultralow-cost network and ultralow-cost access to and from that network.

21.18 Defining an Ultralow-Cost Network

21.18.1 Ultralow-Cost Storage

Storage costs are halving each year, a function of solid-state, hard-disk and optical-memory cost reduction and improvements in content compression methodologies.

The low error rates intrinsic to nonvolatile storage media suggest further improvements in compression techniques are possible, provided related standards issues can be addressed. A closer harmonisation of next-generation video- and audio-compression techniques would, for example, be advantageous. Other techniques such as load balancing across multiple storage and server resources promise additional scale and cost efficiencies.

21.18.2 Ultralow-Cost Access

This is dependent on the relative cost economics of fibre, copper and wireless access. All three of these delivery/access media options have enjoyed incremental but rapid improvements in throughput capability. Throughput rate increases have been achieved through a mix of frequency-multiplexing and time-multiplexing techniques.

Fibre and copper are both inherently stable and consistent in terms of their data rates and error rates. Error rates in wireless are intrinsically higher and more variable as a consequence of the effects of fast and slow fading. These effects are particularly pronounced in wide-area mobile wireless systems but are still present in fixed wireless systems particularly when implemented at frequencies above 3 GHz.

Higher error rates and uneven error rate distribution make it harder to realise the full benefits of higher-order content-compression techniques. Wide-area wireless mobility also implies a significant signalling overhead that adds directly to the overall cost of delivery.

All three access options (fibre, copper and wireless), have a related real-estate cost that is a composite of the site or right of access negotiation and acquisition cost and ongoing site or right of access administration costs.

Present fibre to the home (FTTH) deployments, for example, imply significant street-level engineering and financial-resource allocation that has to be justified in terms of bidirectional data throughput value amortised against relatively ambitious return on investment expectations.

In cellular and mobile broadband networks, similar ROI criteria have to be applied to realise a return from spectral investment, site acquisition and ongoing site-management costs. Cellular and mobile broadband networks are, of course, in practice, a composite of radio access, copper and fibre. Wireless costs are reducing but whether they are reducing faster than other delivery options remains open to debate.

21.18.3 The Interrelationship of RF Access Efficiency and Network Cost

The escalation of spectral and site values has highlighted a perceived need for cellular and mobile broadband technologies to provide incremental but rapid improvements in capacity (the number of users per MHz of allocated or auctioned spectrum multiplied by the typical per-user data throughput requirement) and coverage (range). New technologies are therefore justified on the basis of their ‘efficiency’ in terms of user/data throughput per MHz, which we will call, for lack of a better term, ‘RF access efficiency’. In practice, new technologies need to offer useful performance gains, lower real costs and increased margins for vendors.
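A back-of-envelope version of this sum, with illustrative figures rather than measured ones, shows how spectrum, achieved spectral efficiency and the per-user throughput requirement combine into a per-cell user count.

```python
def cell_capacity_users(spectrum_mhz: float, efficiency_bps_per_hz: float,
                        per_user_kbps: float) -> int:
    """Back-of-envelope 'RF access efficiency' sum: users supportable per cell is
    the spectrum times the achieved spectral efficiency divided by the typical
    per-user throughput requirement. All figures are illustrative."""
    cell_throughput_kbps = spectrum_mhz * 1_000 * efficiency_bps_per_hz
    return int(cell_throughput_kbps // per_user_kbps)

print(cell_capacity_users(spectrum_mhz=10, efficiency_bps_per_hz=1.5, per_user_kbps=500))
# 10 MHz at 1.5 bit/s/Hz gives 15 Mbit/s, enough for 30 users at 500 kbit/s each
```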

21.18.4 The Engineering Effort Needed To Realise Efficiency Gains

However, new technologies, particularly new radio technologies, generally deliver initially disappointing performance gains. This is because the potential gains implicit in the technology can only be realised through the application of significant engineering effort.

21.18.5 The Impact of Standards and Spectral Policy on RF Network Economics

At this point it is worth considering the impact that standards and spectral policy have on RF network economics.

Standards policy has an impact on technology costs. Spectral policy has an impact on engineering costs.

Generally, technologies will have been specified and standardised to take effective advantage of available and anticipated device capabilities. Engineering is the necessary process whereby those capabilities are turned into cash and/or competitive advantage. Technology design and development has to be amortised over a sufficient market volume and value to achieve an acceptable return on investment. Engineering effort has to be amortised over a sufficient market volume and value to achieve an acceptable return on investment.

A poorly executed standards policy results in the duplication and dissipation of technology design and development effort. The lack of a mandated technology policy increases technology risk. This requires vendors to adopt more aggressive ROI policies. As a result, technology costs increase.

A poorly executed spectral allocation policy results in the duplication and dissipation of engineering effort. The lack of a globally harmonised spectral allocation policy increases engineering risk. This requires vendors to adopt more aggressive ROI policies. As a result, engineering costs increase.

21.19 Standards Policy, Spectral Policy and RF Economies of Scale

Cellular and mobile broadband handsets benefit from substantial economies of scale. Cellular base stations and related network components have much lower scale economies. There are (several orders of magnitude) fewer base stations in the world than handsets. This makes it harder to recover nonrecurring technology and engineering costs. The result is that base stations and network components are necessarily expensive.

As network density has increased over time, the number of base stations has increased, providing the basis for more effective cost amortisation. Successive generations of radio-access technology options compete with each other on the promise of offering efficiency gains and a cost/performance advantage. For example, GSM was justified on the basis of capacity and coverage benefits when compared to ETACS. CDMA and UMTS and HSPA were justified on a similar basis and, more recently, LTE has been promoted on the basis of data rate and efficiency gains.

In practice, progress in terms of radio and network efficiency tends to be more incremental than market statement or sentiment would imply. Step function gains in efficiency just don't happen in practice. Performance gain is achieved as the result of technology maturation based on engineered optimisation combined with market volume.

Similarly, technology can deliver useful gains in terms of reduced component count and component cost, but again these gains are incremental and need to be coupled with engineering effort and market volume to be significant.

So, component-cost reductions are a product of technology, engineering effort and market volume. Improvements in access efficiency are a product of technology, engineering effort and market volume. The requirement for market volume implies that economies of scale are dependent on the ability of vendors to ship common products to multiple markets.

21.20 The Impact of IPR on RF Component and Subsystem Costs

Similarly, there is a need to have equitable intellectual property agreements in place that provide an economic return on company-specific and technology-specific research and development investment.

This is a troubled area made more troublesome by regional differences in IPR law and accepted good practice. The recent success by CSIRO6 in Australia in pursuing OFDM patent rights is relevant. Overall, it would seem to make economic sense to develop effective patent pooling arrangements as part of an integrated international standards-making activity, though such a suggestion might be considered over-sanguine.

21.21 The Cost of ‘Design Dissipation’

Resolving intellectual property rights through an adversarial and fragmented international IPR regime is just another example of how potential gains from technology and engineering innovation may fail to deliver their full cost efficiency and economic benefits due to poorly executed governmental and intergovernmental policy.

To summarise:

Technologies when first introduced have a potential ‘efficiency gain’. The potential gain is only realised after considerable engineering effort is invested to ensure the technology actually works as originally intended.

In cellular and mobile broadband networks having to support multiple technologies deployed into multiple bands that are different from country to country results in an unnecessary dissipation of design effort and engineering resource. A lack of international clarity on IPR issues creates additional aggravation.

This implies a need for an Emerging Market Network Programme which includes internationally mandated spectral allocation, internationally mandated technology standards and an internationally mandated IPR regime based on well-established and demonstrably successful patent pooling principles. Such a programme would need to reflect the shift in influence from the US to Asia and the BRICS markets (Brazil, Russia, India, China, South Africa). Such a requirement is largely at odds with present ‘light-touch’ regulatory policy, and the continuing dominance of the US and Europe in standards making remains problematic.

The adoption of technology neutrality as a policy is an abrogation of international governmental responsibility – a policy to not have a policy is not a policy.

Regulators have a duty to ensure efficiencies of scale can be applied in the supply chain and to avoid the negative impact that design dissipation has on product efficiency, product pricing and product availability. This is particularly true if we are serious about addressing cost-sensitive markets and, incidentally, serious about wireless competing with, or working effectively with, continuously improving fibre and copper access technologies.

Bridging the ‘digital divide’ in emerging nations requires cooperation not exhortation.

This in turn implies a return to prescriptive globally harmonised spectral allocation combined with prescriptive and closely integrated mandated technology standards and a globally harmonised approach to IPR management.

On the income side the focus has shifted dramatically over the past five years from voice and text value to content value to application value. There was a period around 2008 when content was thought of as king. Vendors of billing products introduced content management and billing platforms and operators waited for the money to roll in. Content, however, has significant associated costs, and the value often resides with parties other than those that bear the origination, storage and delivery costs.

21.22 The Hidden Costs of Content – Storage Cost

For example, the assumption is that storage costs are decreasing over time, a function of the halving of memory costs on a 12-month cycle. This would only be true if content volume were expanding more slowly than memory cost per bit is falling, and this is presently not the case.

Content bandwidth inflation is being caused by the transition to high-definition TV, a fourfold bandwidth expansion. This is compounded by the move to higher-fidelity audio, a composite of enhanced MP3 and 5.1- or 7.1-channel surround sound.

Still-image content expansion is being driven by ever higher-resolution image-capture platforms, 44-megapixel cameras being an extreme but relevant example. This bandwidth expansion hits every stage of the content production chain, from original capture through post-production to storage and distribution.
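The arithmetic behind this can be sketched in a few lines of Python. The starting archive size, annual content growth rate and cost per terabyte below are illustrative assumptions of ours rather than figures from this chapter; the point is simply that if content grows faster than memory cost per terabyte falls, total storage spend rises year on year even as memory gets cheaper.

# Illustrative assumptions only: the starting archive size, growth rate and cost figures
# below are ours, not figures from the text.
def annual_storage_spend(years=5,
                         initial_content_tb=100.0,      # assumed starting archive size
                         content_growth_per_year=2.5,   # assumed: archive grows 2.5x per year
                         initial_cost_per_tb=50.0,      # assumed cost, arbitrary currency units
                         cost_halving_years=1.0):       # memory cost halves every 12 months
    """Total spend rises whenever content grows faster than cost per terabyte falls."""
    for year in range(years):
        content_tb = initial_content_tb * content_growth_per_year ** year
        cost_per_tb = initial_cost_per_tb * 0.5 ** (year / cost_halving_years)
        print(f"Year {year}: {content_tb:>9,.0f} TB at {cost_per_tb:6.2f} per TB"
              f" = total spend {content_tb * cost_per_tb:10,.0f}")

annual_storage_spend()

With these assumed figures the annual spend rises by 25% a year; only if content growth slows to below the memory cost-halving rate does the trend reverse.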

21.23 The Hidden Costs of User-Generated Content – Sorting Cost

This includes user-generated content. For example, the BBC has a programme called Autumnwatch, a seasonal study of British wildlife.

A request for viewers to send in their gardening photos generates several hundred e-mails with attachments, a significant percentage of which contain uncompressed files. Someone has the Herculean task of sorting through these pictures, deciding which ones to keep, which ones to use and how to describe them in the database. User-generated content is therefore not free and indeed has a significant cost that is increasing over time.

21.24 The Hidden Cost of Content – Trigger Moments

Broadcast content may also have trigger moments, voting for example in a talent show. Instead of an avalanche of photos the problem is now an avalanche of SMS messages or phone calls that have to arrive and be dealt with by a specified time in a specified way.

An organisation now exists to monitor, manage and regulate the operation of phone-in promotions. Several producers lost their jobs for failing to manage this process and participation revenues in the UK fell significantly as a result.

21.25 The Hidden Cost of Content – Delivery Cost

Trigger moments can create loading issues that can only be fully addressed by overprovisioning store-and-forward and onward delivery bandwidth. Overprovisioning store-and-forward and delivery bandwidth both over the air and through the network implies substantial underutilisation for most of the time. Additionally, content has different delivery requirements, ranging from best effort (lowest delivery cost, highest buffer cost) through streaming (audio and video) and interactive (gaming) to conversational (highest delivery cost, no buffer cost).

Supporting all four types of content simultaneously adds substantial load to the network. The administrative effort, counted in software clock cycles, is significant and outweighs any benefits theoretically available from multiplexing best-effort data into the mix.
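As a rough illustration of that administrative effort, the Python sketch below models the four delivery classes just described. The delay-tolerance and buffering values are assumptions of ours, not parameters taken from this chapter or from any particular standard; the point is that every packet has to be classified and ordered before best-effort traffic can be fitted into whatever capacity is left.

# Illustrative only: the delay-tolerance and buffering values below are assumptions,
# not figures from the text or from any specific standard.
TRAFFIC_CLASSES = {
    "best effort":    {"delay_tolerance_s": 30.0, "buffering": "large",  "relative_cost": 1},
    "streaming":      {"delay_tolerance_s": 5.0,  "buffering": "medium", "relative_cost": 2},
    "interactive":    {"delay_tolerance_s": 0.5,  "buffering": "small",  "relative_cost": 3},
    "conversational": {"delay_tolerance_s": 0.1,  "buffering": "none",   "relative_cost": 4},
}

def schedule_order(packets):
    """Serve the least delay-tolerant class first; best effort fills whatever is left."""
    return sorted(packets, key=lambda p: TRAFFIC_CLASSES[p["class"]]["delay_tolerance_s"])

queue = [{"id": 1, "class": "best effort"}, {"id": 2, "class": "conversational"},
         {"id": 3, "class": "streaming"}, {"id": 4, "class": "interactive"}]
for packet in schedule_order(queue):
    print(packet["id"], packet["class"])

Even in this toy form every packet incurs a classification lookup and a sort; in a real scheduler that per-packet overhead is the software clock-cycle cost referred to above.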

21.26 The Particular Costs of Delivering Broadcast Content Over Cellular Networks

The DTV Alliance analysed delivery costs and delivery revenues in a White Paper. The purpose of the White Paper was to illustrate the costs of delivering broadcast content either as unicast (eye-wateringly expensive) or multicast (less, but still too expensive).

Vendors and operators might argue that broadcast-over-cellular standards will reduce these costs, but until these standards are implemented the costs are real and the comparisons in the White Paper remain valid. The thesis of the White Paper is that there is a pain threshold, which is the point at which delivery costs exceed delivery revenues, excluding spectral cost amortisation.

The pain threshold point was calculated to be a 6-minute low-resolution video.

The bandwidth cost of delivering a 6-minute high-resolution TV programme would be $2.76, or $13.80 for a 30-minute programme. To achieve margins equivalent to voice, the operator would need to charge $13.80 for the 6-minute video and $33.60 for the 30-minute programme.

This implies that the opportunity cost of any impact the audio and/or video service might have on existing voice traffic also has to be factored in. A 3G BTS was taken as an example, with a busy-hour capacity of 2.5 Mbps times 3600 seconds, equivalent to 9 gigabits of delivery bandwidth. If 5% of the subscribers watched two-minute clips at 128 kbps they would consume 4.8 gigabits in an hour, roughly half the capacity of the cell site. Double the data rate, the number of users or the length of the clip and the capacity available for voice traffic disappears entirely.
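That busy-hour arithmetic is easy to reproduce. In the Python sketch below the subscriber count per cell is not given in the text, so the figure of 6250 is an assumption of ours, chosen so that 5% of subscribers, each watching one clip, reproduces the quoted 4.8 gigabit load.

# Reproduces the busy-hour arithmetic above. The subscribers_per_cell figure is an
# assumption (not given in the text), chosen so that the quoted 4.8 Gbit load falls out.
busy_hour_rate_mbps = 2.5
cell_capacity_gbit = busy_hour_rate_mbps * 3600 / 1000      # 9.0 Gbit per busy hour

subscribers_per_cell = 6250                                  # assumed value
viewers = 0.05 * subscribers_per_cell                        # 5% watch one clip each
clip_gbit = 128e3 * 120 / 1e9                                # 2-minute clip at 128 kbps
video_load_gbit = viewers * clip_gbit                        # ~4.8 Gbit

print(f"Cell capacity: {cell_capacity_gbit:.1f} Gbit per busy hour")
print(f"Video load:    {video_load_gbit:.2f} Gbit "
      f"({100 * video_load_gbit / cell_capacity_gbit:.0f}% of capacity)")
# Doubling the clip bit rate, the clip length or the number of viewers consumes the
# remaining capacity and displaces the voice traffic entirely.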

Now, you might argue with some of these numbers but the essentially valid point is that rich content is expensive to deliver. This is particularly true in mobile broadband networks when a user is close to the edge of the cell.

21.27 Summary – Cost and Value Transforms

Essentially, we are saying that content has hidden costs and that these costs are increasing rather than decreasing over time. Costs may appear to be reducing in one area but reappear in others. For example, the quality-of-service mechanisms needed in IP networks to handle rich media substantially increase the overall cost of delivery.

In parallel, value is decreasing, including the value of content with presently highly contested acquisition costs, Premier League football for example. We should not assume that so-called premium content maintains that premium over time. As humans we have a finite absorption bandwidth and we must be close to the safe absorption limit, particularly as far as football is concerned.

There is an argument that the value is still there but is realised in different ways. O2 Telefonica sponsors events at the Millennium Dome, now rebranded as The O2. These include concerts by artists such as Prince, the Rolling Stones and Led Zeppelin. Some artists, Prince for example, give their music away for free or at a deep discount but the live concert grosses millions. The majority of this value goes to the artists (typically 110% of the ticket costs) and the venue owner, the American Anschutz Entertainment Group. These are examples of cost and value transforms.

Costs do not necessarily disappear but may reappear in other areas. Value may be realised by third parties who may be the unintended though possibly deserving beneficiaries of the original investment process.

And transforms are probably a good place to end this narrative, particularly as it is more or less where we started. When I was asked to make some closing remarks at a wireless conference held, coincidentally, at St John's, my old college in Cambridge, it struck me that the college was celebrating its 500th anniversary.

The presentations at the college reflected many of the themes that we have explored in this book but also introduced some broader perspectives. Sometimes the obvious can be profound. For example, the fact that most families living in the slums of India have no water or sanitation but own a TV, that the most used function in a mobile phone is often the FM receiver, that India prospers despite its lack of infrastructure while China prospers because of it,7 that SMS is used in India to send greetings, that countries are different and cultures are different and that this means markets are different.

From where we were sitting at that particular time in that particular place we could see St John's, Trinity College and Caius College, with Clare College and King's College in the distance, nestling along the River Cam.

Over the centuries those colleges had hosted Isaac Newton (1643–1727), who did that stuff with the apple but, more importantly, developed the science of differential and integral calculus; Charles Babbage (1791–1871), the man who designed the world's first mechanical computer, unless you count the abacus several thousand years before; Augustus De Morgan (1806–1871), a founding father of algebraic and algorithmic innovation; Srinivasa Ramanujan (1887–1920), the Indian mystic and number theory man; Alan Turing (1912–1954), the master of encryption; Paul Dirac (1902–1984), a pioneer of quantum mechanics and quantum electrodynamics; and Stephen Hawking … … cosmology and string theory.

These seven mathematicians of course represent only a small fraction of the legacy of intellectual innovation upon which telecommunications value is ultimately built – a legacy that will carry forward into a future built on a combination of materials innovation, manufacturing innovation and mathematical and algorithmic innovation.

As we started this book with a science fiction author we may as well end it with one, William Gibson. Gibson coined the term cyberspace in the short story Burning Chrome and brought it to a wider audience in his debut novel Neuromancer, published in 1984.

In a 1993 interview he is reported to have said: ‘The future is already here – it's just not very evenly distributed’.

Telecommunications has the power to unlock and redistribute the intellectual and innovation power of the world's population – a truly transformative transform.

1 http://www.mipi.org/.

2 http://tomstandage.wordpress.com/books/the-victorian-internet/.

3 http://www.economist.com/node/2281736?story_id=2281736.

4 A popular US TV cartoon programme.

5 A popular Australian soap.

6 http://www.csiro.au/news/ps2gw.html.

7 R. Swaminathan, Reliance Communications India, and Kanwar Chadha, CMO, CSR.
