The Challenge Network


Telecommunications.

The fields of information management, long distance communications, computing and short range data interchange have all changed very greatly, both in their scale of implementation and the cost of their use. The first section reviews what has happened and what is in train.

The information infrastructure has received a huge amount of attention, both in the media and in the capital marketplace. The second section separates some of the key distinctions that have to be made in this area, looks at the implications and makes some (brave) estimates of the pace with which events may unfold.

Key points are the following:

The scale of change

The pace of growth in telecommunications volume over 1995-2000 implies that the entire information throughput of the year 2000 will be carried in a single second by 2020. Extrapolation is always dangerous but, as Figure 1 shows, the trend has held on a semi-logarithmic scale for four decades. In computing, the famous Moore's Law continues to hold, demonstrating similar dynamics.

Figure 1: The cost of long distance telephonics falls rapidly.

Technologically, features such as multimode fibre optics seem set to deliver much greater capacity than hitherto. A terabyte per second can now be sent down a more or less conventional fibre, enough to transmit the equivalent of the entire Library of Congress in a second. To put this in context, very rough estimates put the world's production of information at around 2 exabytes per annum. (Most of this is not currently digitised, of course.)

The Internet had daily peak traffic of around 0.47 terabytes per second at the beginning of 2000 and around 5.6 terabytes per second at the end of 2001. Fibre optic communication capacity doubles every nine months. Bottlenecks closer to and within individual computers are likely to become an increasing issue.
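By way of illustration only, the arithmetic of a nine-month doubling time can be sketched as follows. The end-2001 figure is taken from the text above; the projection horizons are arbitrary, and this is extrapolation rather than forecast.

    # Illustrative arithmetic only: project transmission capacity forward,
    # assuming the nine-month doubling time quoted above continues to hold.
    # The 5.6 TB/s starting point is the end-2001 peak figure from the text.
    def project_capacity(start_tb_per_s, years, doubling_months=9.0):
        """Capacity after `years`, given exponential growth with a fixed doubling time."""
        doublings = (years * 12.0) / doubling_months
        return start_tb_per_s * (2.0 ** doublings)

    if __name__ == "__main__":
        for horizon in (1, 5, 10):
            tb = project_capacity(5.6, horizon)
            print(f"After {horizon:2d} years: ~{tb:,.0f} TB/s")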

Glossary:

Byte: a group of data bits which encode modules of information.
Megabyte: million (10^6) bytes
Gigabyte: billion (10^9) bytes
Terabyte: trillion (10^12) bytes
Petabyte: 10^15 bytes
Exabyte: 10^18 bytes

A bed sheet is about 1000 by 1000 threads, and so contains a million stitches. If one byte is a single stitch, then a terabyte is a million bed sheets. End to end, twenty terabytes - four seconds of internet traffic, by the end of 2001 - would make a band of sheets around the planet. Two exabytes of data would 'cover' 8 million square kilometres. The surface of the Earth is around 500 million square kilometres. It is not long before we weave a metaphorical informational duvet cover for the planet.
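The metaphor can be checked with a little back-of-envelope code. The sheet dimensions assumed here (roughly two metres square) are illustrative only; the one-byte-per-stitch convention comes from the text.

    # Back-of-envelope check of the bed-sheet metaphor, assuming a sheet of
    # roughly 2 m x 2 m with a million thread crossings, one byte per 'stitch'.
    SHEET_AREA_M2 = 2.0 * 2.0            # assumed sheet size
    BYTES_PER_SHEET = 1_000 * 1_000      # one byte per stitch

    def sheets_for(num_bytes):
        return num_bytes / BYTES_PER_SHEET

    exabyte = 1e18
    area_km2 = sheets_for(2 * exabyte) * SHEET_AREA_M2 / 1e6   # m^2 to km^2
    print(f"Two exabytes of sheets cover roughly {area_km2:,.0f} km^2")  # ~8 million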

We tend to think of information as discrete things: railway time tables, pieces of text. Mass transmission capacity will, however, permit more flexible possibilities, where data are actively mined and information resources are allocated dynamically.

One can see the early stages of dynamic data searching in internet search engines, in data mining tools and in the variety of active agents which are being developed. Computing resource can also be 'delocated' and made available dynamically. As an example of this, CERN, the European particle physics laboratory, has created software that connects together webs of otherwise free-standing computers, creating virtual machines with exabyte memories. A number of companies and non-profit organisations have set up similar schemes to access underused capacity. A single detector at CERN will generate around a petabyte of data per second.
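A toy sketch of the idea - farming independent work units out to whatever processors are free and gathering the results - might look like the following. This is not CERN's software, merely an illustration of the shape of the approach, with a local process pool standing in for a grid of idle machines.

    # A toy stand-in for the grid idea: farm independent work units out to
    # whatever workers are free and gather the results. Real grid middleware
    # (scheduling, data movement, fault tolerance) is far more involved.
    from concurrent.futures import ProcessPoolExecutor

    def analyse(chunk):
        """Stand-in for an expensive analysis of one chunk of detector data."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [list(range(i, i + 1000)) for i in range(0, 10_000, 1000)]
        with ProcessPoolExecutor() as pool:      # spare local cores stand in for the grid
            results = list(pool.map(analyse, chunks))
        print(f"{len(results)} chunks processed, total = {sum(results)}")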

There are many additional possibilities. Exotic medium-range communications include aeroplanes which can remain resident in the stratosphere over a city for weeks at a time. There are projects that may allow systems using free air lasers to offer terabyte networks to cities at very low cost. Third world cities may be helped by relatively cheap loops which can be placed around areas without communications, allowing simple mobile telephones to be used within them.

Short range communications (such as the Bluetooth protocol which is now being commercialised) make use of a wide band of frequencies in which the atmosphere absorbs strongly. Transmissions therefore have only local impact, creating virtually unlimited bandwidth in any one place, and transmission between locations is handled either by conventional telephonics or by messages hopping from one short range transponder to another. As a result, communications can be built into virtually anything - from a garment to a light fitting - and parts of a system can communicate with each other on a spontaneously-created local network. These might be the parts of a central heating system, or devices exchanging the personal preferences of dancers at a night club.
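The hopping behaviour described above amounts to finding a route across a graph of transponders that can hear one another. A minimal sketch, with an invented set of household devices standing in for a real network, might look like this.

    # A minimal sketch of multi-hop communication between short range
    # transponders. The devices and their links are invented for illustration;
    # breadth-first search finds a hop-by-hop route between two of them.
    from collections import deque

    links = {
        "lamp": {"thermostat", "phone"},
        "thermostat": {"lamp", "boiler"},
        "phone": {"lamp", "boiler"},
        "boiler": {"thermostat", "phone"},
    }

    def route(src, dst):
        """Shortest hop sequence from src to dst, or None if unreachable."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    print(route("lamp", "boiler"))   # e.g. ['lamp', 'thermostat', 'boiler']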

It is possible to compress data and thus much reduce the bandwidth that it needs. Projects which allow television to be transmitted down conventional telephone lines make use of this, for example. The degree of compression boils down to two factors: the repetitiveness of the material being transmitted and the interpretive facilities at the receiving end. For example, human speech consists of around 60 phonemes. A detector that can isolate and signal these (and a receiving system which can reproduce them from a standard chip) will create recognisable speech, albeit without intonation or individuality. It can do this with very low data rates indeed.
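The data-rate arithmetic can be made concrete with a small sketch. Only the 60-phoneme inventory comes from the text; the speaking rate and the telephone-audio comparison are illustrative assumptions.

    # Rough data-rate arithmetic for the phoneme idea. Only the 60-phoneme
    # inventory comes from the text; the speaking rate is an assumption.
    import math

    PHONEME_INVENTORY = 60
    PHONEMES_PER_SECOND = 12                                    # assumed
    bits_per_symbol = math.ceil(math.log2(PHONEME_INVENTORY))   # 6 bits
    phoneme_rate = bits_per_symbol * PHONEMES_PER_SECOND        # ~72 bit/s

    PCM_RATE = 8_000 * 8   # 64 kbit/s telephone-quality digital audio
    print(f"Phoneme stream: ~{phoneme_rate} bit/s, "
          f"about {PCM_RATE // phoneme_rate}x smaller than 64 kbit/s PCM")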

This may sound both primitive and pointless, but human facial expressions have stereotyped movements, and images have simple components which can be defined and transmitted in the same way. Data compression may allow us to have films, video messages and simulations transmitted on demand to - for example - something resembling spectacles. Imagine the engineer, superimposing a blueprint over a piece of equipment. Imagine the safety interlocks, the expert help that can be called down, the real-time conference with peers that can be triggered by this. Now extend the concept to general commerce, to defence or policing and the potential is self-evident.


Figure 2: A snapshot of the internet, colour coded by ISP.




About 450 million people are regular internet users, and this is expected to grow to 2 billion by 2005. By that time, however, several billion domestic appliances and other machines will be using the internet for connectivity, about which more below. The current internet grows by around 100 gigabytes per day, and there are around 20 terabytes of indexed material now available on it, behind which huge, un-indexed databases lie. This 'invisible web' could be up to 500 times as big as the 'declared' web. In 1995, by contrast, one could have put the whole internet on a single hard drive: it was around 500 megabytes. The computer systems which store all of this material consume significant quantities of electricity: data farms are estimated to consume 5% of London's baseload electricity, for example.

The next wave of development will transform the 'internet' into what has been called 'interspace', where users navigate across an abstract space of concepts rather than a network of concrete links between documents. There are examples of such structures given below. A new mark-up language, XML, will - if properly used - play an important role in establishing the thought processes and conceptual context from which a given item on the web developed. Additionally, assorted mechanisms now exist to classify the similarity of documents on the basis of the contextual frequency of phrases within them, offering what has been called 'scalable semantics'. The upshot is that automatic registration of similarities of content and intent will begin to replace keyword searches and other contemporary machinery, and that 'relatedness' may become a property which a site holds for itself - rather in the manner of user-defined links - instead of relying upon an external search engine to build such connections on its behalf or on behalf of its users.
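As an illustration of the 'relatedness' idea - not a description of any particular product - documents can be reduced to phrase-frequency vectors and compared by cosine similarity, roughly as follows. The example documents are invented.

    # Illustrative 'relatedness' measure: reduce documents to phrase-count
    # vectors (word bigrams here) and compare them by cosine similarity.
    from collections import Counter
    from math import sqrt

    def phrase_counts(text, n=2):
        words = text.lower().split()
        return Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    doc1 = "railway time tables organise train departures by station"
    doc2 = "train departures by station appear in railway time tables"
    doc3 = "data compression reduces the bandwidth a signal needs"
    print(cosine(phrase_counts(doc1), phrase_counts(doc2)))   # relatively high
    print(cosine(phrase_counts(doc1), phrase_counts(doc3)))   # near zero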

The implications of connectivity.

It is sensible to set aside the possible limits of computation and communication and to assume that both are cheap and freely available. What then follows?

There are three areas of central interest. First, will these forces change the effectiveness with which we run our economic and social institutions? Second, what are the completely new things that appear likely? Third, how fast will all of this occur?

Changes in productivity.

Economists consider economic added value as the consequence of the use of 'factors of production', such as land, labour and capital. The effectiveness with which these are used is measured as changes in productivity, and increasing productivity either increases the level of output or else releases factors to do other things. As we become productive, so we both become richer and are enabled to do new things.
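For reference, the standard growth-accounting decomposition (a textbook formulation, not drawn from the source) makes this notion precise: total factor productivity is the part of output growth that cannot be explained by growth in the measured inputs.

    % Growth accounting (Solow residual), a textbook formulation for reference.
    \[
      Y = A\,K^{\alpha}L^{1-\alpha}
      \qquad\Longrightarrow\qquad
      \frac{\Delta A}{A}
      \;=\; \frac{\Delta Y}{Y}
      \;-\; \alpha\,\frac{\Delta K}{K}
      \;-\; (1-\alpha)\,\frac{\Delta L}{L}
    \]

Here Y is output, K and L are the capital and labour employed, A is the level of technique and alpha is capital's share of income. The 'total factor productivity' figures discussed below are residuals of this kind.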

On such measures, information technologies have had a notable, but not remarkable, impact on productivity. There are close analogies with the introduction of the railways, where huge investment (and often meagre returns) resulted in major changes in how life was lived: in how cities were laid out, in how industry conducted its activities and in how nations went to war. The railways are estimated to have added about 0.25-0.5% to the growth of world economic output. This cumulates to an enormous sum, but created a gradual - rather than an abrupt - revolution. Posts and telecommunications, rail and electronic data processing have each had similar impacts beyond the purely economic, spread over time and requiring the societies concerned to absorb a wholly new way of operating. The armies of clerks who managed the bureaucracies of the middle years of the C20th created the framework in which electronic data processing developed. The "PC" revolution in offices created the grounds for the restructuring and re-engineering that characterised much commerce in the 1990s. None of this is strictly attributable to "IT", and so direct measurement gives us strange answers.

Figure 4: Total factor productivity growth in the USA.

On the left of the chart, we see the increase in total factor productivity - the increase in output that cannot be explained simply by increases in the resources deployed - for a number of sectors in the USA. Those with the greatest gains - such as mining and wholesale and retail trade - are not those most typically associated with IT use. Banking, by contrast, which has been very IT intensive, shows reduced productivity. On the right of the figure, an absence of any relationship between productivity gains and IT use is evident. A range of chiefly US studies have shown the same ambiguity. Only in the IT-producing industries themselves is IT use correlated with productivity gains. We have, therefore, to be very careful indeed about claims for a 'new economy'.

Figure 5: The impact of new organisation on business costs.

Here, we come to a key distinction, between the 'e-economy' and the 'k-economy', where "k" stands for knowledge. (The knowledge economy is reviewed elsewhere.) Figure 5 shows an example of what this implies, and the table below summarises the distinction.

K-commerce:
- Depends on getting people to pool what they know, often working across organisational boundaries. The key skill consists of getting from generalisations to the specific.
- There are many nuances, and value comes from finesse. Solutions are not replicable, and do not scale easily.

E-commerce:
- Depends on having a detailed specification of what is to be done. The key skill lies in starting from an established, agreed, clear idea.
- Huge economies of scale, such that value comes from standardisation. Solutions must be replicable, anywhere and by any contractor.

The "just-in-time" approach to manufacture allowed substantial cuts to be made in working capital. The means by which this was done usually involved computers and always required telecommunications, but owed much more to conceptual analysis, practical implementation and managerial determination than it did to technology. The technology enabled the change, but did not enforce it. Railway timetables could not have been implemented without the telegraph and related IT, but the organising principles needed to be created in addition to the technology. Knowledge (of "how" to do something, and of "what" to do) are what transform raw potential into an integrated whole.

The industrial economies are being transformed into a toolbox of potential, some of it drawn from IT and much more of it from other capabilities. Information technology can be harnessed to help with the exploitation of this potential, but does not of itself point to either the 'what' or the 'how'. This relies upon human ingenuity. Unprecedented numbers of better informed, better inter-linked and better-educated people are now considering a toolkit of huge and growing potential. It is to this that we must look for the 'new economy'.

All change! New possibilities and rapid erosion.

Many existing industries are about to face precipitate change. For example, television and the advertising that often supports it are based around the idea of channels. Wide bandwidth to the home implies that the user can schedule their own viewing, or have someone else do this for them. How advertising is to cope with this is far from certain. New technology already allows broadcast TV to be recorded on a local drive in a selective manner, with the advertising edited out and the content organised to reflect user preference. It may be that advertisers pay to access specific viewers in specific states of mind by, for example, dynamic product placement in video streams. Viewers will be able to stop a programme, examine a part of it - for example, a product - and perhaps branch out to buy it, index it to some preference list that they may have or in other ways react to such placement. One could buy a dinner 'just like that one', the make-up on a star's face, the design skills that created a set, or access to a service that will help to fund such a purchase.

Later, we may see even more startling possibilities as the content that is being viewed itself becomes open to customisation. (The 'story engine' in a current games machine, which accepts input and drives the 'rendering engine' to show the consequences on screen, hints at what may be possible.) As we have seen earlier, local 'decompression' of data allows very powerful things to be done once the capability is installed. Whole new industries can and will be grown from this base. One thought, with early practical applications, has been developed by various science fiction writers as 'partials' or 'betas': representations of oneself that are sufficiently faithful that they can be allowed to act as an intermediary. Early versions may act as a screen for telephone calls, a scheduling tool and a search agent for ideas and entertainment.

Broadband will offer other features, such as contextual live help with everything from balky software to life's more general problems. The helper will certainly be in a remote location, but may - as time goes by - be less a person than a synthetic structure, a simulation, with humans in the background should the simple responses fail. The complexity of life may well make what appears to be an individualised, universal helper a very attractive service.

The future is going to appear complex and to be complex. The speed with which events occur will increase. This will be coupled to a wider range of options and choices which can be exercised, with real penalties paid for poor choices and for inaction. Systems which can reduce this complexity will be welcomed, whether they are driven chiefly by people or by software. Elsewhere, we review the changing nature of public institutions and the governance of companies. It is clear, however, that both are challenged by the complexity which they face. At one level, therefore, there must be a ready welcome for the means to represent these complex systems, to establish the facts that are available about them and to estimate the consequences of taking certain actions within them. The tools by which to do this are being developed in areas as distinct as consumer marketing, military hardware and computer games.

One absolutely fundamental step beyond this will be the development of systems which appear to be aware. (The philosophical problems of what going beyond 'appear' may mean are not a proper thread for this section.) Artificial intelligence (AI) has promised much and delivered little, to date. However, it is now evident that not only are we on the verge of a revolution in the understanding of biological cognition, but that we may have been looking in the wrong place for artificial intelligence. We have been looking at computers and software when we should have been looking at social phenomena, institutions and companies.

In brief, three points can be made:

How quickly will all this happen?

The pace of change is set by the realisation of potential, not by the creation of its constituent parts. Social acceptance and the design of appropriate interfaces may be the slowest feature, as we observe in business-to-customer e-commerce.

Figure 6 shows the speed with which a range of information-intense products and services have penetrated households in what is today regarded as the industrial world. It took the mail around 100 years to reach 90% of such households. Other, more recent developments have occurred in an environment with more disposable wealth, yet the pace remains relatively slow. Internet access, as a cheap add-on to the ownership of a PC, has spread more rapidly, whilst broadband access (cable and satellite) follows historical rates.

Such trends are far from infallible. Nevertheless, products which show surprisingly rapid mass penetration are extremely unusual outside fast-moving consumer goods (FMCG). Even in the world of entertainment and short-life information, however, the conduit remains durable and slow to change, despite the ephemeral life of its typical content. Pop music may have a short shelf life, but the shelves themselves endure.
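Household penetration of this kind is often approximated by an S-shaped (logistic) curve. The sketch below is purely illustrative: its midpoint, steepness and ceiling are assumptions, not values fitted to the data behind Figure 6.

    # A purely illustrative logistic (S-shaped) adoption curve. The midpoint,
    # steepness and ceiling below are assumptions, not fitted to Figure 6.
    import math

    def penetration(year, midpoint=2010.0, steepness=0.35, ceiling=0.9):
        """Fraction of households reached in a given year under a logistic model."""
        return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

    for y in (2000, 2005, 2010, 2015, 2020):
        print(y, f"{penetration(y):.0%}")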

Figure 6: the rate at which previous information systems have penetrated.

The complex chart which follows, Figure 7, sets a time frame for the likely installation of the key components from which major change will be built. These are divided into three major categories: economic integration (what companies do), consumer offerings and the fundamental infrastructure itself.

Figure 7: Estimated time frame for the many dependencies to mature.

Only a few of the many topics which have been raised in this section are included. Readers may anyway have their own views on what matters and the time that it will take to install. This said, the time when fastest penetration can be expected seems to bracket the year 2010, with the more exciting issues coming to a head a little later.

Events may move more quickly, or they may - as history has often shown to be the case - lag behind their technical potential. However one sees this, those investing now do so in a framework of major uncertainty. Investors are, perhaps, trying to buy a mixture of insight - so as better to pick winners - and potential market share. However, discounted at a rate which is standard for very safe projects - 12% real - $100 earned in 2010 is worth about a third of that today. Development will, therefore, continue to follow local, immediate goals and the grand structures will emerge, as we have already discussed, in a form that none of us can fully anticipate.
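The discounting point is the standard present-value calculation. With a 12% real rate over the roughly nine years from 2001 to 2010:

    % Present value of $100 received in 2010, discounted at 12% real over ~9 years.
    \[
      PV \;=\; \frac{FV}{(1+r)^{n}}
      \;=\; \frac{\$100}{(1.12)^{9}}
      \;\approx\; \frac{\$100}{2.77}
      \;\approx\; \$36
    \]

that is, roughly a third of face value, as stated above.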

