What was the key breakthrough that enabled the dramatic growth in mobile communication?
The first commercial mobile telephone system in the United States (inaugurated in St. Louis, Missouri, in 1946) could only support three simultaneous calls! Early radio systems generally made very inefficient use of the radio frequencies, and it was the need to make better use of this scarce resource that led to the development of cellular networks. The basic ideas had been worked out by the late 1940s, but lack of spectrum and technology limitations meant that it would be more than 20 years before the first true cellular network was built.
Cellular networks provide mobile coverage across an area using a large number of low-powered base stations arranged in a regular pattern. Each base station provides coverage within its “cell”, and adjacent base stations operate on different frequencies to avoid interference. However, a given frequency can be re-used by many different cells within the pattern so long as they are spaced just far enough apart to prevent interference. If a user moves from one cell to another, then their handset must switch frequency from the old base station to the new one, and this “handoff” is normally accomplished without the user being aware of it.
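The spacing needed before a frequency can safely be reused has a standard textbook approximation for hexagonal cell layouts: the co-channel reuse distance is D = R√(3N), where R is the cell radius and N is the number of cells in each reuse cluster. A minimal sketch, with illustrative values chosen here rather than taken from the interview:

```python
import math

def reuse_distance(cell_radius_km: float, cluster_size: int) -> float:
    """Co-channel reuse distance D = R * sqrt(3N) for an idealised
    hexagonal cell layout with reuse cluster size N."""
    return cell_radius_km * math.sqrt(3 * cluster_size)

# Example: 2 km cells in a common 7-cell reuse cluster
d = reuse_distance(2.0, 7)
print(f"Same frequency can be reused about {d:.1f} km away")  # ~9.2 km
```

Real network planning also has to account for terrain, antenna patterns and traffic load, but the formula captures the core trade-off: smaller clusters pack frequencies more densely at the cost of more co-channel interference.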
As is often the case with emerging technologies, early cellular networks adopted many different (incompatible) standards. The GSM standard was initially intended to enable international roaming in Europe, but it eventually went global as hundreds of countries appreciated the benefits of standardisation. This created a huge equipment market, and manufacturers were able to deliver rapid innovation with massive economies of scale. As costs came down, mobile phones became increasingly affordable to poorer people who might not have access to a fixed-line telephone.
What were the most important technological advances that led to the Internet?
Many of the concepts that lie at the heart of today’s Internet developed from the ARPANET network of the late 1960s. Two of the key concepts inherited from the ARPANET are packet switching and the “dumb network” architecture.
In a traditional telephone network, resources must be reserved across the network to set up a telephone call. Packet switching networks also allocate network resources to users as they need them, but they do so in a far more dynamic way. As a result, Internet users have access to large amounts of bandwidth when they need to download photographs or videos, but the same bandwidth is available to other users as soon as the download is finished.
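The contrast can be made concrete with some toy numbers (chosen purely for illustration, not from the interview): a circuit-switched link must reserve each user's peak rate for the whole call, while a packet-switched link only has to cover the average demand of bursty users:

```python
# Illustrative scenario: a 100 Mb/s link shared by 50 bursty users,
# each transmitting at 10 Mb/s but only 10% of the time.
LINK_MBPS, USERS, PEAK_MBPS, DUTY_CYCLE = 100, 50, 10, 0.1

# Circuit switching: each user's peak rate is reserved up front,
# so only a fraction of the users can be admitted.
circuits_supported = LINK_MBPS // PEAK_MBPS

# Packet switching: capacity is claimed only while a user is actually
# sending, so the link only needs to carry the average offered load.
avg_demand_mbps = USERS * PEAK_MBPS * DUTY_CYCLE

print(f"Circuit-switched: {circuits_supported} of {USERS} users fit")
print(f"Packet-switched: average load {avg_demand_mbps:.0f} Mb/s "
      f"on a {LINK_MBPS} Mb/s link")
```

This statistical multiplexing gain is why an idle downloader's bandwidth is immediately available to everyone else, at the price of occasional queueing when too many users burst at once.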
The Internet is essentially a collection of “dumb networks” that do little more than move bits from source to destination and manage traffic priorities. It is software running on computers connected to these networks that makes the Internet so useful. Crucially, this software can be produced by anyone, and is not controlled by the people who own and operate the networks. This has led to an explosion of creativity, with applications such as the World Wide Web, Facebook, Twitter, YouTube and Second Life taking Internet capabilities far beyond anything that was envisaged by the designers of the ARPANET.
What does the future hold for telecommunications?
Network users still encounter annoying restrictions such as lack of bandwidth, incompatible standards, complex user interfaces and high costs. In the future, we can be reasonably certain that improving technology and competitive pressures will sweep away these problems. However, there are at least two fundamental impediments to network development that will not be so easily overcome.
The first of these impediments is the limited supply of radio spectrum. Although we will continue to find new ways of using this resource more efficiently, the fact that we can’t simply create new spectrum with the propagation characteristics that we need means that demand will continue to outstrip supply. Innovative mobile applications are likely to struggle to get the spectrum that they need.
The second major restriction is the speed of light. This “cosmic speed limit” came out of Einstein’s theory of Special Relativity, and its effects are not normally apparent to the average user. However, it will continue to be an issue for high-speed computer applications and any communications that use geostationary satellites. Communication with probes in the outer reaches of the solar system will be plagued by round-trip delays of up to a day, and there is simply no prospect of holding a telephone conversation with any extra-terrestrial civilisation that may be orbiting a distant star.
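These delays follow directly from dividing distance by the speed of light. A quick sketch with approximate distances (the figures below are illustrative round numbers, not taken from the interview):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_delay_s(one_way_distance_km: float) -> float:
    """Minimum possible round-trip signal delay over a one-way distance."""
    return 2 * one_way_distance_km / C_KM_S

# Geostationary satellite altitude (~35,786 km): already noticeable
# as an awkward pause in a two-way phone call.
print(f"GEO satellite: {round_trip_delay_s(35_786):.2f} s")  # ~0.24 s

# Neptune at roughly 4.5 billion km: hours, not seconds.
print(f"Neptune: {round_trip_delay_s(4.5e9) / 3600:.1f} h")  # ~8.3 h
```

No engineering improvement can reduce these figures; they are a hard lower bound set by physics.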
In spite of these issues, the future of telecoms will not be about the network – it will be about the applications that the network supports. Users will have no more interest in the routers and fibre optic cables that carry bits around a network than they do in the processors that shuffle bits around within a computer. In both cases, the user interface is provided by layers of software that create a much more user-friendly environment within which users can work, learn or play. A modern network is like a distributed computer, providing a platform on which sophisticated applications can be built. Since access to processing power and storage can be provided through the network, it is becoming increasingly hard for users to draw any distinction between computers and networks. This phenomenon was summed up in 1984 by John Gage’s famous phrase “The network is the computer”. Although this concept was dismissed at the time by the computer industry, today’s extraordinary range of Internet applications illustrates just how right he was.
The interview was conducted by Jeff Rutherford.