SlideShare una empresa de Scribd logo
1 de 63
Descargar para leer sin conexión
Telecommunications
Telecommunications is a general term for a broad range of technologies used to convey
information over distances both great and small. The public switched telephone system, mobile
telephony, satellite communications, computer networks, the Internet, and radio and television
brodcasting services all fall under the general heading of telecommunications. Although most of
us tend to associate the term with modern technologies, telecommunication has been around in
some form or another since ancient times. The discovery of electricity in the nineteenth century
led to the invention of the telegraph, and later the telephone, enabling communications to occur
in real time over great distances. Immense strides in the development of communications
technology in just the last few decades have changed our leisure activities, the way we work, and
the way we perceive the world in which we live. An understanding of these technologies, how
they work, their impact on society, the economic implications they engender, and where they will
lead us in the future are therefore of considerable importance to us.
Telecommunications Principles
Telecommunications means communication that takes place over some distance (from the Greek
word Tele, which means far away). The distances involved may be small, as is the case with
communications that take place between people working in the same office building, or they may
be vast, as is the case with the communications that occur between a deep space probe and its
mission controllers on Earth. Communicating over long distances has been a challenge
throughout history. In ancient times, runners were used to carry messages between distant
locations. Other methods used have included drums (used for thousands of years to send
messages, and for ceremonial and religious purposes), smoke signals and signal beacons (visible
for many miles if visibility is good), the heliograph (used to send signals by reflecting the light of
the sun), and semaphore (a method of signaling using two flags held in various positions by the
signaler). Modern telecommunications can probably be considered to have started with the
invention of the telegraph in 1832, which exploited the properties of electricity and
electromagnetism discovered in the 19th century. The telegraph operated over long distances
using a simple electrical circuit. An operator at one end of the connection repeatedly makes and
breaks an electrical contact using a telegraph key, and the resulting intermittent bursts of current
are used to produce a series of audible signals at the other end which are interpeted and
transcibed by a second operator.
A telegraph key and sounder
In the 1870s, Alexander Graham Bell was credited with the invention of the telephone, a device
that could transmit speech along a wire by varying the voltage in an electrical circuit using sound.
The invention was a result of Bell's attempts to improve the performance of the telegraph. Sound
is the result of differences in pressure in the air around us caused by vibrations.
A microphone uses these small differences in pressure to vary the resistance of an electrical
circuit, constantly changing the amount of current flowing through it. The current flowing through
the circuit thus becomes an analogue of the sound waves picked up by the microphone. The
public switched telephone system (PSTN) that subsequently evolved was originally intended only
for voice transmission, but as the end of the twentieth century approached, the installation of fibre
optic trunk lines and fully automated digital excghanges have enabled the PSTN to carry vast
amounts of digital data.
In the latter half of the nineteenth century, British physicist James Clerk Maxwell predicted that
moving electrons will create electromagnetic waves that can propagate through free space, a
theory that was later proved by German physicist Heinrich Hertz. By attaching an antenna to an
electrical circuit, electromagnetic waves can be broadcast and received by a receiver some
distance away. In 1901, Marconi successfully broadcast a radio message from Cornwall in the
UK to Canada, a distance of over three thousand kilometres. The behaviour of electromagnetic
waves varies with frequency. Today, much of the electromagnetic spectrum, including radio,
microwave, infra-red, and visible light, are used for both short-range and long range wireless
communications.
The telecommunications industry continues to develop new technologies and to deliver new
services, but many of the principles that underpinned the early development of telephony and
radio communications are just as relevant today as they have ever been. These pages examine
some of the fundamental characteristics of transmission lines, the application of analogue and
digital signalling techniques. They will also examine communication system architectures, explain
the importance of communication protocols, and provide an in-depth look at concepts such as
modulation and multiplexing.
Properties of Waves
A wave can be defined as the transfer of energy between two points without any physical transfer
of matter. Waves on the surface of the sea or on a lake provide an obvious example, because
they are highly visible. The fact that they transfer energy can be seen from the effects of coastal
erosion over many years, and from the more immediate effects involving the transfer of materials
onto the shoreline. Sound is an example of waves that we can hear, and is caused by vibrating
air molecules. A basic sine wave is illustrated below.
A typical sine wave
The properties of waves that can be measured or calculated are:
 Amplitude - the height of the wave in meters
 Wavelength - the distance between consecutive peaks in meters
 Period - the time a wave takes to pass a given point in seconds
 Frequency - the number of waves that pass a point in one second
 Speed - the speed at which a wave propagates in meters per second
The symbol normally used to denote wavelength is the Greek letter λ (lambda). Wavelength is
commonly expressed in terms of its frequency (ƒ) and velocity of propagation (v), as follows:
Frequency (ƒ) is the term used to describe the number of oscillations (cycles) per second of a
wave. The unit of frequency is the Hertz (Hz), and one Hertz is equal to one cycle per second.
The term is named after the German physicist Heinrich Rudolph Hertz, who first produced and
observed electromagnetic waves in 1887. The term is combined with metric prefixes to denote
multiple units such as the kilohertz (103
Hz), megahertz (106
Hz), and gigahertz (109
Hz). Other
properties of waves can be calculated:
 Period = frequency-1
 Speed = wavelength / period (or wavelength x frequency)
Baud Rate, Signalling Rate and Data Rate
The term signalling rate (or baud rate) is used to describe the number of signalling
elements (bauds) that can be transmitted in one second. The baud is named after the inventor of
the Baudot telegraph code, J.M.E. Baudot. Signalling elements are generally represented either
by a change in voltage on a transmission line (digital signalling) or by changes in the phase,
frequency or amplitude of an analogue carrier signal (analogue signalling). The terms baud
rate and data-rate (usually expressed as bits per second) do not mean the same thing, and are
sometimes confused.
If only one bit of information is encoded in each signalling element, then the baud rate and the
data rate (or bit-rate) will be the same. If two signalling levels are used, each element will
represent either one or zero. If more than two signalling levels are used, however, it becomes
possible to encode more than one bit per signal element. If four signalling levels are used, for
example, each signalling level can represent two bits, and the bit-rate will be twice the baud rate.
Bandwidth
A generally accepted definition of the bandwidth of an analogue transmission channel is the
difference between the highest and lowest frequencies that it can support. Bandwidth is typically
measured in hertz. In the case of a baseband channel, the bandwidth is generally considered to
be the highest frequency supported. The bandwidth of a channel that is made up of a number of
distinct physical transmission links is limited by the range of frequencies supported by all of the
links. In data communication networks, the term bandwidth often refers to the nominal
maximum data rate measured in bits per second (bps). The maximum data rate (or channel
capacity) of a physical communication link is related to its bandwidth in hertz, sometimes referred
to as its analogue bandwidth.
An analogue telephone line in Europe or North America typically has a bandwidth of 3 kHz, and
can carry frequencies of between 400 Hz and 3.4 Khz. The frequency response of the channel is
artificially limited by filters in the telephone transmission system (the type of twisted pair cable
employed in the subscriber loop can actually carry a much wider range of frequencies). By
comparison analogue TV signals, which comprise both video and audio components, require a
6MHz bandwidth RF channel. The graphic below provides a comparison of the typical bandwidths
achievable using current or proposed Internet access technologies.
Comparative bandwidth of current and proposed Internet access technologies
Since digital signals are often represented by discrete voltage levels, the signal elements that
make up a digital transmission can essentially be considered to be square wave pulses. Such
waveforms do not occur naturally, and the French scientist Jean Baptiste Joseph Fourier (1768 -
1830) was able to demonstrate that such a signal can only be generated by combining a number
of sine waves, each having a different frequency and amplitude, to create a more complex
waveform.
The frequency of the square wave itself is said to be the fundamental frequency. It can be shown
that by taking a sine wave with same frequency as the required square wave, and adding
successive odd-numbered harmonics to it, a square wave can be approximated. A harmonic is a
sine wave with a frequency that is an integer multiple of the fundamental frequency. By adding
together the fundamental, third harmonic and fifth harmonic, we can achieve a waveform that is
an approximation of a square wave. The fundamental, 3rd
and 5th
harmonics are shown below,
and are labelled A, B and C respectively. Notice that the amplitude of each harmonic relative to
that of the fundamental is approximately the inverse of its harmonic number.
Fundamental sine wave with third and fifth harmonics
The image below illustrates the effect of adding these sine waves together. The resulting
waveform is starting to resemble our ideal square wave, although in practice it would require an
infinite number of harmonics to produce a "perfect" square wave. Since no transmission medium
is capable of supporting an infinite range of frequencies, the best that can ever be achieved will
be an approximation of a square wave. It is the properties of the receiver in a commununications
channel that will determine how good an approximation is required, and therefore the bandwidth
that must be supported by the channel.
Adding the fundamental, third and fifth harmonics produces an approximation of a square wave
We so far have looked at the waveform of a complex wave (in this case a square wave) as it might
appear on an oscilloscope, which displays the amplitude of a waveform as a function of time. In
otherwords, we have looked at these waveforms in the time domain. We could also look at the
waveform using a spectrum analyser, which displays the amplitude and frequency of each sine
wave used to generate the complex waveform. Looking at the same square wave illustrated above
in the frequency domain, therefore, we would see something like the image below.
A time-domain view of a squarewave comprising the fundamental, third and fifth harmonics
Velocity of Propagation
The Velocity of Propagation (VoP) is a measure of the speed at which a signal travels through a
transmission medium, usually expressed as a percentage of the speed of light in a vacuum
(approximately 3x108
metres per second). In a conducting material (e.g. copper), the VoP of a
high-frequency electrical signal is equal to the reciprocal of the square root of the dielectric
constant of the material:
Twisted pair copper cables typically have a VoP of between 40% and 75%. A VoP of 66%
corresponds to a speed of approximately 2x108
metres per second.
Analogue Signals
An analogue signal is an electro-magnetic waveform that continuously varies its amplitude over
time. It differs from a digital signal in that small fluctuations in the amplitude of the signal may
convey information. The word analogue reflects the fact that the signal is often an analogy of
some real-world input to the system. For example, there is a direct relationship between the
variation in the voltage of an electrical signal on a telephone line and the pattern of sound waves
entering the microphone mounted in the telephone's handset.
An analogue system uses some physical property of the signal to convey information. In
telecommunications systems, the property most commonly used is voltage, which is made to vary
in response to some physical input. This is achieved using a transducer. A transducer is a device
that converts energy from one form to another (e.g. heat energy to light, sound energy to an
electrical signal, etc.). A clock with hands is said to be an analogue device because the time is
represented by the constantly changing position of the clock's hands (although for many clocks
the movement of the hands around the clock face occurs as a series of small, discrete increments,
rather than a smooth and continuous circular motion).
In on of the oldest types of microphone, sound waves striking a thin diaphragm cause it to vibrate.
Carbon dust inside the microphone, used to conduct an electrical current through the device,
rapidly changes in density as the vibrating diaphragm compresses, and then releases it. The small
changes in the density of the carbon dust alter its electrical resistance, varying the amount of
current that can flow through it. Since the resistance of the telephone wire itself does not change,
and since, for a given value of resistance, voltage varies in direct proportion to current, these
small changes in current can be seen as changes in voltage across the telephone line.
A typical analogue signal
The main disadvantage of an analogue signalling systems is that, because the signal is
continuously varying (as opposed to the two or three discrete levels used in digital systems) any
unwanted signals (noise) introduced into the system are often difficult to detect and to filter out of
the signal. Furthermore, the effects of noise get worse the further the signal has to travel, because
the signal is attenuated. Essentially, this means that the signal becomes weaker the further it
travels from its source, whereas the level of noise, both inherent and external to the system,
remains relatively constant. As a result, the signal-to-noise ratio (SNR) decreases steadily, and
at some point the signal will become indistinguishable from the noise. A signal may, of course, be
amplified at one or more points along the transmission path in order to compensate for
attenuation, but the noise in the signal will inevitably be amplified as well. The effects of noise can
be mitigated by using suitable cable and connector types to screen out external interference, but
there is no way of eliminating the so-called Gaussian noise (or thermal or white noise) which is
due to the random movement of electrons in a conducting material.
The range of levels in an analogue signal can be said to be infinite, because any two points on
the waveform, however adjacent, will have different values. The relative distance between the two
points can theoretically be halved, and halved again an infinite number of times, without producing
two identical values, since an analogue signal has no discontinuous points and follows an
unbroken curve for its full duration. In principle, therefore, it would seem that an analogue signal
should be able to represent some real-world dynamic entity, such as the sound of the human
voice or a symphony orchestra, far better that a digital signal that essentially consists of only two
or three discrete voltage levels. Indeed, when it comes to the subject of the reproduction of music,
there is much debate over the relative merits of analogue and digital recording techniques. When
it comes to telecommunications, however, the problem becomes one of maintaining signal
integrity over long distances.
The signal can, of course, undergo amplification at various points along the transmission path to
ensure that the signal-to-noise ratio is maintained above some predefined threshold. Some of the
inherent or injected noise can probably be filtered out of the signal. Unfortunately, the very nature
of an analogue signal (i.e. constantly varying) means that it is usually not possible to completely
separate the original signal from the noise, particularly in view of the fact that the inherent
Gaussian noise is present across the entire frequency spectrum supported by the physical
medium. Hence, when an analogue signal undergoes amplification, any noise that cannot be
removed from the signal is amplified along with it, in equal proportion.
The effects of noise can be reduced in analogue telecommunications systems using appropriate
design, engineering and installation techniques. Such techniques would include the use of
suitable transmission media, which could dictate the use of shielded cabling, and careful selection
of cabling routes to avoid potential sources of electromagnetic interference. Analogue signals
have been used successfully for decades to carry relatively low-frequency voice signals through
the public switched telephone network, and are still widely used in the local loop of the telephone
network (the connections between telephone company subscribers and their local exchange).
Until relatively recently, analogue systems were also used for radio and television broadcasting.
The advent of the Internet and the proliferation of computers in commerce, industry and the home
have fuelled the development of digital communications systems capable of carrying virtually any
and all kinds of digital data. Despite the digital revolution, however, an understanding of analogue
signalling techniques is still crucial to a study of telecommunications systems.
Digital Signals
A digital signal represents information as a series of binary digits. A binary digit (or bit) can only
take one of two values - one or zero. For that reason, the signals used to represent digital
information are often waveforms that have only two (or sometimes three) discrete states. In the
signal waveform shown below, the signal alternates between two discrete states (0 volts and 5
volts) which could be used to represent binary zero and binary one respectively. If it were actually
possible for the signal voltage to instantly transition from zero to five volts (or vice versa), the
signal could be said to be discontinuous. In reality, such an instantaneous transition is not
physically possible, and a small amount of time is required for the voltage to increase from zero
to five volts, and again for the signal to drop from five to zero volts. These finite time periods are
referred to as the rise time and the fall time respectively.
A simple digital signal
In the simple digital signal represented above, alternating binary ones and zeroes are represented
by different voltage levels. A binary one would appear on the transmission line as a short voltage
pulse, while a binary zero would be represented as an absence of voltage. This rather simplistic
signalling scheme has a number of serious flaws, one of which is that a long series of consecutive
ones (or a long series of consecutive zeroes) presents the receiver with the problem of
determining exactly how many bits are actually being transmitted. For this to be possible, the
duration of each bit-time must be known to both the transmitter and the receiver, and the
receiver?s internal clock must be synchronised exactly with that of the transmitter, so that the
correct number of consecutive identical bits can be calculated by the receiver. In the example
shown below, there are no more than two consecutive bits with the same value, which would not
normally present the receiver with too much of a problem. Extended runs of binary numbers
having the same value, however, would prove far more of a challenge.
Data representation in a digital signal
Our simple example in the first diagram uses a positive voltage to represent a one, and the
absence of a voltage to represent a zero (for historical reasons, the terms mark and space are
often used to refer to the binary digits one and zero respectively). This prompts the question of
how the receiver knows whether the transmitter is transmitting a long stream of zeroes, or has
simply ceased to transmit. There are, in fact, many different digital encoding schemes that
overcome this problem, together with that of long streams of bits having the same value, which
we will look at in more detail elsewhere. For now, it is enough to understand that digital signals
convey binary data in the form of ones and zeros, using different, discrete signal levels to
represent the different logical values. If the signalling scheme used employs a positive voltage to
represent one logic state, and a negative voltage to represent the other, the signal is said to
be bipolar.
The number of bits that can be transmitted by the signalling scheme in one second is known as
its data rate, and is expressed as bits per second (bps), kilobits per second (kbps) or megabits
per second (Mbps). The duration of a bit is the time the transmitter takes to output the bit (and as
such is obviously related to the data rate). The modulation or signalling rate is the rate at which
the signal level is changed, and depends on the digital encoding scheme used (and is also directly
related to the data rate). A special case of digital signalling involves the generation of clock
signals used to provide synchronisation and timing information for various signal-processing and
computing devices. Clock ticks are triggered by either the rising or falling edge (or in some cases
both the rising and falling edges) of an alternating digital signal.
The physical communications channel between two communicating end points will inevitably be
subject to external noise (electromagnetic interference), so errors will occasionally occur. The
degree to which the receiver will be able to correctly interpret incoming signals will depend upon
several factors, including its ability to synchronise with the transmitter, the signal-to-noise
ratio (SNR), which is a measure of the difference between the transmitted signal strength and the
level of background noise, and the data rate. The data rate is significant in this respect because
it is directly related to the baseband frequency used. Signals at higher frequencies tend to be
more susceptible to very short but high-intensity bursts of external noise (impulse noise), because
as frequency increases, there is a greater likelihood that one or more bits in the data stream will
become corrupted by a so-called "spike".
In order for the receiver to correctly interpret an incoming stream of bits, it must be able to
determine where each bit starts and ends. In order to do this, it needs to somehow be
synchronised with the transmitter. It will need to sample each bit as it arrives to determine whether
the signal level is high (denoting a binary one) or low (denoting a binary zero). In the simple digital
encoding schemes considered so far, each bit will be sampled in the middle of the bit-time, and
the measured value compared to pre-determined threshold values to determine whether it is a
logic high or a logic low (or neither).
Timing information becomes more critical as data rates increase and the bit duration becomes
shorter, especially for data transfers involving large blocks of data consisting of thousands of bits
of information. At relatively low data rates, and for asynchronous data transmission involving only
a few bits or bytes of data at any one time, the receiver?s internal clock signal will normally suffice
to maintain synchronisation with the transmitter long enough to sample the incoming bits in each
block of data received at (or close to) the centre of each bit-time (synchronous and asynchronous
transmission are dealt with in more detail elsewhere). For larger blocks of data, however, the
receiver?s internal clock cannot be relied upon to remain synchronised with the transmitter. A
more reliable timing mechanism is required to maintain synchronisation between receiver and
transmitter.
One option would be for the transmitter to transmit a separate timing signal which the receiver
could use to synchronise its sampling operations on the incoming data stream. This would
significantly increase the overall bandwidth required for data transmission, and make the digital
transmission system far more difficult to design and implement. Fortunately this is not necessary,
because the required timing signal can be embedded in the data itself. This is achieved by
encoding the data in such a way that there is a guaranteed transition in signal level (from high to
low or from low to high) at some point during each bit-time. One such encoding scheme,
called Manchester encoding, is illustrated below. This scheme guarantees a transition in the
middle of each bit-time that serves as both a clocking mechanism and as a method of encoding
the data. A low-to-high transition represents a binary one, while a high-to-low transition
represents a binary zero. This type of encoding is known as bi-phase digital encoding. Such
schemes are said to be self-clocking, and have no net dc component (there are both positive and
negative voltage components of equal duration, during each bit-time).
Manchester encoding is a bi-phase digital encoding scheme
One of the main advantages of digital communications is that virtually any kind of information can
be represented digitally, which means that many different kinds of data may be transmitted over
the same physical transmission medium. In fact, a number of different digital data streams may
share the same physical transmission medium at the same time, thanks to
advanced multiplexing techniques (multiplexing will be discussed in detail elsewhere). The
number of bits required to represent each item of data transmitted will depend on the type of
information being sent. Alpha-numeric characters in the ASCII character set, for example, require
eight bits per character. Other character encoding schemes can represent a far greater number
of characters, but require more bits to represent each character. Analogue information (for
example audio or video data) can be represented digitally by sampling the analogue waveform
many hundreds, or even thousands of times per second, and then encoding the sample data
using a finite range of discrete values (a process known as quantising). The values derived using
the quantisation process are then represented as binary numbers, and as such can be transmitted
over a digital communications medium as a bit stream. The sampling, quantisation, and
conversion to binary format represent an analogue-to-digital conversion (ADC).
The sampling process repeatedly measures the instantaneous voltage of the analogue waveform
The quantisation process assigns a discrete numeric value to each sample
The quantised values are encoded as binary numbers
The number of bits used to represent each sample will depend on the total number of discrete
values required to represent the original data so that the original analogue waveform can be
reproduced at the receiver to an acceptable standard. The more samples taken per unit time, the
more closely the reconstructed analogue waveform will reflect the original waveform (or, to put it
another way, the higher the resolution will be). The cost of higher resolution is that more bits will
be required to digitally encode each sample, increasing the bandwidth required for transmission.
Analogue human voice signals are encoded for transmission over digital circuits in the public
switched telephone service (PSTN) using eight bits per sample, giving a range of 256 possible
values for each sample. The signals are sampled eight thousand times per second, giving a total
requirement of 8 x 8,000 bits per second, or 64 kbps. This is adequate for voice transmission over
the telephone network which has traditionally been restricted to a bandwidth of less than 4 kHz
(the significance of this restriction will be discussed elsewhere).
For high-quality real-time video transmission, the data rate (and hence the required transmission
bandwidth), will be far higher. Various data compression techniques can be used to maximise the
bandwidth utilisation, but a significant amount of bandwidth will still need to be available to
guarantee high-quality real-time video transmission, and the complexity of the signal processing
required will be greater.
The ability to interleave video, audio, and other forms of data on the same digital transmission
links has already been mentioned. Another important advantage of digital signalling is the fact
that, because it employs discrete signalling levels, a receiver need only determine whether the
sampled voltage represents a logic high (1) or a logic low (0). Small variations in level can
otherwise be ignored as having no significance, unlike the continuously varying analogue signals,
where even small variations in the amplitude may convey information (or represent fluctuations
due to noise). Digital signals suffer from attenuation of course, in the same way that analogue
signals suffer from attenuation. Unlike analogue signals, however, as long as a receiver can
distinguish between logic high and logic low, the incoming signals can be amplified and repeated
with no loss of data whatsoever. The regenerated signal that leaves a digital repeater is identical
to the digital signal originally transmitted by the source transmitter.
Simplex and Duplex Channels
In a simplex transmission, one device acts as the transmitter and a second device acts as the
receiver. Data flows in one direction only, whereas in a duplex channel, the communication is bi-
directional. Full-duplex transmission uses two separate communication channels so that two
communicating devices can transmit and receive data at the same time. Data can flow in both
directions simultaneously. Half-duplex transmission is a compromise between simplex and full-
duplex transmission. A single channel is shared between the devices wishing to communicate,
and the devices must take turns to transmit data. Data can flow in both directions, but not
simultaneously.
Synchronous and Asynchronous Transmission
One of the main problems when two devices linked by a transmission medium wish to exchange
data is that of synchronising the receiving device with the transmitting device. Typically, data is
transmitted one bit at a time, and the data rate must be the same for both the transmitter and the
receiver. The receiver must be able to recognise the beginning and end of a block of bits, and
know the time taken to transmit each bit, so that it can sample the line at the correct time to read
each bit. When the sending device is transmitting a stream of bits, it uses an internal clock to
control timing. If data is transmitted at 10 Kbps, a bit is transmitted every 0.1 milliseconds. The
receiver attempts to sample the line at the centre of each bit time, i.e. at intervals of 0.1
milliseconds. If the receiver uses its own internal clock for timing, a problem will arise if the clocks
in the transmitter and receiver are not synchronised. A drift of 1 percent will cause the first sample
to be 0.01 of a bit time away from the centre of the bit, so that after fifty or more samples, the
receiver may be sampling at the wrong bit time. The smaller the timing difference, the later the
error will occur, but if the transmitter sends a sufficiently long stream of bits, the transmitter and
receiver will eventually be out of step. Two approaches exist to solve the problem of
synchronisation - asynchronous transmission and synchronous transmission.
Asynchronous transmission
Timing problems are avoided by simply not sending long streams of bits. Data is transmitted one
character (byte) at a time. Synchronisation only needs to be maintained within each character,
because the receiver can resynchronise at the beginning of each new character. When no
characters are being transmitted, the line is idle (usually represented by a constant negative
voltage). The beginning of a character is signalled by a start bit (usually a positive voltage),
allowing the receiver to synchronised its clock with that of the transmitter. The rest of the bits that
make up the character follow the start bit, and the last element transmitted is a stop bit that is
typically 1.5 or 2 times as long as the other bits transmitted. The transmitter then transmits the
idle signal (which is usually the same voltage as the stop bit) until it is ready to send the next
character (see below).
Character format in asynchronous transmission
Asynchronous transmission is also known as start-stop mode or character mode. Each character
is framed as an independent unit of data that may be transmitted and received independently.
Data may also be transmitted as a continuous stream of characters. Most communications
systems require a specific number of bits to represent each character, plus a parity bit that is often
included to provide simple error detection. Asynchronous data characters normally contain 8 data
bits (including the parity bit) plus a start bit and at least 1 stop bit, giving a total of 10 bits. Data
can be transmitted in blocks of characters known as transmission blocks. The transmission block
may use special control characters to provide control functions and to identify the start and end
of a block. Asynchronous transmission is only really suitable for relatively low data rates (up to 3
Kbits). Many of the bits transmitted in each block are control bits, giving a high proportion of
overhead. It is used mainly for applications where character data is generated at irregular intervals
(e.g. user input from a keyboard).
Synchronous transmission
With synchronous transmission, the receiver's clock is synchronised with the transmitter's clock.
Data is transmitted in a continuous stream, and the arrival time of each can be predicted by the
receiver. This is achieved either by using a separate timing circuit, or by embedding the timing
information in the signal itself. The latter can be achieved using bi-phase encoding (e.g.
Manchester encoding). An embedded timing signal can be used by the receiver to synchronise
with the transmitter using a Digital Phase-Locked Loop (DPLL).
Use of embedded timing information
A data frame usually starts with one or more bytes of data that have a unique bit pattern, or flag
(sometimes called a preamble), that tells the receiver a block of data will follow. The preamble is
followed by various control fields, a variable-length data field, more control fields, and finally a
postamble. The control information within the frame will include a length field, which specifies the
amount of data to be read.
A bit-oriented frame
For large blocks of data, synchronous transmission is far more efficient than asynchronous
transmission, requiring far less overhead. The accuracy of the timing information allows much
higher data rates. There is usually a minimum frame length, and each frame will contain the same
amount of control information regardless of the amount of data in the frame.
Noise
In any communication system, the received signal will consist of the transmitted signal, attenuated
as it has propagated along the transmission media and suffering from some distortion due to the
characteristics of the system. In addition, unwanted signals (or noise) may occur between the
transmitter and the receiver which are added to the transmitted signal. Noise is the main factor
that limits the performance of a communications system.
The effect of noise on a digital signal
There are four categories of noise:
 Thermal (Gaussian) noise - this is due to the thermal agitation of electrons in a
conductor, is present in all electronic devices and transmission lines, and is a
function of temperature. It is distributed uniformly across the frequency spectrum,
and is often referred to as white noise. It cannot be eliminated, and limits overall
system performance.
 Intermodulation noise - this can occur if signals at different frequencies share the
same transmission line. It results in signals that are the sum or difference of the
original signals, and occurs when there is some non-linearity in the communication
system (which may be caused by component malfunction or excessive signal
strength).
 Crosstalk - this is the phenomenon that allows you to hear someone else's
conversation whilst using the telephone, and occurs due to electrical coupling
between two or more transmission paths (such as adjacent twisted-pair cables).
 Impulse noise - this consists of random pulses (or spikes) of noise, usually of short
duration and relatively high amplitude. Causes include external electromagnetic
disturbances such as lightning, vehicle ignition systems, heavy-duty electrical
equipment, and faults in the communications system itself. It is usually only a minor
annoyance for analogue systems such as a telephone link, but is the primary cause
of errors in digital communication.
Shannon Limit
In 1924 Harry Nyquist derived an equation expressing the maximum data rate for a noiseless
channel. Nyquist proved that if an arbitrary signal is run through a low-pass filter of a given
bandwidth (H), the filtered signal could be completely reconstructed by line samples taken at a
rate equivalent to twice the bandwidth. Sampling the line more frequently is pointless, because
the higher frequency components that such sampling could recover have already been filtered
out. If the signal consists of V discrete levels, Nyquist's theorem states:
Maximum data rate = 2H log2 V bits per second
In 1948 Claude Shannon took this work further and extended it to the case of a channel subject
to random (thermal) noise. According to Nyquists, a noiseless 3 KHz channel cannot transmit
binary (i.e. two-level) signals at a rate exceeding 6000 bits per second. If random noise is
introduced, the situation deteriorates rapidly. The amount of thermal noise present in a signal is
expressed as the ratio of signal power (S) to noise power (N), and is called the signal-to-noise
ratio (SNR). The ratio will become smaller as the signal propagates through the transmission
medium due to attenuation of the transmitted signal. The SNR is not usually usually expressed
as a ratio. Instead, the value 10 log10 S/N is used. The unit thus derived is known as
a decibel (dB). A signal-to-noise ratio of 10 would be expressed as 10 dB; a ratio of 100 as 20
dB; a ratio of 1000 as 30 dB and so on. Shannon found that the maximum data rate of a noisy
channel with a bandwidth of H Hz, and a signal-to-noise ratio S/N is given by:
Maximum data rate = H log2 (1+S/N) bits per second
As an example, a channel of 3000-Hz bandwidth, and a signal to thermal noise ratio of 30 dB
(typical parameters for an analogue telephone line) can never transmit much more than 30,000
bps, no matter how many signal levels are used, and no matter how frequently samples are taken.
Shannon's result can be applied to any channel subject to Gaussian (thermal) noise. It should
also be noted that this limitation is an upper bound, and real systems will rarely achieve it.
Data Structures
Most data communications networks require that information transmitted between two end points
is divided into blocks of a manageable size in order to make the most efficient use of network
bandwidth and to facilitate switching and routing. The type of network over which the data is to be
transmitted will determine the maximum block size. Each block contains both the data itself and
some control information, such as the source and destination address, and an error checking
code.
The name given to these blocks will depend on the communications protocol that created them.
The term protocol data unit (PDU) is a generic term that can refer to any unitised collection of
data and control information, although it is normally used only with upper-layer communication
protocols like the Transmission Control Protocol (TCP). The term packet (or datagram) is used to
describe blocks produced by network layer protocols such as the Internet Protocol (IP), while the
term frame is used to describe the blocks produced by data-link layer protocols like Ethernet.
Amplitude Modulation
Amplitude modulation (AM) is a modulation technique in which the amplitude of a high frequency
sine wave (usually at a radio frequency) is varied in direct proportion to that of a modulating signal.
The modulating signal carries the required information and often consists of audio data, as in the
case of AM radio broadcasts or two-way radio communications. The high frequency sine wave
(the carrier) is modulated by adding the modulating signal to it in a mixer. A simplified AM radio
transmitter system is shown below.
A simplified AM radio transmitter system
A simple form of amplitude modulation was originally used to modulate audio voice signals onto
a low-voltage direct current (dc) carrier on a telephone circuit. A microphone in the telephone
handset acts as a transducer, and uses the sound waves produced by the human voice to vary
the current passing through the circuit. At the other end of the telephone line, a second transducer
(in the form of a small loudspeaker mounted in the remote handset) uses the varying voltage to
produce sound waves that are close enough to the original speech patterns to be recognisable
as the voice of the caller. Although the human voice is composed of frequencies ranging from 300
to approximately 20,000 hertz, the public switched telephone system limits the frequencies used
to between 300 and 3,400 hertz, giving a total bandwidth of 3,100 hertz. This bandwidth is
perfectly adequate for purely voice transmission, since the higher frequencies in the human voice
(i.e. those above 3,100 hertz) are not really needed for recognisable speech reproduction. The
use of a limited bandwidth also makes the telephone system much simpler from an engineering
perspective.
Whereas telephone signals can be transmitted at audio frequencies, the same is not really a
practical proposition for radio transmissions. The main reason for this is that the optimum length
of a radio antenna is a half or a quarter of a wavelength. Since a typical audio frequency of 3,000
hertz has a wavelength of approximately 100 kilometres, the antenna would need to have a length
of 25 kilometres to be effective - not a realistic proposition. By comparison, a radio frequency of
100 megahertz would have a wavelength of approximately 3 metres, and could use an antenna
80 centimetres long. It becomes necessary, therefore, to use a radio frequency carrier signal in
order to transmit audio signals, which are used to modulate the carrier waveform.
A typical amplitude modulated signal
Modulating a carrier wave by adding another, lower frequency signal results in a signal that has
most of its power concentrated in the carrier, with the rest shared between two sidebands, one
above the carrier in frequency and one below it. The highest frequency in the modulating signal
is typically less than ten percent of that of the carrier. The process of creating these sideband
frequencies by adding another signal to the carrier is known as heterodyning. In the simplest case,
the carrier can be modulated by adding another single-frequency sine wave signal to it, changing
the carrier's shape (or envelope) as illustrated above. The sideband frequencies account for
approximately 33% of the transmitted power. If a more complex modulating signal (such as an
audio signal) is used to modulate the carrier, the sidebands account for only about 20-25% of the
total transmitted power.
Consider, for example, a 100 kHz carrier that is modulated by a steady audio signal (or tone) of
5 kHz. When these signals are added, two sidebands are produced. One sideband has a
frequency equal to the sum of the carrier and the modulating signal (100 kHz + 5 kHz = 105 kHz),
while the other sideband has a frequency equal to the difference between the carrier and the
modulating signal (100 kHz - 5 kHz = 95 kHz). The two sidebands are 5 kHz equidistant from the
carrier (one above it and one below it), giving a total bandwidth for the modulated signal of 10
kHz (105 kHz - 95 kHz). The resulting frequency spectrum is illustrated below.
A 100 kHz carrier modulated by a 5kHz audio tone
Of course, most audio signals (speech and music, for example) are far more complex than a
single-frequency audio tone, and are composed of many different frequencies. When a carrier is
modulated with a more complex audio signal, therefore, all of the frequencies present in the audio
signal are represented in the resulting output signal. In this case, the total bandwidth is the
difference between the sum and the difference values of the carrier and the highest frequency
component of the modulating signal. To simplify things, the modulated signal bandwidth will be
twice that of the modulating signal. For a modulating audio signal with frequency components
ranging from 0 - 6 kHz, therefore, the bandwidth of the modulated signal for a 100 kHz carrier will
be 106 kHz - 94 kHz = 12 kHz. This produces a more complex frequency spectrum, which might
look something like that shown below.
A 100 kHz carrier modulated by an audio signal (frequencies up to 6 kHz)
The bandwidth of each sideband is equal to that of the modulating signal, and the two sidebands
are mirror images of each other, each carrying the same information as the original audio signal.
This type of basic amplitude modulation, which results in two sidebands and a carrier, is usually
referred to as double sideband amplitude modulation (DSB-AM). It is a very inefficient form of
modulation in terms of its power usage, because at least two thirds of the transmitted power is
concentrated in the carrier signal, with the remaining power being evenly split between the two
sidebands. Since the sidebands contain identical information, only one sideband is actually
needed to carry the transmitted audio information. The other sideband is redundant, and the
carrier signal contains no useful information. DSB-AM is also therefore spectrally inefficient,
because fewer stations can make use of a given transmission band. The main benefit of DSB-AM
is that, because of its relative simplicity, receiving equipment is cheaper to produce.
The process of demodulation for DSB-AM is relatively straightforward. The radio frequency carrier
can be removed from the signal using a simple diode detector consisting of a diode, a resistor,
and a capacitor. The incoming signal is rectified by the diode, which allows only half of the
alternating waveform to pass through it. The capacitor removes the remaining radio frequency
signal components to provide a smooth output, and the resistor allows the capacitor to discharge.
An AM receiver can thus be produced relatively cheaply, since there is no requirement for
specialised components. The basic circuit diode detector circuit is shown below.
A basic diode detector circuit
Because the modulating signal is added to the carrier, the instantaneous amplitude of the
modulated signal will depend on the instantaneous amplitude of the modulating data.
The modulation index is a measure of the degree to which the modulating signal increases the
maximum amplitude of the carrier signal. If the carrier's amplitude is made to vary between 50%
above and 50% below its un-modulated value, it is said to have a modulation index of 0.5. If the
amplitude is made to vary by 100% above and below its un-modulated value, it has a modulation
index of 1.0. A modulation index of 1.0 for the A3E transmission mode will give a maximum
transmitter power efficiency of 33%. Increasing the modulation index would result in greater power
efficiency, but would result in distortion at the receiver.
The power efficiency of the transmitter can be increased by removing (suppressing) the carrier
from the AM signal to create a reduced-carrier transmission, or double-sideband suppressed-
carrier (DSBSC). DSBSC is three times more power-efficient than DSB-AM. A similar scheme, in
which the carrier is only partially suppressed, is called double-sideband reduced-
carrier (DSBRC). Both schemes require the carrier to be regenerated by a local oscillator in the
receiver in order that demodulation can be achieved using standard demodulation techniques. In
addition to transmitter efficiency, spectral efficiency can be achieved by completely suppressing
both the carrier and one of the sidebands, although the complexity of both the transmitter and the
receiver is increased significantly. The ITU designations for the various amplitude modulation
schemes are shown in the table below.
ITU Amplitude Modulation Scheme Designations
Designation Description
A3E Double-sideband full-carrier
R3E Single-sideband reduced-carrier
H3E Single-sideband full-carrier
J3E Single-sideband suppressed-carrier
B8E Independent-sideband emission
C3F Vestigial-sideband
Lincompex Linked compressor and expander
The carrier frequencies used in some applications are very high (radar frequencies, for example,
range from 3MHz up to 300 GHz). At very high frequencies, many standard electronic
components cannot function properly. A superheterodyne receiver is one that reduces the
frequency of an incoming signal by adding a lower frequency to it using a mixer (a process known
as superheterodyning) to reduce the frequency of the AM signal, which is centred on the carrier
frequency, to some lower frequency called the intermediate frequency (IF) prior to processing.
The intermediate frequency obtained is the difference (or beat) frequency between the incoming
AM signal's carrier frequency and that of the local oscillator. The receiver will use a tuner to select
the required carrier frequency, and to adjust the frequency of the receiver's local oscillator so that
the intermediate frequency will always have the same value (the tuner and the local oscillator or
therefore tightly coupled). This both simplifies the design of the receiver and reduces its cost,
since the majority of its components will be required only to operate at a single intermediate
frequency rather than over a range of frequencies. A simple superheterodyne receiver system is
shown below.
A superheterodyne receiver
The band-pass filter in the tuner filters out all signals except the selected carrier frequency. The
receiver bandwidth is usually some fraction of the carrier frequency. A receiver bandwidth of 2%,
for example, means that any signals between 2% above and 2% below the carrier frequency are
allowed to pass through the filter. For a carrier frequency of 850 kHz, this would mean that all
signals between 833 kHz and 867 kHz are accepted by the receiver. If the same fraction is applied
to the intermediate frequency, then for a fixed IF of 452 kHz, only signals that are within the range
443 kHz to 461 kHz will pass. The local oscillator is set to 398 kHz to reduce the 850 kHz carrier
to 452 kHz (the beat frequency).
Any adjacent signals are also superheterodyned, but remain at the same margin above and below
the original signal. If the incoming signal includes interference at 863 kHz, a conventional 2%
receiver will allow the interference to pass, since the interference falls within the range 833 kHz
to 867 kHz. If the signal is superheterodyned using a local oscillator frequency of 398 kHz, the
interfering signal will be shifted down to a beat frequency of 465 kHz. If the resulting IF frequency
is also limited to a bandwidth of 2%, any frequencies below 443 kHz or above 461 kHz will be
filtered out. This means that the interference at 465 kHz will be eliminated from the signal (i.e. it
has been suppressed). It is apparent, therefore, that the superheterodyne receiver is more
selective. The term used to describe the process of narrowing the receiver bandwidth in this way
is arithmetic selectivity.
In order to increase both the power efficiency and spectral efficiency of the transmitter, it is
necessary to remove both the carrier and one of the sidebands from the transmitted AM signal. A
simplified single sideband AM transmitter is shown below.
A single sideband AM transmitter system
The receiver must restore the carrier signal before demodulation can take place by creating its
own carrier signal using a local oscillator and adding it to the received SSB AM signal in a mixer.
A suitable receiver system might look something like that shown below.
A single sideband AM receiver
A simple form of AM, often used for digital communications is on-off keying, in which binary data
is represented as the presence or absence of the carrier wave. This method is often used at radio
frequencies to transmit Morse code.
A simple amplitude modulated digital signal
Quadrature Amplitude Modulation (QAM)
Quadrature amplitude modulation (QAM) is a modulation scheme in which two sinusoidal carriers,
one exactly 90 degrees out of phase with respect to the other, are used to transmit data over a
given physical channel. One signal is called the "I" signal, and can be represented by a sine wave.
The other is called the "Q" signal, and can be represented by a cosine wave. Because the carriers
occupy the same frequency band and differ by a 90-degree phase shift, each can be modulated
independently, transmitted over the same frequency band, and separated by demodulation at the
receiver. For a given bandwidth, QAM enables data transmission at twice the rate of standard
pulse amplitude modulation without any degradation in the bit error rate. QAM and its derivatives
are used in both mobile radio and satellite communication systems. Each symbol is a specific
combination of signal amplitude and phase. By combining the amplitude and phase modulation
of a carrier signal, it is possible to increase the number of possible symbols and therefore transmit
more bits for each symbol. One way to represent the symbols is to use a constellation pattern
diagram such as the one shown below. The pattern shown represents the different amplitudes
and phases. Dots at 0, 90, 180, and 270 degrees all have two possible amplitudes resulting in
eight different symbols. With eight symbols, it is possible to transmit 3 bits for each symbol. For
example, if the modulated signal is of amplitude 1 at 0 degrees, three zeros (000) are transmitted.
A 3-bit QAM constellation
Modern communication equipment requires modulation that uses dense constellation patterns.
The diagram below depicts a 16-state constellation pattern, allowing the transmission of four bits
for every baud. The number of states grows exponentially to the number of bits transmitted per
baud. Transmitting eight bits per baud would require 256 possible states, resulting in a very dense
constellation pattern.
A 4-bit QAM constellation
Frequency Shift Keying (FSK)
Frequency shift keying (FSK) is one of several techniques used to transmit a digital signal on an
analogue transmission medium. The frequency of a sine wave carrier is shifted up or down to
represent either a single binary value or a specific bit pattern. The simplest form of frequency shift
keying is called binary frequency shift keying (BFSK), in which the binary logic values one and
zero are represented by the carrier frequency being shifted above or below the centre frequency.
In conventional BFSK systems, the higher frequency represents a logic high (one) and is referred
to as the mark frequency. The lower frequency represents a logic low (zero) and is called
the space frequency. The two frequencies are equi-distant from the centre frequency. A typical
BFSK output waveform is shown below.
Binary Frequency Shift Keying (BFSK)
If there is a discontinuity in phase when the frequency is shifted between the mark and space
values, the form of frequency shift keying used is said to be non-coherent, otherwise it is said to
be coherent. In more complex schemes, additional frequencies are used to enable more than one
bit to be represented by each frequency used. This provides a higher data rate, but requires more
bandwidth (representing a group of two binary values, for example, would require four different
frequencies). It also increases the complexity of the modulator and demodulator circuitry, and
increases the probability of transmission errors occurring.
Audio frequency shift keying (AFSK)
Audio frequency-shift keying (AFSK) is a modulation technique in which binary data is
represented by changes in the frequency of an audio tone, and is one of the techniques used for
transmission on analogue telephone lines. Two tones are normally used to represent the mark
and space values. Many early analogue modems employed AFSK to transmit data at rates of up
to about 300 bits per second, and some early microcomputers used a modified form of AFSK to
store data on audio cassettes.
Phase Shift Keying (PSK)
Phase-shift keying (PSK) is a method of modulating digital signals onto an analogue carrier wave
in which the phase of the carrier wave is shifted between two or more values, depending upon
the logic state of the input bit stream. The simplest method uses two phases - 0 degrees and 180
degrees. The logic state of each bit is examined with respect to the logic state of the preceding
bit. If the logic state changes (i.e. from logic high to logic low) the phase of the carrier is shifted
by 180 degrees. If the logic state does not change, the phase of the carrier remains the same.
This form of PSK is sometimes called biphase modulation. The output waveform of a 2-phase
PSK modulator is shown below.
Phase shift key modulation
More complex forms of PSK employ four or eight phases. This allows more bits to be transmitted
for each phase angle used. In four-phase modulation, the possible phase angles are +45/-315,
+135/-225, +225/-135, and +315/-45 degrees (a phase difference between symbols of 90
degrees), and each symbol can represent two signal elements (00, 01, 10 or 11). In eight-phase
modulation, the phase difference between symbols is 45 degrees, and each phase shift can
represent three signal elements (000, 001, 010, 011, 100, 101, 110, or 111).
Pulse Code Modulation (PCM)
Analog transmission is not particularly efficient. When the signal-to-noise ratio of an analog signal
deteriorates due to attenuation, amplifying the signal also amplifies noise. Digital signals are more
easily separated from noise and can be regenerated in their original state. The conversion of
analogue signals to digital signals therefore eliminates the problems caused by attenuation. Pulse
Code Modulation (PCM) is the simplest form of waveform coding. Waveform coding is used to
encode analogue signals (for example speech) into a digital signal. The digital signal is
subsequently used to reconstruct the analogue signal. The accuracy with which the analogue
signal can be reproduced depends in part on the number of bits used to encode the original signal.
Pulse code modulation is an extension of Pulse Amplitude Modulation (PAM), in which a sampled
signal consists of a train of pulses where each pulse corresponds to the amplitude of the signal
at the corresponding sampling time (the signal is modulated in amplitude). Each analogue sample
value is quantised into a discrete value for representation as a digital code word. Pulse code
modulation is the most frequently used analogue-to-digital conversion technique, and is defined
in the ITU-T G.711 specification. The main parts of a conversion system are the encoder (the
analogue-to-digital converter) and the decoder (the digital-to-analogue converter). The combined
encoder/decoder is known as a codec. A PCM encoder performs three functions:
 sampling
 quantising
 encoding
The human voice uses frequencies between 100Hz and 10,000Hz, but it has been found that
most of the energy in speech is between 300 Hertz and 3400 Hertz - a bandwidth of approximately
3100 Hertz. Before converting the signal from analog to digital, the unwanted frequency
components of the signal are filtered out. This makes the task of converting the signal to digital
form much easier, and results in an acceptable quality of signal reproduction for voice
communication. From an equipment point of viev, because the manufacture of very precise filters
would be expensive, a bandwidth of 4000 Hertz is generally used. This bandwidth limitation also
helps to reduce aliasing - aliasing happens when the number of samples is insufficient to
adequately represent the analog waveform (the same effect you can see on a computer screen
when diagonal and curved lines are displayed as a series of zigzag horizontal and vertical lines).
Sampling
Sampling the analogue signal
Sampling is the process of reading the values of the filtered analogue signal at discrete time
intervals (i.e. at a constant sampling frequency, called the sampling frequency). A scientist called
Harry Nyquist discovered that the original analogue signal could be reconstructed if enough
samples were taken. He found that if the sampling frequency is at least twice the highest
frequency of the input analogue signal, the signal could be reconstructed using a low-pass filter
at the destination.
Quantisation
Quantisation is the process of assigning a discrete value from a range of possible values to each
sample obtained. The number of possible values will depend on the number of bits used to
represent each sample. Quantisation can be achieved by either rounding the signal up or down
to the neares available value, or truncating the signal to the nearest value which is lower than the
actual sample. The process results in a stepped waveform resembling the source signal. The
difference between the sample and the value assigned to it is known as the quantisation
noise (or quantisation error).
Quantisation noise can be reduced by increasing the number of quantisation intervals, because
the difference between the input signal amplitude and the quantization interval decreases as the
number of quantization intervals increases. This would, however, increase the PCM bandwidth.
Uniform quantisation uses equal quantisation levels throughout the entire range of an input
analogue signal. The signal-to-noise ratio (SNR), including quantisation noise, is the most
important factor affecting voice quality in uniform quantisation. The signal-to-noise ratio is
measured in decibels (dB). The higher the signal-to-noise ratio, the better the voice quality.
Quantisation noise reduces the signal-to-noise ratio of a signal, so an increase in quantisation
noise degrades the quality of a voice signal. Low signals will have a small signal-to-noise ratio
and high signals will have a large signal-to-noise ratio. Because most voice signals are relatively
low, having better voice quality at higher signal levels is an inefficient way of digitising voice
signals. Uniform quantisation was therefore replaced by a non-uniform quantisation process
called companding (see below).
Narrowband speech is typically sampled 8000 times per second, and each sample must be
quantised. If linear quantisation is used, 12 bits per sample are required, giving a bit rate of 96
kbits per second. This can be reduced using non-linear quantisation, in which 8 bits per sample
is sufficient to provide speech quality almost indistinguishable from the original. This results in a
bit rate of 64 kbits per second. Two non-linear PCM codecs were standardised in the 1960s - µ-
law (mu-law) coding was the standard developed in the United States, while A-law compression
was used in Europe. These codecs are still widely used today.
Encoding
Encoding is the process of representing the sampled values as a binary number in the
range 0 to n. The value of n is chosen as a power of 2, depending on the accuracy required.
Increasing n reduces the step size between adjacent quantisation levels and hence reduces the
quantisation noise. The down side of this is that the amount of digital data required to represent
the analogue signal increases.
Stages in the analogue-to-digital conversion process
Companding
Working with very small signal levels (by comparison with the quantisation interval) can introduce
more errors. Companding can be used to increase the accuracy of such signals. This is the
process of distorting the analogue signal in a controlled way before quantising takes place, by
compressing its larger values at the source and then expanding them at the receiving end. There
are two standards used: A-law in Europe, and µ-law in the USA. The term companding was
created by combining the terms COMpressing and exPANDING. Input analog signal samples are
compressed into logarithmic segments. Each segment is then quantised, and coded using
uniform quantisation. The compression process is logarithmic, where the compression increases
as the sample signals increase (the larger sample signals are compressed more than the smaller
sample signals, causing the quantization noise to increase as the sample signal increases). A
logarithmic increase in quantisation noise throughout the dynamic range of an input sample signal
gives a signal-to-noise ratio which is almost constant over a wide range of input levels. A rate of
eight bits per sample (64 kbits per second) gives a reconstructed signal which is very close the
original. The advantages of this system include low complexity and delay, and high-quality
reproduction of speech. The disadvantages are a relatively high bit rate and a high susceptibility
to channel errors.
Similarities between A-law and µ-law:
 Both are linear approximations of a logrithmic input/output relationship
 Both are implemented using 8-bit code words (256 levels, one for each quantisation
interval). This allows for a bit rate of 64 kbits per second
 Both break the dynamic range into 16 segments (8 positive and 8 negative) - each
segment is twice the length of the preceeding one, and uniform quantisation is used
within each segment
 Both use similar encoding techniques for the 8-bit word - the first (most significant
bit) identifies polarity, bits 2, 3 and 4 identify the segment, and the last four bits
identify the quantisation level within the segment
Differences between A-law and µ-law:
 Different linear approximations lead to different lengths and slopes
 Numerical assignment of the bit positions in the 8-bit code word to segments and
to quantisation levels within segments are different
 A-law provides a greater dynamic range
 µ-law provides better signal/distortion performance for low level signals
 A-law requires 13 bits for a uniform PCM equivalent, whereas m-law requires 14
bits
 International connections should use A-law (µ to A conversion is the responsibility
of the µ-law country)
Differential Pulse Code Modulation (DPCM)
During the PCM process, the differences between successive input sample signals are minimal.
A common technique used in speech coding is to try to predict the value of the next sample from
that of the preceding samples. This is possible because of correlations in speech samples due to
the effects of the vocal tract and the vibrations of the vocal chords. Differential Pulse Code
Modulation (DPCM) schemes quantise the difference between the original and the predicted
signals, i.e. the difference between successive values. This means a reduction in the number of
bits used per sample over that used for PCM. Using DPCM can reduce the bit rate of voice
transmission down to 48 kbps. DPCM can be described as a predictive coding scheme.
The first part of DPCM works like PCM in that the input signal is sampled at a constant sampling
frequency, and the samples are modulated using Pulse Amplitude Modulation. The sampled input
signal is then stored in a predictor. The predictor sends the stored sample signal it through
a differentiator. The differentiator compares the current sample signal with the previous sample
signal and sends the difference to the quantising and coding phase of PCM. After quantising and
coding, the difference signal is transmitted. At the reciever, the difference signal is dequantised,
added to a sample signal stored in a predictor, and sent to a low-pass filter that reconstructs the
original input signal. Although DPCM reduces the bit rate for voice transmission, the uniform
quantisation used means that large sample signals have a higher signal-to-noise ratio than small
sample signals, so voice quality is better at higher signals. Because most signals generated by
the human voice are small, voice quality should focus on small signals. Adaptive DPCM was
developed to solve this problem.
Adaptive Differential Pulse Code Modulation
(ADPCM)
In the mid 1980's the CCITT standardised an Adaptive Differential Pulse Code
Modulation (ADPCM) codec operating at 32 kbps known as G721, resulting in reconstructed
speech almost as good as that provided by 64 kbps PCM codecs. This was later followed by
ADPCM codecs operating at 16,24 and 40 kbps (G726 and G727). In ADPCM, the predictor and
quantiser are adaptive - they change to match the characteristics of the speech being coded.
ADPCM adapts the quantisation levels of the difference signal that is generated during the DPCM
process. If the difference signal is low, ADPCM reduces the size of the quantisation levels. If the
difference signal is high, ADPCM increases the size of the quantisation levels. The quantisation
level is thus adapted to the size of the input difference signal, generating a uniform signal-to-noise
ratio throughout the dynamic range of the difference signal.
PCM and Time Division Multiplexing (TDM)
Time division multiplexing is used at local exchanges to combine a number of incoming voice
signals onto an outgoing trunk. Each incoming channel is allocated a specific time slot on the
outgoing trunk, and has full access to the transmission line only during its particular time slot.
Because the incoming signals are analogue, they must first be digitised, because TDM can only
handle digital signals. Because PCM samples the incoming signals 8000 times per second, each
sample occupies 1/8000 seconds (125 µseconds). PCM is at the heart of the modern telephone
system, and consequently, nearly all time intervals used in the telephone system are multiples of
125 µseconds.
Because of a failure to agree on an international standard for digital transmission, the systems
used in Europe and North America are different. The North American standard is based on a 24-
channel PCM system, wheras the European system is based on 30/32 channels. This system
contains 30 speech channels, a synchronisation channel and a signalling channel, and the gross
line bit rate of the system is 2.048 Mbps (32 x 64 Kbps). The system can be adapted for common
channel signalling, providing 31 data channels and employing a single synchronisation channel.
The following details refer to the European system.
The 30/32 channel system uses a frame and multiframe structure, with each frame consisting of
32 pulse channel time slots numbered 0-31. Slot 0 contains the Frame Alignment Word (FAW)
and Frame Service Word (FSW). Slots 1-15 and 17-31 are used for digitised speech (channels
1-15 and 16-30 respectively). In each digitised speech channel, the first bit is used to signify the
polarity of the sample, and the remaining bits represent the amplitude of the sample. The duration
of each bit on a PCM system is 488 nanoseconds (ns). Each time slot is therefore 3.904 µseconds
(8 bits x 488 ns). Each frame therefore occupies 125 milliseconds (32 x 3.904 mseconds).
In order for signalling information (dial pulses) for all 30 channels to be transmitted, the multiframe
consists of 16 frames numbered 0-15. In frame 0, slot 16 contains the Multiframe Alignment Word
(MFAW) and Multiframe Service Word (MFSW). In frames 1-15, slot 16 contains signalling
information for two channels. The frame and multiframe structure are shown below. The duration
of each multiframe is 2 milliseconds(125 µseconds x 16).
The frame and multiframe structures for a 30/32 channel PCM system
Communications Protocols
Communication protocols are at the heart of data communications. Applications running on
networked computers need to exchange data with applications running on other computers, often
on other networks. Other devices must also send and receive information over the network in
order to function, including networked printers and interconnection devices such as switches and
routers. Network devices that wish to communicate with each other must speak the same
language. They must use standard messages and a common set of rules that define how
communication will take place. These messages, together with the conventions that must be
followed in order to ensure successful communication, are collectively called a communications
protocol. Such protocols are often described in an industry or international standard.
Protocols exist at every level of a communications system. There are hardware protocols that
determine how electrical signals are transmitted over a transmission link, and software protocols
that determine how transmission errors are handled and how much information can be sent over
the network at a time. There are a number of different communication protocols that can perform
the same function, but if communication is to be successful, both end points using a
communications channel must be using the same protocol. Communication systems have a
layered architecture that allows the functionality required at each layer to be engineered
independently of the layers above and below them, facilitating a modular approach to the design
of hardware, firmware or software components. The layers of a generic five-layer model are
described below.
 Physical - the physical transmission media, connectors, and basic interconnection
devices. Physical layer protocols are concerned with the design of cables and
connection hardware, the electrical or optical properties of the transmission
medium, and the encoding scheme used to represent data.
 Datalink - firmware that controls the transmission of data across a single network
link. Functions include error handling, flow control and hardware addressing, and
arbitration between network devices competing for a shared transmission medium.
 Network - software that is responsible for addressing and routing data across a
network or internetwork. Network addresses and link status information are used to
determine the best route through the network or internetwork.
 Transport - software that is responsible for providing error-free data transmission
between applications communicating over a network. Functionality includes
establishing and managing connections, error handling, flow control, and the
segmentation, sequencing and re-assembly of data.
 Application - the interface between user applications and the network. Each type
of application will have a specific application layer protocol to provide the required
interface.
Each layer implements some part of the communications process. In some cases the same
functionality (for example, error handling and flow control) is provided at different levels. The
functions typically embodied in a particular set of communications protocols (sometimes called
a protocol suite or protocol stack) are described below.
 Addressing - hardware devices on a local network are uniquely identified using
the hardware address (sometimes called a MAC address) burned into each
network adapter. A device with more than one network adapter will have multiple
hardware addresses. Each network adapter may also be allocated
a logical (or network) address that can be assigned by a network administrator
using appropriate software. The network address is used to uniquely identify
devices on both networks and internetworks, and may be part of a private or global
addressing scheme used. In TCP/IP networks, the network address takes the form
of an IP address.
 Process identification - although hardware and network addresses can be used
to get data from one computer to another, it will also be necessary to identify the
application (or process) sending the data, and the application on the destination
computer for which the data is intended. A port number is therefore used together
with the network address to uniquely identify both the source and the destination
process.
 Encapsulation - each protocol accepts a block of data from the layer above it and
adds some control information to it (in the form of a header) to create a protocol
data unit (PDU). The PDU is passed to the active protocol in the next layer down,
which creates its own PDU. The header information added by each protocol is only
of interest to the same protocol on the destination machine. Other protocols see it
simply as data.
 Connection control - connection-oriented protocols must establish a virtual
connection between the two end points of a link or channel before data transfer can
take place. Specific procedures must be followed to set up the connection and to
manage the flow of data between the two end points. On completion of data
transfer, the connection must be closed.
 Segmentation and reassembly - all networks impose a limit on how much data
that can be sent in one go. This is because large amounts of data take a long time
to transmit, as well as taking a long time to process both at the destination and at
intermediate network switching devices. Small blocks of data can be routed quickly,
do not require large storage areas (send and receive buffers), and can be
processed quickly by each device they must pass through. Messages consisting of
large amounts of data are therefore broken down into smaller blocks, usually
called datagrams or packets. In packet-switched networks, datagrams frequently
arrive at their destination out of order, and must be sequentially numbered to enable
the receiving device to reassemble them in the correct order and identify missing
packets.
 Flow control - a process that restricts the flow of data between two points in order
to prevent the destination device receiving more data than it can process in a given
time frame. The receiver may ask the sender to stop transmitting for a while or slow
down the rate of data transfer. Some protocols negotiate a mutually acceptable
data rate when a connection is established. The data rate may be re-negotiated if
circumstances change.
 Error detection and correction - error correction requires the inclusion of
sufficient redundant data to allow the receiver to reconstruct the original data if an
error in transmission occurs. Error detection only requires enough redundant data
to allow the receiver to detect whether or not an error has occurred, in which case
it can take appropriate action (such as requesting retransmission).
The OSI Reference Model
The Open Systems Interconnection (OSI) reference model was developed by the International
Standards Organisation (ISO) as a model for computer communications architectures, and as a
framework for developing protocol standards. It was intended as a first step towards international
standardisation of communications protocols. The model divides the communication process into
seven layers, as shown below. The diagram shows how communication takes
place indirectly between peer layers at each end of a communications channel (denoted by the
bi-directional horizontal arrows), and clearly identifies the concept of an interface between
adjacent layers (denoted by the bi-directional vertical arrows).
The OSI Reference Model
The OSI Reference Model layers
The model starts at the bottom with the physical layer (layer 1), and ends at the top with the
application layer (layer 7). The most important concept behind the model is that each layer
performs a specific function, provides services to the layer above it, and uses the services of the
layer below it. There is a well-defined interface between each layer, across which the flow of
information is kept deliberately minimal. It should be remembered that the OSI model itself is not
a communications architecture. It simply specifies what each layer should do, not how this is to
be achieved. As shown above, protocols in the same layer at each end of the communications
link can communicate with each other only indirectly, by using the services of the layers below
them. The individual layers of the OSI Reference Model are summarised below:
 Physical layer - concerned with the physical transmission of a bit stream. Issues
include the physical and electrical characteristics of the cables and connections,
the encoding and signalling schemes used, and the mechanical, electrical and
procedural interfaces. Network devices that operate at this layer
include hubs and repeaters.
 Data link layer - the point at which the bit stream enters or leaves the physical
layer, and which provides reliable transmission of data across any single network
link, including sequencing, flow control and error detection, using hardware
addresses. It often defines how devices are connected in terms of the network
topology, and how they may access the physical medium. The data link Layer is
divided into the logical link control (LLC) sub-layer, which manages the
communications link between two devices, and the medium access control (MAC)
sub-layer, which manages protocol access to the transmission medium. Network
devices that operate at this layer include bridges and switches. Ethernet is an
example of a data link layer protocol.
 Network layer - controls the operation of the subnet, and is responsible for the
routing and addressing of datagrams (packets) from one network to another using
logical addresses (e.g. IP addresses). The most important network devices that
operate at this layer are routers. Network layer protocols include the Internet
Protocol (IP).
 Transport layer - establishes and terminates connections across the network, and
provides a reliable end-to-end transport mechanism for the exchange of data
between processes in different end systems. It undertakes flow control, and
ensures that data is delivered error-free and in sequence, with no loss or
duplication. Typical protocols used at this layer include Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP).
 Session layer - enables applications on end systems to establish a connection,
and provides the mechanism for controlling the dialogue between them.
 Presentation layer - resolves differences in data representation between end
systems and encodes data in a standard format for transmission across the
network. May also be responsible for providing services such as encryption and
data compression.
 Application layer - contains management functions and mechanisms to support
distributed applications. Typical protocols used at this layer are File Transfer
Protocol (FTP) and the various e-mail protocols.
Data transmission in the OSI model
A process wishing to send data to a process on a remote host passes the data to the application
layer protocol, which attaches the appropriate control information (in the form of a header) to the
data, creating an application layer protocol data unit (PDU) which is then passed down to the
presentation layer. The presentation layer sees the PDU simply as a block of data to be
processed. It may transform the PDU in some way, adds its own header, and passes the resulting
PDU to the session layer. This process is repeated until the data reaches the physical layer and
is transmitted on the physical transmission medium. At the destination host, the protocol operating
at each layer reads the control information for that layer, strips off the header, and passes the
resulting block of data up to the next layer. Finally, the original data, stripped of all control
information, is passed to the target process. This sequence of events is illustrated below.
Data transmission in the OSI Reference Model
Advantages and disadvantages of the OSI model
A major advantage of the OSI model is that it clearly distinguishes between the concepts
of services, interfaces and protocols. A strictly modular approach to the design of system
architecture is encouraged, allowing the protocols operating within each layer to be replaced
relatively easily. The purely theoretical basis for the model means that it is not biased towards a
particular technological approach, and makes it very useful as a reference model, although it also
means that the model does not benefit from practical experience, as a result of which some fairly
arbitrary decisions have been made about what functionality should go into each layer. The
session and presentation layers, for example, do not actually do a great deal, whereas the data-
link layer has had to be divided into two distinct sub-layers (LLC and MAC). The shortcomings of
the OSI model, together with the success of the TCP/IP protocol stack, contributed to the lack of
success of subsequent attempts to implement a protocol stack based on the OSI model. That
said, the OSI model has proved an extremely useful tool for facilitating the discussion of network
architectures.
Circuit Switching
Circuit switching is a technique traditionally used in telephone networks to set up a connection
between two subscribers. When two end-systems in a telecommunications network wish to
communicate in this way, a dedicated circuit must be established between the two end points by
allocating the required network resources prior to data transfer. The circuit remains in place until
all the data has been transferred, and provides a fixed connection bandwidth.
A generic switching network
Using the diagram above as an example, if station A has some data to send to station E, it sends
a request to switching node 4 to establish a connection with station E. Node 4 must identify the
optimum route based on currently available routing information. Assuming that node 5 is chosen
as the next hop, node 4 will secure the first available channel link to node 5 for the connection.
Node 5 will similarly reserve a channel link to node 6, which will then communicate with station E
to establish whether station E want to accept the connection. If so, station A will receive a signal
confirming that the connection has been established.
Once data transfer is complete, the connection is terminated by station A. Signals are sent to
each of the nodes involved instructing them to de-allocate the network resources, which are then
available for use in other connections. Circuit switching may be considered to be inefficient,
because the entire capacity of the channels allocated to the circuit are unavailable for use in other
circuits for the duration of the connection, even if no data is actually being sent, and the circuit
may be idle for much of the time. There is also a delay involved in setting up the connection in
the first place.
The diagram below shows the flow of information involved in setting up, using, and terminating
the typical circuit-switched connection described above. Information orginating from station A is
shown in pink, and information from station E is shown in blue.
A circuit-switched connection
Multiplexing
A multiplexer (sometimes called a mux) is a communications device that multiplexes (combines)
several signals for transmission over a single physical transmission channel.
A demultiplexer completes the process by separating multiplexed signals from a channel line at
the receiver. A multiplexer and demultiplexer are frequently combined into a single device that is
capable of processing both outgoing and incoming signals. The communications channel may be
shared between the multiplexed signals in a variety of ways, including Time Division
Multiplexing (TDM) and Frequency Division Multiplexing (FDM).
Time division multiplexing is a scheme in which multiple incoming digital signals are combined for
transmission onto a single transmission line using interleaved time slots. Each incoming channel
is allocated a specific time slot, and has full access to the transmission line during its allocated
time slot. Some TDM systems allow for a variation in the number of signals being sent along the
line, and will adjust the time interval of each slot to optimise the use of the available bandwidth.
Time division multiplexing
Analogue signals are often multiplexed using frequency-division multiplexing, in which the
bandwidth of the carrier is divided into sub-channels, each having its own range of frequencies,
enabling each sub-channel to carry a separate signal. Each incoming low-bandwidth signal is
assigned a different sub-channel on the main channel. In order to prevent interference between
adjacent sub-channels, small-bandwidth gaps, known as guard bands, are left between each sub-
channel. If a large number of signals are required to be sent along a single long-distance
communication link, a high-bandwidth carrier is required. The transmission system must be
carefully designed to ensure that it can provide the necessary transmission characteristics.
Frequency division multiplexing
For fibre-optic channels, a variation of frequency division multiplexing, called wavelength division
multiplexing (WDM), is used. As long as each incoming channel has a different frequency range,
and none of the frequency ranges overlap, they can be multiplexed onto a long-haul fibre-optic
transmission link. At the transmitting end, incoming optical signals are passed through
a diffraction grating and combined for transmission over a high-capacity fibre-optic link. At the
other end of the link, this combined signal is split into its constituent channels using another
diffraction grating. An optical system of this type is completely passive, and therefore highly
reliable. In WDM transmission systems, each channel will typically carry a number of time division
multiplexed (TDM) signals.
Wavelength division multiplexing
Error Correction and Detection
In telecommunications, the detection and correction of errors is important for maintaining data
integrity on "noisy" communication channels. Error detection is the ability to detect the presence
of errors introduced to a stream of data by interference or faults in the transmission system
between a transmitter and a receiver. Error correction is the ability to restore data in which errors
have been found to its original state. If an error is found using an error detection code, the receiver
can respond either by explicitly requesting retransmission of the data from the transmitter, or by
not sending an acknowledgement for the corrupted data, in which case the transmitter will assume
that the data has either not been received or has been rejected by the receiver, and will re-transmit
the data. Error correction codes are used by the receiver, acting alone, both to detect the presence
of an error in the received data and to re-construct the data in its original form using the error
correction encoding. Error correction necessarily involves the transmission of a significant amount
of additional (redundant) data. The overhead involved is usually far greater than that required for
error-detection schemes, and for this reason error correction is generally only used for
applications where re-transmission of the data is not practical. In some schemes, a compromise
(hybrid) solution is used in which minor errors are corrected using error correction codes, while
major errors result in a request for retransmission.
Signals from Voyager 1 now take more than fourteen hours to reach Earth
Error detection schemes
An error-detecting code (or backward error correction) involves the addition of sufficient
redundant data to the information being sent to enable the receiver to detect errors and request
the receiver to retransmit the data. This approach is known as an automatic repeat request (ARQ)
strategy. A number of commonly used error detection schemes exist, which vary considerably in
their complexity. The amount of additional information sent is usually the same for a given amount
of data, and the error detection information will have a relationship to the data that is determined
by the application of an algorithm of some kind to the data itself. The receiver applies the same
algorithm to the data it receives to obtain its own version of the error correction code, and then
compares that version with the error correction code it has received. If the two codes match, the
receiver can be reasonably sure that the data is correct. If not, it will assume that an error has
occurred and respond in the appropriate manner (i.e. request retransmission, either explicitly or
by not sending an acknowledgement). The common types of error detection scheme are listed
below, together with a brief description.
 Repetition schemes - the data to be sent is broken down into blocks of bits of a
fixed length, and each block is sent a predetermined number of times. If one or
more block differs from another block, it is concluded that an error has occurred.
This type of scheme is simple, but inefficient in that the amount of overhead (in the
form of redundant data) is very high. Also, if the same error affects each block in
the same way, an error may go undetected.
 Parity schemes - the data is again broken up into blocks of bits of a fixed length,
and one additional bit is added (the parity bit). The number of bits in each block that
are set to one (i.e. as opposed to zero) are counted. If an even parity scheme is
being used, and if the number of ones counted is even, then the parity bit is set to
zero. If the number of ones counted is odd, the parity bit is set to one (to make the
number of ones even once more). The receiver simply counts the number of bits
that are set to one, and if an odd number results, an error has occurred. An odd
parity scheme works in exactly the same way, except that the number of bits set to
one must always be odd. The weakness of parity schemes is that they can only
detect errors in which an odd number of bits have been changed.
 Checksum - an arithmetic calculation of some kind is performed on the bytes or
words making up the data, and the result is appended to the data as a checksum.
The receiver performs the same calculation on the received data, and compares
the result to the received checksum. If the results match, the data is correct. If not,
an error has occurred (parity schemes can, in a sense, be considered to be very
simple checksum schemes).
 Cyclic redundancy check (CRC) - this is a somewhat more complex error
detection scheme. To generate a n-bit checksum, or frame check sequence (FCS),
a generator polynomial is used that must be of the order n. For a 16-bit checksum,
for example, the generator polynomial x16
+x12
+x5
+1 is commonly used. The
transmitting device adds n 0-bits to the data to be transmitted and divides the
resulting code polynomial by the generating polynomial, which produces a
remainder polynomial of n degrees. This remainder polynomial becomes the
checksum. The data transmitted (the code vector) is the original data followed by
the n-bit checksum. The receiver can either compute the checksum again from the
data and verify that it agrees with the received checksum, or it can divide the data
together with the checksum by the generator polynomial. If the remainder is found
to be 0, the data is correct.
Error correction schemes
An error-correcting code (ECC) or forward error correction (FEC) code involves the addition of
sufficient redundant data to the information being sent to enable the receiver to both detect and
correct errors, without needing to request the receiver to retransmit the data. The advantage of
this approach is that a return path is not required. This would be a critical requirement for
applications such as communication with deep space probes, for example, where the delay
between sending a message and receiving a reply could be considerable (Voyager 1, launched
in 1977, is now more than ten billion miles from earth, and signals received by NASA arrive
over fourteen hours affter they have been transmitted). The disadvantage is that, in order to
ensure the required degree of data integrity, a large amount of redundant data will have to be
transmitted with each message, significantly increasing the bandwidth required. Shannon's
theorem defines the code rate as the total number of bits divided by the number of bits of actual
data, and the coding gain as the difference in signal-to-noise ratio (SNR) between encoded and
un-encoded data that would be necessary for them both to exhibit the same bit error rate (BER).
The effectiveness of the encoding scheme is measured in terms of both its code rate and its
coding gain. The theorem essentially sets an upper limit on the error correction rate that can be
achieved with a given level of data redundancy, and for a given minimum signal-to-noise ratio.
Error-correction codes can be divided into block codes and convolutional codes. Block codes
work on blocks of data of a fixed-size (e.g. packets). Convolutional codes work for bit streams of
arbitrary length. They tend to be more complex and more difficult to implement than block codes,
and involve considerably more overhead per unit data. Block codes are calculated for each
individual frame or packet independently of one-another, whereas convolutional codes encode
the entire data stream for a message as one long code word, and then transmit the message in
segments. Convolutional codes have very powerful error correction capabilities, and are widely
used in satellite communications and for communicating with deep space exploration vehicles.
Some error-correction schemes work very well above a certain signal-to-noise ratio, but not at all
below it (depending on how closely the scheme approaches Shannon's theoretical limit). Because
most errors occur in random bursts rather than evenly distributed throughout the data stream, the
message data bits are often shuffled (a process known as interleaving) after they have been
encoded. When the message is un-shuffled (de-interleaved) at the receiver, bursts of errors are
dispersed throughout the data stream as individual bit errors, which can be easily corrected using
the error correction encoding.

Más contenido relacionado

La actualidad más candente

Radio communication presentation
Radio communication presentationRadio communication presentation
Radio communication presentationrandan88
 
Communication Engineering
Communication EngineeringCommunication Engineering
Communication Engineeringadnanqayum
 
Introduction wireless communication network
Introduction wireless communication networkIntroduction wireless communication network
Introduction wireless communication networkRiazul Islam
 
Communication systems
Communication systemsCommunication systems
Communication systemsUmang Gupta
 
Communications
CommunicationsCommunications
CommunicationsKANNAN
 
Wireless Communication and Networking by WilliamStallings Chap2
Wireless Communication and Networking  by WilliamStallings Chap2Wireless Communication and Networking  by WilliamStallings Chap2
Wireless Communication and Networking by WilliamStallings Chap2Senthil Kanth
 
Presentation on ALL INDIA RADIO
Presentation on ALL INDIA RADIOPresentation on ALL INDIA RADIO
Presentation on ALL INDIA RADIOPriyanka Shori
 
Radio communication and the mobile phone
Radio communication and the mobile phoneRadio communication and the mobile phone
Radio communication and the mobile phonep10540735
 
Introduction to Communication Systems 1
Introduction to Communication Systems 1Introduction to Communication Systems 1
Introduction to Communication Systems 1slmnsvn
 
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Band
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur BandDesign and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Band
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Bandijtsrd
 
EE 490 FM Transmitter & Reciever
EE 490 FM Transmitter & RecieverEE 490 FM Transmitter & Reciever
EE 490 FM Transmitter & RecieverTodd Cook
 
A Level Physics - Telecommunications
A Level Physics - TelecommunicationsA Level Physics - Telecommunications
A Level Physics - TelecommunicationsJonathan D'Cruz
 
Communications
CommunicationsCommunications
CommunicationsWaqas !!!!
 
Radio communication
Radio communicationRadio communication
Radio communicationJubaer Ahmed
 
Electronic & communication
Electronic & communication Electronic & communication
Electronic & communication Abhishekbauriya
 
Radio Communication and the Mobile Phone
Radio Communication and the Mobile PhoneRadio Communication and the Mobile Phone
Radio Communication and the Mobile PhoneTKellett
 

La actualidad más candente (20)

Radio communication
Radio communicationRadio communication
Radio communication
 
Radio communication presentation
Radio communication presentationRadio communication presentation
Radio communication presentation
 
Overview of Radio Communication
Overview of Radio CommunicationOverview of Radio Communication
Overview of Radio Communication
 
Communication Engineering
Communication EngineeringCommunication Engineering
Communication Engineering
 
Introduction wireless communication network
Introduction wireless communication networkIntroduction wireless communication network
Introduction wireless communication network
 
Communication systems
Communication systemsCommunication systems
Communication systems
 
Communications
CommunicationsCommunications
Communications
 
Wireless Communication and Networking by WilliamStallings Chap2
Wireless Communication and Networking  by WilliamStallings Chap2Wireless Communication and Networking  by WilliamStallings Chap2
Wireless Communication and Networking by WilliamStallings Chap2
 
Presentation on ALL INDIA RADIO
Presentation on ALL INDIA RADIOPresentation on ALL INDIA RADIO
Presentation on ALL INDIA RADIO
 
Radio communication and the mobile phone
Radio communication and the mobile phoneRadio communication and the mobile phone
Radio communication and the mobile phone
 
Introduction to Communication Systems 1
Introduction to Communication Systems 1Introduction to Communication Systems 1
Introduction to Communication Systems 1
 
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Band
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur BandDesign and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Band
Design and Implementation of A VHF Tri Loop Antenna for 2 Meter Amateur Band
 
EE 490 FM Transmitter & Reciever
EE 490 FM Transmitter & RecieverEE 490 FM Transmitter & Reciever
EE 490 FM Transmitter & Reciever
 
A Level Physics - Telecommunications
A Level Physics - TelecommunicationsA Level Physics - Telecommunications
A Level Physics - Telecommunications
 
Communications
CommunicationsCommunications
Communications
 
Radio communication
Radio communicationRadio communication
Radio communication
 
Ep301
Ep301Ep301
Ep301
 
Radio waves
Radio wavesRadio waves
Radio waves
 
Electronic & communication
Electronic & communication Electronic & communication
Electronic & communication
 
Radio Communication and the Mobile Phone
Radio Communication and the Mobile PhoneRadio Communication and the Mobile Phone
Radio Communication and the Mobile Phone
 

Similar a Telecommunications

Telecommunications.pptx
Telecommunications.pptxTelecommunications.pptx
Telecommunications.pptxssuser42c747
 
Comm introduction
Comm introductionComm introduction
Comm introductionmkazree
 
Ch 1 optical fiber introduction
Ch 1 optical fiber introductionCh 1 optical fiber introduction
Ch 1 optical fiber introductionkpphelu
 
Wireless communication
Wireless communicationWireless communication
Wireless communicationMukesh Chinta
 
Communication channel and networktechnologies.pdf
Communication channel and networktechnologies.pdfCommunication channel and networktechnologies.pdf
Communication channel and networktechnologies.pdfmouizakhan4
 
Report underwater-wireless
Report underwater-wirelessReport underwater-wireless
Report underwater-wirelesspatna
 
Wireless communication
Wireless communicationWireless communication
Wireless communicationMukesh Chinta
 
Introduction to communication system lecture1
Introduction to communication system lecture1Introduction to communication system lecture1
Introduction to communication system lecture1Jumaan Ally Mohamed
 
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docx
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docxINTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docx
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docxCyprianObota
 
Survey of electronics
Survey of electronicsSurvey of electronics
Survey of electronicsmamcorditz
 
Introduction to electronic system
Introduction to electronic systemIntroduction to electronic system
Introduction to electronic systemNamit Sood
 
Introduction to Electronic Communication
Introduction to Electronic Communication Introduction to Electronic Communication
Introduction to Electronic Communication Shital Kanaskar
 
Basic+electronic+interview+questions+and+answers
Basic+electronic+interview+questions+and+answersBasic+electronic+interview+questions+and+answers
Basic+electronic+interview+questions+and+answersMohan Raj
 

Similar a Telecommunications (20)

wirelesstech1.ppt
wirelesstech1.pptwirelesstech1.ppt
wirelesstech1.ppt
 
Telecommunications.pptx
Telecommunications.pptxTelecommunications.pptx
Telecommunications.pptx
 
Adhoc module 1 Introduction
Adhoc module 1  IntroductionAdhoc module 1  Introduction
Adhoc module 1 Introduction
 
Telecommunication and its history
Telecommunication and its historyTelecommunication and its history
Telecommunication and its history
 
Comm introduction
Comm introductionComm introduction
Comm introduction
 
Comm introduction
Comm introductionComm introduction
Comm introduction
 
Ref ch01 louis-frenzel
Ref ch01 louis-frenzelRef ch01 louis-frenzel
Ref ch01 louis-frenzel
 
Ch 1 optical fiber introduction
Ch 1 optical fiber introductionCh 1 optical fiber introduction
Ch 1 optical fiber introduction
 
Wireless communication
Wireless communicationWireless communication
Wireless communication
 
Communication channel and networktechnologies.pdf
Communication channel and networktechnologies.pdfCommunication channel and networktechnologies.pdf
Communication channel and networktechnologies.pdf
 
Report underwater-wireless
Report underwater-wirelessReport underwater-wireless
Report underwater-wireless
 
Wireless communication
Wireless communicationWireless communication
Wireless communication
 
Introduction to communication system lecture1
Introduction to communication system lecture1Introduction to communication system lecture1
Introduction to communication system lecture1
 
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docx
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docxINTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docx
INTRODUCTION TO RADIO COMMUNICATION SYSTEMS 1.docx
 
Survey of electronics
Survey of electronicsSurvey of electronics
Survey of electronics
 
PhysicalLayerFundamentals
PhysicalLayerFundamentalsPhysicalLayerFundamentals
PhysicalLayerFundamentals
 
Communication basics part 1
Communication basics part 1Communication basics part 1
Communication basics part 1
 
Introduction to electronic system
Introduction to electronic systemIntroduction to electronic system
Introduction to electronic system
 
Introduction to Electronic Communication
Introduction to Electronic Communication Introduction to Electronic Communication
Introduction to Electronic Communication
 
Basic+electronic+interview+questions+and+answers
Basic+electronic+interview+questions+and+answersBasic+electronic+interview+questions+and+answers
Basic+electronic+interview+questions+and+answers
 

Último

Wadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptxWadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptxNadaHaitham1
 
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEGEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEselvakumar948
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Servicemeghakumariji156
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsvanyagupta248
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdfKamal Acharya
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiessarkmank1
 
Hostel management system project report..pdf
Hostel management system project report..pdfHostel management system project report..pdf
Hostel management system project report..pdfKamal Acharya
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaOmar Fathy
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptNANDHAKUMARA10
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityMorshed Ahmed Rahath
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...drmkjayanthikannan
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startQuintin Balsdon
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdfKamal Acharya
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdfKamal Acharya
 
Verification of thevenin's theorem for BEEE Lab (1).pptx
Verification of thevenin's theorem for BEEE Lab (1).pptxVerification of thevenin's theorem for BEEE Lab (1).pptx
Verification of thevenin's theorem for BEEE Lab (1).pptxchumtiyababu
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapRishantSharmaFr
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 

Último (20)

Wadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptxWadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptx
 
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEGEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech students
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and properties
 
Hostel management system project report..pdf
Hostel management system project report..pdfHostel management system project report..pdf
Hostel management system project report..pdf
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.ppt
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the start
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdf
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdf
 
Verification of thevenin's theorem for BEEE Lab (1).pptx
Verification of thevenin's theorem for BEEE Lab (1).pptxVerification of thevenin's theorem for BEEE Lab (1).pptx
Verification of thevenin's theorem for BEEE Lab (1).pptx
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 

Telecommunications

  • 1. Telecommunications Telecommunications is a general term for a broad range of technologies used to convey information over distances both great and small. The public switched telephone system, mobile telephony, satellite communications, computer networks, the Internet, and radio and television brodcasting services all fall under the general heading of telecommunications. Although most of us tend to associate the term with modern technologies, telecommunication has been around in some form or another since ancient times. The discovery of electricity in the nineteenth century led to the invention of the telegraph, and later the telephone, enabling communications to occur in real time over great distances. Immense strides in the development of communications technology in just the last few decades have changed our leisure activities, the way we work, and the way we perceive the world in which we live. An understanding of these technologies, how they work, their impact on society, the economic implications they engender, and where they will lead us in the future are therefore of considerable importance to us. Telecommunications Principles Telecommunications means communication that takes place over some distance (from the Greek word Tele, which means far away). The distances involved may be small, as is the case with communications that take place between people working in the same office building, or they may be vast, as is the case with the communications that occur between a deep space probe and its mission controllers on Earth. Communicating over long distances has been a challenge throughout history. In ancient times, runners were used to carry messages between distant locations. Other methods used have included drums (used for thousands of years to send messages, and for ceremonial and religious purposes), smoke signals and signal beacons (visible for many miles if visibility is good), the heliograph (used to send signals by reflecting the light of the sun), and semaphore (a method of signaling using two flags held in various positions by the signaler). Modern telecommunications can probably be considered to have started with the invention of the telegraph in 1832, which exploited the properties of electricity and electromagnetism discovered in the 19th century. The telegraph operated over long distances using a simple electrical circuit. An operator at one end of the connection repeatedly makes and breaks an electrical contact using a telegraph key, and the resulting intermittent bursts of current are used to produce a series of audible signals at the other end which are interpeted and transcibed by a second operator.
  • 2. A telegraph key and sounder In the 1870s, Alexander Graham Bell was credited with the invention of the telephone, a device that could transmit speech along a wire by varying the voltage in an electrical circuit using sound. The invention was a result of Bell's attempts to improve the performance of the telegraph. Sound is the result of differences in pressure in the air around us caused by vibrations. A microphone uses these small differences in pressure to vary the resistance of an electrical circuit, constantly changing the amount of current flowing through it. The current flowing through the circuit thus becomes an analogue of the sound waves picked up by the microphone. The public switched telephone system (PSTN) that subsequently evolved was originally intended only for voice transmission, but as the end of the twentieth century approached, the installation of fibre optic trunk lines and fully automated digital excghanges have enabled the PSTN to carry vast amounts of digital data. In the latter half of the nineteenth century, British physicist James Clerk Maxwell predicted that moving electrons will create electromagnetic waves that can propagate through free space, a theory that was later proved by German physicist Heinrich Hertz. By attaching an antenna to an electrical circuit, electromagnetic waves can be broadcast and received by a receiver some distance away. In 1901, Marconi successfully broadcast a radio message from Cornwall in the UK to Canada, a distance of over three thousand kilometres. The behaviour of electromagnetic
  • 3. waves varies with frequency. Today, much of the electromagnetic spectrum, including radio, microwave, infra-red, and visible light, are used for both short-range and long range wireless communications. The telecommunications industry continues to develop new technologies and to deliver new services, but many of the principles that underpinned the early development of telephony and radio communications are just as relevant today as they have ever been. These pages examine some of the fundamental characteristics of transmission lines, the application of analogue and digital signalling techniques. They will also examine communication system architectures, explain the importance of communication protocols, and provide an in-depth look at concepts such as modulation and multiplexing. Properties of Waves A wave can be defined as the transfer of energy between two points without any physical transfer of matter. Waves on the surface of the sea or on a lake provide an obvious example, because they are highly visible. The fact that they transfer energy can be seen from the effects of coastal erosion over many years, and from the more immediate effects involving the transfer of materials onto the shoreline. Sound is an example of waves that we can hear, and is caused by vibrating air molecules. A basic sine wave is illustrated below.
  • 4. A typical sine wave The properties of waves that can be measured or calculated are:  Amplitude - the height of the wave in meters  Wavelength - the distance between consecutive peaks in meters  Period - the time a wave takes to pass a given point in seconds  Frequency - the number of waves that pass a point in one second  Speed - the speed at which a wave propagates in meters per second The symbol normally used to denote wavelength is the Greek letter λ (lambda). Wavelength is commonly expressed in terms of its frequency (ƒ) and velocity of propagation (v), as follows: Frequency (ƒ) is the term used to describe the number of oscillations (cycles) per second of a wave. The unit of frequency is the Hertz (Hz), and one Hertz is equal to one cycle per second. The term is named after the German physicist Heinrich Rudolph Hertz, who first produced and observed electromagnetic waves in 1887. The term is combined with metric prefixes to denote multiple units such as the kilohertz (103 Hz), megahertz (106 Hz), and gigahertz (109 Hz). Other properties of waves can be calculated:  Period = frequency-1  Speed = wavelength / period (or wavelength x frequency)
  • 5. Baud Rate, Signalling Rate and Data Rate The term signalling rate (or baud rate) is used to describe the number of signalling elements (bauds) that can be transmitted in one second. The baud is named after the inventor of the Baudot telegraph code, J.M.E. Baudot. Signalling elements are generally represented either by a change in voltage on a transmission line (digital signalling) or by changes in the phase, frequency or amplitude of an analogue carrier signal (analogue signalling). The terms baud rate and data-rate (usually expressed as bits per second) do not mean the same thing, and are sometimes confused. If only one bit of information is encoded in each signalling element, then the baud rate and the data rate (or bit-rate) will be the same. If two signalling levels are used, each element will represent either one or zero. If more than two signalling levels are used, however, it becomes possible to encode more than one bit per signal element. If four signalling levels are used, for example, each signalling level can represent two bits, and the bit-rate will be twice the baud rate. Bandwidth A generally accepted definition of the bandwidth of an analogue transmission channel is the difference between the highest and lowest frequencies that it can support. Bandwidth is typically measured in hertz. In the case of a baseband channel, the bandwidth is generally considered to be the highest frequency supported. The bandwidth of a channel that is made up of a number of distinct physical transmission links is limited by the range of frequencies supported by all of the links. In data communication networks, the term bandwidth often refers to the nominal maximum data rate measured in bits per second (bps). The maximum data rate (or channel capacity) of a physical communication link is related to its bandwidth in hertz, sometimes referred to as its analogue bandwidth. An analogue telephone line in Europe or North America typically has a bandwidth of 3 kHz, and can carry frequencies of between 400 Hz and 3.4 Khz. The frequency response of the channel is artificially limited by filters in the telephone transmission system (the type of twisted pair cable employed in the subscriber loop can actually carry a much wider range of frequencies). By comparison analogue TV signals, which comprise both video and audio components, require a
  • 6. 6MHz bandwidth RF channel. The graphic below provides a comparison of the typical bandwidths achievable using current or proposed Internet access technologies. Comparative bandwidth of current and proposed Internet access technologies Since digital signals are often represented by discrete voltage levels, the signal elements that make up a digital transmission can essentially be considered to be square wave pulses. Such waveforms do not occur naturally, and the French scientist Jean Baptiste Joseph Fourier (1768 - 1830) was able to demonstrate that such a signal can only be generated by combining a number of sine waves, each having a different frequency and amplitude, to create a more complex waveform. The frequency of the square wave itself is said to be the fundamental frequency. It can be shown that by taking a sine wave with same frequency as the required square wave, and adding successive odd-numbered harmonics to it, a square wave can be approximated. A harmonic is a sine wave with a frequency that is an integer multiple of the fundamental frequency. By adding together the fundamental, third harmonic and fifth harmonic, we can achieve a waveform that is an approximation of a square wave. The fundamental, 3rd and 5th harmonics are shown below,
  • 7. and are labelled A, B and C respectively. Notice that the amplitude of each harmonic relative to that of the fundamental is approximately the inverse of its harmonic number. Fundamental sine wave with third and fifth harmonics The image below illustrates the effect of adding these sine waves together. The resulting waveform is starting to resemble our ideal square wave, although in practice it would require an infinite number of harmonics to produce a "perfect" square wave. Since no transmission medium is capable of supporting an infinite range of frequencies, the best that can ever be achieved will be an approximation of a square wave. It is the properties of the receiver in a commununications channel that will determine how good an approximation is required, and therefore the bandwidth that must be supported by the channel.
  • 8. Adding the fundamental, third and fifth harmonics produces an approximation of a square wave We so far have looked at the waveform of a complex wave (in this case a square wave) as it might appear on an oscilloscope, which displays the amplitude of a waveform as a function of time. In otherwords, we have looked at these waveforms in the time domain. We could also look at the waveform using a spectrum analyser, which displays the amplitude and frequency of each sine wave used to generate the complex waveform. Looking at the same square wave illustrated above in the frequency domain, therefore, we would see something like the image below.
  • 9. A time-domain view of a squarewave comprising the fundamental, third and fifth harmonics Velocity of Propagation The Velocity of Propagation (VoP) is a measure of the speed at which a signal travels through a transmission medium, usually expressed as a percentage of the speed of light in a vacuum (approximately 3x108 metres per second). In a conducting material (e.g. copper), the VoP of a high-frequency electrical signal is equal to the reciprocal of the square root of the dielectric constant of the material: Twisted pair copper cables typically have a VoP of between 40% and 75%. A VoP of 66% corresponds to a speed of approximately 2x108 metres per second. Analogue Signals An analogue signal is an electro-magnetic waveform that continuously varies its amplitude over time. It differs from a digital signal in that small fluctuations in the amplitude of the signal may convey information. The word analogue reflects the fact that the signal is often an analogy of some real-world input to the system. For example, there is a direct relationship between the
  • 10. variation in the voltage of an electrical signal on a telephone line and the pattern of sound waves entering the microphone mounted in the telephone's handset. An analogue system uses some physical property of the signal to convey information. In telecommunications systems, the property most commonly used is voltage, which is made to vary in response to some physical input. This is achieved using a transducer. A transducer is a device that converts energy from one form to another (e.g. heat energy to light, sound energy to an electrical signal, etc.). A clock with hands is said to be an analogue device because the time is represented by the constantly changing position of the clock's hands (although for many clocks the movement of the hands around the clock face occurs as a series of small, discrete increments, rather than a smooth and continuous circular motion). In on of the oldest types of microphone, sound waves striking a thin diaphragm cause it to vibrate. Carbon dust inside the microphone, used to conduct an electrical current through the device, rapidly changes in density as the vibrating diaphragm compresses, and then releases it. The small changes in the density of the carbon dust alter its electrical resistance, varying the amount of current that can flow through it. Since the resistance of the telephone wire itself does not change, and since, for a given value of resistance, voltage varies in direct proportion to current, these small changes in current can be seen as changes in voltage across the telephone line. A typical analogue signal
  • 11. The main disadvantage of an analogue signalling systems is that, because the signal is continuously varying (as opposed to the two or three discrete levels used in digital systems) any unwanted signals (noise) introduced into the system are often difficult to detect and to filter out of the signal. Furthermore, the effects of noise get worse the further the signal has to travel, because the signal is attenuated. Essentially, this means that the signal becomes weaker the further it travels from its source, whereas the level of noise, both inherent and external to the system, remains relatively constant. As a result, the signal-to-noise ratio (SNR) decreases steadily, and at some point the signal will become indistinguishable from the noise. A signal may, of course, be amplified at one or more points along the transmission path in order to compensate for attenuation, but the noise in the signal will inevitably be amplified as well. The effects of noise can be mitigated by using suitable cable and connector types to screen out external interference, but there is no way of eliminating the so-called Gaussian noise (or thermal or white noise) which is due to the random movement of electrons in a conducting material. The range of levels in an analogue signal can be said to be infinite, because any two points on the waveform, however adjacent, will have different values. The relative distance between the two points can theoretically be halved, and halved again an infinite number of times, without producing two identical values, since an analogue signal has no discontinuous points and follows an unbroken curve for its full duration. In principle, therefore, it would seem that an analogue signal should be able to represent some real-world dynamic entity, such as the sound of the human voice or a symphony orchestra, far better that a digital signal that essentially consists of only two or three discrete voltage levels. Indeed, when it comes to the subject of the reproduction of music, there is much debate over the relative merits of analogue and digital recording techniques. When it comes to telecommunications, however, the problem becomes one of maintaining signal integrity over long distances. The signal can, of course, undergo amplification at various points along the transmission path to ensure that the signal-to-noise ratio is maintained above some predefined threshold. Some of the inherent or injected noise can probably be filtered out of the signal. Unfortunately, the very nature of an analogue signal (i.e. constantly varying) means that it is usually not possible to completely separate the original signal from the noise, particularly in view of the fact that the inherent Gaussian noise is present across the entire frequency spectrum supported by the physical medium. Hence, when an analogue signal undergoes amplification, any noise that cannot be removed from the signal is amplified along with it, in equal proportion.
  • 12. The effects of noise can be reduced in analogue telecommunications systems using appropriate design, engineering and installation techniques. Such techniques would include the use of suitable transmission media, which could dictate the use of shielded cabling, and careful selection of cabling routes to avoid potential sources of electromagnetic interference. Analogue signals have been used successfully for decades to carry relatively low-frequency voice signals through the public switched telephone network, and are still widely used in the local loop of the telephone network (the connections between telephone company subscribers and their local exchange). Until relatively recently, analogue systems were also used for radio and television broadcasting. The advent of the Internet and the proliferation of computers in commerce, industry and the home have fuelled the development of digital communications systems capable of carrying virtually any and all kinds of digital data. Despite the digital revolution, however, an understanding of analogue signalling techniques is still crucial to a study of telecommunications systems. Digital Signals A digital signal represents information as a series of binary digits. A binary digit (or bit) can only take one of two values - one or zero. For that reason, the signals used to represent digital information are often waveforms that have only two (or sometimes three) discrete states. In the signal waveform shown below, the signal alternates between two discrete states (0 volts and 5 volts) which could be used to represent binary zero and binary one respectively. If it were actually possible for the signal voltage to instantly transition from zero to five volts (or vice versa), the signal could be said to be discontinuous. In reality, such an instantaneous transition is not physically possible, and a small amount of time is required for the voltage to increase from zero to five volts, and again for the signal to drop from five to zero volts. These finite time periods are referred to as the rise time and the fall time respectively.
  • 13. A simple digital signal In the simple digital signal represented above, alternating binary ones and zeroes are represented by different voltage levels. A binary one would appear on the transmission line as a short voltage pulse, while a binary zero would be represented as an absence of voltage. This rather simplistic signalling scheme has a number of serious flaws, one of which is that a long series of consecutive ones (or a long series of consecutive zeroes) presents the receiver with the problem of determining exactly how many bits are actually being transmitted. For this to be possible, the duration of each bit-time must be known to both the transmitter and the receiver, and the receiver?s internal clock must be synchronised exactly with that of the transmitter, so that the correct number of consecutive identical bits can be calculated by the receiver. In the example shown below, there are no more than two consecutive bits with the same value, which would not normally present the receiver with too much of a problem. Extended runs of binary numbers having the same value, however, would prove far more of a challenge.
  • 14. Data representation in a digital signal Our simple example in the first diagram uses a positive voltage to represent a one, and the absence of a voltage to represent a zero (for historical reasons, the terms mark and space are often used to refer to the binary digits one and zero respectively). This prompts the question of how the receiver knows whether the transmitter is transmitting a long stream of zeroes, or has simply ceased to transmit. There are, in fact, many different digital encoding schemes that overcome this problem, together with that of long streams of bits having the same value, which we will look at in more detail elsewhere. For now, it is enough to understand that digital signals convey binary data in the form of ones and zeros, using different, discrete signal levels to represent the different logical values. If the signalling scheme used employs a positive voltage to represent one logic state, and a negative voltage to represent the other, the signal is said to be bipolar. The number of bits that can be transmitted by the signalling scheme in one second is known as its data rate, and is expressed as bits per second (bps), kilobits per second (kbps) or megabits per second (Mbps). The duration of a bit is the time the transmitter takes to output the bit (and as such is obviously related to the data rate). The modulation or signalling rate is the rate at which the signal level is changed, and depends on the digital encoding scheme used (and is also directly related to the data rate). A special case of digital signalling involves the generation of clock signals used to provide synchronisation and timing information for various signal-processing and computing devices. Clock ticks are triggered by either the rising or falling edge (or in some cases both the rising and falling edges) of an alternating digital signal.
  • 15. The physical communications channel between two communicating end points will inevitably be subject to external noise (electromagnetic interference), so errors will occasionally occur. The degree to which the receiver will be able to correctly interpret incoming signals will depend upon several factors, including its ability to synchronise with the transmitter, the signal-to-noise ratio (SNR), which is a measure of the difference between the transmitted signal strength and the level of background noise, and the data rate. The data rate is significant in this respect because it is directly related to the baseband frequency used. Signals at higher frequencies tend to be more susceptible to very short but high-intensity bursts of external noise (impulse noise), because as frequency increases, there is a greater likelihood that one or more bits in the data stream will become corrupted by a so-called "spike". In order for the receiver to correctly interpret an incoming stream of bits, it must be able to determine where each bit starts and ends. In order to do this, it needs to somehow be synchronised with the transmitter. It will need to sample each bit as it arrives to determine whether the signal level is high (denoting a binary one) or low (denoting a binary zero). In the simple digital encoding schemes considered so far, each bit will be sampled in the middle of the bit-time, and the measured value compared to pre-determined threshold values to determine whether it is a logic high or a logic low (or neither). Timing information becomes more critical as data rates increase and the bit duration becomes shorter, especially for data transfers involving large blocks of data consisting of thousands of bits of information. At relatively low data rates, and for asynchronous data transmission involving only a few bits or bytes of data at any one time, the receiver?s internal clock signal will normally suffice to maintain synchronisation with the transmitter long enough to sample the incoming bits in each block of data received at (or close to) the centre of each bit-time (synchronous and asynchronous transmission are dealt with in more detail elsewhere). For larger blocks of data, however, the receiver?s internal clock cannot be relied upon to remain synchronised with the transmitter. A more reliable timing mechanism is required to maintain synchronisation between receiver and transmitter. One option would be for the transmitter to transmit a separate timing signal which the receiver could use to synchronise its sampling operations on the incoming data stream. This would significantly increase the overall bandwidth required for data transmission, and make the digital transmission system far more difficult to design and implement. Fortunately this is not necessary, because the required timing signal can be embedded in the data itself. This is achieved by
  • 16. encoding the data in such a way that there is a guaranteed transition in signal level (from high to low or from low to high) at some point during each bit-time. One such encoding scheme, called Manchester encoding, is illustrated below. This scheme guarantees a transition in the middle of each bit-time that serves as both a clocking mechanism and as a method of encoding the data. A low-to-high transition represents a binary one, while a high-to-low transition represents a binary zero. This type of encoding is known as bi-phase digital encoding. Such schemes are said to be self-clocking, and have no net dc component (there are both positive and negative voltage components of equal duration, during each bit-time). Manchester encoding is a bi-phase digital encoding scheme One of the main advantages of digital communications is that virtually any kind of information can be represented digitally, which means that many different kinds of data may be transmitted over the same physical transmission medium. In fact, a number of different digital data streams may share the same physical transmission medium at the same time, thanks to advanced multiplexing techniques (multiplexing will be discussed in detail elsewhere). The number of bits required to represent each item of data transmitted will depend on the type of information being sent. Alpha-numeric characters in the ASCII character set, for example, require eight bits per character. Other character encoding schemes can represent a far greater number of characters, but require more bits to represent each character. Analogue information (for example audio or video data) can be represented digitally by sampling the analogue waveform many hundreds, or even thousands of times per second, and then encoding the sample data
  • 17. using a finite range of discrete values (a process known as quantising). The values derived using the quantisation process are then represented as binary numbers, and as such can be transmitted over a digital communications medium as a bit stream. The sampling, quantisation, and conversion to binary format represent an analogue-to-digital conversion (ADC). The sampling process repeatedly measures the instantaneous voltage of the analogue waveform
  • 18. The quantisation process assigns a discrete numeric value to each sample The quantised values are encoded as binary numbers The number of bits used to represent each sample will depend on the total number of discrete values required to represent the original data so that the original analogue waveform can be reproduced at the receiver to an acceptable standard. The more samples taken per unit time, the more closely the reconstructed analogue waveform will reflect the original waveform (or, to put it another way, the higher the resolution will be). The cost of higher resolution is that more bits will be required to digitally encode each sample, increasing the bandwidth required for transmission.
  • 19. Analogue human voice signals are encoded for transmission over digital circuits in the public switched telephone service (PSTN) using eight bits per sample, giving a range of 256 possible values for each sample. The signals are sampled eight thousand times per second, giving a total requirement of 8 x 8,000 bits per second, or 64 kbps. This is adequate for voice transmission over the telephone network which has traditionally been restricted to a bandwidth of less than 4 kHz (the significance of this restriction will be discussed elsewhere). For high-quality real-time video transmission, the data rate (and hence the required transmission bandwidth), will be far higher. Various data compression techniques can be used to maximise the bandwidth utilisation, but a significant amount of bandwidth will still need to be available to guarantee high-quality real-time video transmission, and the complexity of the signal processing required will be greater. The ability to interleave video, audio, and other forms of data on the same digital transmission links has already been mentioned. Another important advantage of digital signalling is the fact that, because it employs discrete signalling levels, a receiver need only determine whether the sampled voltage represents a logic high (1) or a logic low (0). Small variations in level can otherwise be ignored as having no significance, unlike the continuously varying analogue signals, where even small variations in the amplitude may convey information (or represent fluctuations due to noise). Digital signals suffer from attenuation of course, in the same way that analogue signals suffer from attenuation. Unlike analogue signals, however, as long as a receiver can distinguish between logic high and logic low, the incoming signals can be amplified and repeated with no loss of data whatsoever. The regenerated signal that leaves a digital repeater is identical to the digital signal originally transmitted by the source transmitter. Simplex and Duplex Channels In a simplex transmission, one device acts as the transmitter and a second device acts as the receiver. Data flows in one direction only, whereas in a duplex channel, the communication is bi- directional. Full-duplex transmission uses two separate communication channels so that two communicating devices can transmit and receive data at the same time. Data can flow in both directions simultaneously. Half-duplex transmission is a compromise between simplex and full- duplex transmission. A single channel is shared between the devices wishing to communicate,
  • 20. and the devices must take turns to transmit data. Data can flow in both directions, but not simultaneously. Synchronous and Asynchronous Transmission One of the main problems when two devices linked by a transmission medium wish to exchange data is that of synchronising the receiving device with the transmitting device. Typically, data is transmitted one bit at a time, and the data rate must be the same for both the transmitter and the receiver. The receiver must be able to recognise the beginning and end of a block of bits, and know the time taken to transmit each bit, so that it can sample the line at the correct time to read each bit. When the sending device is transmitting a stream of bits, it uses an internal clock to control timing. If data is transmitted at 10 Kbps, a bit is transmitted every 0.1 milliseconds. The receiver attempts to sample the line at the centre of each bit time, i.e. at intervals of 0.1 milliseconds. If the receiver uses its own internal clock for timing, a problem will arise if the clocks in the transmitter and receiver are not synchronised. A drift of 1 percent will cause the first sample to be 0.01 of a bit time away from the centre of the bit, so that after fifty or more samples, the receiver may be sampling at the wrong bit time. The smaller the timing difference, the later the error will occur, but if the transmitter sends a sufficiently long stream of bits, the transmitter and receiver will eventually be out of step. Two approaches exist to solve the problem of synchronisation - asynchronous transmission and synchronous transmission. Asynchronous transmission Timing problems are avoided by simply not sending long streams of bits. Data is transmitted one character (byte) at a time. Synchronisation only needs to be maintained within each character, because the receiver can resynchronise at the beginning of each new character. When no characters are being transmitted, the line is idle (usually represented by a constant negative voltage). The beginning of a character is signalled by a start bit (usually a positive voltage), allowing the receiver to synchronised its clock with that of the transmitter. The rest of the bits that make up the character follow the start bit, and the last element transmitted is a stop bit that is typically 1.5 or 2 times as long as the other bits transmitted. The transmitter then transmits the idle signal (which is usually the same voltage as the stop bit) until it is ready to send the next character (see below).
  • 21. Character format in asynchronous transmission Asynchronous transmission is also known as start-stop mode or character mode. Each character is framed as an independent unit of data that may be transmitted and received independently. Data may also be transmitted as a continuous stream of characters. Most communications systems require a specific number of bits to represent each character, plus a parity bit that is often included to provide simple error detection. Asynchronous data characters normally contain 8 data bits (including the parity bit) plus a start bit and at least 1 stop bit, giving a total of 10 bits. Data can be transmitted in blocks of characters known as transmission blocks. The transmission block may use special control characters to provide control functions and to identify the start and end of a block. Asynchronous transmission is only really suitable for relatively low data rates (up to 3 Kbits). Many of the bits transmitted in each block are control bits, giving a high proportion of overhead. It is used mainly for applications where character data is generated at irregular intervals (e.g. user input from a keyboard). Synchronous transmission With synchronous transmission, the receiver's clock is synchronised with the transmitter's clock. Data is transmitted in a continuous stream, and the arrival time of each can be predicted by the receiver. This is achieved either by using a separate timing circuit, or by embedding the timing information in the signal itself. The latter can be achieved using bi-phase encoding (e.g. Manchester encoding). An embedded timing signal can be used by the receiver to synchronise with the transmitter using a Digital Phase-Locked Loop (DPLL).
  • 22. Use of embedded timing information A data frame usually starts with one or more bytes of data that have a unique bit pattern, or flag (sometimes called a preamble), that tells the receiver a block of data will follow. The preamble is followed by various control fields, a variable-length data field, more control fields, and finally a postamble. The control information within the frame will include a length field, which specifies the amount of data to be read. A bit-oriented frame For large blocks of data, synchronous transmission is far more efficient than asynchronous transmission, requiring far less overhead. The accuracy of the timing information allows much higher data rates. There is usually a minimum frame length, and each frame will contain the same amount of control information regardless of the amount of data in the frame.
  • 23. Noise In any communication system, the received signal will consist of the transmitted signal, attenuated as it has propagated along the transmission media and suffering from some distortion due to the characteristics of the system. In addition, unwanted signals (or noise) may occur between the transmitter and the receiver which are added to the transmitted signal. Noise is the main factor that limits the performance of a communications system. The effect of noise on a digital signal There are four categories of noise:
  • 24.  Thermal (Gaussian) noise - this is due to the thermal agitation of electrons in a conductor, is present in all electronic devices and transmission lines, and is a function of temperature. It is distributed uniformly across the frequency spectrum, and is often referred to as white noise. It cannot be eliminated, and limits overall system performance.  Intermodulation noise - this can occur if signals at different frequencies share the same transmission line. It results in signals that are the sum or difference of the original signals, and occurs when there is some non-linearity in the communication system (which may be caused by component malfunction or excessive signal strength).  Crosstalk - this is the phenomenon that allows you to hear someone else's conversation whilst using the telephone, and occurs due to electrical coupling between two or more transmission paths (such as adjacent twisted-pair cables).  Impulse noise - this consists of random pulses (or spikes) of noise, usually of short duration and relatively high amplitude. Causes include external electromagnetic disturbances such as lightning, vehicle ignition systems, heavy-duty electrical equipment, and faults in the communications system itself. It is usually only a minor annoyance for analogue systems such as a telephone link, but is the primary cause of errors in digital communication. Shannon Limit In 1924 Harry Nyquist derived an equation expressing the maximum data rate for a noiseless channel. Nyquist proved that if an arbitrary signal is run through a low-pass filter of a given bandwidth (H), the filtered signal could be completely reconstructed by line samples taken at a rate equivalent to twice the bandwidth. Sampling the line more frequently is pointless, because the higher frequency components that such sampling could recover have already been filtered out. If the signal consists of V discrete levels, Nyquist's theorem states: Maximum data rate = 2H log2 V bits per second In 1948 Claude Shannon took this work further and extended it to the case of a channel subject to random (thermal) noise. According to Nyquists, a noiseless 3 KHz channel cannot transmit binary (i.e. two-level) signals at a rate exceeding 6000 bits per second. If random noise is introduced, the situation deteriorates rapidly. The amount of thermal noise present in a signal is expressed as the ratio of signal power (S) to noise power (N), and is called the signal-to-noise
  • 25. ratio (SNR). The ratio will become smaller as the signal propagates through the transmission medium due to attenuation of the transmitted signal. The SNR is not usually usually expressed as a ratio. Instead, the value 10 log10 S/N is used. The unit thus derived is known as a decibel (dB). A signal-to-noise ratio of 10 would be expressed as 10 dB; a ratio of 100 as 20 dB; a ratio of 1000 as 30 dB and so on. Shannon found that the maximum data rate of a noisy channel with a bandwidth of H Hz, and a signal-to-noise ratio S/N is given by: Maximum data rate = H log2 (1+S/N) bits per second As an example, a channel of 3000-Hz bandwidth, and a signal to thermal noise ratio of 30 dB (typical parameters for an analogue telephone line) can never transmit much more than 30,000 bps, no matter how many signal levels are used, and no matter how frequently samples are taken. Shannon's result can be applied to any channel subject to Gaussian (thermal) noise. It should also be noted that this limitation is an upper bound, and real systems will rarely achieve it. Data Structures Most data communications networks require that information transmitted between two end points is divided into blocks of a manageable size in order to make the most efficient use of network bandwidth and to facilitate switching and routing. The type of network over which the data is to be transmitted will determine the maximum block size. Each block contains both the data itself and some control information, such as the source and destination address, and an error checking code. The name given to these blocks will depend on the communications protocol that created them. The term protocol data unit (PDU) is a generic term that can refer to any unitised collection of data and control information, although it is normally used only with upper-layer communication protocols like the Transmission Control Protocol (TCP). The term packet (or datagram) is used to describe blocks produced by network layer protocols such as the Internet Protocol (IP), while the term frame is used to describe the blocks produced by data-link layer protocols like Ethernet.
  • 26. Amplitude Modulation Amplitude modulation (AM) is a modulation technique in which the amplitude of a high frequency sine wave (usually at a radio frequency) is varied in direct proportion to that of a modulating signal. The modulating signal carries the required information and often consists of audio data, as in the case of AM radio broadcasts or two-way radio communications. The high frequency sine wave (the carrier) is modulated by adding the modulating signal to it in a mixer. A simplified AM radio transmitter system is shown below. A simplified AM radio transmitter system A simple form of amplitude modulation was originally used to modulate audio voice signals onto a low-voltage direct current (dc) carrier on a telephone circuit. A microphone in the telephone handset acts as a transducer, and uses the sound waves produced by the human voice to vary the current passing through the circuit. At the other end of the telephone line, a second transducer (in the form of a small loudspeaker mounted in the remote handset) uses the varying voltage to produce sound waves that are close enough to the original speech patterns to be recognisable
  • 27. as the voice of the caller. Although the human voice is composed of frequencies ranging from 300 to approximately 20,000 hertz, the public switched telephone system limits the frequencies used to between 300 and 3,400 hertz, giving a total bandwidth of 3,100 hertz. This bandwidth is perfectly adequate for purely voice transmission, since the higher frequencies in the human voice (i.e. those above 3,100 hertz) are not really needed for recognisable speech reproduction. The use of a limited bandwidth also makes the telephone system much simpler from an engineering perspective. Whereas telephone signals can be transmitted at audio frequencies, the same is not really a practical proposition for radio transmissions. The main reason for this is that the optimum length of a radio antenna is a half or a quarter of a wavelength. Since a typical audio frequency of 3,000 hertz has a wavelength of approximately 100 kilometres, the antenna would need to have a length of 25 kilometres to be effective - not a realistic proposition. By comparison, a radio frequency of 100 megahertz would have a wavelength of approximately 3 metres, and could use an antenna 80 centimetres long. It becomes necessary, therefore, to use a radio frequency carrier signal in order to transmit audio signals, which are used to modulate the carrier waveform.
  • 28. A typical amplitude modulated signal Modulating a carrier wave by adding another, lower frequency signal results in a signal that has most of its power concentrated in the carrier, with the rest shared between two sidebands, one above the carrier in frequency and one below it. The highest frequency in the modulating signal is typically less than ten percent of that of the carrier. The process of creating these sideband frequencies by adding another signal to the carrier is known as heterodyning. In the simplest case, the carrier can be modulated by adding another single-frequency sine wave signal to it, changing the carrier's shape (or envelope) as illustrated above. The sideband frequencies account for approximately 33% of the transmitted power. If a more complex modulating signal (such as an
  • 29. audio signal) is used to modulate the carrier, the sidebands account for only about 20-25% of the total transmitted power. Consider, for example, a 100 kHz carrier that is modulated by a steady audio signal (or tone) of 5 kHz. When these signals are added, two sidebands are produced. One sideband has a frequency equal to the sum of the carrier and the modulating signal (100 kHz + 5 kHz = 105 kHz), while the other sideband has a frequency equal to the difference between the carrier and the modulating signal (100 kHz - 5 kHz = 95 kHz). The two sidebands are 5 kHz equidistant from the carrier (one above it and one below it), giving a total bandwidth for the modulated signal of 10 kHz (105 kHz - 95 kHz). The resulting frequency spectrum is illustrated below. A 100 kHz carrier modulated by a 5kHz audio tone Of course, most audio signals (speech and music, for example) are far more complex than a single-frequency audio tone, and are composed of many different frequencies. When a carrier is modulated with a more complex audio signal, therefore, all of the frequencies present in the audio signal are represented in the resulting output signal. In this case, the total bandwidth is the difference between the sum and the difference values of the carrier and the highest frequency component of the modulating signal. To simplify things, the modulated signal bandwidth will be
  • 30. twice that of the modulating signal. For a modulating audio signal with frequency components ranging from 0 - 6 kHz, therefore, the bandwidth of the modulated signal for a 100 kHz carrier will be 106 kHz - 94 kHz = 12 kHz. This produces a more complex frequency spectrum, which might look something like that shown below. A 100 kHz carrier modulated by an audio signal (frequencies up to 6 kHz) The bandwidth of each sideband is equal to that of the modulating signal, and the two sidebands are mirror images of each other, each carrying the same information as the original audio signal. This type of basic amplitude modulation, which results in two sidebands and a carrier, is usually referred to as double sideband amplitude modulation (DSB-AM). It is a very inefficient form of modulation in terms of its power usage, because at least two thirds of the transmitted power is concentrated in the carrier signal, with the remaining power being evenly split between the two sidebands. Since the sidebands contain identical information, only one sideband is actually needed to carry the transmitted audio information. The other sideband is redundant, and the carrier signal contains no useful information. DSB-AM is also therefore spectrally inefficient, because fewer stations can make use of a given transmission band. The main benefit of DSB-AM is that, because of its relative simplicity, receiving equipment is cheaper to produce.
  • 31. The process of demodulation for DSB-AM is relatively straightforward. The radio frequency carrier can be removed from the signal using a simple diode detector consisting of a diode, a resistor, and a capacitor. The incoming signal is rectified by the diode, which allows only half of the alternating waveform to pass through it. The capacitor removes the remaining radio frequency signal components to provide a smooth output, and the resistor allows the capacitor to discharge. An AM receiver can thus be produced relatively cheaply, since there is no requirement for specialised components. The basic circuit diode detector circuit is shown below. A basic diode detector circuit Because the modulating signal is added to the carrier, the instantaneous amplitude of the modulated signal will depend on the instantaneous amplitude of the modulating data. The modulation index is a measure of the degree to which the modulating signal increases the maximum amplitude of the carrier signal. If the carrier's amplitude is made to vary between 50% above and 50% below its un-modulated value, it is said to have a modulation index of 0.5. If the amplitude is made to vary by 100% above and below its un-modulated value, it has a modulation index of 1.0. A modulation index of 1.0 for the A3E transmission mode will give a maximum transmitter power efficiency of 33%. Increasing the modulation index would result in greater power efficiency, but would result in distortion at the receiver.
  • 32. The power efficiency of the transmitter can be increased by removing (suppressing) the carrier from the AM signal to create a reduced-carrier transmission, or double-sideband suppressed- carrier (DSBSC). DSBSC is three times more power-efficient than DSB-AM. A similar scheme, in which the carrier is only partially suppressed, is called double-sideband reduced- carrier (DSBRC). Both schemes require the carrier to be regenerated by a local oscillator in the receiver in order that demodulation can be achieved using standard demodulation techniques. In addition to transmitter efficiency, spectral efficiency can be achieved by completely suppressing both the carrier and one of the sidebands, although the complexity of both the transmitter and the receiver is increased significantly. The ITU designations for the various amplitude modulation schemes are shown in the table below. ITU Amplitude Modulation Scheme Designations Designation Description A3E Double-sideband full-carrier R3E Single-sideband reduced-carrier H3E Single-sideband full-carrier J3E Single-sideband suppressed-carrier B8E Independent-sideband emission C3F Vestigial-sideband Lincompex Linked compressor and expander The carrier frequencies used in some applications are very high (radar frequencies, for example, range from 3MHz up to 300 GHz). At very high frequencies, many standard electronic components cannot function properly. A superheterodyne receiver is one that reduces the frequency of an incoming signal by adding a lower frequency to it using a mixer (a process known as superheterodyning) to reduce the frequency of the AM signal, which is centred on the carrier frequency, to some lower frequency called the intermediate frequency (IF) prior to processing. The intermediate frequency obtained is the difference (or beat) frequency between the incoming AM signal's carrier frequency and that of the local oscillator. The receiver will use a tuner to select
  • 33. the required carrier frequency, and to adjust the frequency of the receiver's local oscillator so that the intermediate frequency will always have the same value (the tuner and the local oscillator or therefore tightly coupled). This both simplifies the design of the receiver and reduces its cost, since the majority of its components will be required only to operate at a single intermediate frequency rather than over a range of frequencies. A simple superheterodyne receiver system is shown below. A superheterodyne receiver The band-pass filter in the tuner filters out all signals except the selected carrier frequency. The receiver bandwidth is usually some fraction of the carrier frequency. A receiver bandwidth of 2%, for example, means that any signals between 2% above and 2% below the carrier frequency are allowed to pass through the filter. For a carrier frequency of 850 kHz, this would mean that all signals between 833 kHz and 867 kHz are accepted by the receiver. If the same fraction is applied to the intermediate frequency, then for a fixed IF of 452 kHz, only signals that are within the range 443 kHz to 461 kHz will pass. The local oscillator is set to 398 kHz to reduce the 850 kHz carrier to 452 kHz (the beat frequency). Any adjacent signals are also superheterodyned, but remain at the same margin above and below the original signal. If the incoming signal includes interference at 863 kHz, a conventional 2% receiver will allow the interference to pass, since the interference falls within the range 833 kHz to 867 kHz. If the signal is superheterodyned using a local oscillator frequency of 398 kHz, the
  • 34. interfering signal will be shifted down to a beat frequency of 465 kHz. If the resulting IF frequency is also limited to a bandwidth of 2%, any frequencies below 443 kHz or above 461 kHz will be filtered out. This means that the interference at 465 kHz will be eliminated from the signal (i.e. it has been suppressed). It is apparent, therefore, that the superheterodyne receiver is more selective. The term used to describe the process of narrowing the receiver bandwidth in this way is arithmetic selectivity. In order to increase both the power efficiency and spectral efficiency of the transmitter, it is necessary to remove both the carrier and one of the sidebands from the transmitted AM signal. A simplified single sideband AM transmitter is shown below. A single sideband AM transmitter system The receiver must restore the carrier signal before demodulation can take place by creating its own carrier signal using a local oscillator and adding it to the received SSB AM signal in a mixer. A suitable receiver system might look something like that shown below.
  • 35. A single sideband AM receiver A simple form of AM, often used for digital communications is on-off keying, in which binary data is represented as the presence or absence of the carrier wave. This method is often used at radio frequencies to transmit Morse code. A simple amplitude modulated digital signal
  • 36. Quadrature Amplitude Modulation (QAM) Quadrature amplitude modulation (QAM) is a modulation scheme in which two sinusoidal carriers, one exactly 90 degrees out of phase with respect to the other, are used to transmit data over a given physical channel. One signal is called the "I" signal, and can be represented by a sine wave. The other is called the "Q" signal, and can be represented by a cosine wave. Because the carriers occupy the same frequency band and differ by a 90-degree phase shift, each can be modulated independently, transmitted over the same frequency band, and separated by demodulation at the receiver. For a given bandwidth, QAM enables data transmission at twice the rate of standard pulse amplitude modulation without any degradation in the bit error rate. QAM and its derivatives are used in both mobile radio and satellite communication systems. Each symbol is a specific combination of signal amplitude and phase. By combining the amplitude and phase modulation of a carrier signal, it is possible to increase the number of possible symbols and therefore transmit more bits for each symbol. One way to represent the symbols is to use a constellation pattern diagram such as the one shown below. The pattern shown represents the different amplitudes and phases. Dots at 0, 90, 180, and 270 degrees all have two possible amplitudes resulting in eight different symbols. With eight symbols, it is possible to transmit 3 bits for each symbol. For example, if the modulated signal is of amplitude 1 at 0 degrees, three zeros (000) are transmitted. A 3-bit QAM constellation Modern communication equipment requires modulation that uses dense constellation patterns. The diagram below depicts a 16-state constellation pattern, allowing the transmission of four bits
  • 37. for every baud. The number of states grows exponentially to the number of bits transmitted per baud. Transmitting eight bits per baud would require 256 possible states, resulting in a very dense constellation pattern. A 4-bit QAM constellation Frequency Shift Keying (FSK) Frequency shift keying (FSK) is one of several techniques used to transmit a digital signal on an analogue transmission medium. The frequency of a sine wave carrier is shifted up or down to represent either a single binary value or a specific bit pattern. The simplest form of frequency shift keying is called binary frequency shift keying (BFSK), in which the binary logic values one and zero are represented by the carrier frequency being shifted above or below the centre frequency. In conventional BFSK systems, the higher frequency represents a logic high (one) and is referred to as the mark frequency. The lower frequency represents a logic low (zero) and is called the space frequency. The two frequencies are equi-distant from the centre frequency. A typical BFSK output waveform is shown below.
  • 38. Binary Frequency Shift Keying (BFSK) If there is a discontinuity in phase when the frequency is shifted between the mark and space values, the form of frequency shift keying used is said to be non-coherent, otherwise it is said to be coherent. In more complex schemes, additional frequencies are used to enable more than one bit to be represented by each frequency used. This provides a higher data rate, but requires more bandwidth (representing a group of two binary values, for example, would require four different frequencies). It also increases the complexity of the modulator and demodulator circuitry, and increases the probability of transmission errors occurring.
  • 39. Audio frequency shift keying (AFSK) Audio frequency-shift keying (AFSK) is a modulation technique in which binary data is represented by changes in the frequency of an audio tone, and is one of the techniques used for transmission on analogue telephone lines. Two tones are normally used to represent the mark and space values. Many early analogue modems employed AFSK to transmit data at rates of up to about 300 bits per second, and some early microcomputers used a modified form of AFSK to store data on audio cassettes. Phase Shift Keying (PSK) Phase-shift keying (PSK) is a method of modulating digital signals onto an analogue carrier wave in which the phase of the carrier wave is shifted between two or more values, depending upon the logic state of the input bit stream. The simplest method uses two phases - 0 degrees and 180 degrees. The logic state of each bit is examined with respect to the logic state of the preceding bit. If the logic state changes (i.e. from logic high to logic low) the phase of the carrier is shifted by 180 degrees. If the logic state does not change, the phase of the carrier remains the same. This form of PSK is sometimes called biphase modulation. The output waveform of a 2-phase PSK modulator is shown below.
  • 40. Phase shift key modulation More complex forms of PSK employ four or eight phases. This allows more bits to be transmitted for each phase angle used. In four-phase modulation, the possible phase angles are +45/-315, +135/-225, +225/-135, and +315/-45 degrees (a phase difference between symbols of 90 degrees), and each symbol can represent two signal elements (00, 01, 10 or 11). In eight-phase modulation, the phase difference between symbols is 45 degrees, and each phase shift can represent three signal elements (000, 001, 010, 011, 100, 101, 110, or 111).
  • 41. Pulse Code Modulation (PCM) Analog transmission is not particularly efficient. When the signal-to-noise ratio of an analog signal deteriorates due to attenuation, amplifying the signal also amplifies noise. Digital signals are more easily separated from noise and can be regenerated in their original state. The conversion of analogue signals to digital signals therefore eliminates the problems caused by attenuation. Pulse Code Modulation (PCM) is the simplest form of waveform coding. Waveform coding is used to encode analogue signals (for example speech) into a digital signal. The digital signal is subsequently used to reconstruct the analogue signal. The accuracy with which the analogue signal can be reproduced depends in part on the number of bits used to encode the original signal. Pulse code modulation is an extension of Pulse Amplitude Modulation (PAM), in which a sampled signal consists of a train of pulses where each pulse corresponds to the amplitude of the signal at the corresponding sampling time (the signal is modulated in amplitude). Each analogue sample value is quantised into a discrete value for representation as a digital code word. Pulse code modulation is the most frequently used analogue-to-digital conversion technique, and is defined in the ITU-T G.711 specification. The main parts of a conversion system are the encoder (the analogue-to-digital converter) and the decoder (the digital-to-analogue converter). The combined encoder/decoder is known as a codec. A PCM encoder performs three functions:  sampling  quantising  encoding The human voice uses frequencies between 100Hz and 10,000Hz, but it has been found that most of the energy in speech is between 300 Hertz and 3400 Hertz - a bandwidth of approximately 3100 Hertz. Before converting the signal from analog to digital, the unwanted frequency components of the signal are filtered out. This makes the task of converting the signal to digital form much easier, and results in an acceptable quality of signal reproduction for voice communication. From an equipment point of viev, because the manufacture of very precise filters would be expensive, a bandwidth of 4000 Hertz is generally used. This bandwidth limitation also helps to reduce aliasing - aliasing happens when the number of samples is insufficient to adequately represent the analog waveform (the same effect you can see on a computer screen when diagonal and curved lines are displayed as a series of zigzag horizontal and vertical lines).
  • 42. Sampling Sampling the analogue signal Sampling is the process of reading the values of the filtered analogue signal at discrete time intervals (i.e. at a constant sampling frequency, called the sampling frequency). A scientist called Harry Nyquist discovered that the original analogue signal could be reconstructed if enough samples were taken. He found that if the sampling frequency is at least twice the highest frequency of the input analogue signal, the signal could be reconstructed using a low-pass filter at the destination. Quantisation Quantisation is the process of assigning a discrete value from a range of possible values to each sample obtained. The number of possible values will depend on the number of bits used to represent each sample. Quantisation can be achieved by either rounding the signal up or down to the neares available value, or truncating the signal to the nearest value which is lower than the actual sample. The process results in a stepped waveform resembling the source signal. The
  • 43. difference between the sample and the value assigned to it is known as the quantisation noise (or quantisation error). Quantisation noise can be reduced by increasing the number of quantisation intervals, because the difference between the input signal amplitude and the quantization interval decreases as the number of quantization intervals increases. This would, however, increase the PCM bandwidth. Uniform quantisation uses equal quantisation levels throughout the entire range of an input analogue signal. The signal-to-noise ratio (SNR), including quantisation noise, is the most important factor affecting voice quality in uniform quantisation. The signal-to-noise ratio is measured in decibels (dB). The higher the signal-to-noise ratio, the better the voice quality. Quantisation noise reduces the signal-to-noise ratio of a signal, so an increase in quantisation noise degrades the quality of a voice signal. Low signals will have a small signal-to-noise ratio and high signals will have a large signal-to-noise ratio. Because most voice signals are relatively low, having better voice quality at higher signal levels is an inefficient way of digitising voice signals. Uniform quantisation was therefore replaced by a non-uniform quantisation process called companding (see below). Narrowband speech is typically sampled 8000 times per second, and each sample must be quantised. If linear quantisation is used, 12 bits per sample are required, giving a bit rate of 96 kbits per second. This can be reduced using non-linear quantisation, in which 8 bits per sample is sufficient to provide speech quality almost indistinguishable from the original. This results in a bit rate of 64 kbits per second. Two non-linear PCM codecs were standardised in the 1960s - µ- law (mu-law) coding was the standard developed in the United States, while A-law compression was used in Europe. These codecs are still widely used today. Encoding Encoding is the process of representing the sampled values as a binary number in the range 0 to n. The value of n is chosen as a power of 2, depending on the accuracy required. Increasing n reduces the step size between adjacent quantisation levels and hence reduces the quantisation noise. The down side of this is that the amount of digital data required to represent the analogue signal increases.
  • 44. Stages in the analogue-to-digital conversion process Companding Working with very small signal levels (by comparison with the quantisation interval) can introduce more errors. Companding can be used to increase the accuracy of such signals. This is the process of distorting the analogue signal in a controlled way before quantising takes place, by compressing its larger values at the source and then expanding them at the receiving end. There are two standards used: A-law in Europe, and µ-law in the USA. The term companding was created by combining the terms COMpressing and exPANDING. Input analog signal samples are compressed into logarithmic segments. Each segment is then quantised, and coded using uniform quantisation. The compression process is logarithmic, where the compression increases as the sample signals increase (the larger sample signals are compressed more than the smaller sample signals, causing the quantization noise to increase as the sample signal increases). A logarithmic increase in quantisation noise throughout the dynamic range of an input sample signal gives a signal-to-noise ratio which is almost constant over a wide range of input levels. A rate of eight bits per sample (64 kbits per second) gives a reconstructed signal which is very close the original. The advantages of this system include low complexity and delay, and high-quality
  • 45. reproduction of speech. The disadvantages are a relatively high bit rate and a high susceptibility to channel errors. Similarities between A-law and µ-law:  Both are linear approximations of a logrithmic input/output relationship  Both are implemented using 8-bit code words (256 levels, one for each quantisation interval). This allows for a bit rate of 64 kbits per second  Both break the dynamic range into 16 segments (8 positive and 8 negative) - each segment is twice the length of the preceeding one, and uniform quantisation is used within each segment  Both use similar encoding techniques for the 8-bit word - the first (most significant bit) identifies polarity, bits 2, 3 and 4 identify the segment, and the last four bits identify the quantisation level within the segment Differences between A-law and µ-law:  Different linear approximations lead to different lengths and slopes  Numerical assignment of the bit positions in the 8-bit code word to segments and to quantisation levels within segments are different  A-law provides a greater dynamic range  µ-law provides better signal/distortion performance for low level signals  A-law requires 13 bits for a uniform PCM equivalent, whereas m-law requires 14 bits  International connections should use A-law (µ to A conversion is the responsibility of the µ-law country)
  • 46. Differential Pulse Code Modulation (DPCM) During the PCM process, the differences between successive input sample signals are minimal. A common technique used in speech coding is to try to predict the value of the next sample from that of the preceding samples. This is possible because of correlations in speech samples due to the effects of the vocal tract and the vibrations of the vocal chords. Differential Pulse Code Modulation (DPCM) schemes quantise the difference between the original and the predicted signals, i.e. the difference between successive values. This means a reduction in the number of bits used per sample over that used for PCM. Using DPCM can reduce the bit rate of voice transmission down to 48 kbps. DPCM can be described as a predictive coding scheme. The first part of DPCM works like PCM in that the input signal is sampled at a constant sampling frequency, and the samples are modulated using Pulse Amplitude Modulation. The sampled input signal is then stored in a predictor. The predictor sends the stored sample signal it through a differentiator. The differentiator compares the current sample signal with the previous sample signal and sends the difference to the quantising and coding phase of PCM. After quantising and coding, the difference signal is transmitted. At the reciever, the difference signal is dequantised, added to a sample signal stored in a predictor, and sent to a low-pass filter that reconstructs the original input signal. Although DPCM reduces the bit rate for voice transmission, the uniform quantisation used means that large sample signals have a higher signal-to-noise ratio than small sample signals, so voice quality is better at higher signals. Because most signals generated by the human voice are small, voice quality should focus on small signals. Adaptive DPCM was developed to solve this problem. Adaptive Differential Pulse Code Modulation (ADPCM) In the mid 1980's the CCITT standardised an Adaptive Differential Pulse Code Modulation (ADPCM) codec operating at 32 kbps known as G721, resulting in reconstructed speech almost as good as that provided by 64 kbps PCM codecs. This was later followed by ADPCM codecs operating at 16,24 and 40 kbps (G726 and G727). In ADPCM, the predictor and
  • 47. quantiser are adaptive - they change to match the characteristics of the speech being coded. ADPCM adapts the quantisation levels of the difference signal that is generated during the DPCM process. If the difference signal is low, ADPCM reduces the size of the quantisation levels. If the difference signal is high, ADPCM increases the size of the quantisation levels. The quantisation level is thus adapted to the size of the input difference signal, generating a uniform signal-to-noise ratio throughout the dynamic range of the difference signal. PCM and Time Division Multiplexing (TDM) Time division multiplexing is used at local exchanges to combine a number of incoming voice signals onto an outgoing trunk. Each incoming channel is allocated a specific time slot on the outgoing trunk, and has full access to the transmission line only during its particular time slot. Because the incoming signals are analogue, they must first be digitised, because TDM can only handle digital signals. Because PCM samples the incoming signals 8000 times per second, each sample occupies 1/8000 seconds (125 µseconds). PCM is at the heart of the modern telephone system, and consequently, nearly all time intervals used in the telephone system are multiples of 125 µseconds. Because of a failure to agree on an international standard for digital transmission, the systems used in Europe and North America are different. The North American standard is based on a 24- channel PCM system, wheras the European system is based on 30/32 channels. This system contains 30 speech channels, a synchronisation channel and a signalling channel, and the gross line bit rate of the system is 2.048 Mbps (32 x 64 Kbps). The system can be adapted for common channel signalling, providing 31 data channels and employing a single synchronisation channel. The following details refer to the European system. The 30/32 channel system uses a frame and multiframe structure, with each frame consisting of 32 pulse channel time slots numbered 0-31. Slot 0 contains the Frame Alignment Word (FAW) and Frame Service Word (FSW). Slots 1-15 and 17-31 are used for digitised speech (channels 1-15 and 16-30 respectively). In each digitised speech channel, the first bit is used to signify the polarity of the sample, and the remaining bits represent the amplitude of the sample. The duration of each bit on a PCM system is 488 nanoseconds (ns). Each time slot is therefore 3.904 µseconds (8 bits x 488 ns). Each frame therefore occupies 125 milliseconds (32 x 3.904 mseconds).
  • 48. In order for signalling information (dial pulses) for all 30 channels to be transmitted, the multiframe consists of 16 frames numbered 0-15. In frame 0, slot 16 contains the Multiframe Alignment Word (MFAW) and Multiframe Service Word (MFSW). In frames 1-15, slot 16 contains signalling information for two channels. The frame and multiframe structure are shown below. The duration of each multiframe is 2 milliseconds(125 µseconds x 16). The frame and multiframe structures for a 30/32 channel PCM system
  • 49. Communications Protocols Communication protocols are at the heart of data communications. Applications running on networked computers need to exchange data with applications running on other computers, often on other networks. Other devices must also send and receive information over the network in order to function, including networked printers and interconnection devices such as switches and routers. Network devices that wish to communicate with each other must speak the same language. They must use standard messages and a common set of rules that define how communication will take place. These messages, together with the conventions that must be followed in order to ensure successful communication, are collectively called a communications protocol. Such protocols are often described in an industry or international standard. Protocols exist at every level of a communications system. There are hardware protocols that determine how electrical signals are transmitted over a transmission link, and software protocols that determine how transmission errors are handled and how much information can be sent over the network at a time. There are a number of different communication protocols that can perform the same function, but if communication is to be successful, both end points using a communications channel must be using the same protocol. Communication systems have a layered architecture that allows the functionality required at each layer to be engineered independently of the layers above and below them, facilitating a modular approach to the design of hardware, firmware or software components. The layers of a generic five-layer model are described below.  Physical - the physical transmission media, connectors, and basic interconnection devices. Physical layer protocols are concerned with the design of cables and connection hardware, the electrical or optical properties of the transmission medium, and the encoding scheme used to represent data.  Datalink - firmware that controls the transmission of data across a single network link. Functions include error handling, flow control and hardware addressing, and arbitration between network devices competing for a shared transmission medium.  Network - software that is responsible for addressing and routing data across a network or internetwork. Network addresses and link status information are used to determine the best route through the network or internetwork.  Transport - software that is responsible for providing error-free data transmission between applications communicating over a network. Functionality includes
  • 50. establishing and managing connections, error handling, flow control, and the segmentation, sequencing and re-assembly of data.  Application - the interface between user applications and the network. Each type of application will have a specific application layer protocol to provide the required interface. Each layer implements some part of the communications process. In some cases the same functionality (for example, error handling and flow control) is provided at different levels. The functions typically embodied in a particular set of communications protocols (sometimes called a protocol suite or protocol stack) are described below.  Addressing - hardware devices on a local network are uniquely identified using the hardware address (sometimes called a MAC address) burned into each network adapter. A device with more than one network adapter will have multiple hardware addresses. Each network adapter may also be allocated a logical (or network) address that can be assigned by a network administrator using appropriate software. The network address is used to uniquely identify devices on both networks and internetworks, and may be part of a private or global addressing scheme used. In TCP/IP networks, the network address takes the form of an IP address.  Process identification - although hardware and network addresses can be used to get data from one computer to another, it will also be necessary to identify the application (or process) sending the data, and the application on the destination computer for which the data is intended. A port number is therefore used together with the network address to uniquely identify both the source and the destination process.  Encapsulation - each protocol accepts a block of data from the layer above it and adds some control information to it (in the form of a header) to create a protocol data unit (PDU). The PDU is passed to the active protocol in the next layer down, which creates its own PDU. The header information added by each protocol is only of interest to the same protocol on the destination machine. Other protocols see it simply as data.  Connection control - connection-oriented protocols must establish a virtual connection between the two end points of a link or channel before data transfer can take place. Specific procedures must be followed to set up the connection and to manage the flow of data between the two end points. On completion of data transfer, the connection must be closed.  Segmentation and reassembly - all networks impose a limit on how much data that can be sent in one go. This is because large amounts of data take a long time to transmit, as well as taking a long time to process both at the destination and at
  • 51. intermediate network switching devices. Small blocks of data can be routed quickly, do not require large storage areas (send and receive buffers), and can be processed quickly by each device they must pass through. Messages consisting of large amounts of data are therefore broken down into smaller blocks, usually called datagrams or packets. In packet-switched networks, datagrams frequently arrive at their destination out of order, and must be sequentially numbered to enable the receiving device to reassemble them in the correct order and identify missing packets.  Flow control - a process that restricts the flow of data between two points in order to prevent the destination device receiving more data than it can process in a given time frame. The receiver may ask the sender to stop transmitting for a while or slow down the rate of data transfer. Some protocols negotiate a mutually acceptable data rate when a connection is established. The data rate may be re-negotiated if circumstances change.  Error detection and correction - error correction requires the inclusion of sufficient redundant data to allow the receiver to reconstruct the original data if an error in transmission occurs. Error detection only requires enough redundant data to allow the receiver to detect whether or not an error has occurred, in which case it can take appropriate action (such as requesting retransmission). The OSI Reference Model The Open Systems Interconnection (OSI) reference model was developed by the International Standards Organisation (ISO) as a model for computer communications architectures, and as a framework for developing protocol standards. It was intended as a first step towards international standardisation of communications protocols. The model divides the communication process into seven layers, as shown below. The diagram shows how communication takes place indirectly between peer layers at each end of a communications channel (denoted by the bi-directional horizontal arrows), and clearly identifies the concept of an interface between adjacent layers (denoted by the bi-directional vertical arrows).
  • 52. The OSI Reference Model The OSI Reference Model layers The model starts at the bottom with the physical layer (layer 1), and ends at the top with the application layer (layer 7). The most important concept behind the model is that each layer performs a specific function, provides services to the layer above it, and uses the services of the layer below it. There is a well-defined interface between each layer, across which the flow of information is kept deliberately minimal. It should be remembered that the OSI model itself is not a communications architecture. It simply specifies what each layer should do, not how this is to be achieved. As shown above, protocols in the same layer at each end of the communications link can communicate with each other only indirectly, by using the services of the layers below them. The individual layers of the OSI Reference Model are summarised below:
  • 53.  Physical layer - concerned with the physical transmission of a bit stream. Issues include the physical and electrical characteristics of the cables and connections, the encoding and signalling schemes used, and the mechanical, electrical and procedural interfaces. Network devices that operate at this layer include hubs and repeaters.  Data link layer - the point at which the bit stream enters or leaves the physical layer, and which provides reliable transmission of data across any single network link, including sequencing, flow control and error detection, using hardware addresses. It often defines how devices are connected in terms of the network topology, and how they may access the physical medium. The data link Layer is divided into the logical link control (LLC) sub-layer, which manages the communications link between two devices, and the medium access control (MAC) sub-layer, which manages protocol access to the transmission medium. Network devices that operate at this layer include bridges and switches. Ethernet is an example of a data link layer protocol.  Network layer - controls the operation of the subnet, and is responsible for the routing and addressing of datagrams (packets) from one network to another using logical addresses (e.g. IP addresses). The most important network devices that operate at this layer are routers. Network layer protocols include the Internet Protocol (IP).  Transport layer - establishes and terminates connections across the network, and provides a reliable end-to-end transport mechanism for the exchange of data between processes in different end systems. It undertakes flow control, and ensures that data is delivered error-free and in sequence, with no loss or duplication. Typical protocols used at this layer include Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).  Session layer - enables applications on end systems to establish a connection, and provides the mechanism for controlling the dialogue between them.  Presentation layer - resolves differences in data representation between end systems and encodes data in a standard format for transmission across the network. May also be responsible for providing services such as encryption and data compression.  Application layer - contains management functions and mechanisms to support distributed applications. Typical protocols used at this layer are File Transfer Protocol (FTP) and the various e-mail protocols.
  • 54. Data transmission in the OSI model A process wishing to send data to a process on a remote host passes the data to the application layer protocol, which attaches the appropriate control information (in the form of a header) to the data, creating an application layer protocol data unit (PDU) which is then passed down to the presentation layer. The presentation layer sees the PDU simply as a block of data to be processed. It may transform the PDU in some way, adds its own header, and passes the resulting PDU to the session layer. This process is repeated until the data reaches the physical layer and is transmitted on the physical transmission medium. At the destination host, the protocol operating at each layer reads the control information for that layer, strips off the header, and passes the resulting block of data up to the next layer. Finally, the original data, stripped of all control information, is passed to the target process. This sequence of events is illustrated below. Data transmission in the OSI Reference Model
  • 55. Advantages and disadvantages of the OSI model A major advantage of the OSI model is that it clearly distinguishes between the concepts of services, interfaces and protocols. A strictly modular approach to the design of system architecture is encouraged, allowing the protocols operating within each layer to be replaced relatively easily. The purely theoretical basis for the model means that it is not biased towards a particular technological approach, and makes it very useful as a reference model, although it also means that the model does not benefit from practical experience, as a result of which some fairly arbitrary decisions have been made about what functionality should go into each layer. The session and presentation layers, for example, do not actually do a great deal, whereas the data- link layer has had to be divided into two distinct sub-layers (LLC and MAC). The shortcomings of the OSI model, together with the success of the TCP/IP protocol stack, contributed to the lack of success of subsequent attempts to implement a protocol stack based on the OSI model. That said, the OSI model has proved an extremely useful tool for facilitating the discussion of network architectures. Circuit Switching Circuit switching is a technique traditionally used in telephone networks to set up a connection between two subscribers. When two end-systems in a telecommunications network wish to communicate in this way, a dedicated circuit must be established between the two end points by allocating the required network resources prior to data transfer. The circuit remains in place until all the data has been transferred, and provides a fixed connection bandwidth.
  • 56. A generic switching network Using the diagram above as an example, if station A has some data to send to station E, it sends a request to switching node 4 to establish a connection with station E. Node 4 must identify the optimum route based on currently available routing information. Assuming that node 5 is chosen as the next hop, node 4 will secure the first available channel link to node 5 for the connection. Node 5 will similarly reserve a channel link to node 6, which will then communicate with station E to establish whether station E want to accept the connection. If so, station A will receive a signal confirming that the connection has been established. Once data transfer is complete, the connection is terminated by station A. Signals are sent to each of the nodes involved instructing them to de-allocate the network resources, which are then available for use in other connections. Circuit switching may be considered to be inefficient, because the entire capacity of the channels allocated to the circuit are unavailable for use in other circuits for the duration of the connection, even if no data is actually being sent, and the circuit may be idle for much of the time. There is also a delay involved in setting up the connection in the first place.
  • 57. The diagram below shows the flow of information involved in setting up, using, and terminating the typical circuit-switched connection described above. Information orginating from station A is shown in pink, and information from station E is shown in blue. A circuit-switched connection Multiplexing A multiplexer (sometimes called a mux) is a communications device that multiplexes (combines) several signals for transmission over a single physical transmission channel. A demultiplexer completes the process by separating multiplexed signals from a channel line at the receiver. A multiplexer and demultiplexer are frequently combined into a single device that is capable of processing both outgoing and incoming signals. The communications channel may be shared between the multiplexed signals in a variety of ways, including Time Division Multiplexing (TDM) and Frequency Division Multiplexing (FDM).
  • 58. Time division multiplexing is a scheme in which multiple incoming digital signals are combined for transmission onto a single transmission line using interleaved time slots. Each incoming channel is allocated a specific time slot, and has full access to the transmission line during its allocated time slot. Some TDM systems allow for a variation in the number of signals being sent along the line, and will adjust the time interval of each slot to optimise the use of the available bandwidth. Time division multiplexing Analogue signals are often multiplexed using frequency-division multiplexing, in which the bandwidth of the carrier is divided into sub-channels, each having its own range of frequencies, enabling each sub-channel to carry a separate signal. Each incoming low-bandwidth signal is assigned a different sub-channel on the main channel. In order to prevent interference between adjacent sub-channels, small-bandwidth gaps, known as guard bands, are left between each sub- channel. If a large number of signals are required to be sent along a single long-distance communication link, a high-bandwidth carrier is required. The transmission system must be carefully designed to ensure that it can provide the necessary transmission characteristics.
  • 59. Frequency division multiplexing For fibre-optic channels, a variation of frequency division multiplexing, called wavelength division multiplexing (WDM), is used. As long as each incoming channel has a different frequency range, and none of the frequency ranges overlap, they can be multiplexed onto a long-haul fibre-optic transmission link. At the transmitting end, incoming optical signals are passed through a diffraction grating and combined for transmission over a high-capacity fibre-optic link. At the other end of the link, this combined signal is split into its constituent channels using another diffraction grating. An optical system of this type is completely passive, and therefore highly reliable. In WDM transmission systems, each channel will typically carry a number of time division multiplexed (TDM) signals.
  • 60. Wavelength division multiplexing Error Correction and Detection In telecommunications, the detection and correction of errors is important for maintaining data integrity on "noisy" communication channels. Error detection is the ability to detect the presence of errors introduced to a stream of data by interference or faults in the transmission system between a transmitter and a receiver. Error correction is the ability to restore data in which errors have been found to its original state. If an error is found using an error detection code, the receiver can respond either by explicitly requesting retransmission of the data from the transmitter, or by not sending an acknowledgement for the corrupted data, in which case the transmitter will assume that the data has either not been received or has been rejected by the receiver, and will re-transmit the data. Error correction codes are used by the receiver, acting alone, both to detect the presence of an error in the received data and to re-construct the data in its original form using the error correction encoding. Error correction necessarily involves the transmission of a significant amount of additional (redundant) data. The overhead involved is usually far greater than that required for error-detection schemes, and for this reason error correction is generally only used for applications where re-transmission of the data is not practical. In some schemes, a compromise
  • 61. (hybrid) solution is used in which minor errors are corrected using error correction codes, while major errors result in a request for retransmission. Signals from Voyager 1 now take more than fourteen hours to reach Earth Error detection schemes An error-detecting code (or backward error correction) involves the addition of sufficient redundant data to the information being sent to enable the receiver to detect errors and request the receiver to retransmit the data. This approach is known as an automatic repeat request (ARQ) strategy. A number of commonly used error detection schemes exist, which vary considerably in their complexity. The amount of additional information sent is usually the same for a given amount of data, and the error detection information will have a relationship to the data that is determined by the application of an algorithm of some kind to the data itself. The receiver applies the same algorithm to the data it receives to obtain its own version of the error correction code, and then compares that version with the error correction code it has received. If the two codes match, the receiver can be reasonably sure that the data is correct. If not, it will assume that an error has occurred and respond in the appropriate manner (i.e. request retransmission, either explicitly or by not sending an acknowledgement). The common types of error detection scheme are listed below, together with a brief description.
  • 62.  Repetition schemes - the data to be sent is broken down into blocks of bits of a fixed length, and each block is sent a predetermined number of times. If one or more block differs from another block, it is concluded that an error has occurred. This type of scheme is simple, but inefficient in that the amount of overhead (in the form of redundant data) is very high. Also, if the same error affects each block in the same way, an error may go undetected.  Parity schemes - the data is again broken up into blocks of bits of a fixed length, and one additional bit is added (the parity bit). The number of bits in each block that are set to one (i.e. as opposed to zero) are counted. If an even parity scheme is being used, and if the number of ones counted is even, then the parity bit is set to zero. If the number of ones counted is odd, the parity bit is set to one (to make the number of ones even once more). The receiver simply counts the number of bits that are set to one, and if an odd number results, an error has occurred. An odd parity scheme works in exactly the same way, except that the number of bits set to one must always be odd. The weakness of parity schemes is that they can only detect errors in which an odd number of bits have been changed.  Checksum - an arithmetic calculation of some kind is performed on the bytes or words making up the data, and the result is appended to the data as a checksum. The receiver performs the same calculation on the received data, and compares the result to the received checksum. If the results match, the data is correct. If not, an error has occurred (parity schemes can, in a sense, be considered to be very simple checksum schemes).  Cyclic redundancy check (CRC) - this is a somewhat more complex error detection scheme. To generate a n-bit checksum, or frame check sequence (FCS), a generator polynomial is used that must be of the order n. For a 16-bit checksum, for example, the generator polynomial x16 +x12 +x5 +1 is commonly used. The transmitting device adds n 0-bits to the data to be transmitted and divides the resulting code polynomial by the generating polynomial, which produces a remainder polynomial of n degrees. This remainder polynomial becomes the checksum. The data transmitted (the code vector) is the original data followed by the n-bit checksum. The receiver can either compute the checksum again from the data and verify that it agrees with the received checksum, or it can divide the data together with the checksum by the generator polynomial. If the remainder is found to be 0, the data is correct. Error correction schemes An error-correcting code (ECC) or forward error correction (FEC) code involves the addition of sufficient redundant data to the information being sent to enable the receiver to both detect and correct errors, without needing to request the receiver to retransmit the data. The advantage of this approach is that a return path is not required. This would be a critical requirement for applications such as communication with deep space probes, for example, where the delay
  • 63. between sending a message and receiving a reply could be considerable (Voyager 1, launched in 1977, is now more than ten billion miles from earth, and signals received by NASA arrive over fourteen hours affter they have been transmitted). The disadvantage is that, in order to ensure the required degree of data integrity, a large amount of redundant data will have to be transmitted with each message, significantly increasing the bandwidth required. Shannon's theorem defines the code rate as the total number of bits divided by the number of bits of actual data, and the coding gain as the difference in signal-to-noise ratio (SNR) between encoded and un-encoded data that would be necessary for them both to exhibit the same bit error rate (BER). The effectiveness of the encoding scheme is measured in terms of both its code rate and its coding gain. The theorem essentially sets an upper limit on the error correction rate that can be achieved with a given level of data redundancy, and for a given minimum signal-to-noise ratio. Error-correction codes can be divided into block codes and convolutional codes. Block codes work on blocks of data of a fixed-size (e.g. packets). Convolutional codes work for bit streams of arbitrary length. They tend to be more complex and more difficult to implement than block codes, and involve considerably more overhead per unit data. Block codes are calculated for each individual frame or packet independently of one-another, whereas convolutional codes encode the entire data stream for a message as one long code word, and then transmit the message in segments. Convolutional codes have very powerful error correction capabilities, and are widely used in satellite communications and for communicating with deep space exploration vehicles. Some error-correction schemes work very well above a certain signal-to-noise ratio, but not at all below it (depending on how closely the scheme approaches Shannon's theoretical limit). Because most errors occur in random bursts rather than evenly distributed throughout the data stream, the message data bits are often shuffled (a process known as interleaving) after they have been encoded. When the message is un-shuffled (de-interleaved) at the receiver, bursts of errors are dispersed throughout the data stream as individual bit errors, which can be easily corrected using the error correction encoding.