Computing and Informatics Class Notes for AMIE
                            By Vinayak Ashok Bharadi

Local Area Networks

For historical reasons, the industry refers to nearly every type of network as an "area
network." The most commonly discussed categories of computer networks include the
following:

   •   Local Area Network (LAN)
   •   Wide Area Network (WAN)
   •   Metropolitan Area Network (MAN)
   •   Storage Area Network (SAN)
   •   System Area Network (SAN)
   •   Server Area Network (SAN)
   •   Small Area Network (SAN)
   •   Personal Area Network (PAN)
   •   Desk Area Network (DAN)
   •   Controller Area Network (CAN)
   •   Cluster Area Network (CAN)

LANs and WANs were the original flavors of network design. The concept of "area"
made good sense at that time, because a key distinction between a LAN and a WAN
involves the physical distance that the network spans. A third category, the MAN, also fit
into this scheme as it too is centered on a distance-based concept.

As technology improved, new types of networks appeared on the scene. These, too,
became known as various types of "area networks" for consistency's sake, although
distance no longer proved a useful differentiator.

LAN Basics

A LAN connects network devices over a relatively short distance. A networked office
building, school, or home usually contains a single LAN, though sometimes one building
will contain a few small LANs, and occasionally a LAN will span a group of nearby
buildings. In IP networking, one can conceive of a LAN as a single IP subnet (though this
is not necessarily true in practice).

Besides operating in a limited space, LANs include several other distinctive features.
LANs are typically owned, controlled, and managed by a single person or organization.
They also use certain specific connectivity technologies, primarily Ethernet and Token
Ring.
WAN Basics

As the term implies, a wide-area network spans a large physical distance. A WAN like
the Internet spans most of the world!

A WAN is a geographically dispersed collection of LANs. A network device called a
router connects LANs to a WAN. In IP networking, the router maintains both a LAN
address and a WAN address.

WANs differ from LANs in several important ways. Like the Internet, most WANs are
not owned by any one organization but rather exist under collective or distributed
ownership and management. WANs use technology like ATM, Frame Relay and X.25 for
connectivity.

LANs and WANs at Home

Home networkers with cable modem or DSL service already have encountered LANs and
WANs in practice, though they may not have noticed. A cable/DSL router like those in
the Linksys family joins the home LAN to the WAN link maintained by one's ISP. The
ISP provides a WAN IP address used by the router, and all of the computers on the home
network use private LAN addresses. On a home network, like many LANs, all computers
can communicate directly with each other, but they must go through a central gateway
location to reach devices outside of their local area.

What About MAN, SAN, PAN, DAN, and CAN?

Future articles will describe the many other types of area networks in more detail. After
LANs and WANs, one will most commonly encounter the following three network
designs:

A Metropolitan Area Network connects an area larger than a LAN but smaller than a
WAN, such as a city, with dedicated or high-performance hardware. [1]

A Storage Area Network connects servers to data storage devices through a technology
like Fibre Channel. [2]

A System Area Network connects high-performance computers with high-speed
connections in a cluster configuration.

Conclusion

To the uninitiated, LANs, WANs, and the other area network acronyms appear to be just
more alphabet soup in a technology industry already drowning in terminology. The
names of these networks are not nearly as important as the technologies used to construct
them, however. A person can use the categorizations as a learning tool to better
understand concepts like subnets, gateways, and routers.
Bus, ring, star, and other types of network topology
In networking, the term "topology" refers to the layout of connected devices on a
network. This article introduces the standard topologies of computer networking.

Topology in Network Design
One can think of a topology as a network's virtual shape or structure. This shape does not
necessarily correspond to the actual physical layout of the devices on the network. For
example, the computers on a home LAN may be arranged in a circle in a family room,
but it would be highly unlikely to find an actual ring topology there.

Network topologies are categorized into the following basic types:

   •   bus
   •   ring
   •   star
   •   tree
   •   mesh

More complex networks can be built as hybrids of two or more of the above basic
topologies.

Bus Topology
Bus networks (not to be confused with the system bus of a computer) use a common
backbone to connect all devices. A single cable, the backbone functions as a shared
communication medium that devices attach or tap into with an interface connector. A
device wanting to communicate with another device on the network sends a broadcast
message onto the wire that all other devices see, but only the intended recipient actually
accepts and processes the message.

Ethernet bus topologies are relatively easy to install and don't require much cabling
compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") both
were popular Ethernet cabling options many years ago for bus topologies. However, bus
networks work best with a limited number of devices. If more than a few dozen
computers are added to a network bus, performance problems will likely result. In
addition, if the backbone cable fails, the entire network effectively becomes unusable.
Ring Topology
In a ring network, every device has exactly two neighbors for communication purposes.
All messages travel through a ring in the same direction (either "clockwise" or
"counterclockwise"). A failure in any cable or device breaks the loop and can take down
the entire network.

To implement a ring network, one typically uses FDDI, SONET, or Token Ring
technology. Ring topologies are found in some office buildings or school campuses.




Star Topology
Many home networks use the star topology. A star network features a central connection
point, commonly called the "hub", that may be an actual hub, a switch, or a router.
Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.

Compared to the bus topology, a star network generally requires more cable, but a failure
in any star network cable will only take down one computer's network access and not the
entire LAN. (If the hub fails, however, the entire network also fails.)
Tree Topology
Tree topologies integrate multiple star topologies together onto a bus. In its simplest
form, only hub devices connect directly to the tree bus, and each hub functions as the
"root" of a tree of devices. This bus/star hybrid approach supports future expandability of
the network much better than a bus (limited in the number of devices due to the broadcast
traffic it generates) or a star (limited by the number of hub connection points) alone.

Mesh Topology
Mesh topologies involve the concept of routes. Unlike each of the previous topologies,
messages sent on a mesh network can take any of several possible paths from source to
destination. (Recall that even in a ring, although two cable paths exist, messages can only
travel in one direction.) Some WANs, like the Internet, employ mesh routing.

Summary
Topologies remain an important part of network design theory. You can probably build a
home or small business network without understanding the difference between a bus
design and a star design, but understanding the concepts behind these gives you a deeper
understanding of important elements like hubs, broadcasts, and routes.


                           Internet protocol suite

  Internet protocol suite

  Layer             Protocols

  5. Application    DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP4, IRC, POP3, SIP, SMTP,
                    SNMP, SSH, TELNET, RTP, …

  4. Transport      TCP, UDP, RSVP, DCCP, SCTP, …

  3. Network        IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, …

  2. Data link      Ethernet, Wi-Fi, PPP, FDDI, ATM, Frame Relay, GPRS,
                    Bluetooth, …

  1. Physical       Modems, ISDN, SONET/SDH, RS232, USB, Ethernet physical layer,
                    Wi-Fi, GSM, Bluetooth, …


The Internet protocol suite is the set of communications protocols that implement the
protocol stack on which the Internet and most commercial networks run. It is sometimes
called the TCP/IP protocol suite, after the two most important protocols in it: the
Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were also the
first two defined.

The Internet protocol suite — like many protocol suites — can be viewed as a set of
layers. Each layer solves a set of problems involving the transmission of data and
provides a well-defined service to the upper-layer protocols, based on using services from
lower layers. Upper layers are logically closer to the user and deal with more abstract
data, relying on lower-layer protocols to translate data into forms that can eventually be
physically transmitted. The original TCP/IP reference model consisted of four layers, but
has evolved into a five-layer model.

The OSI model describes a fixed, seven-layer stack for networking protocols.
Comparisons between the OSI model and TCP/IP can give further insight into the
significance of the components of the IP suite, but can also cause confusion, since the
definitions of the layers are slightly different.

History

The Internet protocol suite came from work done by DARPA in the early 1970s. After
building the pioneering ARPANET, DARPA started work on a number of other data
transmission technologies. In 1972, Robert E. Kahn was hired at the DARPA Information
Processing Technology Office, where he worked on both satellite packet networks and
ground-based radio packet networks, and recognized the value of being able to
communicate across them. In the spring of 1973, Vinton Cerf, the developer of the
existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on
open-architecture interconnection models with the goal of designing the next protocol for
the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental
reformulation, where the differences between network protocols were hidden by using a
common internetwork protocol, and instead of the network being responsible for
reliability, as in the ARPANET, the hosts became responsible. (Cerf credits Hubert
Zimmerman and Louis Pouzin [designer of the CYCLADES network] with important
influences on this design.)

With the role of the network reduced to the bare minimum, it became possible to join
almost any networks together, no matter what their characteristics were, thereby solving
Kahn's initial problem. (One popular saying has it that TCP/IP, the eventual product of
Cerf and Kahn's work, will run over "two tin cans and a string", and it has in fact been
implemented using homing pigeons.) A computer called a gateway (later changed to
router to avoid confusion with other types of gateway) is provided with an interface to
each network, and forwards packets back and forth between them.

The idea was worked out in more detailed form by Cerf's networking research group at
Stanford in the 1973–74 period. (The early networking work at Xerox PARC, which
produced the PARC Universal Packet protocol suite, much of which was
contemporaneous, was also a significant technical influence; people moved between the
two.)

DARPA then contracted with BBN Technologies, Stanford University, and the
University College London to develop operational versions of the protocol on different
hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3
and IP v3 in the spring of 1978, and then stability with TCP/IP v4 — the standard
protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford
and University College London (UCL). In November, 1977, a three-network TCP/IP test
was conducted between the U.S., UK, and Norway. Between 1978 and 1983, several
other TCP/IP prototypes were developed at multiple research centres. A full switchover
to TCP/IP on the ARPANET took place January 1, 1983.[1]

In March 1982,[2] the US Department of Defense made TCP/IP the standard for all
military computer networking. In 1985, the Internet Architecture Board held a three day
workshop on TCP/IP for the computer industry, attended by 250 vendor representatives,
helping popularize the protocol and leading to its increasing commercial use.

On November 9, 2005 Kahn and Cerf were presented with the Presidential Medal of
Freedom for their contribution to American culture.[3]
Layers in the Internet protocol suite stack




IP suite stack showing the physical network connection of two hosts via two routers and
the corresponding layers used at each hop




Sample encapsulation of data within a UDP datagram within an IP packet

The IP suite uses encapsulation to provide abstraction of protocols and services.
Generally a protocol at a higher level uses a protocol at a lower level to help accomplish
its aims. The Internet protocol stack can be roughly fitted to the four layers of the original
TCP/IP model:
4. Application      DNS, TFTP, TLS/SSL, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP,
                    SMTP, SNMP, SSH, TELNET, ECHO, BitTorrent, RTP, PNRP, rlogin,
                    ENRP, …
                    Routing protocols like BGP and RIP, which for a variety of
                    reasons run over TCP and UDP respectively, may also be
                    considered part of the application or network layer.

3. Transport        TCP, UDP, DCCP, SCTP, IL, RUDP, …
                    Routing protocols like OSPF, which run over IP, may also be
                    considered part of the transport or network layer. ICMP and
                    IGMP, which run over IP, may be considered part of the
                    network layer.

2. Internet         IP (IPv4, IPv6)
                    ARP and RARP operate underneath IP but above the link layer,
                    so they belong somewhere in between.

1. Network access   Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, Frame
                    Relay, SMDS, …


In many modern textbooks, this model has evolved into the five-layer TCP/IP model,
where the Network access layer is split into a Data link layer on top of a Physical
layer, and the Internet layer is called the Network layer.
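
As a rough illustration of this layering (a simplified Python sketch assumed for these
notes, not real protocol code; the header formats shown here are invented), each layer
simply prepends its own header to whatever the layer above hands down:

    # A simplified, assumed illustration of encapsulation (not real protocol headers):
    # each layer prepends its own header to the data handed down by the layer above.
    def transport_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
        header = f"UDP {src_port}->{dst_port} len={len(payload)}|".encode()
        return header + payload

    def network_packet(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
        header = f"IP {src_ip}->{dst_ip} len={len(segment)}|".encode()
        return header + segment

    data = b"hello"
    packet = network_packet(transport_segment(data, 5000, 53), "192.168.1.10", "10.0.0.1")
    print(packet)   # the application data ends up nested inside both headers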

Implementations

Today, most commercial operating systems include and install the TCP/IP stack by
default. For most users, there is no need to look for implementations. TCP/IP is included
in all commercial Unix systems, Mac OS X, and all free-software Unix-like systems such
as Linux distributions and BSD systems, as well as Microsoft Windows.

Unique implementations include Lightweight TCP/IP, an open source stack designed for
embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet
radio systems and personal computers connected via serial lines.
Karnaugh map
The Karnaugh map, also known as a Veitch diagram (K-map or KV-map for short), is
a tool to facilitate management of Boolean algebraic expressions. A Karnaugh map is
unique in that only one variable changes value between adjacent squares; in other words,
the rows and columns are ordered according to the principles of Gray code.
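
As a quick illustrative sketch (not part of the original notes), the 2-bit Gray-code
ordering used to label the rows and columns of a 4-variable map can be generated as
follows:

    # A minimal sketch: generate the 2-bit Gray-code order 00, 01, 11, 10 used to
    # label the rows and columns of a 4-variable Karnaugh map.
    def gray(n: int) -> int:
        return n ^ (n >> 1)

    print([format(gray(i), "02b") for i in range(4)])   # ['00', '01', '11', '10']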

History and nomenclature

The Karnaugh map was invented in 1953 by Maurice Karnaugh, a telecommunications
engineer at Bell Labs.

Usage in boolean logic

Normally, extensive calculations are required to obtain the minimal expression of a
Boolean function, but one can use a Karnaugh map instead.

Problem solving uses

   •   Karnaugh maps make use of the human brain's excellent pattern-matching
       capability to decide which terms should be combined to get the simplest
       expression.
   •   K-maps permit the rapid identification and elimination of potential race hazards,
       something that boolean equations alone cannot do.
   •   A Karnaugh map is an excellent aid for simplification of up to six variables, but
       with more variables it becomes hard to discern optimal patterns.
   •   For problems involving more than six variables, solving the Boolean expressions
       directly is preferred over using a Karnaugh map.

Karnaugh maps also help teach about Boolean functions and minimization.

Properties




A mapping of minterms on a Karnaugh map. The arrows indicate which squares can be
thought of as "switched" (rather than being in a normal sequential order).
A Karnaugh map may have any number of variables, but usually works best when there
are only a few, for example between 2 and 6. Each variable contributes two possibilities
to each possibility of every other variable in the system. Karnaugh maps are organized so
that all the possibilities of the system are arranged in a grid form, and between two
adjacent boxes, only one variable can change value. This is what allows it to reduce
hazards.

When using a Karnaugh map to derive a minimized function, one "covers" the ones on
the map by rectangular "coverings" that contain a number of boxes equal to a power of 2
(for example, 4 boxes in a line, 4 boxes in a square, 8 boxes in a rectangle, etc). Once a
person has covered the ones, that person can produce a term of a sum of products by
finding the variables that do not change throughout the entire covering, and taking a 1 to
mean that variable, and a 0 as the complement of that variable. Doing this for every
covering gives you a matching function.

One can also use zeros to derive a minimized function. The procedure is identical to the
procedure for ones, except that each term is a term in a product of sums, and a 1 means
the complement of the variable, while a 0 means the variable non-complemented.

Each square in a Karnaugh map corresponds to a minterm (and maxterm). The picture to
the right shows the location of each minterm on the map.

Example

Consider the following function:

       f(A,B,C,D) = Σ m(4, 8, 9, 10, 11, 12, 14, 15)

The values inside Σ (the minterm numbers) tell us which rows have output 1.

This function has this truth table:


                           #    A B C D   f(A,B,C,D)
                           0    0 0 0 0   0
                           1    0 0 0 1   0
                           2    0 0 1 0   0
                           3    0 0 1 1   0
                           4    0 1 0 0   1
                           5    0 1 0 1   0
                           6    0 1 1 0   0
                           7    0 1 1 1   0
                           8    1 0 0 0   1
                           9    1 0 0 1   1
                           10   1 0 1 0   1
                           11   1 0 1 1   1
                           12   1 1 0 0   1
                           13   1 1 0 1   0
                           14   1 1 1 0   1
                           15   1 1 1 1   1



The input variables can be combined in 16 different ways, so our Karnaugh map has to
have 16 positions. The most convenient way to arrange this is in a 4x4 grid.
The binary digits in the map represent the function's output for any given combination of
inputs. We write 0 in the upper leftmost corner of the map because f = 0 when A = 0, B =
0, C = 1, D = 0. Similarly we mark the bottom right corner as 1 because A = 1, B = 0, C =
0, D = 0 gives f = 1. Note that the values are ordered in a Gray code, so that precisely one
variable flips between any pair of adjacent cells.

After the Karnaugh map has been constructed our next task is to find the minimal terms
to use in the final expression. These terms are found by encircling groups of 1's in the
map. The encirclings must be rectangular and must have an area that is a positive power
of two (i.e. 2, 4, 8, …). The rectangles should be as large as possible without containing
any 0's. The optimal encirclings in this map are marked by the green, red and blue lines.

For each of these encirclings we find those variables that have the same state in each of
the fields in the encircling. For the first encircling (the red one) we find that:

   •   The variable A maintains the same state (1) in the whole encircling, therefore it
       should be included in the term for the red encircling.
   •   Variable B does not maintain the same state (it shifts from 1 to 0), and should
       therefore be excluded.
   •   C does not change: it is always 1.
   •   D changes.

Thus the first term in the Boolean expression is AC.

For the green encircling we see that A and B maintain the same state, but C and D change.
B is 0 and has to be negated before it can be included. Thus the second term is AB'.

In the same way, the blue rectangle gives the term BC'D' and so the whole expression is:
AC + AB′+ BC′D′.
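
To double-check such a result, all 16 input combinations can be tested by brute force;
the following Python sketch (an illustration added here, not part of the original notes)
confirms that AC + AB′ + BC′D′ reproduces the minterm list given above:

    # A brute-force check (illustration only) that AC + AB' + BC'D' reproduces
    # f(A,B,C,D) = sum of minterms (4, 8, 9, 10, 11, 12, 14, 15).
    from itertools import product

    MINTERMS = {4, 8, 9, 10, 11, 12, 14, 15}

    def f(a, b, c, d):
        return 1 if ((a << 3) | (b << 2) | (c << 1) | d) in MINTERMS else 0

    def minimized(a, b, c, d):
        return (a and c) or (a and not b) or (b and not c and not d)

    for a, b, c, d in product((0, 1), repeat=4):
        assert bool(minimized(a, b, c, d)) == bool(f(a, b, c, d))
    print("AC + AB' + BC'D' matches f on all 16 input combinations")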

The grid is toroidally connected, which means that the rectangles can wrap around edges,
so ABD′ is a valid term, although not part of the minimal set.

The inverse of a function is solved in the same way by encircling the 0's instead.

In a Karnaugh map with n variables, a Boolean term mentioning k of them will have a
corresponding rectangle of area 2^(n−k).

Karnaugh maps also allow easy minimization of functions whose truth tables include
"don't care" conditions (that is, sets of inputs for which the designer does not care what
the output is), because a "don't care" cell can be included in a covering to make it larger
but does not have to be covered. "Don't care" cells are usually indicated on the map with
a hyphen, dash, or X in place of a 0 or 1. When reading off a covering, treat each such
cell as whichever of 0 or 1 simplifies the expression more; if a "don't care" does not help
the simplification, leave it uncovered.

Race hazards

Karnaugh maps are useful for detecting and eliminating race hazards. Such hazards are
easy to spot using a Karnaugh map, because a race condition may exist when moving
between any pair of adjacent, but disjoint, regions circled on the map.

   •   In the above example, a potential race condition exists when C and D are both 0,
       A is a 1, and B changes from a 0 to a 1 (moving from the green state to the blue
       state). For this case, the output is defined to remain unchanged at 1, but because
       this transition is not covered by a specific term in the equation, a potential for a
       glitch (a momentary transition of the output to 0) exists.
   •   A glitch that is harder to spot occurs when D is 0 and A and B are both 1, with C
       changing from 0 to 1. In this case the glitch wraps around from the bottom of the
       map to the top of the map.

Whether these glitches do occur depends on the physical nature of the implementation,
and whether we need to worry about it depends on the application.

In this case, an additional term of +AD' would eliminate the potential race hazard,
bridging between the green and blue output states or blue and red output states.

The term is redundant in terms of the static logic of the system, but such redundant terms
are often needed to assure race-free dynamic performance.

When not to use K-maps

The diagram becomes cluttered and hard to interpret if there are more than four variables
on an axis. This argues against the use of Karnaugh maps for expressions with more than
six variables. For such expressions, the Quine-McCluskey algorithm, also called the
method of prime implicants, should be used.

This algorithm generally finds most of the optimal solutions quickly and easily, but
selecting the final prime implicants (after the essential ones are chosen) may still require
a brute force approach to get the optimal combination (though this is generally far
simpler than trying to brute force the entire problem).

Logic gate
A logic gate performs a logical operation on one or more logic inputs and produces a
single logic output. The logic normally performed is Boolean logic and is most
commonly found in digital circuits. Logic gates are primarily implemented electronically
using diodes or transistors, but can also be constructed using electromagnetic relays,
fluidics, optical or even mechanical elements.
Logic levels

A Boolean logical input or output always takes one of two logic levels. These logic levels
can go by many names including: on / off, high (H) / low (L), one (1) / zero (0), true (T) /
false (F), positive / negative, positive / ground, open circuit / closed circuit, potential
difference / no difference, yes / no.

For consistency, the names 1 and 0 will be used below.

Logic gates

A logic gate takes one or more logic-level inputs and produces a single logic-level output.
Because the output is also a logic level, an output of one logic gate can connect to the
input of one or more other logic gates. Two outputs cannot be connected together,
however, as they may be attempting to produce different logic values. In electronic logic
gates, this would cause a short circuit.

In electronic logic, a logic level is represented by a certain voltage (which depends on the
type of electronic logic in use). Each logic gate requires power so that it can source and
sink currents to achieve the correct output voltage. In logic circuit diagrams the power is
not shown, but in a full electronic schematic, power connections are required.

Background

The simplest form of electronic logic is diode logic. This allows AND and OR gates to be
built, but not inverters, and so is an incomplete form of logic. To build a complete logic
system, valves or transistors can be used. The simplest family of logic gates using bipolar
transistors is called resistor-transistor logic, or RTL. Unlike diode logic gates, RTL gates
can be cascaded indefinitely to produce more complex logic functions. These gates were
used in early integrated circuits. For higher speed, the resistors used in RTL were
replaced by diodes, leading to diode-transistor logic, or DTL. It was then discovered that
one transistor could do the job of two diodes in the space of one diode, so transistor-
transistor logic, or TTL, was created. In some types of chip, to reduce size and power
consumption still further, the bipolar transistors were replaced with complementary field-
effect transistors (MOSFETs), resulting in complementary metal-oxide-semiconductor
(CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of
devices such as the TTL 7400 series invented by Texas Instruments and the CMOS 4000
series invented by RCA, and their more recent descendants. These devices usually
contain transistors with multiple emitters, used to implement the AND function, which
are not available as separate components. Increasingly, these fixed-function logic gates
are being replaced by programmable logic devices, which allow designers to pack a huge
number of mixed logic gates into a single integrated circuit. The field-programmable
nature of programmable logic devices such as FPGAs has removed the 'hard' property of
hardware; it is now possible to change the logic design of a hardware system by
reprogramming some of its components, thus allowing the features or function of a
hardware implementation of a logic system to be changed.

Electronic logic gates differ significantly from their relay-and-switch equivalents. They
are much faster, consume much less power, and are much smaller (all by a factor of a
million or more in most cases). Also, there is a fundamental structural difference. The
switch circuit creates a continuous metallic path for current to flow (in either direction)
between its input and its output. The semiconductor logic gate, on the other hand, acts as
a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-
impedance voltage at its output. It is not possible for current to flow between the output
and the input of a semiconductor logic gate.

Another important advantage of standardised semiconductor logic gates, such as the 7400
and 4000 families, is that they are cascadable. This means that the output of one gate can
be wired to the inputs of one or several other gates, and so on ad infinitum, enabling the
construction of circuits of arbitrary complexity without requiring the designer to
understand the internal workings of the gates.

In practice, the output of one gate can only drive a finite number of inputs to other gates,
a number called the 'fanout limit', but this limit is rarely reached in the newer CMOS
logic circuits, as compared to TTL circuits. Also, there is always a delay, called the
'propagation delay', from a change in input of a gate to the corresponding change in its
output. When gates are cascaded, the total propagation delay is approximately the sum of
the individual delays, an effect which can become a problem in high-speed circuits.

Electronic logic levels

The two logic levels in binary logic circuits are represented by two voltage ranges, "low"
and "high". Each technology has its own requirements for the voltages used to represent
the two logic levels, to ensure that the output of any device can reliably drive the input of
the next device. Usually, two non-overlapping voltage ranges, one for each level, are
defined. The difference between the high and low levels ranges from 0.7 volts in Emitter
coupled logic to around 28 volts in relay logic.

Logic gates and hardware

NAND and NOR logic gates are the two pillars of logic, in that all other types of Boolean
logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network
of just NAND or just NOR gate(s). They can be built from relays or transistors, or any
other technology that can create an inverter and a two-input AND or OR gate. Hence the
NAND and NOR gates are called the universal gates.
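
As an illustrative sketch (not from the original notes), NOT, AND, and OR can each be
written in terms of a single two-input NAND function:

    # A minimal sketch: building NOT, AND, and OR from NAND alone.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_(a, b), or_(a, b), not_(a))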

For two input variables, there are 16 possible Boolean functions. These 16 output
columns are enumerated below, with the appropriate function or logic gate named, for the
4 possible combinations of A and B. Note that not all outputs have a corresponding
function or logic gate, although those that do not can be produced by combinations of
those that do. (A short enumeration sketch follows the table.)


                                     A            0 01 1
                           INPUT
                                     B            0 10 1


                           OUTPUT 0               0 00 0


                                     A AND B      0 00 1


                                                  0 01 0


                                     A            0 01 1


                                                  0 10 0


                                     B            0 10 1


                                     A XOR B      0 11 0


                                     A OR B       0 11 1


                                     A NOR B      1 00 0


                                     A XNOR B 1 0 0 1


                                     NOT B        1 01 0
1 01 1


                                          NOT A        1 10 0


                                                       1 10 1


                                          A NAND B 1 1 1 0


                                          1            1 11 1
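
As an illustrative sketch (not from the original notes), the 16 output columns above can
be enumerated programmatically, assuming the same input order (A, B) = (0,0), (0,1),
(1,0), (1,1) as the table:

    # A minimal sketch: enumerate all 16 two-input Boolean functions by their
    # output columns over the input order (A, B) = (0,0), (0,1), (1,0), (1,1).
    named = {
        (0, 0, 0, 0): "0",
        (0, 0, 0, 1): "A AND B",
        (0, 0, 1, 1): "A",
        (0, 1, 0, 1): "B",
        (0, 1, 1, 0): "A XOR B",
        (0, 1, 1, 1): "A OR B",
        (1, 0, 0, 0): "A NOR B",
        (1, 0, 0, 1): "A XNOR B",
        (1, 0, 1, 0): "NOT B",
        (1, 1, 0, 0): "NOT A",
        (1, 1, 1, 0): "A NAND B",
        (1, 1, 1, 1): "1",
    }
    for n in range(16):
        column = tuple(int(bit) for bit in format(n, "04b"))
        print(column, named.get(column, "(no standard gate)"))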


Logic gates are a vital part of many digital circuits, and as such, every kind is available as
an IC. For examples, see the 4000 series of CMOS logic chips or the 7400 series of TTL chips.

Symbols

There are two sets of symbols in common use, both now defined by ANSI/IEEE Std 91-
1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on
traditional schematics, is used for simple drawings and is quicker to draw by hand. It is
sometimes unofficially described as "military", reflecting its origin if not its modern
usage. The "rectangular shape" set, based on IEC 60617-12, has rectangular outlines for
all types of gate, and allows representation of a much wider range of devices than is
possible with the traditional symbols. The IEC's system has been adopted by other
standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United
Kingdom.


There are two graphical symbols for each gate type, a "distinctive shape" and a
"rectangular shape"; the symbol drawings are not reproduced here. The basic gates, their
Boolean algebra expressions between A and B, and their truth tables are:

AND (Boolean algebra: A·B)

    INPUT       OUTPUT
    A    B      A AND B
    0    0      0
    0    1      0
    1    0      0
    1    1      1

OR (Boolean algebra: A+B)

    INPUT       OUTPUT
    A    B      A OR B
    0    0      0
    0    1      1
    1    0      1
    1    1      1

NOT (Boolean algebra: A')

    INPUT       OUTPUT
    A           NOT A
    0           1
    1           0


In electronics a NOT gate is more commonly called an inverter. The circle on the symbol
is called a bubble, and is generally used in circuit diagrams to indicate an inverted input
or output.

                                                                     INPUT OUTPUT
                                                                     A       B       A NAND B
                                                                     0       0       1
NAND                                                                 0       1       1
                                                                     1       0       1
                                                                     1       1       0


                                                                         INPUT OUTPUT
                                                                         A       B       A NOR B
                                                                         0       0       1
NOR                                                                      0       1       0
                                                                         1       0       0
                                                                         1       1       0


In practice, the cheapest gate to manufacture is usually the NAND gate. Additionally,
Charles Peirce showed that NAND gates alone (as well as NOR gates alone) can be used
to reproduce all the other logic gates.

Symbolically, a NAND gate can also be shown using the OR shape with bubbles on its
inputs, and a NOR gate can be shown as an AND gate with bubbles on its inputs. This
reflects the equivalency due to De Morgan's laws, but it also allows a diagram to be read
more easily, or a circuit to be mapped onto available physical gates in packages easily,
since any circuit node that has bubbles at both ends can be replaced by a simple bubble-
less connection and a suitable change of gate. If the NAND is drawn as OR with input
bubbles, and a NOR as AND with input bubbles, this gate substitution occurs
automatically in the diagram (effectively, bubbles "cancel"). This is commonly seen in
real logic diagrams - thus the reader must not get into the habit of associating the shapes
exclusively as OR or AND shapes, but also take into account the bubbles at both inputs
and outputs in order to determine the "true" logic function indicated.

Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or
XNOR. The two input Exclusive-OR is true only when the two input values are different,
false if they are equal, regardless of the value. If there are more than two inputs, the gate
generates a true at its output if the number of trues at its input is odd ([1]). In practice,
these gates are built from combinations of simpler logic gates.
                                                                           INPUT OUTPUT
                                                                           A       B       A XOR B
                                                                           0       0       0
XOR                                                                        0       1       1
                                                                           1       0       1
                                                                           1       1       0


                                                                       INPUT OUTPUT
                                                                       A       B       A XNOR B
                                                                       0       0       1
XNOR                                                                   0       1       0
                                                                       1       0       0
                                                                       1       1       1
The 7400 chip, containing four NANDs. The two additional contacts supply power (+5
V) and connect the ground.

DeMorgan equivalent symbols

By use of De Morgan's theorem, an AND gate can be turned into an OR gate by inverting
the sense of the logic at its inputs and outputs. This leads to a separate set of symbols
with inverted inputs and the opposite core symbol. These symbols can make circuit
diagrams for circuits using active low signals much clearer and help to show accidental
connection of an active high output to an active low input or vice-versa.

Storage of bits

Related to the concept of logic gates (and also built from them) is the idea of storing a bit
of information. The gates discussed up to here cannot store a value: when the inputs
change, the outputs immediately react. It is possible to make a storage element either
through a capacitor (which stores charge due to its physical properties) or by feedback.
Connecting the output of a gate to the input causes it to be put through the logic again,
and choosing the feedback correctly allows it to be preserved or modified through the use
of other inputs. A set of gates arranged in this fashion is known as a "latch", and more
complicated designs that utilise clocks (signals that oscillate with a known period) and
change only on the rising edge are called edge-triggered "flip-flops". The combination of
multiple flip-flops in parallel, to store a multiple-bit value, is known as a register.

These registers or capacitor-based circuits are known as computer memory. They vary in
performance, based on factors of speed, complexity, and reliability of storage, and many
different types of designs are used based on the application.

Three-state logic gates
A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off,
the switch is open.
        Main article: Tri-state buffer

Three-state, or 3-state, logic gates have three states of the output: high (H), low (L) and
high-impedance (Z). The high-impedance state plays no role in the logic, which remains
strictly binary. These devices are used on buses to allow multiple chips to send data. A
group of three-states driving a line with a suitable control circuit is basically equivalent to
a multiplexer, which may be physically distributed over separate devices or plug-in cards.

'Tri-state', a widely-used synonym of 'three-state', is a trademark of the National
Semiconductor Corporation.

Miscellaneous

Logic circuits include such devices as multiplexers, registers, arithmetic logic units
(ALUs), and computer memory, all the way up through complete microprocessors, which
can contain more than 100 million gates. In practice, the gates are made from field-
effect transistors (FETs), particularly metal-oxide-semiconductor FETs (MOSFETs).

In reversible logic, Toffoli gates are used.

History and development

The earliest logic gates were made mechanically. Charles Babbage, around 1837, devised
the Analytical Engine. His logic gates relied on mechanical gearing to perform
operations. Electromagnetic relays were later used for logic gates. In 1891, Almon
Strowger patented a device containing a logic gate switch circuit (U.S. Patent 0447918).
Strowger's patent was not in widespread use until the 1920s. Starting in 1898, Nikola
Tesla filed for patents of devices containing logic gate circuits (see List of Tesla patents).
Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's
modification of the Fleming valve in 1907 could be used as an AND logic gate. Claude E.
Shannon introduced the use of Boolean algebra in the analysis and design of switching
circuits in 1937. Walther Bothe, inventor of the coincidence circuit, received part of the
1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Active research
is taking place in molecular logic gates.

Common Basic Logic ICs

                     CMOS   TTL    Function

                     4001   7402   Quad two-input NOR gate
                     4011   7400   Quad two-input NAND gate
                     4049   7404   Hex NOT gate (inverting buffer)
                     4070   7486   Quad two-input XOR gate
                     4071   7432   Quad two-input OR gate
                     4077   74266  Quad two-input XNOR gate
                     4081   7408   Quad two-input AND gate


For more CMOS logic ICs, including gates with more than two inputs, see 4000 series.




Adders (electronics)
In electronics, an adder is a device which will perform the addition, S, of two numbers.
In computing, the adder is part of the ALU, and some ALUs contain multiple adders.
Although adders can be constructed for many numerical representations, such as Binary-
coded decimal or excess-3, the most common adders operate on binary numbers. In cases
where two's complement is being used to represent negative numbers it is trivial to
modify an adder into an adder-subtracter.

For single-bit adders, there are two general types. A half adder has two inputs, generally
labelled A and B, and two outputs, the sum S and carry output Co. S is the XOR of A and
B, and Co is the AND of A and B. Essentially, the output of a half adder is the two-bit
arithmetic sum of two one-bit numbers, with Co being the more significant of these two
output bits.

The other type of single-bit adder is the full adder, which is like a half adder but takes an
additional carry input Ci. A full adder can be constructed from two half adders by
connecting A and B to the inputs of one half adder, connecting its sum to one input of the
second half adder, connecting Ci to the other input, and OR-ing the two carry outputs.
Equivalently, S is the XOR of A, B, and Ci, and Co is the majority function of A, B, and
Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.

The purpose of the carry input on the full-adder is to allow multiple full-adders to be
chained together with the carry output of one adder connected to the carry input of the
next most significant adder. The carry is said to ripple down the carry lines of this sort of
adder, giving it the name ripple carry adder.

Half adder




Half adder circuit diagram

A half adder is a logical circuit that performs an addition operation on two binary digits.
The half adder produces a sum and a carry value which are both binary digits.




Following is the logic table for a half adder:


                                      Input      Output
                                      A  B       C  S
                                      0  0       0  0
                                      0  1       0  1
                                      1  0       0  1
                                      1  1       1  0
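
The half adder just described can be sketched in Python as follows (an illustration, not
part of the original notes):

    # A minimal sketch: a half adder, with S = A XOR B and C = A AND B.
    def half_adder(a: int, b: int):
        return a ^ b, a & b            # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(a, b, "->", c, s)    # reproduces the table above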
Full adder




Full adder circuit diagram
A + B + CarryIn = Sum + CarryOut

A full adder is a logical circuit that performs an addition operation on three binary digits.
The full adder produces a sum and carry value, which are both binary digits. It can be
combined with other full adders (see below) or work on its own.




                            Input            Output
                            A   B   Ci       Co   S
                            0   0   0        0    0
                            0   0   1        0    1
                            0   1   0        0    1
                            0   1   1        1    0
                            1   0   0        0    1
                            1   0   1        1    0
                            1   1   0        1    0
                            1   1   1        1    1


Note that the final OR gate before the carry-out output may be replaced by an XOR gate
without altering the resulting logic. This is because the only discrepancy between OR and
XOR gates occurs when both inputs are 1; for the adder shown here, one can check this is
never possible. Using only two types of gates is convenient if one desires to implement
the adder directly using common IC chips.
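
The construction described above, two half adders with their carry outputs combined by
an OR gate, can be sketched as follows (an illustration, not part of the original notes):

    # A minimal sketch: a full adder built from two half adders, with the two
    # carry outputs combined by OR.
    def half_adder(a, b):
        return a ^ b, a & b                # (sum, carry)

    def full_adder(a, b, ci):
        s1, c1 = half_adder(a, b)          # add A and B
        s, c2 = half_adder(s1, ci)         # add the carry-in
        return s, c1 | c2                  # Co = OR of the two carries

    # Reproduce the truth table above.
    for a in (0, 1):
        for b in (0, 1):
            for ci in (0, 1):
                s, co = full_adder(a, b, ci)
                print(a, b, ci, "->", co, s)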

Ones' complement

Alternatively, a system known as ones' complement can be used to represent negative
numbers. The ones' complement form of a binary number is the bitwise NOT applied to it
— the complement of its positive counterpart. Like sign-and-magnitude representation,
ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0).

As an example, the ones' complement form of 00101011 (43) becomes 11010100 (−43).
The range of signed numbers using ones' complement in a conventional eight-bit byte is
−127 to +127.

To add two numbers represented in this system, one does a conventional binary addition,
but it is then necessary to add any resulting carry back into the resulting sum. To see why
this is necessary, consider the case of the addition of −1 (11111110) to +2 (00000010).
The binary addition alone gives 00000000—not the correct answer! Only when the carry
is added back in does the correct result (00000001) appear.
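
The end-around-carry rule for 8-bit values can be sketched as follows (an illustration,
not part of the original notes):

    # A minimal sketch: 8-bit ones'-complement addition with the end-around carry.
    def ones_complement_add(a: int, b: int) -> int:
        total = a + b
        if total > 0xFF:                    # a carry out of bit 7 occurred
            total = (total & 0xFF) + 1      # add the carry back into the sum
        return total & 0xFF

    # -1 (11111110) + 2 (00000010) = +1 (00000001), as in the example above.
    print(format(ones_complement_add(0b11111110, 0b00000010), "08b"))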

This numeric representation system was common in older computers; the PDP-1 and
UNIVAC 1100/2200 series, among many others, used ones'-complement arithmetic.

(A remark on terminology: The system is referred to as "ones' complement" because the
negation of x is formed by subtracting x from a long string of ones. Two's complement
arithmetic, on the other hand, forms the negation of x by subtracting x from a single large
power of two.[1])

Two's complement
Two's complement is the most popular method of representing signed integers in
computer science. It is also an operation of negation (converting positive to negative
numbers or vice versa) in computers which represent negative numbers using two's
complement. Its use is ubiquitous today because it doesn't require the addition and
subtraction circuitry to examine the signs of the operands to determine whether to add or
subtract, making it both simpler to implement and capable of easily handling higher
precision arithmetic. Also, 0 has only a single representation, obviating the subtleties
associated with negative zero (which exists in ones' complement).

sign bit
0   1   1   1   1   1   1   1     =    127
0   0   0   0   0   0   1   0     =      2
0   0   0   0   0   0   0   1     =      1
0   0   0   0   0   0   0   0     =      0
1   1   1   1   1   1   1   1     =     −1
1   1   1   1   1   1   1   0     =     −2
1   0   0   0   0   0   0   1     =   −127
1   0   0   0   0   0   0   0     =   −128
8-bit two's complement integers

Explanation
Two's complement                                             Decimal
0001                                                         1
0000                                                         0
1111                                                         −1
1110                                                         −2
1101                                                         −3
1100                                                         −4
Two's complement using a 4-bit integer

Two's complement represents signed integers by counting backwards and wrapping
around.

The boundary between positive and negative numbers may theoretically be anywhere (as
long as you check for it). For convenience, all numbers whose left-most bit is 1 are
considered negative. The largest number representable this way with 4 bits is 0111 (7)
and the smallest number is 1000 (-8).

To understand its usefulness for computers, consider the following. Adding 0011 (3) to
1111 (-1) results in the seemingly-incorrect 10010. However, ignoring the 5th bit (from
the right), as we did when we counted backwards, gives us the actual answer, 0010 (2).
Ignoring the 5th bit will work in all cases (although you have to do the aforementioned
overflow checks when, e.g., 0100 is added to 0100). Thus, a circuit designed for addition
can handle negative operands without also including a circuit capable of subtraction (and
a circuit which switches between the two based on the sign). Moreover, by this method
an addition circuit can even perform subtractions if you convert the necessary operand
into the "counting-backwards" form. The procedure for doing so is called taking the two's
complement (which, admittedly, requires either an extra cycle or its own adder circuit).
Lastly, a very important reason for utilizing two's complement representation is that it
would be considerably more complex to create a subtraction circuit which would take
0001 - 0010 and give 1001 (i.e. −001) than it is to make one that returns 1111. (Doing the
former means you have to check the sign, then check if there will be a sign reversal, then
possibly rearrange the numbers, and finally subtract. Doing the latter means you simply
subtract, pretending there's an extra left-most bit hiding somewhere.)

In an n-bit binary number, the most significant bit is usually the 2^(n−1)'s place. But in the
two's complement representation, its place value is negated; it becomes the −2^(n−1)'s place
and is called the sign bit.

If the sign bit is 0, the value is positive; if it is 1, the value is negative. To negate a two's
complement number, invert all the bits then add 1 to the result.

If all bits are 1, the value is −1. If the sign bit is 1 but the rest of the bits are 0, the value is
the most negative number, −2^(n−1) for an n-bit number. The absolute value of the most
negative number cannot be represented with the same number of bits, because it is greater
than the most positive two's complement number by exactly 1.

A two's complement 8-bit binary numeral can represent every integer in the range −128
to +127. If the sign bit is 0, then the largest value that can be stored in the remaining
seven bits is 2^7 − 1, or 127.

Using two's complement to represent negative numbers allows only one representation of
zero and allows efficient addition and subtraction, while still having the most significant
bit serve as the sign bit.

Calculating two's complement

In finding the two's complement of a binary number, the bits are inverted, or "flipped", by
using the bitwise NOT operation; the value 1 is then added to the resulting value. Any
carry out of the most significant bit is ignored (this occurs only when taking the two's
complement of zero).

For example, beginning with the signed 8-bit binary representation of the decimal value
5:

        0000 0101 (5)

The first bit is 0, so the value represented is indeed a positive 5. To convert to −5 in two's
complement notation, the bits are inverted; 0 becomes 1, and 1 becomes 0:

        1111 1010

At this point, the numeral is the ones' complement of the decimal value 5. To obtain the
two's complement, 1 is added to the result, giving:
1111 1011 (-5)

The result is a signed binary numeral representing the decimal value −5 in two's
complement form. The most significant bit is 1, so the value is negative.

The two's complement of a negative number is the corresponding positive value. For
example, inverting the bits of −5 (above) gives:

         0000 0100

And adding one gives the final value:

         0000 0101 (5)

The decimal value of a two's complement binary number is calculated by taking the value
of the most significant bit, where the value is negative when the bit is one, and adding to
it the values for each power of two where there is a one. Example:

         1111 1011 (−5) = −128 + 64 + 32 + 16 + 8 + 0 + 2 + 1 = (−2^7 + 2^6 + ...) = −5

Note that the two's complement of zero is zero: inverting gives all ones, and adding one
changes the ones back to zeros (the overflow is ignored). Also the two's complement of
the most negative number representable (e.g. a one as the sign bit and all other bits zero)
is itself. This happens because the most negative number's "positive counterpart" is
occupied by "0", which gets classed as a positive number in this argument. Hence, there
appears to be an 'extra' negative number.
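
The "invert and add 1" rule, together with the zero and most-negative-number cases just
discussed, can be checked with a short sketch (an illustration, not part of the original
notes):

    # Computing an 8-bit two's complement with "invert the bits, then add 1",
    # and decoding a stored byte back to its signed value (illustration only).
    def twos_complement_8bit(x: int) -> int:
        return (~x + 1) & 0xFF          # any carry out of bit 7 is discarded

    def to_signed_8bit(byte: int) -> int:
        return byte - 256 if byte & 0x80 else byte   # bit 7 carries weight -128

    print(format(twos_complement_8bit(0b00000101), "08b"))   # 11111011  (-5)
    print(to_signed_8bit(0b11111011))                         # -5
    print(format(twos_complement_8bit(0), "08b"))             # 00000000  (zero stays zero)
    print(format(twos_complement_8bit(0b10000000), "08b"))    # 10000000  (-128 maps to itself)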

A more formal definition of a two's complement negative number (denoted by N* in this
example) is derived from the equation N* = 2^n − N, where N is the corresponding
positive number and n is the number of bits in the representation.

For example, to find the 4 bit representation of -5:

         N (base 10) = 5, therefore N (base 2) = 0101
         n=4

Hence:

         N* = 2^n − N = [2^4]base 2 − 0101 = 10000 − 0101 = 1011

N.B. You can also think of the equation as being entirely in base 10, converting to base 2
at the end, e.g.:

         N* = 2^n − N = 2^4 − 5 = [11]base 10 = [1011]base 2
Obviously, "N* ... = 11" isn't strictly true but as long as you interpret the equals sign as
"is represented by", it is perfectly acceptable to think of two's complements in this
fashion.

Nevertheless, a shortcut exists when converting a binary number in two's complement
form.

        0011 1100

Converting from right to left, copy all the zeros until the first 1 is reached. Copy down
that 1, and then flip the remaining bits. This allows you to convert to two's complement
without first converting to ones' complement and adding 1 to the result. The two's
complement form of the number above is then:

        1100 0100

Sign extension
Decimal         4-bit two's complement               8-bit two's complement
5               0101                                 0000 0101
-3              1101                                 1111 1101
sign-bit repetition in 4 and 8-bit integers

When turning a two's complement number with a certain number of bits into one with
more bits (e.g., when copying from a one-byte variable to a two-byte variable), the sign bit
must be repeated in all the extra bits.

Some processors can do this in a single instruction. On other processors a conditional
must be used, followed by code to set the relevant bits or bytes.

Similarly, when a two's complement number is shifted to the right, the sign bit must be
maintained. However when shifted to the left, a 0 is shifted in. These rules preserve the
common semantics that left shifts multiply the number by two and right shifts divide the
number by two.

Both shifting and doubling the precision are important for some multiplication
algorithms. Note that unlike addition and subtraction, precision extension and right
shifting are done differently for signed vs unsigned numbers.
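
In C, sign extension happens automatically when a narrower signed type is assigned to a
wider one, and right shifts of negative values behave as arithmetic shifts on typical two's
complement machines (strictly speaking this is implementation-defined, and left-shifting a
negative value is undefined in the standard, though common compilers behave as shown).
A small sketch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  narrow = -3;                    /* 1111 1101 in 8 bits                  */
    int16_t wide   = narrow;                /* sign-extended: 1111 1111 1111 1101   */
    printf("%d %d\n", narrow, wide);        /* both print -3 */

    int8_t doubled = (int8_t)(narrow << 1); /* 1111 1010 = -6: a 0 is shifted in    */
    int8_t halved  = (int8_t)(narrow >> 1); /* 1111 1110 = -2: sign bit preserved   */
    printf("%d %d\n", doubled, halved);
    return 0;
}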

The weird number

With only one exception, when we start with any number in two's complement
representation, if we flip all the bits and add 1, we get the two's complement
representation of the negative of that number. Negative 12 becomes positive 12, positive
5 becomes negative 5, zero becomes zero, etc.
−128                                           1000 0000
invert bits                                    0111 1111
add one                                        1000 0000
The two's complement of -128 results in the same 8-bit binary number.

The most negative number in two's complement is sometimes called "the weird number"
because it is the only exception.

The two's complement of the minimum number in the range will not have the desired
effect of negating the number. For example, the two's complement of -128 results in the
same binary number. This is because a positive value of 128 cannot be represented with
an 8-bit signed binary numeral. Note that this is detected as an overflow condition since
there was a carry into but not out of the sign bit.

Although the number is weird, it is a valid number. All arithmetic operations work with it
both as an operand and (unless there was an overflow) a result.
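
A short C sketch of the same point (the narrowing conversion is implementation-defined,
but on ordinary two's complement machines it wraps as shown):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t weird   = -128;                  /* 1000 0000                        */
    int8_t negated = (int8_t)(~weird + 1);  /* invert the bits and add one      */
    printf("%d\n", negated);                /* prints -128 again: +128 does not fit in 8 bits */
    return 0;
}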

Why it works

The 2^n possible values of n bits actually form a ring of equivalence classes, namely the
integers modulo 2^n, Z/(2^n)Z. Each class represents a set {j + k·2^n | k is an integer} for
some integer j, 0 ≤ j ≤ 2^n − 1. There are 2^n such sets, and addition and multiplication are
well-defined on them.

If the classes are taken to represent the numbers 0 to 2^n − 1, and overflow ignored, then
these are the unsigned integers. But each of these numbers is equivalent to itself minus
2^n. So the classes could be understood to represent −2^(n−1) to 2^(n−1) − 1, by subtracting 2^n
from half of them (specifically [2^(n−1), 2^n − 1]).

For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the
top half (128 to 255) yields the signed bytes −128 to 127.

The relationship to two's complement is realised by noting that 256 = 255 + 1, and
(255 − x) is the ones' complement of x.

Decimal                       Two's complement
127                           0111 1111
64                            0100 0000
1                             0000 0001
0                             0000 0000
-1                            1111 1111
-64                           1100 0000
-127                           1000 0001
-128                           1000 0000
Some special numbers to note

Example

−95 modulo 256 is equivalent to 161 since

       −95 + 256
       = −95 + 255 + 1
       = 255 − 95 + 1
       = 160 + 1
       = 161
  1111 1111                           255
− 0101 1111                         − 95
===========                         =====
  1010 0000      (ones' complement)   160
+         1                         +   1
===========                         =====
  1010 0001      (two's complement)   161

Arithmetic operations
Addition

Adding two's complement numbers requires no special processing even if the operands have
opposite signs: the sign of the result is determined automatically. For example, adding 15
and -5:

 11111 111   (carry)
  0000 1111 (15)
+ 1111 1011 (-5)
==================
  0000 1010 (10)

This process depends upon restricting to 8 bits of precision; a carry to the (nonexistent)
9th most significant bit is ignored, resulting in the arithmetically correct result of 10.

The last two bits of the carry row (reading right-to-left) contain vital information:
whether the calculation resulted in an arithmetic overflow, a number too large for the
binary system to represent (in this case greater than 8 bits). An overflow condition exists
when a carry (an extra 1) is generated into but not out of the far left sign bit, or out of but
not into the sign bit. As mentioned above, the sign bit is the leftmost bit of the result.

In other terms, if the last two carry bits (the ones on the far left of the top row in these
examples) are both 1's or 0's, the result is valid; if the last two carry bits are "1 0" or "0
1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can
quickly determine if an overflow condition exists. As an example, consider the 4-bit
addition of 7 and 3:

 0111   (carry)
  0111 (7)
+ 0011 (3)
=============
  1010 (−6) invalid!

In this case, the far left two (MSB) carry bits are "01", which means there was a two's
complement addition overflow. That is, ten is outside the permitted range of −8 to 7.
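
The same test can be written in C by reconstructing the two carries explicitly. This is a
sketch for 8-bit operands; the helper name add_overflows is an assumption made for the
example:

#include <stdio.h>
#include <stdint.h>

int add_overflows(int8_t a, int8_t b)
{
    uint8_t ua = (uint8_t)a, ub = (uint8_t)b;
    /* carry into the sign bit = carry out of the low 7 bits */
    unsigned carry_in  = ((ua & 0x7F) + (ub & 0x7F)) >> 7;
    /* carry out of the sign bit = 9th bit of the full 8-bit sum */
    unsigned carry_out = ((unsigned)ua + (unsigned)ub) >> 8;
    return (carry_in ^ carry_out) & 1;       /* XOR of the two carry bits */
}

int main(void)
{
    printf("%d\n", add_overflows(15, -5));   /* 0: 10 is representable      */
    printf("%d\n", add_overflows(127, 1));   /* 1: 128 overflows 8 bits     */
    return 0;
}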

Subtraction

Computers usually use the method of complements to implement subtraction. But
although using complements for subtraction is related to using complements for
representing signed numbers, they are independent; direct subtraction works with two's
complement numbers as well. Like addition, the advantage of using two's complement is
the elimination of examining the signs of the operands to determine if addition or
subtraction is needed. For example, subtracting -5 from 15 is really adding 5 to 15, but
this is hidden by the two's complement representation:

 11110 000      (borrow)
  0000 1111     (15)
− 1111 1011     (−5)
===========
  0001 0100     (20)

Overflow is detected the same way as for addition, by examining the two leftmost (most
significant) bits of the borrows; overflow occurred if they are different.

Another example is a subtraction operation where the result is negative: 15 − 35 = −20:

 11100 000      (borrow)
  0000 1111     (15)
− 0010 0011     (35)
===========
  1110 1100     (−20)

Multiplication

The product of two n-bit numbers can potentially have 2n bits. If the precision of the two
two's complement operands is doubled before the multiplication, direct multiplication
(discarding any excess bits beyond that precision) will provide the correct result. For
example, take 5 × −6 = −30. First, the precision is extended from 4 bits to 8. Then the
numbers are multiplied, discarding the bits beyond 8 (shown by 'x'):

  00000101    (5)
× 11111010    (−6)
 =========
0
      101
       0
    101
   101
  101
 x01
xx1
=========
xx11100010     (−30)

This is very inefficient; by doubling the precision ahead of time, all additions must be
double-precision and at least twice as many partial products are needed as for the more
efficient algorithms actually implemented in computers. Some multiplication algorithms
are designed for two's complement, notably Booth's algorithm. Methods for multiplying
sign-magnitude numbers don't work with two's complement numbers without adaptation.
There isn't usually a problem when the multiplicand (the one being repeatedly added to
form the product) is negative; the issue is setting the initial bits of the product correctly
when the multiplier is negative. Two methods for adapting algorithms to handle two's
complement numbers are common:

   •   First check to see if the multiplier is negative. If so, negate (i.e., take the two's
       complement of) both operands before multiplying. The multiplier will then be
       positive so the algorithm will work. And since both operands are negated, the
       result will still have the correct sign.

   •   Subtract the partial product resulting from the sign bit instead of adding it like the
       other partial products.

As an example of the second method, take the common add-and-shift algorithm for
multiplication. Instead of shifting partial products to the left as is done with pencil and
paper, the accumulated product is shifted right, into a second register that will eventually
hold the least significant half of the product. Since the least significant bits are not
changed once they are calculated, the additions can be single precision, accumulating in
the register that will eventually hold the most significant half of the product. In the
following example, again multiplying 5 by −6, the two registers are separated by "|":

 0101 (5)
×1010 (−6)
 ====|====
 0000|0000     (first partial product (rightmost bit is 0))
 0000|0000     (shift right)
 0101|0000     (add second partial product (next bit is 1))
 0010|1000     (shift right)
 0010|1000     (add third partial product: 0 so no change)
 0001|0100     (shift right)
 1100|0100     (subtract last partial product since it's from sign bit)
 1110|0010     (shift right, preserving sign bit, giving the final answer,
−30)
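
A rough C sketch of this add-and-shift scheme follows (the names multiply4 and acc are
assumptions made for the example; it mirrors the worked 4-bit case above rather than
being a fully general multiplier):

#include <stdio.h>
#include <stdint.h>

int8_t multiply4(uint8_t m, uint8_t q)       /* 4-bit two's complement patterns */
{
    uint8_t acc = 0;                         /* high nibble | low nibble */
    for (int i = 0; i < 4; i++) {
        if ((q >> i) & 1) {
            uint8_t partial = (uint8_t)(m << 4);        /* align with the high half */
            acc = (i == 3) ? (uint8_t)(acc - partial)   /* sign bit: subtract */
                           : (uint8_t)(acc + partial);  /* other bits: add    */
        }
        acc = (uint8_t)((acc >> 1) | (acc & 0x80));     /* shift right, keep sign */
    }
    return (int8_t)acc;
}

int main(void)
{
    printf("%d\n", multiply4(0x5, 0xA));     /* 0101 x 1010 prints -30 */
    return 0;
}
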
Memory hierarchy
The hierarchical arrangement of storage in current computer architectures is called the
memory hierarchy. It is designed to take advantage of memory locality in computer
programs. Each level of the hierarchy is of higher speed and lower latency, and is of
smaller size, than lower levels.

Most modern CPUs are so fast that for most program workloads the locality of reference
of memory accesses, and the efficiency of the caching and memory transfer between
different levels of the hierarchy, is the practical limitation on processing speed. As a
result, the CPU spends much of its time idling, waiting for memory I/O to complete.

The memory hierarchy in most computers is as follows:

   •   Processor registers – fastest possible access (usually 1 CPU cycle), only hundreds
       of bytes in size
   •   Level 1 (L1) cache – often accessed in just a few cycles, usually tens of kilobytes
   •   Level 2 (L2) cache – higher latency than L1 by 2× to 10×, often 512 KiB or more
   •   Level 3 (L3) cache – (optional) higher latency than L2, often several MiB
   •   Main memory (DRAM) – may take hundreds of cycles, but can be multiple
       gigabytes. Access times may not be uniform, in the case of a NUMA machine.
   •   Disk storage – hundreds of thousands of cycles latency, but very large
   •   Tertiary storage – tape, optical disk (WORM)
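
The practical effect of the hierarchy is that programs which touch memory with good
locality run much faster. A minimal C sketch (the array size and the claim that the
row-major loop is noticeably faster are assumptions about a typical cached machine):

#include <stdio.h>
#include <stddef.h>

#define N 1024
static double a[N][N];                       /* 8 MiB: far larger than typical caches */

double sum_rows(void)                        /* stride of one element: cache-friendly */
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

double sum_cols(void)                        /* stride of one whole row: many cache misses */
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    printf("%f %f\n", sum_rows(), sum_cols());  /* same result, very different speed */
    return 0;
}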

Virtual memory
The memory pages of the virtual address space seen by a process may reside non-
contiguously in primary, or even secondary, storage.

Virtual memory or virtual memory addressing is a memory management technique,
used by computer operating systems (most commonly multitasking OSes), in which non-
contiguous physical memory is presented to software (i.e., a process) as contiguous
memory. This contiguous memory is referred to as the virtual address space.

Virtual memory addressing is typically used in paged memory systems. This in turn is
often combined with memory swapping (also known as anonymous memory paging),
whereby memory pages stored in primary storage are written to secondary storage (often
to a swap file or swap partition), thus freeing faster primary storage for other processes to
use.

In technical terms, virtual memory allows software to run in a memory address space
whose size and addressing are not necessarily tied to the computer's physical memory. To
properly implement virtual memory the CPU (or a device attached to it) must provide a
way for the operating system to map virtual memory to physical memory and for it to
detect when an address is required that does not currently relate to main memory so that
the needed data can be swapped in. While it would certainly be possible to provide virtual
memory without the CPU's assistance it would essentially require emulating a CPU that
did provide the needed features.

Background

Most computers possess four kinds of memory: registers in the CPU; CPU caches
(generally some kind of static RAM) both inside and adjacent to the CPU; main memory
(generally dynamic RAM), which the CPU can read and write directly and reasonably
quickly; and disk storage, which is much slower but much larger. CPU register use is
generally handled by the compiler (and, if preemptive multitasking is in use, registers are
saved and restored by the operating system on context switches); this isn't a huge burden,
as registers are small in number and data doesn't generally stay in them very long. The
decision of when to use cache and when to use main memory is generally made by
hardware, so the programmer regards both together as simply physical memory.

Many applications require access to more information (code as well as data) than can be
stored in physical memory. This is especially true when the operating system allows
multiple processes/applications to run seemingly in parallel. The obvious response to the
problem of the maximum size of the physical memory being less than that required for all
running programs is for the application to keep some of its information on the disk, and
move it back and forth to physical memory as needed, but there are a number of ways to
do this.

One option is for the application software itself to be responsible both for deciding which
information is to be kept where, and also for moving it back and forth. The programmer
would do this by determining which sections of the program (and also its data) were
mutually exclusive, and then arranging for loading and unloading the appropriate sections
from physical memory, as needed. The disadvantage of this approach is that each
application's programmer must spend time and effort on designing, implementing, and
debugging this mechanism, instead of focusing on his or her application; this hampers
programmers' efficiency. Also, if any programmer could truly choose which of their
items of data to store in the physical memory at any one time, they could easily conflict
with the decisions made by another programmer, who also wanted to use all the available
physical memory at that point.

Another option is to store some form of handles to data rather than direct pointers, and let
the OS deal with swapping the data associated with those handles between the swap area
and physical memory as needed. This works, but it has a couple of problems: it
complicates application code, it requires applications to play nice (they generally need the
power to lock data into physical memory to actually work on it), and it stops the
language's standard library from doing its own suballocations inside large blocks obtained
from the OS to improve performance. The best-known example of this kind of
arrangement is probably the 16-bit versions of Windows.

The modern solution is to use virtual memory, in which a combination of special
hardware and operating system software makes use of both kinds of memory to make it
look as if the computer has a much larger main memory than it actually does and to lay
that space out differently at will. It does this in a way that is invisible to the rest of the
software running on the computer. It usually provides the ability to simulate a main
memory of almost any size. (In practice there is a limit imposed by the size of the
addresses: for a 32-bit system, the total size of the virtual address space is 2^32 bytes, or
approximately 4 gigabytes; for the newer 64-bit chips and operating systems that use 64-
or 48-bit addresses, it can be much higher. Many operating systems do not allow the
entire address space to be used by applications, in order to simplify kernel access to
application memory, but this is not a hard design requirement.)

Virtual memory makes the job of the application programmer much simpler. No matter
how much memory the application needs, it can act as if it has access to a main memory
of that size and can place its data wherever in that virtual space that it likes. The
programmer can also completely ignore the need to manage the moving of data back and
forth between the different kinds of memory. That said, if the programmer cares about
performance when working with large volumes of data, he or she needs to keep accesses
localised to nearby blocks in order to avoid unnecessary swapping.

Paging

Virtual memory is usually (but not necessarily) implemented using paging. In paging, the
low order bits of the binary representation of the virtual address are preserved, and used
directly as the low order bits of the actual physical address; the high order bits are treated
as a key to one or more address translation tables, which provide the high order bits of the
actual physical address.
For this reason a range of consecutive addresses in the virtual address space whose size is
a power of two will be translated into a corresponding range of consecutive physical
addresses. The memory referenced by such a range is called a page. The page size is
typically in the range of 512 to 8192 bytes (with 4K currently being very common),
though page sizes of 4 megabytes or larger may be used for special purposes. (Using the
same or a related mechanism, contiguous regions of virtual memory larger than a page
are often mappable to contiguous physical memory for purposes other than virtualization,
such as setting access and caching control bits.)

The operating system stores the address translation tables, the mappings from virtual to
physical page numbers, in a data structure known as a page table.
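
A minimal C sketch of this split, assuming 32-bit virtual addresses, a 4 KiB page size
(12 offset bits) and a toy one-level page table held in an array (all of these are
assumptions made purely for the illustration):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)

uint32_t translate(uint32_t vaddr, const uint32_t *page_table)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* high bits: virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* low bits: kept unchanged       */
    uint32_t frame  = page_table[vpn];           /* table supplies the high bits   */
    return (frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    static uint32_t page_table[16] = { [3] = 7 };   /* map virtual page 3 to frame 7 */
    uint32_t vaddr = (3u << PAGE_SHIFT) | 0x123;    /* offset 0x123 inside page 3    */
    printf("0x%08X\n", (unsigned)translate(vaddr, page_table)); /* 0x00007123 */
    return 0;
}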

If a page is marked as unavailable (perhaps because it is not present in physical
memory, but instead is in the swap area), then when the CPU tries to reference a memory
location in that page, the MMU responds by raising an exception (commonly called a
page fault), and the CPU jumps to a routine in the operating system. If the
page is in the swap area, this routine invokes an operation called a page swap to bring in
the required page.

The page swap operation involves a series of steps. First it selects a page in memory, for
example, a page that has not been recently accessed and (preferably) has not been
modified since it was last read from disk or the swap area. (See page replacement
algorithms for details.) If the page has been modified, the process writes the modified
page to the swap area. The next step in the process is to read in the information in the
needed page (the page corresponding to the virtual address the original program was
trying to reference when the exception occurred) from the swap file. When the page has
been read in, the tables for translating virtual addresses to physical addresses are updated
to reflect the revised contents of the physical memory. Once the page swap completes, the
routine exits, the program is restarted at the point that caused the exception, and it
continues as if nothing had happened.

It is also possible that a virtual page was marked as unavailable because the page was
never previously allocated. In such cases, a page of physical memory is allocated and
filled with zeros, the page table is modified to describe it, and the program is restarted as
above.

Details

The translation from virtual to physical addresses is implemented by an MMU (Memory
Management Unit). This may be either a module of the CPU, or an auxiliary, closely
coupled chip.

The operating system is responsible for deciding which parts of the program's simulated
main memory are kept in physical memory. The operating system also maintains the
translation tables which provide the mappings between virtual and physical addresses, for
use by the MMU. Finally, when a virtual memory exception occurs, the operating system
is responsible for allocating an area of physical memory to hold the missing information
(and possibly in the process pushing something else out to disk), bringing the relevant
information in from the disk, updating the translation tables, and finally resuming
execution of the software that incurred the virtual memory exception.

In most computers, these translation tables are stored in physical memory. Therefore, a
virtual memory reference might actually involve two or more physical memory
references: one or more to retrieve the needed address translation from the page tables,
and a final one to actually do the memory reference.

To minimize the performance penalty of address translation, most modern CPUs include
an on-chip MMU, and maintain a table of recently used virtual-to-physical translations,
called a Translation Lookaside Buffer, or TLB. Addresses with entries in the TLB require
no additional memory references (and therefore time) to translate. However, the TLB can
only maintain a fixed number of mappings between virtual and physical addresses; when
the needed translation is not resident in the TLB, action will have to be taken to load it in.

On some processors, this is performed entirely in hardware; the MMU has to do
additional memory references to load the required translations from the translation tables,
but no other action is needed. In other processors, assistance from the operating system is
needed; an exception is raised, and on this exception, the operating system replaces one
of the entries in the TLB with an entry from the translation table, and the instruction
which made the original memory reference is restarted.

The hardware that supports virtual memory almost always supports memory protection
mechanisms as well. The MMU may have the ability to vary its operation according to
the type of memory reference (for read, write or execution), as well as the privilege mode
of the CPU at the time the memory reference was made. This allows the operating system
to protect its own code and data (such as the translation tables used for virtual memory)
from corruption by an erroneous application program and to protect application programs
from each other and (to some extent) from themselves (e.g. by preventing writes to areas
of memory which contain code).

History

Before the development of the virtual memory technique, programmers in the 1940s and
1950s had to manage two-level storage (main memory or RAM, and secondary memory
in the form of hard disks or earlier, magnetic drums) directly.

Virtual memory was developed in approximately 1959 - 1962, at the University of
Manchester for the Atlas Computer, completed in 1962. However, Fritz-Rudolf Güntsch,
one of Germany's pioneering computer scientists and later the developer of the
Telefunken TR 440 mainframe, claims to have invented the concept in his doctoral
dissertation Logischer Entwurf eines digitalen Rechengerätes mit mehreren asynchron
laufenden Trommeln und automatischem Schnellspeicherbetrieb (Logic Concept of a
Digital Computing Device with Multiple Asynchronous Drum Storage and Automatic
Fast Memory Mode) in 1957.

In 1961, Burroughs released the B5000, the first commercial computer with virtual
memory.

Like many technologies in the history of computing, virtual memory was not accepted
without challenge. Before it could be regarded as a stable entity, many models,
experiments, and theories had to be developed to overcome the numerous problems with
virtual memory. Specialized hardware had to be developed that would take a "virtual"
address and translate it into an actual physical address in memory (secondary or primary).
Some worried that this process would be expensive, hard to build, and take too much
processor power to do the address translation.

By 1969 the debates over virtual memory for commercial computers were over. An IBM
research team, led by David Sayre, showed that the virtual memory overlay system worked
consistently better than the best manually controlled systems.

Possibly the first minicomputer to introduce virtual memory was the Norwegian NORD-1.
During the 1970s, other minicomputers, such as the VAX models running VMS,
implemented virtual memory.

Virtual memory was introduced to the x86 architecture with the protected mode of the
Intel 80286 processor. At first it was done with segment swapping, which becomes
inefficient as segments get larger. The Intel 80386 added support for paging, which lies
underneath segmentation. Its page fault exception could be chained with other exceptions
without causing a double fault.
Compilers

A diagram of the operation of a typical multi-language, multi-target compiler.

A compiler is a computer program (or set of programs) that translates text written in a
computer language (the source language) into another computer language (the target
language). The original sequence is usually called the source code and the output called
object code. Commonly the output has a form suitable for processing by other programs
(e.g., a linker), but it may be a human readable text file.

The most common reason for wanting to translate source code is to create an executable
program. The name "compiler" is primarily used for programs that translate source code
from a high level language to a lower level language (e.g., assembly language or machine
language). A program that translates from a low level language to a higher level one is a
decompiler. A program that translates between high-level languages is usually called a
language translator, source to source translator, or language converter. A language
rewriter is usually a program that translates the form of expressions without a change of
language.

A compiler is likely to perform many or all of the following operations: lexing,
preprocessing, parsing, semantic analysis, code optimization, and code generation.
Linker

Figure of the linking process, where object files and static libraries are assembled into a
new library or executable.

In computer science, a linker or link editor is a program that takes one or more objects
generated by compilers and assembles them into a single executable program.

In IBM mainframe environments such as OS/360 this program is known as a linkage
editor.

(On Unix variants the term loader is often used as a synonym for linker. Because this
usage blurs the distinction between the compile-time process and the run-time process,
this article will use linking for the former and loading for the latter.)

The objects are program modules containing machine code and information for the
linker. This information comes mainly in the form of symbol definitions, which come in
two varieties:

   •   Defined or exported symbols are functions or variables that are present in the
       module represented by the object, and which should be available for use by other
       modules.
   •   Undefined or imported symbols are functions or variables that are called or
       referenced by this object, but not internally defined.

In short, the linker's job is to resolve references to undefined symbols by finding out
which other object defines a symbol in question, and replacing placeholders with the
symbol's address.
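
As a small illustration (the file names and the function answer are hypothetical), the two
kinds of symbol look like this in C; each file is compiled to its own object, and the linker
patches the call in main.o with the address exported by answer.o:

/* answer.c -- defines (exports) the symbol answer */
int answer(void) { return 42; }

/* main.c -- references (imports) answer; the symbol stays undefined
 * in main.o until the linker resolves it against answer.o */
#include <stdio.h>

extern int answer(void);

int main(void)
{
    printf("%d\n", answer());
    return 0;
}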

Linkers can take objects from a collection called a library. Some linkers do not include
the whole library in the output; they only include its symbols that are referenced from
other object files or libraries. Libraries for diverse purposes exist, and one or more system
libraries are usually linked in by default.
The linker also takes care of arranging the objects in a program's address space. This may
involve relocating code that assumes a specific base address to another base. Since a
compiler seldom knows where an object will reside, it often assumes a fixed base
location (for example, zero). Relocating machine code may involve re-targeting of
absolute jumps, loads and stores.

The executable output by the linker may need another relocation pass when it is finally
loaded into memory (just before execution). On hardware offering virtual memory this is
usually omitted, though—every program is put into its own address space, so there is no
conflict even if all programs load at the same base address.

Assembler

Typically a modern assembler creates object code by translating assembly instruction
mnemonics into opcodes, and by resolving symbolic names for memory locations and
other entities. The use of symbolic references is a key feature of assemblers, saving
tedious calculations and manual address updates after program modifications. Most
assemblers also include macro facilities for performing textual substitution — e.g. to
generate common short sequences of instructions to run inline, instead of in a subroutine.

Assemblers are generally simpler to write than compilers for high-level languages, and
have been available since the 1950s. (The first assemblers, in the early days of
computers, were a breakthrough for a generation of tired programmers.) Modern
assemblers, especially for RISC based architectures, such as MIPS, Sun SPARC and HP
PA-RISC, optimize instruction scheduling to exploit the CPU pipeline efficiently.

More sophisticated high-level assemblers provide language abstractions such as:

   •   Advanced control structures
   •   High-level procedure/function declarations and invocations
   •   High-level abstract data types, including structures/records, unions, classes, and
       sets
   •   Sophisticated macro processing

Note that, in normal professional usage, the term assembler is often used ambiguously: It
is frequently used to refer to an assembly language itself, rather than to the assembler
utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a
widely-used S/370 assembler."
The C Compilation Model
     We will briefly highlight key features of the C Compilation model here.
The Preprocessor

We will study this part of the compilation process in greater detail later (Chapter 13).
However, we need some basic information for some C programs.

The Preprocessor accepts source code as input and is responsible for

       •       removing comments
       •       interpreting special preprocessor directives denoted by #.

For example

       •       #include -- includes the contents of a named file. Such files are usually
       called header files, e.g.
               o     #include <math.h> -- standard library maths file.
               o     #include <stdio.h> -- standard library I/O file
       •       #define -- defines a symbolic name or constant. Macro substitution.
               o     #define MAX_ARRAY_SIZE 100
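
A small complete program using both directives (the array and loop are purely
illustrative; MAX_ARRAY_SIZE is the constant defined above):

#include <stdio.h>

#define MAX_ARRAY_SIZE 100

int main(void)
{
    int data[MAX_ARRAY_SIZE];               /* the preprocessor substitutes 100 here */
    for (int i = 0; i < MAX_ARRAY_SIZE; i++)
        data[i] = i;
    printf("last element: %d\n", data[MAX_ARRAY_SIZE - 1]);
    return 0;
}

After preprocessing, comments are removed and the name MAX_ARRAY_SIZE no longer
appears; the compiler proper sees only the substituted text.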

C Compiler

The C compiler translates source to assembly code. The source code is received from the
preprocessor.

Assembler

The assembler creates object code. On a UNIX system you may see files with a .o suffix
(.OBJ on MSDOS) to indicate object code files.

Link Editor

If a source file references library functions or functions defined in other source files, the
link editor combines these functions (with main()) to create an executable file. External
variable references are also resolved here. More on this later (Chapter 34).





Más contenido relacionado

La actualidad más candente

Dfa guidelines
Dfa guidelinesDfa guidelines
Dfa guidelineskajavarun
 
Oil contamination
Oil contaminationOil contamination
Oil contaminationRajan David
 
Abrasive water jet AWJ
Abrasive water jet AWJAbrasive water jet AWJ
Abrasive water jet AWJGopinath Guru
 
Non-Traditional Machining Process (UCMP)
Non-Traditional Machining Process (UCMP)Non-Traditional Machining Process (UCMP)
Non-Traditional Machining Process (UCMP)S. Sathishkumar
 
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool Materials
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool MaterialsUnit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool Materials
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool MaterialsMechbytes
 
Solar power lawn mower
Solar power lawn mowerSolar power lawn mower
Solar power lawn mowerJatinder Kumar
 
Abrasive machining ppt_mfg_chapter26_final
Abrasive machining ppt_mfg_chapter26_finalAbrasive machining ppt_mfg_chapter26_final
Abrasive machining ppt_mfg_chapter26_finalSanjay Nayee
 
Manufacturing Processes of Engine Blocks
Manufacturing Processes of Engine BlocksManufacturing Processes of Engine Blocks
Manufacturing Processes of Engine BlocksSandeep Saini
 
COMPUTER AIDED PROCESS PLANNING (CAPP)
COMPUTER AIDED PROCESS PLANNING (CAPP)COMPUTER AIDED PROCESS PLANNING (CAPP)
COMPUTER AIDED PROCESS PLANNING (CAPP)Victor Al
 
Cnc, dnc & adaptive control
Cnc, dnc & adaptive controlCnc, dnc & adaptive control
Cnc, dnc & adaptive controlparabajinkya0070
 
laboratory report on manufacturing lab -1
laboratory report on manufacturing lab -1laboratory report on manufacturing lab -1
laboratory report on manufacturing lab -1Sunith Guraddi
 
Fabrication of abrasive belt grinder saravanan
Fabrication of abrasive belt grinder   saravananFabrication of abrasive belt grinder   saravanan
Fabrication of abrasive belt grinder saravanandinnusara
 
Ultrasonic Machining Process
Ultrasonic Machining ProcessUltrasonic Machining Process
Ultrasonic Machining ProcessPraveenManickam2
 
Production engineering
Production engineeringProduction engineering
Production engineeringSTAY CURIOUS
 

La actualidad más candente (20)

Dfa guidelines
Dfa guidelinesDfa guidelines
Dfa guidelines
 
Oil contamination
Oil contaminationOil contamination
Oil contamination
 
Components of CIM Systems
Components of CIM SystemsComponents of CIM Systems
Components of CIM Systems
 
Ucm comparison
Ucm comparisonUcm comparison
Ucm comparison
 
Abrasive water jet AWJ
Abrasive water jet AWJAbrasive water jet AWJ
Abrasive water jet AWJ
 
Non-Traditional Machining Process (UCMP)
Non-Traditional Machining Process (UCMP)Non-Traditional Machining Process (UCMP)
Non-Traditional Machining Process (UCMP)
 
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool Materials
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool MaterialsUnit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool Materials
Unit 2 Machinability, Cutting Fluids, Tool Life & Wear, Tool Materials
 
Solar power lawn mower
Solar power lawn mowerSolar power lawn mower
Solar power lawn mower
 
Abrasive machining ppt_mfg_chapter26_final
Abrasive machining ppt_mfg_chapter26_finalAbrasive machining ppt_mfg_chapter26_final
Abrasive machining ppt_mfg_chapter26_final
 
Casting process
Casting processCasting process
Casting process
 
Manufacturing Processes of Engine Blocks
Manufacturing Processes of Engine BlocksManufacturing Processes of Engine Blocks
Manufacturing Processes of Engine Blocks
 
COMPUTER AIDED PROCESS PLANNING (CAPP)
COMPUTER AIDED PROCESS PLANNING (CAPP)COMPUTER AIDED PROCESS PLANNING (CAPP)
COMPUTER AIDED PROCESS PLANNING (CAPP)
 
Cnc, dnc & adaptive control
Cnc, dnc & adaptive controlCnc, dnc & adaptive control
Cnc, dnc & adaptive control
 
laboratory report on manufacturing lab -1
laboratory report on manufacturing lab -1laboratory report on manufacturing lab -1
laboratory report on manufacturing lab -1
 
WATER JET CUTTING
WATER JET CUTTINGWATER JET CUTTING
WATER JET CUTTING
 
Fabrication of abrasive belt grinder saravanan
Fabrication of abrasive belt grinder   saravananFabrication of abrasive belt grinder   saravanan
Fabrication of abrasive belt grinder saravanan
 
Ultrasonic Machining Process
Ultrasonic Machining ProcessUltrasonic Machining Process
Ultrasonic Machining Process
 
R&D Project
R&D ProjectR&D Project
R&D Project
 
Production engineering
Production engineeringProduction engineering
Production engineering
 
4.patterns
4.patterns4.patterns
4.patterns
 

Destacado

Design and Manufacturing: Frequently Asked Questions from AMIE Exams
Design and Manufacturing: Frequently Asked Questions from AMIE ExamsDesign and Manufacturing: Frequently Asked Questions from AMIE Exams
Design and Manufacturing: Frequently Asked Questions from AMIE ExamsAMIE(I) Study Circle
 
Material science notes
Material science notesMaterial science notes
Material science notesntrnbk
 
Material Science: Frequently Asked Questions in AMIE Exams
Material Science: Frequently Asked Questions in AMIE ExamsMaterial Science: Frequently Asked Questions in AMIE Exams
Material Science: Frequently Asked Questions in AMIE ExamsAMIE(I) Study Circle
 
Environment And Society
Environment And SocietyEnvironment And Society
Environment And Societymlneal
 
A minimization approach for two level logic synthesis using constrained depth...
A minimization approach for two level logic synthesis using constrained depth...A minimization approach for two level logic synthesis using constrained depth...
A minimization approach for two level logic synthesis using constrained depth...IAEME Publication
 
Input and Output Devices.
Input and Output Devices.Input and Output Devices.
Input and Output Devices.Varun Gupta
 
Difference Between Emulation & Simulation
Difference Between Emulation & SimulationDifference Between Emulation & Simulation
Difference Between Emulation & Simulationcatchanil1989
 
Decision Making and Information Systems
Decision Making and  Information SystemsDecision Making and  Information Systems
Decision Making and Information SystemsAriful Saimon
 
Trend and Future of Cloud Computing
Trend and Future of Cloud ComputingTrend and Future of Cloud Computing
Trend and Future of Cloud Computinghybrid cloud
 
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational Circuits
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational CircuitsCOMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational Circuits
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational CircuitsVanitha Chandru
 
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.Satya P. Joshi
 
Visual Note Taking / Sketchnotes
Visual Note Taking / SketchnotesVisual Note Taking / Sketchnotes
Visual Note Taking / SketchnotesEva-Lotta Lamm
 

Destacado (20)

Design and Manufacturing: Frequently Asked Questions from AMIE Exams
Design and Manufacturing: Frequently Asked Questions from AMIE ExamsDesign and Manufacturing: Frequently Asked Questions from AMIE Exams
Design and Manufacturing: Frequently Asked Questions from AMIE Exams
 
Material science notes
Material science notesMaterial science notes
Material science notes
 
Material Science: Frequently Asked Questions in AMIE Exams
Material Science: Frequently Asked Questions in AMIE ExamsMaterial Science: Frequently Asked Questions in AMIE Exams
Material Science: Frequently Asked Questions in AMIE Exams
 
Environment And Society
Environment And SocietyEnvironment And Society
Environment And Society
 
Society and Environment:
Society and Environment: Society and Environment:
Society and Environment:
 
A minimization approach for two level logic synthesis using constrained depth...
A minimization approach for two level logic synthesis using constrained depth...A minimization approach for two level logic synthesis using constrained depth...
A minimization approach for two level logic synthesis using constrained depth...
 
Karnaugh
KarnaughKarnaugh
Karnaugh
 
Input and Output Devices.
Input and Output Devices.Input and Output Devices.
Input and Output Devices.
 
Difference Between Emulation & Simulation
Difference Between Emulation & SimulationDifference Between Emulation & Simulation
Difference Between Emulation & Simulation
 
05a
05a05a
05a
 
Unit 1(stld)
Unit 1(stld)Unit 1(stld)
Unit 1(stld)
 
Sodc 1 Introduction
Sodc 1 IntroductionSodc 1 Introduction
Sodc 1 Introduction
 
Decision Making and Information Systems
Decision Making and  Information SystemsDecision Making and  Information Systems
Decision Making and Information Systems
 
Need analysis & design
Need analysis & designNeed analysis & design
Need analysis & design
 
Computer languages 11
Computer languages 11Computer languages 11
Computer languages 11
 
Trend and Future of Cloud Computing
Trend and Future of Cloud ComputingTrend and Future of Cloud Computing
Trend and Future of Cloud Computing
 
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational Circuits
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational CircuitsCOMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational Circuits
COMPUTER ORGANIZATION - Logic gates, Boolean Algebra, Combinational Circuits
 
Computer languages
Computer languagesComputer languages
Computer languages
 
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.
Logic gates - AND, OR, NOT, NOR, NAND, XOR, XNOR Gates.
 
Visual Note Taking / Sketchnotes
Visual Note Taking / SketchnotesVisual Note Taking / Sketchnotes
Visual Note Taking / Sketchnotes
 

Similar a Computing and informatics class notes for amie

2.Introduction to Network Devices.ppt
2.Introduction to Network Devices.ppt2.Introduction to Network Devices.ppt
2.Introduction to Network Devices.pptjaba kumar
 
Network essentials chapter 3
Network essentials  chapter 3Network essentials  chapter 3
Network essentials chapter 3Raghu nath
 
Network essentials chapter 4
Network essentials  chapter 4Network essentials  chapter 4
Network essentials chapter 4Raghu nath
 
Introduction to TCP / IP model
Introduction to TCP / IP modelIntroduction to TCP / IP model
Introduction to TCP / IP modelssuserb4996d
 
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...Soumen Santra
 
Networking presentation
Networking presentationNetworking presentation
Networking presentationGajan Hai
 
Data communication class note 1
Data communication class note 1Data communication class note 1
Data communication class note 1Prosanta Mazumder
 
Computer networks--networking hardware
Computer networks--networking hardwareComputer networks--networking hardware
Computer networks--networking hardwareokelloerick
 
Assignment E-Commerce By IHTISHAM AHMAD.docx
Assignment E-Commerce By IHTISHAM AHMAD.docxAssignment E-Commerce By IHTISHAM AHMAD.docx
Assignment E-Commerce By IHTISHAM AHMAD.docxIhtishamAhmad20
 
What is networking
What is networkingWhat is networking
What is networkingbabyparul
 
Network protocols
Network protocolsNetwork protocols
Network protocolsIT Tech
 
454548 634160871407732500
454548 634160871407732500454548 634160871407732500
454548 634160871407732500prabh_in
 
Computer-Networks--Networking_Hardware.pptx
Computer-Networks--Networking_Hardware.pptxComputer-Networks--Networking_Hardware.pptx
Computer-Networks--Networking_Hardware.pptxssuser86699a
 
7312334 chapter-7 a-networking-basics
7312334 chapter-7 a-networking-basics7312334 chapter-7 a-networking-basics
7312334 chapter-7 a-networking-basicsfasywan
 

Similar a Computing and informatics class notes for amie (20)

Com
ComCom
Com
 
My project-new-2
My project-new-2My project-new-2
My project-new-2
 
COMPUTER TAPALOGY
COMPUTER TAPALOGYCOMPUTER TAPALOGY
COMPUTER TAPALOGY
 
2.Introduction to Network Devices.ppt
2.Introduction to Network Devices.ppt2.Introduction to Network Devices.ppt
2.Introduction to Network Devices.ppt
 
Network essentials chapter 3
Network essentials  chapter 3Network essentials  chapter 3
Network essentials chapter 3
 
1658897215230.pdf
1658897215230.pdf1658897215230.pdf
1658897215230.pdf
 
Networking
NetworkingNetworking
Networking
 
Network essentials chapter 4
Network essentials  chapter 4Network essentials  chapter 4
Network essentials chapter 4
 
Computer networks--networking
Computer networks--networkingComputer networks--networking
Computer networks--networking
 
Introduction to TCP / IP model
Introduction to TCP / IP modelIntroduction to TCP / IP model
Introduction to TCP / IP model
 
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...
Basic networking hardware: Switch : Router : Hub : Bridge : Gateway : Bus : C...
 
Networking presentation
Networking presentationNetworking presentation
Networking presentation
 
Data communication class note 1
Data communication class note 1Data communication class note 1
Data communication class note 1
 
Computer networks--networking hardware
Computer networks--networking hardwareComputer networks--networking hardware
Computer networks--networking hardware
 
Assignment E-Commerce By IHTISHAM AHMAD.docx
Assignment E-Commerce By IHTISHAM AHMAD.docxAssignment E-Commerce By IHTISHAM AHMAD.docx
Assignment E-Commerce By IHTISHAM AHMAD.docx
 
What is networking
What is networkingWhat is networking
What is networking
 
Network protocols
Network protocolsNetwork protocols
Network protocols
 
454548 634160871407732500
454548 634160871407732500454548 634160871407732500
454548 634160871407732500
 
Computer-Networks--Networking_Hardware.pptx
Computer-Networks--Networking_Hardware.pptxComputer-Networks--Networking_Hardware.pptx
Computer-Networks--Networking_Hardware.pptx
 
7312334 chapter-7 a-networking-basics
7312334 chapter-7 a-networking-basics7312334 chapter-7 a-networking-basics
7312334 chapter-7 a-networking-basics
 

Último

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 

Último (20)

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 

Computing and informatics class notes for amie

  • 1. Computing and Informatics Class Notes for AMIE By Vinayak Ashok Bharadi Local Area Networks For historical reasons, the industry refers to nearly every type of network as an "area network." The most commonly-discussed categories of computer networks include the following - • Local Area Network (LAN) • Wide Area Network (WAN) • Metropolitan Area Network (MAN) • Storage Area Network (SAN) • System Area Network (SAN) • Server Area Network (SAN) • Small Area Network (SAN) • Personal Area Network (PAN) • Desk Area Network (DAN) • Controller Area Network (CAN) • Cluster Area Network (CAN) LANs and WANs were the original flavors of network design. The concept of "area" made good sense at this time, because a key distinction between a LAN and a WAN involves the physical distance that the network spans. A third category, the MAN, also fit into this scheme as it too is centered on a distance-based concept. As technology improved, new types of networks appeared on the scene. These, too, became known as various types of "area networks" for consistency's sake, although distance no longer proved a useful differentiator. LAN Basics A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs, and occasionally a LAN will span a group of nearby buildings. In IP networking, one can conceive of a LAN as a single IP subnet (though this is not necessarily true in practice). Besides operating in a limited space, LANs include several other distinctive features. LANs are typically owned, controlled, and managed by a single person or organization. They also use certain specific connectivity technologies, primarily Ethernet and Token Ring.
  • 2. WAN Basics As the term implies, a wide-area network spans a large physical distance. A WAN like the Internet spans most of the world! A WAN is a geographically-dispered collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address. WANs differ from LANs in several important ways. Like the Internet, most WANs are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs use technology like ATM, Frame Relay and X.25 for connectivity. LANs and WANs at Home Home networkers with cable modem or DSL service already have encountered LANs and WANs in practice, though they may not have noticed. A cable/DSL router like those in the Linksys family join the home LAN to the WAN link maintained by one's ISP. The ISP provides a WAN IP address used by the router, and all of the computers on the home network use private LAN addresses. On a home network, like many LANs, all computers can communicate directly with each other, but they must go through a central gateway location to reach devices outside of their local area. What About MAN, SAN, PAN, DAN, and CAN? Future articles will describe the many other types of area networks in more detail. After LANs and WANs, one will most commonly encounter the following three network designs: A Metropolitan Area Network connects an area larger than a LAN but smaller than a WAN, such as a city, with dedicated or high-performance hardware. [1] A Storage Area Network connects servers to data storage devices through a technology like Fibre Channel. [2] A System Area Network connects high-performance computers with high-speed connections in a cluster configuration. Conclusion To the uninitiated, LANs, WANs, and the other area network acroymns appear to be just more alphabet soup in a technology industry already drowning in terminology. The names of these networks are not nearly as important as the technologies used to construct them, however. A person can use the categorizations as a learning tool to better understand concepts like subnets, gateways, and routers.
Bus, ring, star, and other types of network topology

In networking, the term "topology" refers to the layout of connected devices on a network. This article introduces the standard topologies of computer networking.

Topology in Network Design

One can think of a topology as a network's virtual shape or structure. This shape does not necessarily correspond to the actual physical layout of the devices on the network. For example, the computers on a home LAN may be arranged in a circle in a family room, but it would be highly unlikely to find an actual ring topology there.

Network topologies are categorized into the following basic types:

   •   bus
   •   ring
   •   star
   •   tree
   •   mesh

More complex networks can be built as hybrids of two or more of the above basic topologies.

Bus Topology

Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message.

Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") both were popular Ethernet cabling options many years ago for bus topologies. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.
Ring Topology

In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network.

To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.

Star Topology

Many home networks use the star topology. A star network features a central connection point called a "hub" that may be a hub, switch, or router. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.

Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)
Tree Topology

Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.

Mesh Topology

Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can only travel in one direction.) Some WANs, like the Internet, employ mesh routing.

Summary

Topologies remain an important part of network design theory. You can probably build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind these gives you a deeper understanding of important elements like hubs, broadcasts, and routes.

Internet protocol suite

Layer            Protocols
5. Application   DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP4, IRC, POP3, SIP, SMTP, SNMP, SSH, TELNET, RTP, ...
4. Transport     TCP, UDP, RSVP, DCCP, SCTP, ...
3. Network       IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, ...
2. Data link     Ethernet, Wi-Fi, PPP, FDDI, ATM, Frame Relay, GPRS, Bluetooth, ...
1. Physical      Modems, ISDN, SONET/SDH, RS232, USB, Ethernet physical layer, Wi-Fi, GSM, Bluetooth, ...

The Internet protocol suite is the set of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It is sometimes called the TCP/IP protocol suite, after the two most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were also the first two defined.

The Internet protocol suite, like many protocol suites, can be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted. The original TCP/IP reference model consisted of four layers, but it has evolved into a five-layer model.

The OSI model describes a fixed, seven-layer stack for networking protocols. Comparisons between the OSI model and TCP/IP can give further insight into the significance of the components of the IP suite, but they can also cause confusion, since the definitions of the layers are slightly different.

History

The Internet protocol suite came from work done by DARPA in the early 1970s. After building the pioneering ARPANET, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was hired at the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across them. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for
reliability, as in the ARPANET, the hosts became responsible. (Cerf credits Hubert Zimmermann and Louis Pouzin [designer of the CYCLADES network] with important influences on this design.) With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. (One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string", and it has in fact been implemented using homing pigeons.) A computer called a gateway (later changed to router to avoid confusion with other types of gateway) is provided with an interface to each network, and forwards packets back and forth between them.

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973–74 period. (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was contemporaneous, was also a significant technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4, the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between the U.S., UK, and Norway. Between 1978 and 1983, several other TCP/IP prototypes were developed at multiple research centres. A full switchover to TCP/IP on the ARPANET took place January 1, 1983.[1] In March 1982,[2] the US Department of Defense made TCP/IP the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, helping popularize the protocol and leading to its increasing commercial use.

On November 9, 2005, Kahn and Cerf were presented with the Presidential Medal of Freedom for their contribution to American culture.[3]
Layers in the Internet protocol suite stack

IP suite stack showing the physical network connection of two hosts via two routers and the corresponding layers used at each hop.

Sample encapsulation of data within a UDP datagram within an IP packet.

The IP suite uses encapsulation to provide abstraction of protocols and services. Generally a protocol at a higher level uses a protocol at a lower level to help accomplish its aims. The Internet protocol stack can be roughly fitted to the four layers of the original TCP/IP model:
4. Application      DNS, TFTP, TLS/SSL, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, ECHO, BitTorrent, RTP, PNRP, rlogin, ENRP, ...
                    (Routing protocols like BGP and RIP, which for a variety of reasons run over TCP and UDP respectively, may also be considered part of the application or network layer.)
3. Transport        TCP, UDP, DCCP, SCTP, IL, RUDP, ...
                    (Routing protocols like OSPF, which run over IP, may also be considered part of the transport or network layer. ICMP and IGMP, which run over IP, may be considered part of the network layer.)
2. Internet         IP (IPv4, IPv6)
                    (ARP and RARP operate underneath IP but above the link layer, so they belong somewhere in between.)
1. Network access   Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, Frame Relay, SMDS, ...

In many modern textbooks, this model has evolved into the five-layer TCP/IP model, where the Network access layer is split into a Data link layer on top of a Physical layer, and the Internet layer is called the Network layer.

Implementations

Today, most commercial operating systems include and install the TCP/IP stack by default. For most users, there is no need to look for implementations. TCP/IP is included in all commercial Unix systems, Mac OS X, and all free-software Unix-like systems such as Linux distributions and BSD systems, as well as Microsoft Windows. Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.
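As a rough illustration of this layering, the short Python sketch below hands one application-layer message (a minimal HTTP request) to a TCP socket and lets the operating system's TCP/IP stack supply the transport, network and link layers. The host name example.com and port 80 are only placeholder values for the example.

    import socket

    # Application-layer message: a minimal HTTP/1.1 request.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n\r\n"
    ).encode("ascii")

    # The OS stack handles TCP (transport), IP (network) and the link below it;
    # the program only writes and reads bytes on the socket.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request)
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk

    print(reply.split(b"\r\n", 1)[0].decode())   # status line, e.g. "HTTP/1.1 200 OK"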
Karnaugh map

The Karnaugh map, also known as a Veitch diagram (K-map or KV-map for short), is a tool to facilitate management of Boolean algebraic expressions. A Karnaugh map is unique in that only one variable changes value between adjacent squares; in other words, the rows and columns are ordered according to the principles of Gray code.

History and nomenclature

The Karnaugh map was invented in 1953 by Maurice Karnaugh, a telecommunications engineer at Bell Labs.

Usage in Boolean logic

Normally, extensive calculations are required to obtain the minimal expression of a Boolean function, but one can use a Karnaugh map instead.

Problem solving uses

   •   Karnaugh maps make use of the human brain's excellent pattern-matching capability to decide which terms should be combined to get the simplest expression.
   •   K-maps permit the rapid identification and elimination of potential race hazards, something that Boolean equations alone cannot do.
   •   A Karnaugh map is an excellent aid for simplification of up to six variables, but with more variables it becomes hard even for our brain to discern optimal patterns.
   •   For problems involving more than six variables, solving the Boolean expressions directly is preferred over the Karnaugh map.

Karnaugh maps also help teach about Boolean functions and minimization.

Properties

A mapping of minterms on a Karnaugh map. The arrows indicate which squares can be thought of as "switched" (rather than being in a normal sequential order).
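The Gray-code ordering is what makes K-map adjacency work: consecutive row and column labels differ in exactly one bit. As a small illustration (a sketch, not taken from the notes above), the standard conversion i XOR (i >> 1) generates such a sequence in Python:

    # 2-bit reflected Gray code, the order used to label K-map rows and columns.
    def gray(i):
        return i ^ (i >> 1)

    labels = [format(gray(i), "02b") for i in range(4)]
    print(labels)   # ['00', '01', '11', '10'] - adjacent labels differ in one bit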
A Karnaugh map may have any number of variables, but usually works best when there are only a few, between 2 and 6 for example. Each variable contributes two possibilities to each possibility of every other variable in the system. Karnaugh maps are organized so that all the possibilities of the system are arranged in a grid form and, between two adjacent boxes, only one variable can change value. This is what allows it to reduce hazards.

When using a Karnaugh map to derive a minimized function, one "covers" the ones on the map with rectangular "coverings" that contain a number of boxes equal to a power of 2 (for example, 4 boxes in a line, 4 boxes in a square, 8 boxes in a rectangle, etc.). Once a person has covered the ones, that person can produce a term of a sum of products by finding the variables that do not change throughout the entire covering, taking a 1 to mean that variable and a 0 to mean the complement of that variable. Doing this for every covering gives you a matching function.

One can also use zeros to derive a minimized function. The procedure is identical to the procedure for ones, except that each term is a term in a product of sums, and a 1 means the complement of the variable, while 0 means the variable non-complemented.

Each square in a Karnaugh map corresponds to a minterm (and maxterm). The picture to the right shows the location of each minterm on the map.

Example

Consider the following function:

f(A,B,C,D) = Σ(4,8,9,10,11,12,14,15)

The values inside Σ tell us which rows have output 1. This function has this truth table:

 #   A B C D   f(A,B,C,D)
 0   0 0 0 0   0
 1   0 0 0 1   0
 2   0 0 1 0   0
 3   0 0 1 1   0
 4   0 1 0 0   1
 5   0 1 0 1   0
 6   0 1 1 0   0
 7   0 1 1 1   0
 8   1 0 0 0   1
 9   1 0 0 1   1
10   1 0 1 0   1
11   1 0 1 1   1
12   1 1 0 0   1
13   1 1 0 1   0
14   1 1 1 0   1
15   1 1 1 1   1

The input variables can be combined in 16 different ways, so our Karnaugh map has to have 16 positions. The most convenient way to arrange this is in a 4x4 grid.
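As a quick cross-check, the Python sketch below rebuilds this truth table directly from the minterm list and confirms that the minimized sum-of-products derived in the discussion that follows, AC + AB′ + BC′D′, produces exactly the same outputs. (The function names are only illustrative.)

    # Brute-force check of f(A,B,C,D) = Σ(4,8,9,10,11,12,14,15)
    MINTERMS = {4, 8, 9, 10, 11, 12, 14, 15}

    def f_from_minterms(a, b, c, d):
        """Truth-table definition: output is 1 exactly on the listed minterms."""
        index = (a << 3) | (b << 2) | (c << 1) | d   # A is the most significant bit
        return 1 if index in MINTERMS else 0

    def f_minimized(a, b, c, d):
        """Minimized form read off the Karnaugh map: AC + AB' + BC'D'."""
        return (a & c) | (a & (1 - b)) | (b & (1 - c) & (1 - d))

    for i in range(16):
        a, b, c, d = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
        assert f_from_minterms(a, b, c, d) == f_minimized(a, b, c, d)
    print("minimized expression matches all 16 rows")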
The binary digits in the map represent the function's output for any given combination of inputs. We write 0 in the upper leftmost corner of the map because f = 0 when A = 0, B = 0, C = 1, D = 0. Similarly we mark the bottom right corner as 1 because A = 1, B = 0, C = 0, D = 0 gives f = 1. Note that the values are ordered in a Gray code, so that precisely one variable flips between any pair of adjacent cells.

After the Karnaugh map has been constructed, our next task is to find the minimal terms to use in the final expression. These terms are found by encircling groups of 1's in the map. The encirclings must be rectangular and must have an area that is a positive power of two (i.e. 2, 4, 8, ...). The rectangles should be as large as possible without containing any 0's. The optimal encirclings in this map are marked by the green, red and blue lines.

For each of these encirclings we find those variables that have the same state in each of the fields in the encircling. For the first encircling (the red one) we find that:

   •   The variable A maintains the same state (1) in the whole encircling, therefore it should be included in the term for the red encircling.
   •   Variable B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
   •   C does not change: it is always 1.
   •   D changes.

Thus the first term in the Boolean expression is AC.

For the green encircling we see that A and B maintain the same state, but C and D change. B is 0 and has to be negated before it can be included. Thus the second term is AB′. In the same way, the blue rectangle gives the term BC′D′, and so the whole expression is: AC + AB′ + BC′D′.

The grid is toroidally connected, which means that the rectangles can wrap around edges, so ABD′ is a valid term, although not part of the minimal set. The inverse of a function is solved in the same way by encircling the 0's instead.

In a Karnaugh map with n variables, a Boolean term mentioning k of them will have a corresponding rectangle of area 2^(n−k).

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions (that is, sets of inputs for which the designer doesn't care what the output is), because "don't care" conditions can be included in a ring to make it larger but do not have to be ringed. They are usually indicated on the map with a hyphen/dash in place of the number. The value can be a "0", "1", or the hyphen/dash/X depending on if
one can use the "0" or "1" to simplify the KM more. If the "don't cares" don't help you simplify the KM more, then use the hyphen/dash/X.

Race hazards

Karnaugh maps are useful for detecting and eliminating race hazards. They are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circled on the map.

   •   In the above example, a potential race condition exists when C and D are both 0, A is 1, and B changes from 0 to 1 (moving from the green state to the blue state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
   •   A harder glitch to spot occurs if D is 0 and A and B are both 1, with C changing from 0 to 1. In this case the glitch wraps around from the bottom of the map to the top of the map.

Whether these glitches do occur depends on the physical nature of the implementation, and whether we need to worry about them depends on the application. In this case, an additional term of AD′ would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states. The term is redundant in terms of the static logic of the system, but such redundant terms are often needed to assure race-free dynamic performance.

When not to use K-maps

The diagram becomes cluttered and hard to interpret if there are more than four variables on an axis. This argues against the use of Karnaugh maps for expressions with more than six variables. For such expressions, the Quine-McCluskey algorithm, also called the method of prime implicants, should be used. This algorithm generally finds most of the optimal solutions quickly and easily, but selecting the final prime implicants (after the essential ones are chosen) may still require a brute-force approach to get the optimal combination (though this is generally far simpler than trying to brute-force the entire problem).

Logic gate

A logic gate performs a logical operation on one or more logic inputs and produces a single logic output. The logic normally performed is Boolean logic and is most commonly found in digital circuits. Logic gates are primarily implemented electronically using diodes or transistors, but can also be constructed using electromagnetic relays, fluidics, optical or even mechanical elements.
Logic levels

A Boolean logical input or output always takes one of two logic levels. These logic levels can go by many names including: on / off, high (H) / low (L), one (1) / zero (0), true (T) / false (F), positive / negative, positive / ground, open circuit / closed circuit, potential difference / no difference, yes / no. For consistency, the names 1 and 0 will be used below.

Logic gates

A logic gate takes one or more logic-level inputs and produces a single logic-level output. Because the output is also a logic level, an output of one logic gate can connect to the input of one or more other logic gates. Two outputs cannot be connected together, however, as they may be attempting to produce different logic values. In electronic logic gates, this would cause a short circuit.

In electronic logic, a logic level is represented by a certain voltage (which depends on the type of electronic logic in use). Each logic gate requires power so that it can source and sink currents to achieve the correct output voltage. In logic circuit diagrams the power is not shown, but in a full electronic schematic, power connections are required.

Background

The simplest form of electronic logic is diode logic. This allows AND and OR gates to be built, but not inverters, and so is an incomplete form of logic. To build a complete logic system, valves or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic, or RTL. Unlike diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. These gates were used in early integrated circuits. For higher speed, the resistors used in RTL were replaced by diodes, leading to diode-transistor logic, or DTL. It was then discovered that one transistor could do the job of two diodes in the space of one diode, so transistor-transistor logic, or TTL, was created. In some types of chip, to reduce size and power consumption still further, the bipolar transistors were replaced with complementary field-effect transistors (MOSFETs), resulting in complementary metal-oxide-semiconductor (CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series invented by Texas Instruments and the CMOS 4000 series invented by RCA, and their more recent descendants. These devices usually contain transistors with multiple emitters, used to implement the AND function, which are not available as separate components. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a huge number of mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has removed the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by
reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.

Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.

Another important advantage of standardised semiconductor logic gates, such as the 7400 and 4000 families, is that they are cascadable. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on ad infinitum, enabling the construction of circuits of arbitrary complexity without requiring the designer to understand the internal workings of the gates. In practice, the output of one gate can only drive a finite number of inputs to other gates, a number called the 'fanout limit', but this limit is rarely reached in the newer CMOS logic circuits, as compared to TTL circuits. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits.

Electronic logic levels

The two logic levels in binary logic circuits are represented by two voltage ranges, "low" and "high". Each technology has its own requirements for the voltages used to represent the two logic levels, to ensure that the output of any device can reliably drive the input of the next device. Usually, two non-overlapping voltage ranges, one for each level, are defined. The difference between the high and low levels ranges from 0.7 volts in emitter-coupled logic to around 28 volts in relay logic.

Logic gates and hardware

NAND and NOR logic gates are the two pillars of logic, in that all other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network of just NAND or just NOR gate(s). They can be built from relays or transistors, or any other technology that can create an inverter and a two-input AND or OR gate. Hence the NAND and NOR gates are called the universal gates.

For an input of 2 variables, there are 16 possible Boolean algebra outputs. These 16 outputs are enumerated below with the appropriate function or logic gate for the 4 possible combinations of A and B. Note that not all outputs have a corresponding
function or logic gate, although those that do not can be produced by combinations of those that can.

INPUT   A          0 0 1 1
        B          0 1 0 1
OUTPUT
                   0 0 0 0
        A AND B    0 0 0 1
                   0 0 1 0
        A          0 0 1 1
                   0 1 0 0
        B          0 1 0 1
        A XOR B    0 1 1 0
        A OR B     0 1 1 1
        A NOR B    1 0 0 0
        A XNOR B   1 0 0 1
        NOT B      1 0 1 0
                   1 0 1 1
        NOT A      1 1 0 0
                   1 1 0 1
        A NAND B   1 1 1 0
                   1 1 1 1

Logic gates are a vital part of many digital circuits, and as such, every kind is available as an IC. For examples, see the 4000 series of CMOS logic chips or the 7400 series.

Symbols

There are two sets of symbols in common use, both now defined by ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and is quicker to draw by hand. It is sometimes unofficially described as "military", reflecting its origin if not its modern usage. The "rectangular shape" set, based on IEC 60617-12, has rectangular outlines for all types of gate, and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC's system has been adopted by other standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United Kingdom.

For each gate type, the original table also shows the distinctive-shape symbol, the rectangular-shape symbol, and the Boolean algebra expression between A and B; the truth tables are reproduced below.

AND
  A B   A AND B
  0 0   0
  0 1   0
  1 0   0
  1 1   1

OR (Boolean algebra: A+B)
  A B   A OR B
  0 0   0
  0 1   1
  1 0   1
  1 1   1

NOT
  A   NOT A
  0   1
  1   0

In electronics a NOT gate is more commonly called an inverter. The circle on the symbol is called a bubble, and is generally used in circuit diagrams to indicate an inverted input or output.

NAND
  A B   A NAND B
  0 0   1
  0 1   1
  1 0   1
  1 1   0

NOR
  A B   A NOR B
  0 0   1
  0 1   0
  1 0   0
  1 1   0

In practice, the cheapest gate to manufacture is usually the NAND gate. Additionally,
Charles Peirce showed that NAND gates alone (as well as NOR gates alone) can be used to reproduce all the other logic gates.

Symbolically, a NAND gate can also be shown using the OR shape with bubbles on its inputs, and a NOR gate can be shown as an AND gate with bubbles on its inputs. This reflects the equivalency due to De Morgan's law, but it also allows a diagram to be read more easily, or a circuit to be mapped onto available physical gates in packages easily, since any circuit node that has bubbles at both ends can be replaced by a simple bubble-less connection and a suitable change of gate. If the NAND is drawn as OR with input bubbles, and a NOR as AND with input bubbles, this gate substitution occurs automatically in the diagram (effectively, bubbles "cancel"). This is commonly seen in real logic diagrams, so the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but must also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.

Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or XNOR. The two-input exclusive-OR is true only when the two input values are different, and false if they are equal, regardless of the value. If there are more than two inputs, the gate generates a true at its output if the number of trues at its input is odd ([1]). In practice, these gates are built from combinations of simpler logic gates.

XOR
  A B   A XOR B
  0 0   0
  0 1   1
  1 0   1
  1 1   0

XNOR
  A B   A XNOR B
  0 0   1
  0 1   0
  1 0   0
  1 1   1
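Because NAND is universal, every one of the gates above can be expressed with NAND alone. A minimal Python sketch of that idea (gates modelled as functions on 0/1 values, names chosen only for illustration):

    # Building other gates from NAND only (inputs and outputs are 0 or 1).
    def NAND(a, b):
        return 1 - (a & b)

    def NOT(a):          # tie both NAND inputs together
        return NAND(a, a)

    def AND(a, b):       # NAND followed by an inverter
        return NOT(NAND(a, b))

    def OR(a, b):        # De Morgan: A + B = NOT(NOT A . NOT B)
        return NAND(NOT(a), NOT(b))

    def XOR(a, b):       # the classic four-NAND construction of exclusive-OR
        n = NAND(a, b)
        return NAND(NAND(a, n), NAND(b, n))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b), XOR(a, b), NAND(a, b))

Running the loop reproduces the AND, OR, XOR and NAND truth tables listed above.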
The 7400 chip, containing four NANDs. The two additional contacts supply power (+5 V) and connect the ground.

DeMorgan equivalent symbols

By use of De Morgan's theorem, an AND gate can be turned into an OR gate by inverting the sense of the logic at its inputs and outputs. This leads to a separate set of symbols with inverted inputs and the opposite core symbol. These symbols can make circuit diagrams for circuits using active low signals much clearer and help to show accidental connection of an active high output to an active low input or vice-versa.

Storage of bits

Related to the concept of logic gates (and also built from them) is the idea of storing a bit of information. The gates discussed up to here cannot store a value: when the inputs change, the outputs immediately react. It is possible to make a storage element either through a capacitor (which stores charge due to its physical properties) or by feedback. Connecting the output of a gate to the input causes it to be put through the logic again, and choosing the feedback correctly allows it to be preserved or modified through the use of other inputs. A set of gates arranged in this fashion is known as a "latch", and more complicated designs that utilise clocks (signals that oscillate with a known period) and change only on the rising edge are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. These registers or capacitor-based circuits are known as computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.

Three-state logic gates
A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off, the switch is open.

Main article: Tri-state buffer

Three-state, or 3-state, logic gates have three states of the output: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which remains strictly binary. These devices are used on buses to allow multiple chips to send data. A group of three-states driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. 'Tri-state', a widely-used synonym of 'three-state', is a trademark of the National Semiconductor Corporation.

Miscellaneous

Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which can contain more than 100 million gates. In practice, the gates are made from field-effect transistors (FETs), particularly metal-oxide-semiconductor FETs (MOSFETs). In reversible logic, Toffoli gates are used.

History and development

The earliest logic gates were made mechanically. Charles Babbage, around 1837, devised the Analytical Engine. His logic gates relied on mechanical gearing to perform operations. Electromagnetic relays were later used for logic gates. In 1891, Almon Strowger patented a device containing a logic gate switch circuit (U.S. Patent 0447918). Strowger's patent was not in widespread use until the 1920s. Starting in 1898, Nikola Tesla filed for patents of devices containing logic gate circuits (see List of Tesla patents). Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as an AND logic gate. Claude E. Shannon introduced the use of Boolean algebra in the analysis and design of switching circuits in 1937. Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics for the first modern electronic AND gate in 1924. Active research is taking place in molecular logic gates.

Common Basic Logic ICs

CMOS   TTL     Function
4001   7402    Quad two-input NOR gate
4011   7400    Quad two-input NAND gate
4049   7404    Hex NOT gate (inverting buffer)
4070   7486    Quad two-input XOR gate
4071   7432    Quad two-input OR gate
4077   74266   Quad two-input XNOR gate
4081   7408    Quad two-input AND gate

For more CMOS logic ICs, including gates with more than two inputs, see 4000 series.

Adders (electronics)

In electronics, an adder is a device which will perform the addition, S, of two numbers. In computing, the adder is part of the ALU, and some ALUs contain multiple adders. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtracter.

For single-bit adders, there are two general types. A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry output Co. S is the XOR of A and B, and Co is the AND of A and B. Essentially the output of a half adder is the two-bit arithmetic sum of two one-bit numbers, with Co being the most significant of these two outputs.

The other type of single-bit adder is the full adder, which is like a half adder but takes an additional carry input Ci. A full adder can be constructed from two half adders by connecting A and B to the inputs of one half adder, connecting the sum from that to an input of the second half adder, connecting Ci to the other input, and ORing the two carry outputs. Equivalently, S could be made the three-bit XOR of A, B, and Ci, and Co could be made the
three-bit majority function of A, B, and Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.

The purpose of the carry input on the full adder is to allow multiple full adders to be chained together, with the carry output of one adder connected to the carry input of the next most significant adder. The carry is said to ripple down the carry lines of this sort of adder, giving it the name ripple carry adder.

Half adder

Half adder circuit diagram

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum and a carry value which are both binary digits. Following is the logic table for a half adder:

  Input   Output
  A B     C S
  0 0     0 0
  0 1     0 1
  1 0     0 1
  1 1     1 0
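As an illustration of these definitions, here is a short Python sketch of a half adder, a full adder built from two half adders, and a small ripple-carry adder chaining full adders together; the gate operations are modelled with bitwise operators on 0/1 values, and the function names are only illustrative.

    def half_adder(a, b):
        """Return (sum, carry) for two one-bit inputs: S = A xor B, C = A and B."""
        return a ^ b, a & b

    def full_adder(a, b, cin):
        """Full adder built from two half adders; the two carries are ORed."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, cin)
        return s2, c1 | c2

    def ripple_carry_add(a_bits, b_bits):
        """Add two equal-length bit lists (least significant bit first)."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    # 0110 (6) + 0011 (3) = 1001 (9); bits are listed LSB first.
    print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)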
Full adder

Full adder circuit diagram

A + B + CarryIn = Sum + CarryOut

A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum and carry value, which are both binary digits. It can be combined with other full adders (see below) or work on its own.

  Input     Output
  A B Ci    Co S
  0 0 0     0  0
  0 0 1     0  1
  0 1 0     0  1
  0 1 1     1  0
  1 0 0     0  1
  1 0 1     1  0
  1 1 0     1  0
  1 1 1     1  1

Note that the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. This is because the only discrepancy between OR and XOR gates occurs when both inputs are 1; for the adder shown here, one can check this is never possible. Using only two types of gates is convenient if one desires to implement the adder directly using common IC chips.

Ones' complement

Alternatively, a system known as ones' complement can be used to represent negative numbers. The ones' complement form of a binary number is the bitwise NOT applied to it, the complement of its positive counterpart. Like sign-and-magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0).

As an example, the ones' complement form of 00101011 (43) becomes 11010100 (−43). The range of signed numbers using ones' complement in a conventional eight-bit byte is −127 to +127.

To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to add any resulting carry back into the resulting sum. To see why this is necessary, consider the case of the addition of −1 (11111110) to +2 (00000010). The binary addition alone gives 00000000, which is not the correct answer! Only when the carry is added back in does the correct result (00000001) appear. This numeric representation system was common in older computers; the PDP-1 and UNIVAC 1100/2200 series, among many others, used ones'-complement arithmetic.

(A remark on terminology: the system is referred to as "ones' complement" because the negation of x is formed by subtracting x from a long string of ones. Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two.[1])

Two's complement

Two's complement is the most popular method of representing signed integers in computer science. It is also an operation of negation (converting positive to negative numbers or vice versa) in computers which represent negative numbers using two's complement. Its use is ubiquitous today because it doesn't require the addition and subtraction circuitry to examine the signs of the operands to determine whether to add or
subtract, making it both simpler to implement and capable of easily handling higher precision arithmetic. Also, 0 has only a single representation, obviating the subtleties associated with negative zero (which exists in ones' complement).

8-bit two's complement integers (the leftmost bit is the sign bit):

0 1 1 1 1 1 1 1  =  127
0 0 0 0 0 0 1 0  =  2
0 0 0 0 0 0 0 1  =  1
0 0 0 0 0 0 0 0  =  0
1 1 1 1 1 1 1 1  =  −1
1 1 1 1 1 1 1 0  =  −2
1 0 0 0 0 0 0 1  =  −127
1 0 0 0 0 0 0 0  =  −128

Explanation

Two's complement using a 4-bit integer:

Two's complement   Decimal
0001                1
0000                0
1111               −1
1110               −2
1101               −3
1100               −4

Two's complement represents signed integers by counting backwards and wrapping around. The boundary between positive and negative numbers may theoretically be anywhere (as long as you check for it). For convenience, all numbers whose left-most bit is 1 are considered negative. The largest number representable this way with 4 bits is 0111 (7) and the smallest number is 1000 (−8).

To understand its usefulness for computers, consider the following. Adding 0011 (3) to 1111 (−1) results in the seemingly incorrect 10010. However, ignoring the 5th bit (from the right), as we did when we counted backwards, gives us the actual answer, 0010 (2). Ignoring the 5th bit will work in all cases (although you have to do the aforementioned overflow checks when, e.g., 0100 is added to 0100). Thus, a circuit designed for addition can handle negative operands without also including a circuit capable of subtraction (and a circuit which switches between the two based on the sign). Moreover, by this method an addition circuit can even perform subtractions if you convert the necessary operand into the "counting-backwards" form. The procedure for doing so is called taking the two's
complement (which, admittedly, requires either an extra cycle or its own adder circuit). Lastly, a very important reason for utilizing two's complement representation is that it would be considerably more complex to create a subtraction circuit which would take 0001 − 0010 and give 1001 (i.e. −001) than it is to make one that returns 1111. (Doing the former means you have to check the sign, then check if there will be a sign reversal, then possibly rearrange the numbers, and finally subtract. Doing the latter means you simply subtract, pretending there's an extra left-most bit hiding somewhere.)

In an n-bit binary number, the most significant bit is usually the 2^(n−1)s place. But in the two's complement representation, its place value is negated; it becomes the −2^(n−1)s place and is called the sign bit. If the sign bit is 0, the value is positive; if it is 1, the value is negative. To negate a two's complement number, invert all the bits then add 1 to the result. If all bits are 1, the value is −1. If the sign bit is 1 but the rest of the bits are 0, the value is the most negative number, −2^(n−1) for an n-bit number. The absolute value of the most negative number cannot be represented with the same number of bits, because it exceeds the most positive representable two's complement number by exactly 1.

A two's complement 8-bit binary numeral can represent every integer in the range −128 to +127. If the sign bit is 0, then the largest value that can be stored in the remaining seven bits is 2^7 − 1, or 127.

Using two's complement to represent negative numbers allows only one representation of zero, and allows effective addition and subtraction while still having the most significant bit as the sign bit.

Calculating two's complement

In finding the two's complement of a binary number, the bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value. Bit overflow is ignored, which is the normal case with zero.

For example, beginning with the signed 8-bit binary representation of the decimal value 5:

0000 0101 (5)

The first bit is 0, so the value represented is indeed a positive 5. To convert to −5 in two's complement notation, the bits are inverted; 0 becomes 1, and 1 becomes 0:

1111 1010

At this point, the numeral is the ones' complement of the decimal value 5. To obtain the two's complement, 1 is added to the result, giving:
1111 1011 (−5)

The result is a signed binary numeral representing the decimal value −5 in two's complement form. The most significant bit is 1, so the value is negative.

The two's complement of a negative number is the corresponding positive value. For example, inverting the bits of −5 (above) gives:

0000 0100

And adding one gives the final value:

0000 0101 (5)

The decimal value of a two's complement binary number is calculated by taking the value of the most significant bit, where the value is negative when the bit is one, and adding to it the values for each power of two where there is a one. Example:

1111 1011 (−5) = −128 + 64 + 32 + 16 + 8 + 0 + 2 + 1 = (−2^7 + 2^6 + ...) = −5

Note that the two's complement of zero is zero: inverting gives all ones, and adding one changes the ones back to zeros (the overflow is ignored). Also the two's complement of the most negative number representable (e.g. a one as the sign bit and all other bits zero) is itself. This happens because the most negative number's "positive counterpart" is occupied by "0", which gets classed as a positive number in this argument. Hence, there appears to be an 'extra' negative number.

A more formal definition of a two's complement negative number (denoted by N* in this example) is derived from the equation N* = 2^n − N, where N is the corresponding positive number and n is the number of bits in the representation.

For example, to find the 4-bit representation of −5:

N (base 10) = 5, therefore N (base 2) = 0101
n = 4

Hence:

N* = 2^n − N = [2^4 in base 2] − 0101 = 10000 − 0101 = 1011

N.B. You can also think of the equation as being entirely in base 10, converting to base 2 at the end, e.g.:

N* = 2^n − N = 2^4 − 5 = [11 in base 10] = [1011 in base 2]
Obviously, "N* ... = 11" isn't strictly true, but as long as you interpret the equals sign as "is represented by", it is perfectly acceptable to think of two's complements in this fashion.

Nevertheless, a shortcut exists when converting a binary number into two's complement form. Take, for example:

0011 1100

Converting from right to left, copy all the zeros until the first 1 is reached. Copy down that one, and then flip the remaining bits. This will allow you to convert to two's complement without first converting to ones' complement and adding 1 to the result. The two's complemented form of the number above in this case is:

1100 0100

Sign extension

Sign-bit repetition in 4-bit and 8-bit integers:

Decimal   4-bit two's complement   8-bit two's complement
 5        0101                     0000 0101
−3        1101                     1111 1101

When turning a two's complement number with a certain number of bits into one with more bits (e.g., when copying from a 1-byte variable to a 2-byte variable), the sign bit must be repeated in all the extra bits. Some processors have instructions to do this in a single instruction. On other processors a conditional must be used, followed by code to set the relevant bits or bytes.

Similarly, when a two's complement number is shifted to the right, the sign bit must be maintained. However, when shifted to the left, a 0 is shifted in. These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two. Both shifting and doubling the precision are important for some multiplication algorithms. Note that unlike addition and subtraction, precision extension and right shifting are done differently for signed vs unsigned numbers.

The weird number

With only one exception, when we start with any number in two's complement representation, if we flip all the bits and add 1, we get the two's complement representation of the negative of that number. Negative 12 becomes positive 12, positive 5 becomes negative 5, zero becomes zero, etc.
−128          1000 0000
invert bits   0111 1111
add one       1000 0000

The two's complement of −128 results in the same 8-bit binary number.

The most negative number in two's complement is sometimes called "the weird number" because it is the only exception. The two's complement of the minimum number in the range will not have the desired effect of negating the number. For example, the two's complement of −128 results in the same binary number. This is because a positive value of 128 cannot be represented with an 8-bit signed binary numeral. Note that this is detected as an overflow condition since there was a carry into but not out of the sign bit. Although the number is weird, it is a valid number. All arithmetic operations work with it both as an operand and (unless there was an overflow) a result.

Why it works

The 2^n possible values of n bits actually form a ring of equivalence classes, namely the integers modulo 2^n, Z/(2^n)Z. Each class represents a set {j + k·2^n | k is an integer} for some integer j, 0 ≤ j ≤ 2^n − 1. There are 2^n such sets, and addition and multiplication are well-defined on them.

If the classes are taken to represent the numbers 0 to 2^n − 1, and overflow ignored, then these are the unsigned integers. But each of these numbers is equivalent to itself minus 2^n. So the classes could be understood to represent −2^(n−1) to 2^(n−1) − 1, by subtracting 2^n from half of them (specifically [2^(n−1), 2^n − 1]). For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the top half (128 to 255) yields the signed bytes −128 to 127. The relationship to two's complement is realised by noting that 256 = 255 + 1, and (255 − x) is the ones' complement of x.

Decimal   Two's complement
 127      0111 1111
  64      0100 0000
   1      0000 0001
   0      0000 0000
  −1      1111 1111
 −64      1100 0000
−127      1000 0001
−128      1000 0000

Some special numbers to note

Example: −95 modulo 256 is equivalent to 161, since

−95 + 256 = −95 + 255 + 1 = 255 − 95 + 1 = 160 + 1 = 161

  1111 1111                       255
− 0101 1111                     −  95
===========                     =====
  1010 0000 (ones' complement)    160
+         1                     +   1
===========                     =====
  1010 0001 (two's complement)    161
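The same conversions are easy to check numerically. Here is a minimal Python sketch (the bit width of 8 and the function names are just illustrative choices) that moves between signed values and their n-bit two's complement bit patterns:

    def to_twos_complement(value, bits=8):
        """Return the n-bit two's complement pattern of a signed integer."""
        assert -(1 << (bits - 1)) <= value < (1 << (bits - 1))
        return value & ((1 << bits) - 1)          # value modulo 2**bits

    def from_twos_complement(pattern, bits=8):
        """Interpret an n-bit pattern as a signed two's complement value."""
        if pattern & (1 << (bits - 1)):            # sign bit set -> negative
            return pattern - (1 << bits)
        return pattern

    print(format(to_twos_complement(-95), "08b"))     # 10100001  (161)
    print(from_twos_complement(0b10100001))           # -95
    print(format(to_twos_complement(-128), "08b"))    # 10000000  (the "weird number")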
  • 34. quickly determine if an overflow condition exists. As an example, consider the 4-bit addition of 7 and 3: 0111 (carry) 0111 (7) + 0011 (3) ============= 1010 (−6) invalid! In this case, the far left two (MSB) carry bits are "01", which means there was a two's complement addition overflow. That is, ten is outside the permitted range of −8 to 7. Subtraction Computers usually use the method of complements to implement subtraction. But although using complements for subtraction is related to using complements for representing signed numbers, they are independent; direct subtraction works with two's complement numbers as well. Like addition, the advantage of using two's complement is the elimination of examining the signs of the operands to determine if addition or subtraction is needed. For example, subtracting -5 from 15 is really adding 5 to 15, but this is hidden by the two's complement representation: 11110 000 (borrow) 0000 1111 (15) − 1111 1011 (−5) =========== 0001 0100 (20) Overflow is detected the same way as for addition, by examining the two leftmost (most significant) bits of the borrows; overflow occurred if they are different. Another example is a subtraction operation where the result is negative: 15 − 35 = −20: 11100 000 (borrow) 0000 1111 (15) − 0010 0011 (35) =========== 1110 1100 (−20) Multiplication The product of two n-bit numbers can potentially have 2n bits. If the precision of the two two's complement operands is doubled before the multiplication, direct multiplication (discarding any excess bits beyond that precision) will provide the correct result. For example, take 5 × −6 = −30. First, the precision is extended from 4 bits to 8. Then the numbers are multiplied, discarding the bits beyond 8 (shown by 'x'): 00000101 (5) × 11111010 (−6) =========
  • 35. 0 101 0 101 101 101 x01 xx1 ========= xx11100010 (−30) This is very inefficient; by doubling the precision ahead of time, all additions must be double-precision and at least twice as many partial products are needed than for the more efficient algorithms actually implemented in computers. Some multiplication algorithms are designed for two's complement, notably Booth's algorithm. Methods for multiplying sign-magnitude numbers don't work with two's complement numbers without adaptation. There isn't usually a problem when the multiplicand (the one being repeatedly added to form the product) is negative; the issue is setting the initial bits of the product correctly when the multiplier is negative. Two methods for adapting algorithms to handle two's complement numbers are common: • First check to see if the multiplier is negative. If so, negate (i.e., take the two's complement of) both operands before multiplying. The multiplier will then be positive so the algorithm will work. And since both operands are negated, the result will still have the correct sign. • Subtract the partial product resulting from the sign bit instead of adding it like the other partial products. As an example of the second method, take the common add-and-shift algorithm for multiplication. Instead of shifting partial products to the left as is done with pencil and paper, the accumulated product is shifted right, into a second register that will eventually hold the least significant half of the product. Since the least significant bits are not changed once they are calculated, the additions can be single precision, accumulating in the register that will eventually hold the most significant half of the product. In the following example, again multiplying 5 by −6, the two registers are separated by "|": 0101 (5) ×1010 (−6) ====|==== 0000|0000 (first partial product (rightmost bit is 0)) 0000|0000 (shift right) 0101|0000 (add second partial product (next bit is 1)) 0010|1000 (shift right) 0010|1000 (add third partial product: 0 so no change) 0001|0100 (shift right) 1100|0100 (subtract last partial product since it's from sign bit) 1110|0010 (shift right, preserving sign bit, giving the final answer, −30)
  • 36. Memory hierarchy The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy is of higher speed and lower latency, and is of smaller size, than lower levels. Most modern CPUs are so fast that for most program workloads the locality of reference of memory accesses, and the efficiency of the caching and memory transfer between different levels of the hierarchy, is the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. The memory hierarchy in most computers is as follows: • Processor registers – fastest possible access (usually 1 CPU cycle), only hundreds of bytes in size • Level 1 (L1) cache – often accessed in just a few cycles, usually tens of kilobytes • Level 2 (L2) cache – higher latency than L1 by 2× to 10×, often 512 KiB or more • Level 3 (L3) cache – (optional) higher latency than L2, often several MiB • Main memory (DRAM) – may take hundreds of cycles, but can be multiple gigabytes. Access times may not be uniform, in the case of a NUMA machine. • Disk storage – hundreds of thousands of cycles latency, but very large • Tertiary storage – tape, optical disk (WORM) Virtual memory
  • 37. The memory pages of the virtual address space seen by the process, may reside non- contiguously in primary, or even secondary storage. Virtual memory or virtual memory addressing is a memory management technique, used by computer operating systems, more common in multitasking OSes, wherein non- contiguous memory is presented to a software (aka process) as contiguous memory. This contiguous memory is referred to as the virtual address space. Virtual memory addressing is typically used in paged memory systems. This in turn is often combined with memory swapping (also known as anonymous memory paging), whereby memory pages stored in primary storage are written to secondary storage (often to a swap file or swap partition), thus freeing faster primary storage for other processes to use. In technical terms, virtual memory allows software to run in a memory address space whose size and addressing are not necessarily tied to the computer's physical memory. To properly implement virtual memory the CPU (or a device attached to it) must provide a way for the operating system to map virtual memory to physical memory and for it to detect when an address is required that does not currently relate to main memory so that the needed data can be swapped in. While it would certainly be possible to provide virtual memory without the CPU's assistance it would essentially require emulating a CPU that did provide the needed features. Background Most computers possess four kinds of memory: registers in the CPU, CPU caches (generally some kind of static RAM) both inside and adjacent to the CPU, main memory (generally dynamic RAM) which the CPU can read and write to directly and reasonably quickly; and disk storage, which is much slower, but much larger. CPU register use is generally handled by the compiler (and if preemptive multitasking is in use swapped by the operating system on context switches) and this isn't a huge burden as they are small in number and data doesn't generally stay in them very long. The decision of when to use cache and when to use main memory is generally dealt with by hardware so generally both are regarded together by the programmer as simply physical memory. Many applications require access to more information (code as well as data) than can be stored in physical memory. This is especially true when the operating system allows multiple processes/applications to run seemingly in parallel. The obvious response to the problem of the maximum size of the physical memory being less than that required for all running programs is for the application to keep some of its information on the disk, and move it back and forth to physical memory as needed, but there are a number of ways to do this. One option is for the application software itself to be responsible both for deciding which information is to be kept where, and also for moving it back and forth. The programmer would do this by determining which sections of the program (and also its data) were
mutually exclusive, and then arranging for the appropriate sections to be loaded into and unloaded from physical memory as needed. The disadvantage of this approach is that each application's programmer must spend time and effort designing, implementing, and debugging this mechanism instead of focusing on the application itself; this hampers programmers' efficiency. Also, if any programmer could truly choose which of their items of data to store in physical memory at any one time, they could easily conflict with the decisions made by another programmer who also wanted to use all the available physical memory at that point.

Another option is to store handles to data rather than direct pointers, and let the OS deal with swapping the data associated with those handles between the swap area and physical memory as needed. This works, but it has a couple of problems: it complicates application code, it requires applications to cooperate (they generally need the power to lock data into physical memory to actually work on it), and it stops the language's standard library from doing its own suballocation inside large blocks obtained from the OS to improve performance. The best-known example of this kind of arrangement is probably the 16-bit versions of Windows.

The modern solution is to use virtual memory, in which a combination of special hardware and operating system software makes use of both kinds of memory to make it look as if the computer has a much larger main memory than it actually does, and to lay that space out differently at will. It does this in a way that is invisible to the rest of the software running on the computer. It usually provides the ability to simulate a main memory of almost any size. (In practice there is a limit imposed by the size of the addresses: for a 32-bit system, the total size of the virtual memory can be 2^32 bytes, or approximately 4 gigabytes; for the newer 64-bit chips and operating systems that use 64- or 48-bit addresses, it can be much higher. Many operating systems do not allow the entire address space to be used by applications, in order to simplify kernel access to application memory, but this is not a hard design requirement.)

Virtual memory makes the job of the application programmer much simpler. No matter how much memory the application needs, it can act as if it has access to a main memory of that size and can place its data wherever in that virtual space it likes. The programmer can also largely ignore the need to manage the moving of data back and forth between the different kinds of memory. That said, a programmer who cares about performance when working with large volumes of data still needs to keep the data being worked on close together in the address space (that is, preserve locality of access) in order to avoid unnecessary swapping.

Paging

Virtual memory is usually (but not necessarily) implemented using paging. In paging, the low-order bits of the binary representation of the virtual address are preserved and used directly as the low-order bits of the actual physical address; the high-order bits are treated as a key into one or more address translation tables, which provide the high-order bits of the actual physical address.
For this reason, a range of consecutive addresses in the virtual address space whose size is a power of two will be translated into a corresponding range of consecutive physical addresses. The memory referenced by such a range is called a page. The page size is typically in the range of 512 to 8192 bytes (with 4 KiB currently being very common), though page sizes of 4 megabytes or larger may be used for special purposes. (Using the same or a related mechanism, contiguous regions of virtual memory larger than a page are often mappable to contiguous physical memory for purposes other than virtualization, such as setting access and caching control bits.)

The operating system stores the address translation tables, the mappings from virtual to physical page numbers, in a data structure known as a page table.

If a page is marked as unavailable (perhaps because it is not present in physical memory, but instead is in the swap area), then when the CPU tries to reference a memory location in that page, the MMU responds by raising an exception (commonly called a page fault) with the CPU, which then jumps to a routine in the operating system. If the page is in the swap area, this routine invokes an operation called a page swap to bring in the required page.

The page swap operation involves a series of steps. First it selects a page in memory, for example a page that has not been recently accessed and (preferably) has not been modified since it was last read from disk or the swap area. (See page replacement algorithms for details.) If the page has been modified, the routine writes the modified page to the swap area. The next step is to read in the information in the needed page (the page corresponding to the virtual address the original program was trying to reference when the exception occurred) from the swap file. When the page has been read in, the tables for translating virtual addresses to physical addresses are updated to reflect the revised contents of physical memory. Once the page swap completes, the routine exits, and the program is restarted and continues on as if nothing had happened, returning to the point in the program that caused the exception.

It is also possible that a virtual page was marked as unavailable because it was never previously allocated. In such cases, a page of physical memory is allocated and filled with zeros, the page table is modified to describe it, and the program is restarted as above.

Details

The translation from virtual to physical addresses is implemented by an MMU (Memory Management Unit). This may be either a module of the CPU or an auxiliary, closely coupled chip. The operating system is responsible for deciding which parts of the program's simulated main memory are kept in physical memory. The operating system also maintains the translation tables which provide the mappings between virtual and physical addresses, for use by the MMU. Finally, when a virtual memory exception occurs, the operating system is responsible for allocating an area of physical memory to hold the missing information (possibly pushing something else out to disk in the process), bringing the relevant information in from the disk, updating the translation tables, and finally resuming execution of the software that incurred the virtual memory exception.
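The translation just described can be sketched in a few lines of C. The page size, the table size, and the pte structure below are simplified illustrations invented for this sketch (real page tables are usually multi-level and live in memory managed by the operating system), but the split of a virtual address into a page number and an offset, and the page-fault case, follow the scheme above.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                       /* 4 KiB pages                          */
#define OFFSET_BITS 12                          /* log2(PAGE_SIZE)                      */
#define NUM_PAGES   1024u                       /* a toy virtual address space          */

struct pte {                                    /* one page-table entry                 */
    unsigned valid : 1;                         /* is the page in physical memory?      */
    uint32_t frame;                             /* physical frame number                */
};

static struct pte page_table[NUM_PAGES];        /* single-level table, for brevity      */

/* Translate a virtual address; return 0 on success, -1 on a page fault. */
static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> OFFSET_BITS;     /* high-order bits: virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* low-order bits: kept unchanged       */

    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;                              /* page fault: OS must bring the page in */

    *paddr = (page_table[vpn].frame << OFFSET_BITS) | offset;
    return 0;
}

int main(void)
{
    uint32_t paddr;

    page_table[3].valid = 1;                    /* pretend virtual page 3 ...            */
    page_table[3].frame = 42;                   /* ... is resident in physical frame 42  */

    if (translate(0x3ABC, &paddr) == 0)         /* page 3, offset 0xABC                  */
        printf("virtual 0x3ABC -> physical 0x%X\n", (unsigned)paddr);
    if (translate(0x8000, &paddr) != 0)         /* page 8 was never mapped               */
        printf("virtual 0x8000 -> page fault\n");
    return 0;
}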
In most computers, these translation tables are stored in physical memory. Therefore, a virtual memory reference might actually involve two or more physical memory references: one or more to retrieve the needed address translation from the page tables, and a final one to actually perform the memory reference.

To minimize the performance penalty of address translation, most modern CPUs include an on-chip MMU and maintain a table of recently used virtual-to-physical translations, called a Translation Lookaside Buffer (TLB). Addresses with entries in the TLB require no additional memory references (and therefore no additional time) to translate. However, the TLB can only hold a fixed number of mappings between virtual and physical addresses; when the needed translation is not resident in the TLB, action must be taken to load it. On some processors this is performed entirely in hardware: the MMU makes additional memory references to load the required translations from the translation tables, but no other action is needed. On other processors, assistance from the operating system is needed: an exception is raised, the operating system replaces one of the entries in the TLB with an entry from the translation table, and the instruction that made the original memory reference is restarted.
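The role of the TLB can be pictured with an equally small sketch. The direct-mapped table and the walk_page_table stand-in below are invented for illustration only; a real TLB is a hardware structure inside the MMU. The point is simply that a hit avoids the extra memory references of a page-table walk, while a miss performs the walk and then caches the result.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define OFFSET_BITS 12
#define TLB_ENTRIES 16                            /* a tiny, direct-mapped TLB         */

struct tlb_entry { bool valid; uint32_t vpn; uint32_t frame; };
static struct tlb_entry tlb[TLB_ENTRIES];

static unsigned walks;                            /* count how often we had to walk    */

/* Stand-in for a real page-table walk (which costs extra memory accesses). */
static uint32_t walk_page_table(uint32_t vpn)
{
    walks++;
    return vpn + 100;                             /* fake mapping, for demonstration   */
}

/* Translate a virtual address, consulting the TLB before the page table. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> OFFSET_BITS;
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);
    uint32_t slot   = vpn % TLB_ENTRIES;          /* direct-mapped placement           */

    if (!(tlb[slot].valid && tlb[slot].vpn == vpn)) {
        tlb[slot].frame = walk_page_table(vpn);   /* TLB miss: walk, then cache result */
        tlb[slot].vpn   = vpn;
        tlb[slot].valid = true;
    }
    return (tlb[slot].frame << OFFSET_BITS) | offset;
}

int main(void)
{
    translate(0x1000);                            /* miss: walks the page table        */
    translate(0x1004);                            /* hit: same page, no walk           */
    translate(0x2000);                            /* miss: a different page            */
    printf("page-table walks: %u\n", walks);      /* prints 2                          */
    return 0;
}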
The hardware that supports virtual memory almost always supports memory protection mechanisms as well. The MMU may have the ability to vary its operation according to the type of memory reference (read, write, or execute), as well as the privilege mode of the CPU at the time the memory reference was made. This allows the operating system to protect its own code and data (such as the translation tables used for virtual memory) from corruption by an erroneous application program, and to protect application programs from each other and (to some extent) from themselves (e.g. by preventing writes to areas of memory which contain code).

History

Before the development of the virtual memory technique, programmers in the 1940s and 1950s had to manage two-level storage (main memory or RAM, and secondary memory in the form of hard disks or, earlier, magnetic drums) directly.

Virtual memory was developed in approximately 1959 to 1962 at the University of Manchester for the Atlas Computer, completed in 1962. However, Fritz-Rudolf Güntsch, one of Germany's pioneering computer scientists and later the developer of the Telefunken TR 440 mainframe, claims to have invented the concept in his 1957 doctoral dissertation Logischer Entwurf eines digitalen Rechengerätes mit mehreren asynchron laufenden Trommeln und automatischem Schnellspeicherbetrieb (Logic Concept of a Digital Computing Device with Multiple Asynchronous Drum Storage and Automatic Fast Memory Mode). In 1961, Burroughs released the B5000, the first commercial computer with virtual memory.

Like many technologies in the history of computing, virtual memory was not accepted without challenge. Before it could be regarded as a stable entity, many models, experiments, and theories had to be developed to overcome its numerous problems. Specialized hardware had to be developed that would take a "virtual" address and translate it into an actual physical address in memory (secondary or primary). Some worried that this process would be expensive, hard to build, and take too much processor power. By 1969 the debates over virtual memory for commercial computers were over: an IBM research team led by David Sayre had shown that the virtual memory overlay system worked consistently better than the best manually controlled systems.

Possibly the first minicomputer to introduce virtual memory was the Norwegian NORD-1. During the 1970s, other minicomputer models, such as the VAX models running VMS, implemented virtual memory.

Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor. At first it was done with segment swapping, which becomes inefficient as segments get larger. The Intel 80386 added support for paging, which operates underneath segmentation; the page fault exception could be chained with other exceptions without causing a double fault.
Compilers

A diagram of the operation of a typical multi-language, multi-target compiler.

A compiler is a computer program (or set of programs) that translates text written in a computer language (the source language) into another computer language (the target language). The original sequence is usually called the source code and the output is called object code. Commonly the output has a form suitable for processing by other programs (e.g., a linker), but it may be a human-readable text file.

The most common reason for wanting to translate source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level language to a lower-level language (e.g., assembly language or machine language). A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language.

A compiler is likely to perform many or all of the following operations: lexing, preprocessing, parsing, semantic analysis, code optimization, and code generation.
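A compiler's job can be illustrated on a very small scale. The sketch below is a toy, not a real compiler: it reads an arithmetic expression over single digits, parses it by recursive descent, and emits instructions for an imaginary stack machine. In other words, it translates a "source language" (infix expressions) into a "target language" (PUSH/ADD/MUL instructions), passing through the lexing, parsing, and code-generation steps listed above in miniature.

#include <stdio.h>
#include <ctype.h>

/* Grammar:  expr   -> term { '+' term }
             term   -> factor { '*' factor }
             factor -> digit | '(' expr ')'                                  */

static const char *src;                     /* cursor into the source text   */

static void expr(void);                     /* forward declaration           */

static void factor(void)
{
    if (isdigit((unsigned char)*src)) {
        printf("PUSH %c\n", *src);          /* code generation for a literal */
        src++;
    } else if (*src == '(') {
        src++;                              /* consume '('                   */
        expr();
        if (*src == ')')
            src++;                          /* consume ')'                   */
    }
}

static void term(void)
{
    factor();
    while (*src == '*') {                   /* '*' binds tighter than '+'    */
        src++;
        factor();
        printf("MUL\n");
    }
}

static void expr(void)
{
    term();
    while (*src == '+') {
        src++;
        term();
        printf("ADD\n");
    }
}

int main(void)
{
    src = "2+3*4";                          /* the "source program"          */
    expr();                                 /* emits: PUSH 2, PUSH 3, PUSH 4, MUL, ADD */
    return 0;
}

A real compiler adds symbol tables, type checking (semantic analysis), and optimization passes between the parser and the code generator, but the overall shape, a front end that analyses the source and a back end that produces target code, is the same.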
Linker

Figure of the linking process, where object files and static libraries are assembled into a new library or executable.

In computer science, a linker or link editor is a program that takes one or more objects generated by compilers and assembles them into a single executable program. In IBM mainframe environments such as OS/360 this program is known as a linkage editor. (On Unix variants the term loader is often used as a synonym for linker. Because this usage blurs the distinction between the compile-time process and the run-time process, this article will use linking for the former and loading for the latter.)

The objects are program modules containing machine code and information for the linker. This information comes mainly in the form of symbol definitions, which come in two varieties:

   •   Defined or exported symbols are functions or variables that are present in the module represented by the object, and which should be available for use by other modules.
   •   Undefined or imported symbols are functions or variables that are called or referenced by this object, but not internally defined.

In short, the linker's job is to resolve references to undefined symbols by finding out which other object defines a symbol in question, and replacing placeholders with the symbol's address.

Linkers can take objects from a collection called a library. Some linkers do not include the whole library in the output; they only include its symbols that are referenced from other object files or libraries. Libraries for diverse purposes exist, and one or more system libraries are usually linked in by default.
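The symbol resolution just described can be seen with two small C files. The file and symbol names below (util.c, main.c, counter, increment) are made up for illustration. Compiling each file separately produces an object file whose symbol table records counter and increment as defined in util.o and as undefined in main.o; the linker matches them up when the two objects are combined.

/* util.c: this module defines (exports) two symbols.                        */
int counter = 0;                       /* defined/exported data symbol        */

void increment(void)                   /* defined/exported function symbol    */
{
    counter = counter + 1;
}

/* main.c: this module references symbols it does not define.                */
#include <stdio.h>

extern int counter;                    /* undefined/imported: resolved by the linker */
void increment(void);                  /* likewise resolved at link time             */

int main(void)
{
    increment();
    printf("counter = %d\n", counter); /* prints 1 once util.o has been linked in    */
    return 0;
}

On a UNIX system the two steps are visible as separate commands: cc -c util.c and cc -c main.c produce util.o and main.o, and cc util.o main.o -o prog runs the linker to produce the executable.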
The linker also takes care of arranging the objects in a program's address space. This may involve relocating code that assumes a specific base address to another base. Since a compiler seldom knows where an object will reside, it often assumes a fixed base location (for example, zero). Relocating machine code may involve re-targeting of absolute jumps, loads, and stores.

The executable output by the linker may need another relocation pass when it is finally loaded into memory (just before execution). On hardware offering virtual memory this pass is usually omitted, though: every program is put into its own address space, so there is no conflict even if all programs load at the same base address.

Assembler

Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution, e.g. to generate common short sequences of instructions to run inline instead of in a subroutine.

Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. (The first assemblers, in the early days of computers, were a breakthrough for a generation of tired programmers.) Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC and HP PA-RISC, optimize instruction scheduling to exploit the CPU pipeline efficiently.

More sophisticated high-level assemblers provide language abstractions such as:

   •   Advanced control structures
   •   High-level procedure/function declarations and invocations
   •   High-level abstract data types, including structures/records, unions, classes, and sets
   •   Sophisticated macro processing

Note that, in normal professional usage, the term assembler is often used ambiguously: it frequently refers to the assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler."
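Returning to relocation for a moment, the patching step can be pictured as below. The "machine code" and the relocation table here are invented for illustration; a real object file records, for each absolute address embedded in the code, where that address sits, so that the linker or loader can add the actual load base.

#include <stdint.h>
#include <stdio.h>

/* Toy "object code", assembled as if it would be loaded at address 0.
   The words at offsets 1 and 3 hold absolute addresses of data items.       */
static uint32_t code[] = { 0x10, 0x0004, 0x20, 0x0008, 0x30 };

/* Relocation table: which words in the code contain absolute addresses.     */
static const size_t relocs[] = { 1, 3 };

int main(void)
{
    uint32_t load_base = 0x4000;                 /* where the code really ends up       */

    for (size_t i = 0; i < sizeof relocs / sizeof relocs[0]; i++)
        code[relocs[i]] += load_base;            /* re-target each absolute reference   */

    for (size_t i = 0; i < sizeof code / sizeof code[0]; i++)
        printf("word %u: 0x%04X\n", (unsigned)i, (unsigned)code[i]);
    return 0;
}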
The C Compilation Model

We will briefly highlight key features of the C compilation model here.
The Preprocessor

We will study this part of the compilation process in greater detail later (Chapter 13). However, we need some basic information for some C programs. The preprocessor accepts source code as input and is responsible for:

   •   removing comments
   •   interpreting special preprocessor directives, denoted by #. For example:
       o   #include -- includes the contents of a named file; such files are usually called header files, e.g. #include <math.h> (standard library maths file) and #include <stdio.h> (standard library I/O file)
       o   #define -- defines a symbolic name or constant (macro substitution), e.g. #define MAX_ARRAY_SIZE 100

C Compiler

The C compiler translates source to assembly code. The source code is received from the preprocessor.

Assembler

The assembler creates object code. On a UNIX system you may see files with a .o suffix (.OBJ on MSDOS) to indicate object code files.

Link Editor

If a source file references library functions or functions defined in other source files, the link editor combines these functions (with main()) to create an executable file. External variable references are also resolved here. More on this later (Chapter 34).
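A small program makes these stages concrete. The file name demo.c and the macros below are arbitrary examples; the point is that #include and #define are handled by the preprocessor as purely textual operations (file inclusion and macro substitution) before the compiler proper ever sees the code.

/* demo.c: a minimal example of the preprocessor directives described above. */
#include <stdio.h>                      /* header file copied in by the preprocessor   */

#define MAX_ARRAY_SIZE 100              /* symbolic constant: simple text substitution */
#define SQUARE(x) ((x) * (x))           /* function-like macro, also pure substitution */

int main(void)
{
    int data[MAX_ARRAY_SIZE];           /* after preprocessing: int data[100];         */

    data[0] = SQUARE(7);                /* after preprocessing: ((7) * (7))            */
    printf("%d\n", data[0]);            /* prints 49                                   */
    return 0;
}

With a typical UNIX cc or gcc the individual stages of the model can be observed: cc -E demo.c stops after preprocessing and prints the expanded source, cc -S demo.c stops after compilation and leaves the assembly file demo.s, cc -c demo.c stops after assembly and leaves the object file demo.o, and cc demo.c -o demo runs the link editor as well to produce an executable.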