DESIGN OF A COMPUTER SYSTEM FOR AN INFORMATION SYSTEM
ORGANIZATION
THAT PROVIDES
WEB BASED MARKETING, SALES, AND CUSTOMER SERVICE
FOR AN INDUSTRIAL CONSULTANT AND PATENT LAW
ORGANIZATION
A DIRECTED STUDY PROJECT SUBMITTED TO THE
FACULTY OF THE GRADUATE SCHOOL OF COMPUTER SCIENCE
IN
CANDIDACY
FOR A
MASTER OF SCIENCE IN INFORMATION SYSTEMS
By
Anthony W. Sublett
STRAYER UNIVERSITY NEWPORT NEWS CAMPUS
June 2006
ABSTRACT
This paper details the design and functional aspects of a computer system used by an information system organization. A triangulation research method was used to gather data for this study and for the design of the computer information system. The information system was designed to obtain and store data vital to the daily operation of an information system organization performing web-based marketing, sales, and customer service for an Industrial Consultant and Intellectual Property Law Organization (the information system organization’s parent company). This paper explains how the systems analysis was performed and what its results were, and then discusses how the system was designed on the basis of that analysis. After the system was designed, sample calculations were run to verify and validate system performance with respect to the number of PCs and operators required to handle the predicted influx of business. Running a simulation was also suggested as an alternative method to verify, validate, and predetermine the requirements dictated by the business at the outset of the design problem formulation.
ACKNOWLEDGEMENTS
Professor Martinez Del Rio, Dr. Otto, and Dr. Hayes deserve special thanks for their guidance and insight in helping me organize the specific subject areas of this paper and format it properly so that it would be acceptable under the current DRP guidelines and specifications. I would also like to thank all of my other course instructors in the CIS Department at the Strayer University Newport News campus, as well as my online instructors, who provided me with the theoretical knowledge that enabled me to perform both a quantitative and a qualitative analysis and incorporate it into this paper.
TABLE OF CONTENTS
Abstract
Acknowledgements
List of Appendixes
List of Figures
List of Tables
Chapter
1 Introduction
1.1 Context of Problem
1.2 Statement of the Problem
1.3 Specific Research Questions and Sub-questions
1.4 Significance of the Design (Study)
1.5 Research and Design Methodology (use of interviews, etc.)
1.6 Organization of Study
2 Secondary Research and Design of System
2.1 System Analysis
2.2 Information System Strategic Planning
2.3 Design Issues
2.4 System Processors
2.5 Hard Drive
2.6 Output Devices
2.7 Network
2.8 Functional Aspects of the System
2.8.1 System Software
2.8.2 Application Development Software
2.9 Functional Databases
3.0 Security Introduction
3 Primary Research: System Evaluation and Validation
3.1 Purpose and Justification for Using a Simulation Model
4 Summary and Conclusion
APPENDIXES
Appendix A-1
Appendix B-1
LIST OF FIGURES
Figure 2-1 The Analyst Approach to Problem Solving
Figure 2-2 Company Organization Structure
Figure 2.3 Symmetric Multiprocessing Architecture
Figure 2.4 General Structure of a Client/Server System
Figure 2-5 Interrelationships Between the Computing Models
Figure 2.6 Storage Device Hierarchy
Figure 2.7 Stages in the Requirements Analysis Process
Figure 2.8 Stage 3 – Requirements Evaluation
Figure 2.9 Employee Table Before Normalization
Figure 2.10 Employee and Project Tables in 1NF
Figure 2.11 The Project Table Before 2NF
Figure 2-12 Project and Department Tables in 2NF
Figure 2-13 The Project Table Before 3NF
Figure 2-14 Project and Priority Tables in 3NF
LIST OF TABLES
Table 2-1 HP L1506 Flat Panel Monitor Specifications
Table 2-2 Server Specifications
Table 2-3 Data Categories
Table 3-0 Problem Properties
Table 3-1 Interval Distribution of Incoming E-mails
Table 3-2 Service Distribution of PC1
Table 3-3 Service Distribution of PC2
Table 3-4 Service Distribution of PC3
Table 3-5 Service Distribution of PC4
Table 3-6 Service Distribution of PC5
CHAPTER 1
INTRODUCTION
1.1 Context of Problem:
A new information system organization has been created in Washington, DC. This information system organization is a separate division, and a valued part, of its parent organization, which is also newly formed. The parent organization is an Industrial Consultation and Intellectual Property Law Consulting firm that provides industrial and intellectual property law consulting services to both national and international clients. The information system organization is the lifeblood of the parent company because it provides web-based marketing, sales, and customer service activities for the parent organization. Since the information system organization was a brand new entity, it did not have any computer system in place to assist it in performing the activities listed above.
1.2 Statement of the Problem:
A newly formed information system organization that performs web-based marketing and sales for a patent law and industrial consulting firm requires a newly designed information system that is capable of meeting the organization’s requirements for storing, processing, and transmitting information, and that supports the organization in meeting its objectives and goals.
1.3 Specific Research Questions and Sub-questions:
• How will the system primarily be used, and by whom?
• What role will the system play strategically in the organization?
• What are the functional requirements and capacity requirements of the system?
• How will the system be validated?
• Will the system provide the necessary functions, satisfy the organization’s user requirements, and ultimately add value to the information system organization?
1.4 Significance of the Design (Study):
This paper is significant because it will serve as a functional aid and blueprint for the design, development, and maintenance of the new computer system needed by the information system organization. Without the new computer system, the organization’s ability to perform web-based marketing and sales for the parent organization would be severely crippled. If the information system organization cannot perform its web-based marketing and sales up to or beyond the parent organization’s projected expectations, the parent company could lose money and investors, and could possibly fold. Conversely, increased marketing and the procurement of new clients would contribute greatly to the success of the information system organization and ultimately solidify the parent organization’s foothold in its industry. Ultimately, the role this newly designed computer system plays in procuring, servicing, and retaining customers can make or break the parent organization.
1.5 Research and Design Methodology (use of interviews, etc.):
Triangulation, an integration of qualitative and quantitative analysis, was the DRP research design methodology used. Qualitative research data was obtained through interviews with the CIO and other members of the organization, listed as follows:
Herman Miller, President, CIO, & Senior Counselor, Intellectual Property Law
Michael Taylor, VP Marketing & Sales & IS
Darold Thomas, Chief Industrial Project Consultant & Mechanical Project Engineer
Susan Jones, Executive Secretary & Paralegal
Victor Jones, Director of Industrial Research & Chief Electrical Project Engineer
Rick Hamilton, Chief Marketing & Sales Executive & IS Analyst
Quantitative analysis was performed by obtaining statistical data on the components and analyzing whether they would meet the user requirements. A simulation was also run to validate that the system could operate efficiently enough to perform at the capacity required by the organization.
1.6 Organization of Study:
The system design was based on system requirements specified by the CIO to the designer during a user/client interview process. An analysis was performed and reviewed by the CIO and the super users. This analysis specified details of all of the system hardware, software, and network requirements, as well as how the system would be functionally integrated into each respective department. How and why the various components of the system were selected and implemented as part of the computer system design will be highlighted. Quantitative research, which included statistical analysis and a simulation, was performed to validate the system’s efficiency and verify that it would perform up to the organization’s requirements. Finally, a summary of the design is given and a conclusion drawn on whether the system would be an asset to the information system organization and the parent consulting organization.
- 5-
CHAPTER 2
SECONDARY RESEARCH AND DESIGN OF SYSTEM
This chapter will detail how secondary research was used to perform the system
analysis and define the requirements and functional aspects that were used to select the
hardware, software, network components, and peripherals of the system.
2.1 System Analysis:
The first step in performing this analysis was developing a clear understanding of the information that would be processed by the system during a normal business day. The process used to obtain and organize this information followed the analyst approach to problem solving shown in Figure 2-1 below (Burd, Jackson, and Satzinger, 2004).
Figure 2-1 The Analyst Approach to Problem Solving (Burd, Jackson, and Satzinger, 2004)
[Flowchart steps: research and understand the problem; verify that the benefits of solving the problem outweigh the costs; define the requirements for solving the problem; develop a set of possible solutions (alternatives); decide which solution is best and make a recommendation; define the details of the chosen solution; implement the solution; monitor to make sure that you obtain the desired results.]
2.2 Information System Strategic Planning:
The first step in designing a computer system for any organization is to define the information strategic plan and the basic objectives of the customer support system that is part of the plan. In many instances a consulting company is brought in to think carefully about the entire information technology infrastructure and formulate a strategic information systems plan (Burd, Jackson, and Satzinger, 2004). This particular organization is relatively small and has a small group of diverse employees who each perform the tasks that would normally be required of two or three individuals (for example, one employee has a Bachelor of Science in Mechanical Engineering, a Juris Doctorate in Law, a Masters in Information Systems, and a Masters in Business Administration; another has a Bachelor of Science in Computer Science and a marketing background).
The employees focus primarily on the following strategic objectives:
1. Customer relationship management
2. Supply chain management
“Customer relationship management (CRM) concerns processes that support marketing, sales, and service operations involving direct and indirect customer interaction” (Burd, Jackson, and Satzinger, 2004, pg 19).
“Supply chain management (SCM) concerns processes that seamlessly integrate product development (the products in this case are the industrial and legal consulting services provided by the parent organization), client acquisition, client procurement, and database inventory of potential and existing clients” (Burd, Jackson, and Satzinger, 2004, pg 19). Focusing on both of these strategic objectives will help the business provide services to clients while promoting efficient operations.
The information systems strategic plan includes an application architecture plan, detailing the information systems projects that the organization still has to complete, as well as a technology architecture plan, detailing the technology infrastructure needed to support those systems. Both plans were based on the supply chain management and customer relationship management objectives that are depicted later in this document (Burd, Jackson, and Satzinger, 2004). The organizational structure of the company is shown below in Figure 2-2.
Figure 2-2 Company Organization Structure
[Organization chart showing: Herman Miller, President & CIO and Senior Counselor, Intellectual Property Law; Michael Taylor, VP Marketing & Sales & IS; Darold Thomas, Chief Industrial Project Consultant & Mechanical Project Engineer; Victor Jones, Director of Industrial Research & Chief Electrical Project Engineer; Susan Jones, Executive Secretary & Paralegal; Rick Hamilton, Chief Marketing & Sales Executive & IS Analyst.]
The individuals and their respective positions are shown above in Figure 2-2. Note that all departments feed into the marketing and sales organization, which in turn feeds into the information systems (IS) organization.
The IS organization is broken down into two distinct areas: system support and system development. System support is composed of the following functional areas: telemarketing, database administration, operations management, and user support. System development involves project management, system analysis, program analysis, and clerical support. These tasks are performed by Herman Miller, the CIO, and Rick Hamilton; see Figure 2-2 above (Burd, Jackson, and Satzinger, 2004).
The information system strategic plan that was developed includes a technology
architecture plan and an application architecture plan. The main features of both of these
plans are as follows:
Technology Architecture Plan:
1. Strategically move toward conducting business processes via the internet, first supporting new client acquisition, client service demands, and web innovation.
2. Anticipate and react to targeted customer demands in order to supply customers with company portfolio information and relevant data on their competitors.
3. Anticipate the move toward web-based internet solutions for information systems.
4. Ensure the intranet hardware is in place and the technology is continuously updated to aid in the acquisition, transfer, and storage of information throughout the organization.
5. Ensure that adequate security software is in place on the system and updated frequently to negate fraudulent cyber activities.
Application Architecture Plan:
1. Supply Chain Management (SCM): implement systems that seamlessly integrate products (services in this case), development, product services acquisition, new client procurement, existing client retention, referral database information, distribution and storage database inventory, and inventory management in anticipation of rapid client growth.
2.3 Design Issues:
Making the multiplicity of processors and storage devices transparent to the users was the primary focus. The user interface of a transparent distributed system should not distinguish between local and remote resources: ideally, users will be able to access databases on their PCs from any remote location within the facility. Users will be able to log onto any machine by using their roaming profile, which will contain an encrypted password and username. Another major form of transparency is the mobility of user address books; users will be able to access their address books on their home system or from the office. The system shall be fault tolerant, so it will still be able to perform even if some of its components fail. A DEC VAX cluster, which allows multiple computers to share multiple sets of disks, will be used to contribute to the fault tolerance of the system. The system will also have some degree of scalability, the ability to adapt to an increased service load, so system resources will take longer to reach a saturated state. The service demand from any component of the system will be bounded by a constant that is independent of the number of nodes in the system.
2.4 System Processors:
The system will utilize multiple processor boards (MPU-A) that will share the
same physical memory by using a Shared Memory Facility (SMF). The SMF will allow
up to six processors to communicate via a shared memory, and allow any processor to
access several shared memory systems. I/O interfaces will also be shared when selected
as memory addresses instead of port numbers. Each processor is allowed up to 64K bytes
of its own local memory, less the amount of memory in the shared block. The shared
memory may be up to 64K bytes (Wilson, 2003).
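The communication pattern the SMF provides — several processors reading and writing one shared block — can be illustrated in miniature with Python's standard shared-memory support. This is only an analogy sketch of the idea, not part of the hardware design; the block name and the 64K size are illustrative.

```python
from multiprocessing import shared_memory

# Mirror the SMF's maximum shared-memory size of 64K bytes.
SHARED_SIZE = 64 * 1024

def create_shared_block(name: str) -> shared_memory.SharedMemory:
    """Create the shared block that all cooperating processes may attach to."""
    return shared_memory.SharedMemory(name=name, create=True, size=SHARED_SIZE)

def attach_and_read(name: str, offset: int) -> int:
    """A second process attaches to the block by name and reads one cell."""
    shm = shared_memory.SharedMemory(name=name)
    value = shm.buf[offset]
    shm.close()
    return value

block = create_shared_block("smf_demo")
block.buf[0] = 42                       # "processor A" writes into shared memory
print(attach_and_read("smf_demo", 0))   # "processor B" reads the same cell -> 42
block.close()
block.unlink()
```

As in the SMF, both sides address the same physical storage; the operating system (standing in for the MAPT-6 controller) arbitrates access to it.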
The SMF will include:
• MABP-3 Access Port board, a bus multiplexer board composed of three identical ports used to switch information between one of three processors and the Shared Memory Bus. Two MABP-3 boards will be used to make a six-way switch controlled by the MAPT-6 Shared Memory Controller board. Each MABP-3 board will contain three complete sets of logic, each comprising an address decoder, request latch, and bus buffers. When two boards are used in a system, they will provide a six-way bus switch in which the six processor ports differ only in the priority assigned to them on the MAPT-6 board. Any processor will have access to information in shared memory, as well as access to any other memory space, except that it may need to wait while the SMF services higher-priority requests. An EXPM edge connector will be used to install each MABP-3 board (Wilson, 2003).
• MAPT-6 Shared Memory Controller board, which will perform the timing and control functions of the Shared Memory Facility. Its logic will include a latched priority encoder that generates the signal PORT SELECT I, where I is the processor presently with the highest priority. This signal enables the related bus drivers on the MABP-3 boards to perform bus switching. Other logic elements will form a sequential logic network that generates signals controlling the memory. These signals will include SYNC, which substitutes for the SYNC signal normally generated by the processor during a fetch cycle and which may be used for other system functions. Another EXPM edge connector will be used to install the MAPT-6 board.
Both boards will be connected to a special Mother Board section (the Shared Memory Bus) in a standard IMSAI 8080 chassis. Memory boards shared by the processors will also be plugged into this section. The SMF is completed by Bus boards plugged into the Mother Board of each processor, and cables connecting the Bus boards to the MABP-3 boards on the Shared Memory Bus. A separate front panel will be connected (through an extender board) to each processor. The processors may be contained in separate cabinets connected only by the Bus board and associated cable.
Bus Boards:
Each processor sharing memory will require a Bus Board and cable to connect its Bus to the MABP-3 board. Because of the required length of the runs, an 18-inch flat cable with 50-pin card edge connectors will be used to connect a Bus board to a MABP-3 board (Wilson, 2003).
The multiprocessor will share the computer bus, the clock, and sometimes memory and peripheral devices (Gage, Galvin, Silberschatz, 2004). Multiprocessor systems have the following three advantages:
1) Increased throughput: By increasing the number of processors, more work can get done in less time. “The speed-up ratio with N processors is not N; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all of the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of N programmers does not result in N times the amount of work being accomplished” (Gage, Galvin, Silberschatz, pg 12).
2) Economy of scale: Multiprocessor systems will save more money than multiple single-processor systems because they can share peripherals, storage space, and power supplies. They allow several programs to operate on the same set of data on one disk and have all the processors share them, as opposed to having many computers with local disks and many copies of the data.
3) Increased reliability: Functions can be distributed among several processors, which negates system failure, or a decrease in speed, due to a faulty processor within the network. This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. “Systems designed for graceful degradation are also called fault tolerant” (Gage, Galvin, Silberschatz, pg 12).
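The claim that the speed-up ratio with N processors is less than N can be made concrete with Amdahl's law, a standard model (not taken from the cited text) in which the serial fraction of the work bounds the achievable gain:

```python
def speedup(n_processors: int, serial_fraction: float) -> float:
    """Amdahl's law: only the parallelizable part of the work is divided
    among the N processors, so the speed-up stays below N whenever any
    serial (overhead/contention) fraction remains."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With 25% serial work, three processors yield only a 2x speed-up, not 3x.
print(speedup(3, 0.25))   # -> 2.0
```

Note also that as N grows, the speed-up can never exceed 1/serial_fraction, which is why adding processors shows diminishing returns.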
This multiple-processor system will use symmetric multiprocessing (SMP), in which each processor runs an identical copy of the operating system. SMP basically means that all processors work equally; there is no CPU hierarchy (see Figure 2.3 below). The advantage of this model is that many processes can run simultaneously: N processes can run if there are N CPUs, without causing a significant deterioration of performance.
Figure 2.3 Symmetric Multiprocessing Architecture
[Diagram: several CPUs connected to a shared memory.]
Client/Servers:
Server hardware capabilities depend upon the resources being shared and the number of simultaneous users (Burd, 2006). In this case the design capacity specified for the server was an average of 12 people possibly accessing the network at one time (even though at present there are six employees). A distributed client/server system will be used, which will allow the clients, or PC users, to access data as if it were stored centrally. A distributed configuration over a Wide Area Network (WAN) will be used so that the servers can be synchronized regularly to ensure that they hold the same data. The system will be centralized, which will enable it to act as a server system satisfying requests generated by client systems. The general layout of a client/server system is depicted in Figure 2.4 below. Server systems are broadly categorized as compute servers and file servers. Compute-server systems provide an interface to which clients can send requests to perform an action, in response to which they execute the action and send back the results requested by the user or client. File-server systems provide a file-system interface where clients can create, update, read, and delete files.
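To make the compute-server/file-server distinction concrete, here is a toy in-process sketch of the two flavors. Real servers would sit behind a network interface; the class and method names are illustrative, not part of the actual design:

```python
class ComputeServer:
    """Receives a request to perform an action, executes it, and returns the result."""
    def handle(self, action, *args):
        operations = {"upper": lambda s: s.upper(), "sum": lambda xs: sum(xs)}
        return operations[action](*args)

class FileServer:
    """Offers a create/read/update/delete interface over named files."""
    def __init__(self):
        self._files = {}
    def create(self, name, data=""):
        self._files[name] = data
    def read(self, name):
        return self._files[name]
    def update(self, name, data):
        self._files[name] = data
    def delete(self, name):
        del self._files[name]

compute = ComputeServer()
print(compute.handle("upper", "quote request"))   # -> QUOTE REQUEST

files = FileServer()
files.create("clients.txt", "Acme Industrial")
print(files.read("clients.txt"))                  # -> Acme Industrial
```

The key contrast is the interface: the compute server is asked to *do* something, while the file server is asked to *store and retrieve* something.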
Figure 2.4 General Structure of a Client/Server System (Gage, Galvin, Silberschatz, 2004)
[Diagram: several client machines connected to a server over a network.]
A distributed computing system (DCS) was selected that would allow the clients, or PC users, to access data as if it were stored centrally. A distributed computing system is closely related to a client/server system. Technically, a DCS is a collection of autonomous computers interconnected through a communication network in order to carry out the business functions of the organization (Umar, 1997). None of the systems connected by the network share memory; information is exchanged through messages on the network. In essence the computers have to work together and cooperate with each other in order to meet organizational requirements (Umar, 1997). The client/server model is a special case of the distributed computing model; Figure 2-5 below shows the relationship between the two.
Figure 2-5 Interrelationships Between the Computing Model
The fact that the network computers of this system do not share memory is what sets it apart from a multiprocessor system.
The client/server (C/S) model, which is one of three models that can be used to achieve distributed computing, was used in this system design. The C/S model allows processes at different locations to readily exchange information and is more interactive than the file transfer model (Umar, 1997).
Client Processes:
The client process performs the application functions on the client’s behalf. Client processes may range from simple user interfaces to processes producing spreadsheets or analytical data such as graphs. (Figure 2-5 above shows the relationship among the computing models: the terminal-host model and the distributed computing model, the latter comprising the file transfer, peer-to-peer, and client/server models.) Client processes generally have some of the following characteristics:
• It interacts with the user through a user interface, typically a graphical user interface or an object-oriented interface.
• It performs some application functions if needed, e.g. user interface processing, spreadsheets, report generation, and object manipulation.
• It interacts with the client middleware by forming queries and/or commands in an application program interface, the format that is readily understood by the client middleware.
• It receives responses from the server middleware, which it displays, if need be, on the user interface.
Server Processes:
The server process performs the application functions on the server’s side and has the following characteristics:
• It provides services to the client, some of which may be as simple as providing the time of day. Other services may be more complex, such as processing an electronic transfer of funds.
• An ideal server will hide its activities, so the client has no knowledge of the details of the functions that have been carried out on its behalf.
• It is invoked by the server middleware and returns its results back to the server middleware.
• It may provide some limited scheduling services when multiple clients are being provided service concurrently, but generally speaking, scheduling is a service that the middleware is responsible for.
• It provides error-recovery and failure-handling services (Umar, 1997).
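A minimal sketch of a server process with these characteristics — the simple time-of-day service mentioned above, plus the failure-handling bullet — might look like the following; the request/response dictionary format and the function names are assumptions for illustration only:

```python
import datetime

def server_process(request: dict) -> dict:
    """Dispatch a client request to a service; errors are caught and reported
    as a structured reply, so the client never sees a raw crash."""
    services = {
        "time_of_day": lambda: datetime.datetime.now().isoformat(timespec="seconds"),
    }
    try:
        service = services[request["service"]]
        return {"status": "ok", "result": service()}
    except KeyError:
        # Failure handling: an unknown (or missing) service name yields an
        # error reply instead of an unhandled exception.
        return {"status": "error", "reason": "unknown service"}

print(server_process({"service": "time_of_day"})["status"])     # -> ok
print(server_process({"service": "transfer_funds"})["status"])  # -> error
```

The client sees only the reply dictionary, which matches the bullet above about the server hiding the details of how its work is carried out.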
Application Architecture:
A two-tier architecture with a distributed application program will be used. The application programs shall be split between the client and server machines, which will communicate with each other through remote-procedure-call (RPC) middleware. The remote data will be stored in an SQL server and accessed through ad hoc SQL statements sent over the network. This architecture was selected because of its ability to be supported by SQL Windows software (Umar, 1997).
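The data-access half of this two-tier design can be sketched with Python's built-in sqlite3 module standing in for the SQL server; in the real system the same ad hoc statements would travel over the network to the remote server. The table and column names are invented for illustration:

```python
import sqlite3

# In-memory sqlite3 stands in for the remote SQL server tier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.execute("INSERT INTO clients (name, status) VALUES (?, ?)",
             ("Acme Industrial", "prospect"))
conn.commit()

# The client tier issues an ad hoc SQL query and receives the result set.
rows = conn.execute("SELECT name FROM clients WHERE status = ?",
                    ("prospect",)).fetchall()
print(rows)  # -> [('Acme Industrial',)]
```

The parameterized `?` placeholders illustrate good practice for ad hoc statements, since query text assembled from user input would otherwise invite SQL injection.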
Middleware:
The C/S middleware provides information exchange services that give the user the ability to access services such as remote databases and real-time message exchanges between remote programs, supported by directory, security, and failure-handling services. The middleware can be viewed as the backbone of the client/server system. The following questions were addressed in designing the client/server system and determining what middleware would be used:
1. What basic services would the C/S middleware have to provide, and who would provide them?
2. What type of information exchange services would be required of the middleware?
3. What type of management and support services would be provided by the middleware?
4. What are the functional aspects of OSF DCE, and how will they provide the basic C/S middleware services (Umar, 1997)?
Network Operating System:
The network operating system (NOS) will provide all of the basic C/S middleware services, thus providing the user with transparent access to printers, files, and databases across the network. Windows 2000 was the NOS selected to support the RPC paradigm previously mentioned; it will provide the server with the capability to access remote files, printers, e-mail, directory, and backup recovery services. The Open Software Foundation’s Distributed Computing Environment (OSF DCE) will be used to integrate remote procedure calls with security, directory, time, file, and print services. OSF DCE is a good choice for providing an open C/S environment in which clients and servers from different vendors can operate with each other through the network, and with the users of the system at the subject information system’s location. This middleware will provide the following:
• Remote communication interfaces (RCIs) for sending and receiving information across the network. An RCI may use basic protocols such as remote procedure call (RPC), remote data access, and message-oriented middleware (MOM).
• Global directories that show the location of resources (servers, databases, files, printers) in the network.
• Print and file services (Umar, 1997).
The client side of the middleware can be quite intense because it is responsible for providing the user access to printers, files, databases, and other software and hardware through network connections and network gateways that may convert network protocols. The Windows 2000 NOS was selected because of its ability to cement together C/S applications enterprise-wide, via a network, through gateways that convert protocols from one NOS to another. This would allow the information system to achieve system objectives and reduce failures in network operations capacity. This is a crucial element of the parent organization’s ability to market its product niche, transmit pertinent data to respective clients, and receive data from clients at real-time processing speed. This middleware will cost an estimated $400 per client PC (Umar, 1997).
Utilization of the RPC Paradigm:
Generally speaking, in the RPC paradigm the client process invokes a remotely located procedure, and in turn the remote procedure executes and sends the response back to the client process. Note that each request/response of an RPC is treated as a separate unit of work; therefore each request must carry enough information for the server to generate and send back a response. Because a portion of the information is at the client and a portion is at the server, an Interface Definition Language (IDL) will be used to define what will be passed back and forth between the client and server by the RPC. Remote Data Access (RDA) was not used, due to the insignificant amount of variation in the type of data being sent and received.
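The request/response flow of the RPC paradigm can be sketched with Python's standard XML-RPC modules: the client invokes what looks like a local call, and the middleware marshals the request to the server and the result back. The `prepare_quote` procedure and the port handling are illustrative, not part of the actual design:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def prepare_quote(client_name: str) -> str:
    """The remote procedure; its signature is what an IDL would describe."""
    return f"Quote prepared for {client_name}"

def start_server():
    """Run the RPC server on an OS-assigned port in a background thread."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(prepare_quote)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

server, port = start_server()
proxy = ServerProxy(f"http://127.0.0.1:{port}")
# The remote invocation reads exactly like a local call.
print(proxy.prepare_quote("Acme Industrial"))  # -> Quote prepared for Acme Industrial
server.shutdown()
```

Each `proxy.prepare_quote(...)` call is a self-contained request carrying everything the server needs to produce a response, matching the unit-of-work property described above.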
Gateways and Applets:
The two comparable types of gateways on the market are Java-based gateways and CGI-based gateways. The primary difference between the two is that CGI gateway programs (which reside on the web server) are executed on the server platform and perform defined, specialized functions, as opposed to simply retrieving and displaying an existing HTML page (Umar, 1997). Java gateways, on the other hand, distribute the code for the target application and send it to the web client, where it is executed. This allows Java applets to be embedded in HTML pages and sent to web browsers, where they execute. This methodology allows remote application access to databases directly from the browser, which is a powerful tool (Umar, 1997). With a database gateway, a Java applet can be invoked that asks the user to issue a query and then sends that query to a remote application or database.
This is a great user option where the database functionality runs on the client side (Umar, 1997). Java-based applets can be called from an HTML document with parameters passed in either direction. Scripting languages such as JavaScript and VBScript also allow scripts to be embedded in HTML, ultimately providing gateways for program access without compilation or link editing (Burd, 2006). A Java applet, which executes inside another program such as a web browser, runs within a restricted area known as a sandbox. The sandbox provides extensive security controls that prevent the applet from accessing unauthorized resources or damaging the hardware, operating system, or file system (Burd, 2006). As depicted above, the capabilities of Java gateways far exceed those of CGI gateways; therefore Java gateways were selected for this system to bridge the gap between web browsers and the corporate applications and databases.
Web Site:
In a teaming effort, the web site will be set up by the marketing and IT departments of the information systems organization. The information systems organization will buy the site on the web as opposed to leasing or renting it. The web site software will coexist with the other existing network software described above, and the network configuration for the site has also been depicted above. Backup site and administration tasks will be designated once the system has been constructed and put into operation.
2.5 Hard Drive:
One-hundred-gigabyte magnetic disks will provide the bulk of the secondary storage space for each PC. Each disk has a flat circular shape, like a CD. Common platter diameters range from 1.8 to 5.25 inches; 5.25-inch disks will be employed in these machines. The disk arm will be capable of reading information off a disk rotating at 200 times per second. To combat possible volatile storage losses during a temporary power loss, the system will have an electronic-disk device containing a hidden magnetic hard drive and a battery for backup power. If external power is interrupted, the electronic-disk controller will copy the data from RAM to the magnetic disk; after external power has been restored, the controller will copy the data back into RAM (Gagne, Galvin, Silberschatz, 2004). In the hierarchy shown below in Figure 2.6, the storage systems above the electronic disk are volatile, whereas those below it are non-volatile.
Figure 2.6 Storage Device Hierarchy (Gagne, Galvin, Silberschatz, 2004)
(top to bottom: registers, cache, electronic disk, magnetic disk, optical disk, magnetic tapes)
2.6 Output Devices
Monitors:
The monitor that was selected for the PC workstations, along with its specifications, is depicted below in Table 2-1.
Table 2-1 HP L1506 Flat Panel Monitor Specifications
HP L1506 Flat Panel Monitor
Viewable area 15.0 in (38.1 cm)
Pixel pitch 0.297 mm
Brightness Up to 250 nits
Contrast ratio Up to 450:1
Response rate (typical) 16 ms
Viewing Angle - Horizontal 130 °
Viewing Angle - Vertical 100 °
Native resolution 1024 x 768@60 Hz
Interface (analog/digital) Analog
Ports
Warranty - year(s) (parts/labor/onsite) 3/3/3
Printers / Scanners / Copiers:
For its speed and durability, the HP LaserJet 3390 All-in-One (AiO) was selected as the printer. “This unit lets you print and copy complete, high-quality documents in no time, at speeds of up to 22 pages per minute (ppm) letter.
Work confidently—this product won’t hold you up. Instant-on Technology delivers
fast first page out speeds—less than 11 seconds copying and less than 8.5 seconds
printing—so you won’t waste valuable time waiting for output. You can pick up your
completed job before our competitors’ devices have even begun to warm up.
Enjoy superior quality with genuine HP print cartridges. With HP Smart printing
technology, the HP print cartridge and the device work together to achieve consistent,
outstanding print quality, enhanced reliability, and a system that is easy to use, manage,
and maintain. Automatic alerts let you know when the cartridge is low or out, and you
can depend on convenient online ordering with HP Sure Supply™
Be flexible with enhanced finishing features. Standard automatic two-sided printing
helps you conserve paper. Plus, this compact desktop product multitasks—if someone is
printing or copying, you can still receive that important fax you’ve been waiting for..3
Maintain your momentum. With an optional 250-sheet input tray 3, you can get a total
input capacity of up to 500 sheets, which means you can print more pages with less
intervention and depend on the product to be ready when you need it.
Increase work team efficiency. The product effortlessly handles your volume demands
with 64 MB memory (expandable to 192 MB), 4 MB fax memory, and a fast, reliable
engine.
Get comprehensive print language support. Print complex documents with built-in
support for HP PCL5e, HP PCL6, and HP postscript level 3 emulation
Share the All-in-One capabilities. Easily and reliably connect multiple users with
standard 10/100Base-T Ethernet networking or take advantage of easy direct connections
with the Hi-Speed USB 2.0 port.
Manage your All-in-One simply. HP Toolbox FX lets you interact with your product
from the comfort of your desk, with configuration, status, and support for every feature.”
Servers:
The HP Integrity rx7640 Server achieves high levels of functionality and value
compared with other server products in the mid-range class. Combined with the
processor-enhancing capabilities of the HP Super-Scalable Processor Chipset sx2000, the
rx7640 server delivers breakthrough performance, outstanding scalability, and simplified
management at an exceptional value. In addition, multiple-operating-system support
gives you the freedom to run the right solution for your IT requirements. The rx7640
server makes highly available enterprise computing an affordable reality.
• 2-8 Intel® Itanium® 2 processors (1.6 GHz)
• 64 GB memory capacity
• Up to 2 hardware partitions (nPars)
• Up to 15 hot-plug PCI-X slots (see Table 2-2 below for server specifications)
Table 2-2 Server Specifications
Microprocessor architecture: 8 Intel® Itanium® 2 processors (1.6 GHz with 6 MB L3 cache)
Main memory: bus bandwidth – 34 GB/s peak (32 GB/s sustained); capacity – 64 GB max; memory slots – 32 DIMM slots
Operating systems: HP-UX 11i v2; Microsoft® Windows® Server 2003 Enterprise Edition; Microsoft Windows Server 2003 Datacenter Edition
Internal storage devices: internal HDD drive bays – 4; removable media – 1 open bay for 1 DVD-RW or DDS, or 2 slimline DVDs
Maximum HDD (internal): 1200 GB (4 x 300 GB)
Expansion slots: PCI-X slots available – 15 x 266 MHz; I/O bandwidth – 23 GB/s peak (16 GB/s sustained)
Core I/O interconnect: 10/100/1000BT LAN; Ultra160 SCSI; 100BT management LAN; RS-232 serial ports
Rack-optimized design: rackmount solution allows the server to fit into a 10U (44.45 cm height) space in all HP racks (Rack System/E and 10000 series racks); J1530B: HP field rack kit for server and server expansion unit; J1530BZ: HP factory rack kit for server and server expansion unit; for a complete list of racks and rack accessories, refer to http://h30140.www3.hp.com/
2.7 Network:
“In order to allow computers on a network to communicate, the information has to get from the user's application to the network interface card and across the network into the user's application at the destination. This gets more complex when there may be computers of completely different architectures on the network, and even more complex when there are networks of different types connected together.”
A Collaborative Network, which is a web-based network that uses the World Wide Web to combine both distributed and centralized technologies, will be used to network the PCs in the information system with the long-term client databases. This will allow information to be readily exchanged between the consultants, lawyers, and clients. Each computer will operate as a separate workstation but will be able to receive resources from the server and share its own applications and files. Each computer in this collaborative network will also have the capability to work as a client machine that can receive and process data but not share its own resources. There will be a web browser installed on each client that will act as a terminal that sources information from the centralized server. This will give each computer (client) the power of centralized computing. These network computers will communicate with the web using a protocol called TCP/IP. Sections of the information system organization's intranet will be connected to sections of the global internet so that authorized clients can access specified databases within the company network. These extended intranets are known as extranets or virtual private networks.
This collaborative network will operate much like a mainframe computer: it will receive information requests from web browsers, retrieve the requested structures, and deliver the requested data. These servers will have the ability to distribute data anywhere in the world and provide an open network on which global users can exchange information. The browser will grant system capabilities to clients, such as paying for services over a secured internet line (via the secured server) with credit cards.
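The TCP/IP request-and-reply exchange described above can be sketched with standard Java sockets. The loopback address, the request text, and the "OK:" reply format below are illustrative assumptions, not the system's actual protocol; an ephemeral port (port 0) is used so the sketch runs anywhere.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpEchoSketch {
    // Sends one request line over TCP and returns the server's one-line reply.
    public static String roundTrip(String request) {
        try {
            ServerSocket server = new ServerSocket(0);   // port 0: the OS picks a free port
            Thread serverThread = new Thread(() -> {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                    out.println("OK: " + in.readLine());  // echo the request back with a status prefix
                } catch (IOException ignored) { }
            });
            serverThread.start();

            String reply;
            try (Socket socket = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println(request);                     // the request travels as one TCP line
                reply = in.readLine();
            }
            serverThread.join();
            server.close();
            return reply;
        } catch (IOException | InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("GET client-record C-100"));  // prints "OK: GET client-record C-100"
    }
}
```

The same pattern, with the server and client on separate hosts, underlies the browser-to-server traffic the section describes.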
Networking Protocols:
The network protocols will be categorized according to the communicative tasks they perform. There are two categories of protocols, as follows (see figure 2.6 below):
• Application layer or upper-layer protocols operate at the application, presentation, and session layers of the OSI model. These are also called user-oriented protocols because they allow data to be exchanged between the network users and the applications. These protocols will first establish connections between network users and applications and then initiate data transmission.
• Transport layer protocols work at the transport layer of the OSI model.
Network Architecture:
Network Software:
The primary objective of the network system is to allow multiple users to run programs that allocate and compete for various resources simultaneously on the same system. Reducing the complexity of allocating resources, whether they are available locally or from a remote location, was a secondary network design objective. In order to meet this tall order, the system software has to perform the following (Burd, 2006):
• Find the requested resources on the network
• Negotiate resource access with distant resource allocation software
• Receive and deliver the resources to the requesting user or program
If the computer system makes its local resources available to other computers, then the system software must implement an additional set of functions:
• Listen for resource requests
• Validate resource requests
• Deliver resources via the network (Burd, 2006)
Basically, the system software plays the roles of both access and response with regard to network access. The Windows XP operating system will implement the intelligence needed to respond to external resource requests. Internet Explorer, a self-contained component packaged within the Windows XP operating system, will enable the system to access the internet (Burd, 2006).
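The listen/validate/deliver responsibilities above can be sketched as a single in-memory resource broker. The resource names, user IDs, and return strings below are hypothetical placeholders, not the system's actual catalog.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ResourceBrokerSketch {
    private final Map<String, String> catalog = new HashMap<>();   // resource name -> content
    private final Set<String> authorizedUsers = new HashSet<>();

    public ResourceBrokerSketch() {
        catalog.put("client-db", "client records");   // illustrative sample data
        authorizedUsers.add("consultant01");
    }

    // Find: does the requested resource exist on this node?
    public boolean find(String resource) { return catalog.containsKey(resource); }

    // Validate: is the requester allowed to access it?
    public boolean validate(String user) { return authorizedUsers.contains(user); }

    // Deliver: hand the resource back, or a denial message.
    public String deliver(String user, String resource) {
        if (!find(resource)) return "NOT FOUND";
        if (!validate(user)) return "ACCESS DENIED";
        return catalog.get(resource);
    }

    public static void main(String[] args) {
        ResourceBrokerSketch broker = new ResourceBrokerSketch();
        System.out.println(broker.deliver("consultant01", "client-db"));  // prints "client records"
        System.out.println(broker.deliver("intruder", "client-db"));      // prints "ACCESS DENIED"
    }
}
```

In the deployed system these checks would run behind the network listener rather than in-process, but the decision sequence is the same.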
2.8 Functional Aspects of the System:
The primary task of software is to translate the user’s requirements into CPU
instructions that the system can understand, process, and respond to with formatted
information that can readily be understood by the user. Some software can be complex
because it bridges the following two gaps during the transformation process:
• Human language to machine language
• High-level abstraction to low-level detail
The two primary types of software used in a system, System Software and Application Development Software, are defined below, and, more importantly, the software selected for this particular system is identified in the following subsections.
2.8.1 System Software
System software's primary function is to allocate resources to satisfy user requirements. In a nutshell, system software can be divided into the following layers:
• System management – utility programs that control and manage the computer's resources for the user
• System services – utility programs used by system management and application programs to perform common actions
• Resource allocation – utility programs that allocate hardware and other resources among multiple users and programs
• Hardware interface – utility programs that interact with and administer control over hardware devices (Burd, 2006)
The operating system, which is a collection of utility programs, supports the system users and application programs. It facilitates user and application access to programs and hardware. The primary functions of Windows 2000, the operating system selected for this designed computer system, are as follows:
• Program storage, loading, and execution
• File manipulation and access
• Secondary storage management
• Network and interactive user interfaces
Web server software will be an optional component of the operating system software that will be included on all of the servers (see the client/server section above).
2.8.2 Application Development Software:
Application Development Software is composed of binary instruction code. Java and Visual Basic are two widely recognized examples of development software. Program translators such as the C++ compiler translate program instructions into CPU instructions. Java will be used to interact with the CPU for requesting web-based functions, and C++ will be used to translate requests for the CPU to process technical data.
2.9 Functional Data Bases:
This section provides an overview of the requirements analysis process that was
implemented during the actual database design. The design team, composed of a
business/systems analyst and database designer, began requirements analysis process by
recording the requirements of the business. Once they recorded these requirements, they
analyzed the information to determine whether they identified the user, customer, and
business needs correctly. After identifying the business requirements and goals, they
began the database design process.
In order to design a database for a business, the designers needed to understand
and document the requirements of the business. This allowed them to create and
implement a database system that satisfied the organization's business requirements and
the needs of its customers and end users (Burd, Jackson, & Satzinger, 2004). The
database system requirements analysis process involved determining, analyzing and
evaluating the business requirements (see figure 2.7 below).
Figure 2.7 Stages in the Requirements Analysis Process
The fundamental elements that were determined included the rules, data, and
processes that defined the business. These elements define what the business required of
the database and how the database system should be built in reference to those
requirements (Burd, Jackson, & Satzinger, 2004). Once the business requirements were
gathered, the design team proceeded with analyzing those requirements. By analyzing the
requirements, they determined the system requirements, and used those requirements to
generate a functional requirements specification. The functional requirements
specification included the development of process models, Data Flow Diagrams (DFDs)
and a high-level conceptual data model of the database called a conceptual Entity-Relationship Diagram (ERD). The business/systems analyst produced the process models and DFDs, and the database designer generated the conceptual ERD (Rob, Coronel, 2004). Once the requirements were analyzed and documented, the analysis was evaluated. This was a three-step iterative process: had the analysis proved incorrect, the design team would have been required to repeat the entire process until a final workable solution was defined, but this was not the case.
Determining Business Requirements:
Requirements of the business were acquired by collecting information from users about their tasks, about how data was interchanged and exchanged in the organization, and about what factors would affect the business processes. To find out what tasks users perform, the business was divided into departments, and the departments were then subdivided into the functions that each user performs. The users were interviewed, and information was gathered about the work and tasks they performed daily. This was done to determine what information each department dealt with, the types of processes that would be involved in the tasks, and the business rules of the department. Because information and processes are interdependent, information had to be determined in conjunction with the processes it would affect, which means that neither could be viewed in isolation (Rob, Coronel, 2004). Once a determination was made about the information and associated processes, the business rules associated with these were documented. For example, a business rule may determine how information can be accessed or how information relates to other information. Business rules are often the determinant of database referential integrity (Rob, Coronel, 2004).
Business Requirements Analysis:
The primary reason for researching and analyzing the business requirements is to
generate a high-level conceptual data model for the proposed database. This high-level
conceptual data model is called a conceptual ERD. An ERD is a concise description of
the business data requirements and includes detailed descriptions of data groupings or
entities, the relationships between the entities, and some of the constraints imposed upon
data defined by the entities (Rob, Coronel, 2004).
Table 2-3 Data Categories
Category – Definition
Entity – a group or category of data or objects
Relationship – a connection between objects in a database
No implementation details were defined at this stage, and an ERD was created to
ensure that the data requirements of the business had been met. The ERD, which is in a
non technical format, was used to communicate the structure and content of the proposed
database to users. The ERD defined the structure of the data without being concerned
with the storage or implementation details of the final database. It allowed the user to see
the larger picture clearly, and once verified, to decompose the ERD into a lower level
detailed logical data model. This ERD and the data operations defined by process models
and DFDs represented the requirements of the business and how the proposed system would be developed. At this point, the process models and DFDs could be cross-checked against the ERD, and a determination was made as to whether the business requirements had been met and whether the resulting database system would be functional and complete (Rob, Coronel, 2004). A modification of the ERD could have been made had the user discovered any inconsistencies between the process models, DFDs, and ERD.
Requirements Evaluation:
In the final stage of the three-step requirements analysis process, the results were
evaluated. Requirements evaluation is the process of determining whether the business
requirements that are gathered and analyzed from end users and management meet the
initial requirements of the users and business involved. During this phase, the design
team needed to ensure that they had covered all the information and interpreted the
information correctly. In addition, a check was made for any conflicts in the information that they had collected and analyzed. No significant conflicts were found; therefore, a revision to the requirements analysis process was not necessary.
Figure 2.8 Stage 3 – Requirements Evaluation
Normalization in Database Design:
Normalization optimizes database management by reducing or eliminating
redundant data and ensuring that only related data is stored in a relational database table.
Normalization is based on levels or rules called normal forms (DeBetta 2005). Currently, five
normal forms – numbered one through five – have been defined. Every normal form
builds on the previous one.
When you normalize a database, you want to
• arrange the data into logical entities that form part of the whole
• minimize the amount of duplicate data that is stored in a database
• design a database in which users can access and modify the data quickly
• ensure the integrity of the data in the database
• optimize query times (DeBetta 2005)
First Normal Form (1NF):
The objective of 1NF is to divide the database data into logical entities or tables. After each entity has been designed, a primary key is assigned as a unique identifier (UID). All attributes of an entity must be single-valued attributes. A repeating or multi-valued attribute is an attribute or group of attributes with multiple values for one occurrence of the UID. The database designer needed to move a repeating attribute or attributes to a separate entity and create a relationship between the two decomposed entities (Rob, Coronel, 2004). When working with tables, repeating columns were moved to their own table; in the new table, a copy of the original table's UID was used as a foreign key. Let's say that the Employee table, which is one of the administrative tables that was designed and used by the customer, has multiple values defining the various projects an employee may be involved in over a period of time. Although projects start and finish on specific dates, they may stop temporarily because of prioritization. The Employee_Name attribute is the UID for the entity.
The Employee table prior to 1NF is shown here.
Figure 2.9 Employee Table Before Normalization
For example, Project (1–3), Start-Date (1–3), and End-Date (1–3) are multi-valued attributes or repeating columns. This violates the 1NF normalization rule, so Project, Start_Date, and End_Date columns were placed in a new table called ‘Project’ (DeBetta 2005).
Figure 2.10 Employee and Project Tables in 1NF
The UID for Project is a composite key comprising ‘Project_Name’ and
‘Start_Date’, and the foreign key for Project is ‘Employee_Name’, which creates a
relationship between the Employee and Project tables.
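The 1NF decomposition described above can be sketched in Java by splitting the repeating project columns into their own single-valued rows keyed by the employee UID. The record names and sample values below are illustrative stand-ins, not the actual Employee table.

```java
import java.util.ArrayList;
import java.util.List;

public class FirstNormalFormSketch {
    // A hypothetical un-normalized Employee row with a repeating project attribute.
    record RawEmployee(String name, List<String> projects) { }

    // 1NF result: each project becomes one row in its own table,
    // carrying the original table's UID (the employee name) as a foreign key.
    record ProjectRow(String employeeName, String projectName) { }

    public static List<ProjectRow> toFirstNormalForm(List<RawEmployee> rows) {
        List<ProjectRow> projectTable = new ArrayList<>();
        for (RawEmployee e : rows) {
            for (String p : e.projects()) {
                projectTable.add(new ProjectRow(e.name(), p));
            }
        }
        return projectTable;
    }

    public static void main(String[] args) {
        List<RawEmployee> raw = List.of(
            new RawEmployee("Smith", List.of("Patent Search", "Web Portal")));
        System.out.println(toFirstNormalForm(raw));
    }
}
```

Each resulting ProjectRow is single-valued, satisfying 1NF, and the employeeName field plays the foreign-key role described above.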
Second Normal Form (2NF):
The aim of 2NF is to take data that's partly dependent on the primary key and
place it in another table. If a table has a composite UID, all the non-UID attributes must
be dependent on all the UID attributes – not just one or some of them. This is a
requirement of 2NF. The designer had to move any attributes that were not dependent on
an entire composite UID to another table and establish a relationship between the two
tables (DeBetta 2005).
Figure 2.11 The Project Table Before 2NF
For example, the department details depend on ‘Project_Name’ only instead of on the whole key. The solution for this violation of 2NF was to move ‘Department_Name’ and ‘Department_Location’ to a new table called Department:
Figure 2-12 Project and Department Tables in 2NF
The Project table has a composite primary key comprising ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’ and ‘Department_Name’.
And the table has an ‘End_Date’ column. The Department table has ‘Department_Name’
as its primary key, and a ‘Department_Location’ column.
Third Normal Form (3NF):
The aim of 3NF is to remove data in a table that is independent of the primary key
(DeBetta 2005). Therefore a Priority attribute was added to the Project table. The Priority
attribute has potential values such as ‘High’, ‘Medium’, and ‘Low’. Additionally, an attribute called ‘Priority_Level’ was added, which further qualified ‘Priority’ into a more detailed subgroup with potential values of "1", "2", and "3".
The figure is shown here.
Figure 2-13 The Project Table Before 3NF
The Project table has a composite primary key consisting of ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’ and ‘Department_Name’. Other columns in the table are ‘End_Date’, ‘Priority’, and ‘Priority_Level’. In this example, ‘Priority_Level’ depends on ‘Priority’ first; that is, ‘Priority_Level’ depends on the ‘Project_Name’ and ‘Start_Date’ UID through ‘Priority’ only. This transitive dependency of ‘Priority_Level’ is unacceptable in 3NF, so the ‘Priority’ and ‘Priority_Level’ attributes need to be moved from the Project table to their own Priority table (DeBetta 2005).
The tables in 3NF are shown here.
Figure 2-14 Project and Priority Tables in 3NF
The Project table has a composite primary key comprising ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’, ‘Department_Name’, and ‘Priority’, and the table has an ‘End_Date’ column. The Priority table has ‘Priority’ as its primary key, and it has a ‘Priority_Level’ column. Normalization of production systems
usually stops at 3NF. Levels above 3NF are conceptually complicated and applicable to
situations that require specific solutions only (DeBetta 2005).
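Rejoining the decomposed Project and Priority tables on the Priority column reconstructs the original project-to-priority-level view. The sketch below uses illustrative table contents (the project names and level numbers are placeholders), assuming the join structure described above.

```java
import java.util.Map;
import java.util.TreeMap;

public class LosslessJoinSketch {
    // 3NF tables sketched as maps: Project carries a Priority foreign key,
    // and Priority maps each priority to its level. Values are illustrative.
    static final Map<String, String> PROJECT_PRIORITY = Map.of(
        "Web Portal", "High",
        "Patent Search", "Medium");
    static final Map<String, Integer> PRIORITY_LEVEL = Map.of(
        "High", 1, "Medium", 2, "Low", 3);

    // Joining the two tables on Priority rebuilds the pre-3NF view
    // (project -> priority level) without losing rows or inventing extras.
    public static Map<String, Integer> joinProjectToLevel() {
        Map<String, Integer> joined = new TreeMap<>();
        for (Map.Entry<String, String> row : PROJECT_PRIORITY.entrySet()) {
            joined.put(row.getKey(), PRIORITY_LEVEL.get(row.getValue()));
        }
        return joined;
    }

    public static void main(String[] args) {
        System.out.println(joinProjectToLevel());   // prints {Patent Search=2, Web Portal=1}
    }
}
```

Because every joined row comes from exactly one Project row and one matching Priority row, no data is lost and no spurious rows appear.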
Boyce-Codd Normal Form (BCNF)
Unlike 3NF, BCNF requires each attribute or combination of attributes upon
which any attribute is fully or functionally dependent to be a candidate key. For example,
if ‘Column_B’ is functionally dependent on ‘Column_A’, you can associate every value
in ‘Column_A’ with only one value in ‘Column_B’ at a specific point in time. A specific
value for ‘Column_A’ always has the same value in ‘Column_B’, but the reverse is not
true. A specific value for ‘Column_B’ can have multiple corresponding values in
‘Column_A’.
Fourth Normal Form (4NF):
For 4NF, a table needs to be in BCNF, and it shouldn't have any multi-valued
dependencies. Neither should the dependent attributes be a subset of the attributes they
depend on. And the dependent attributes combined with the attributes they depend on
should not constitute the entire entity. In a multi-valued dependency, two or more
attributes depend on a determinant, and each dependent attribute has a particular set of
values. The values in the dependent attributes are independent of each other (DeBetta
2005).
Fifth Normal Form (5NF):
For 5NF to be in effect, an entity should be in 4NF, and all join dependencies on
the table should be related to its candidate keys. When you decompose tables through
normalization, you should be able to reconstruct the original table by doing joins between
the resulting tables without losing data and without generating extra rows. This is a
lossless join. When you can't decompose a table into smaller tables, which have different
keys from the original without data losses, the table is in 5NF (DeBetta 2005).
Summary of the Database Design:
These Access databases were designed using the parent-child, or network, model. One of these databases was designated the Industrial Consultant Client Database, which stored the industrial consulting client's name (specified and used as the primary field) and related information, i.e., organization, organization address, phone number, fax, and e-mail (listed as records in the rows of the datasheet). A one-to-many relationship was used to construct this database as well as all other client-related databases, including those used for patent law clients (Rob, Coronel, 2004). The customer's organization and ID (a composite key, where two or more fields were used in combination as the primary designator key) was used as the designated foreign key, to which other databases are related or connected.
3.0 Security Introduction
Many current computer systems have a very poor level of computer security, which has led to identity theft as well as theft of intellectual property. A computer breach can be a disastrous event, and poor security design is the cause of much of the insecurity of current computer systems. For example, once an attacker gets into a system that is not well equipped with security features, the attacker usually has access to most or all of the features of that system. Because computer systems are very complex and cannot be guaranteed to be free of security defects, their security posture tends to change readily. That is where computer security comes into play. Computer security comprises the measures used to protect the system's valuable information from falling into the wrong hands. It involves specifying and implementing security policies, that is, defining and setting up the perimeter from lower- to higher-level computer security access.
Security requires consideration of the external environment as well as protection of the system. Both the data and the code in the system need to be protected, along with the physical resources of the system, from unauthorized intrusion and access.
Security software should be designed to deter the following forms of malicious access:
• Unauthorized reading of data (i.e., theft of information)
• Unauthorized destruction of data
• Unauthorized modification of data
• Illegitimate use of the system and its peripherals
Authentication and Passwords
Authentication, or the ability to protect the system from unauthorized access, is a major security problem. This problem will be remedied by user authentication, which is based on user knowledge, user possession, or a user attribute such as a fingerprint. Passwords are the most commonly used vehicle for authenticating the user's identity to the system. Passwords are a combination of a prescribed number of letters and digits, as dictated by the software security parameters (e.g., some mainframe systems may require the password to be 8 characters, of which the third character must be numeric). The encryption that is set up by the UNIX system will allow the system to accept the user's password and store it safely. However, passwords can be accidentally exposed or easily guessed, so additional remedies beyond the UNIX mechanism will also be incorporated.
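A minimal sketch of the store-it-safely idea, assuming salted hashing with SHA-256 via the standard MessageDigest API; the salt value and password below are placeholders, and this is not the system's actual UNIX encryption scheme.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PasswordStoreSketch {
    // Hash the salted password with SHA-256; only the hex digest is stored,
    // never the password itself.
    public static String hash(String salt, String password) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest((salt + password).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);   // SHA-256 is available on every standard JRE
        }
    }

    // Authentication recomputes the hash and compares it to the stored digest.
    public static boolean authenticate(String salt, String stored, String attempt) {
        return stored.equals(hash(salt, attempt));
    }

    public static void main(String[] args) {
        String salt = "x93k";                         // per-user random salt (illustrative)
        String stored = hash(salt, "s3cret-8chars");  // what the system keeps on file
        System.out.println(authenticate(salt, stored, "s3cret-8chars")); // prints true
        System.out.println(authenticate(salt, stored, "wrong-guess"));   // prints false
    }
}
```

The salt ensures that identical passwords produce different stored digests, which blunts dictionary and rainbow-table attacks against an exposed password file.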
Programs and System Threats:
An unauthorized user may fraudulently access the system or misuse its programs by the following methods:
• Trojan horses are code segments that misuse their environment. Long search paths, such as those common on UNIX systems, aggravate this problem; it can be mitigated with a non-trappable key sequence, like the Ctrl-Alt-Del combination that Windows NT uses.
• Trap doors are instances where the software has been designed to check for a specific user ID that can circumvent the normal security procedures. A trap door can also be included in the compiler, where the compiler generates standard object code plus a trap door that is invisible in the source code. To detect trap doors, all of the source code of the system has to be analyzed.
Preventive Security Measures:
The following measures can be implemented to combat security issues:
1) Assessing and maintaining a risk management policy for hardware, software, and the people that use them.
2) Researching new security issues and adapting policies without degrading the
performance of the computers and their networks.
3) Managing physical access to computers.
4) Doing all of the above, without service disruption to those who rely on that network.
Techniques for Creating Secure Systems:
The following techniques will be used in engineering secure systems:
1) A simple microkernel will be written so that the kernel itself is small enough to be kept free of bugs.
2) A larger operating system (OS), capable of providing a standard API such as POSIX, will be built on the microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers will not be affected.
3) Cryptographic techniques will be used to defend data in transit between systems,
reducing the probability that data exchanged between systems can be intercepted or
modified.
4) Strong authentication techniques will be used to ensure that communication end-points
are who they say they are.
5) Chain of trust techniques will be used in an attempt to ensure that all software loaded
has been certified as authentic by the system's designers.
6) Mandatory access control will be used to ensure that privileged access is withdrawn
when privileges are revoked. For example, deleting a user account should also stop any
processes that are running with that user's privileges.
7) Capability and access control list techniques will be used to ensure privilege separation
and mandatory access control.
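To illustrate items 6 and 7 above, the following minimal sketch shows how an access
control list can enforce privilege separation, and how revoking a user withdraws access
everywhere at once. The resource and user names are hypothetical, and this is not the
implementation used in this design:

```python
# Minimal ACL sketch: each resource maps users to the set of
# permissions they hold. All names here are hypothetical.
acl = {
    "client_responses.db": {"consultant": {"read", "write"}, "operator1": {"read"}},
}

def is_allowed(resource, user, permission):
    """Return True only if the user holds the permission on the resource."""
    return permission in acl.get(resource, {}).get(user, set())

def revoke_user(user):
    """Mandatory-access-control style revocation: remove the user everywhere."""
    for perms in acl.values():
        perms.pop(user, None)

print(is_allowed("client_responses.db", "operator1", "read"))   # True
print(is_allowed("client_responses.db", "operator1", "write"))  # False
revoke_user("operator1")
print(is_allowed("client_responses.db", "operator1", "read"))   # False
```

In a real system the revocation step would also have to terminate any processes still
running with the revoked user's privileges, as item 6 above notes.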
Processes of Computer Security:
User account access controls and cryptography can protect system files and data,
respectively. Firewalls are by far the most common prevention systems from a network
security perspective, as they can (if properly configured) block abnormal packet types,
preventing some kinds of attacks. Intrusion Detection Systems (IDSs) are designed to
detect network attacks in progress and assist in post-attack forensics, while audit trails
and logs serve a similar function for individual systems. “Response” is not necessarily
defined by the assessed security requirements of an individual system and may cover the
range from simple upgrade of protections to notification of legal authorities, counter-
attacks, and the like. In some special cases, a complete destruction of the system is
favored. The manufacturers and specifications of the firewalls, virus protection, and other
devices that will be used during system implementation will be withheld at this point for
security purposes. Management of risks and weakest links will minimize security
breaches while maximizing productivity and performance. The most effective
methodology in computer security will be to assert and maintain an intelligent policy that
manages the risks of workstation use and functionality without inflicting a denial of
service on the people who rely on access to the computer or network.
CHAPTER 3
PRIMARY RESEARCH OF SYSTEM EVALUATION AND VALIDATION
Simulation and Modeling:
3.1 Purpose and Justification for Using a Simulation Model
For all technical purposes, simulation is the imitation of a dynamic system using a
computer model in order to evaluate and improve system performance. Discrete event
simulation, which models the effects of the events in a system as they occur over time,
was used to implement the simulation model that aided the analyst in the design of this
system (Harrell, Biman, and Bowden, 2004). The emphasis in industry today has been
placed on time-based competition; traditional trial-and-error methods of decision making
are no longer adequate. The power of simulation lies in its ability to provide a method of
detailed analysis that is not only formal and predictive, but is capable of predicting the
performance of complex systems (Harrell, Biman, and Bowden, 2004). The key to sound
management decision in any industry lies in the ability to accurately predict the outcomes
of alternative courses of action (Banks, Carson, Barry, & Nicol, 2001). This simulation
will provide the designers of the system as well as management with this type of
information.
Some characteristics of simulation that make it such a powerful planning and decision-
making tool are summarized as follows (Banks, Carson, Barry, & Nicol, 2001):
• Captures system interdependencies.
• Accounts for variability in the system.
• Is versatile enough to model any system.
• Shows behavior over time.
• Is less costly, time consuming, and disruptive than experimenting on the actual
system.
• Provides information on multiple performance measures.
• Is visually appealing and engages people’s interest.
• Provides results that are easy to understand and to communicate.
• Runs in compressed, real or even delayed time.
• Forces attention to detail in a design.
Simulation results will be used to make organizational decisions in regards to
overhead predictions, system layout, man hours, and the number of targeted customers
that the customer service representatives (servers) can interface with in a normal eight
hour work day; thus the accuracy of the results of the simulation is very important. In
many instances simulations appear realistic on the surface because simulation models, as
opposed to analytical models, can incorporate any level of detail about the real system. To
avoid this illusion it is best to compare system data to model data, and to make the
comparison using a wide variety of techniques, including an objective statistical test if
possible (Banks, Carson, Barry, & Nicol, 2001).
Description of System:
The system consists of a group of five PCs connected together by a LAN, composed of
optical-fiber-based FDDI networking, which runs at 100 megabits per second. The LAN
will be connected to a server, which is connected to the WAN that provides access to the
internet through cooperative IP address recognition and protocol.
Entities | Attributes | Activities | Events | State Variables
Customers | Industrial or legal consultation | Customer response received and read | Electronic receipt of response | Number of responses received per PC per day
Targeted client's feedback | Positive or negative response received from customer | Customer response inventoried in system to specific target area (i.e., industrial or legal) | Response filed and stored on designated consultants' databases | Number of positive responses versus number of negative responses
Customer service follow-up | Delta in time between receipt of customer response and consultant's follow-up response to customer | Response sent to customer | Electronic disbursement of information system organization's response | Number of responses sent and received by targeted customers

Table 3-0 Problem Properties
Problem Formulation:
The Information System Organization will have five PCs that will receive and
inventory responses that targeted customers send in electronically through the web in
regard to electronic questionnaires that the customers have received at their organizations
(see Table 3-0).
There were two questionnaires, one composed of five questions and the other
composed of ten questions, shown in Appendices A-1 and B-1 respectively. These
questionnaires are composed of targeted questions that have been developed by the
industrial consultant and the patent attorney to procure industrial consulting and patent
law clients. Potential clients will then e-mail their responses back to the information
system organization. Customer responses will then be filed contingent on how they
respond to the questionnaires. They will either be filed as potential industrial consulting
clients or potential patent law clients. Correct inventory and expeditious feedback to
these client responses are essential to the parent organization's marketing and sales.
There is more than one service channel for this simulation; in fact, there are five
PCs being used Monday through Friday. For the sake of simplicity we will refer to the
PCs as PC1, PC2, PC3, PC4, and PC5. A key assumption is that each PC operator
operates that PC for eight hours per day, Monday through Friday; they do not change PCs
at any time. The operator of PC1 is a better operator than the operator of PC2, the
operator of PC2 is better than the operator of PC3, the operator of PC3 is better than the
operator of PC4, and the operator of PC4 is better than the operator of PC5. E-mail
arrivals are distributed as shown below in Table 3-1. The distributions of e-mail service
times for PCs 1 through 5 are shown in Tables 3-2 through 3-6.
Table 3-1 Interarrival Distribution of Incoming E-mails

Time between e-mail arrivals (minutes) | Probability | Cumulative probability | Random-digit assignment
1 | 0.25 | 0.25 | 01–25
2 | 0.30 | 0.55 | 26–55
3 | 0.25 | 0.80 | 56–80
4 | 0.20 | 1.00 | 81–100
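The random-digit assignment in Table 3-1 can be applied mechanically: draw a random
number in 01–100 and map it onto the cumulative-probability column. The following
sketch (assuming the Table 3-1 values; the helper names are our own) shows this
table-lookup method:

```python
import random

# Cumulative breakpoints and interarrival times from Table 3-1.
# A draw d in 01..100 maps to the first breakpoint it does not exceed.
table_3_1 = [(25, 1), (55, 2), (80, 3), (100, 4)]

def sample_interarrival(rng, table=table_3_1):
    d = rng.randint(1, 100)           # random-digit draw, 01-100
    for upper, minutes in table:
        if d <= upper:
            return minutes
    raise AssertionError("unreachable: table covers 01-100")

rng = random.Random(42)               # fixed seed for repeatability
draws = [sample_interarrival(rng) for _ in range(10000)]
# The long-run mean should approach 1(0.25)+2(0.30)+3(0.25)+4(0.20) = 2.40 min.
print(sum(draws) / len(draws))
```

The same lookup applies unchanged to the service-time tables, Tables 3-2 through 3-6.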
Table 3-2 Service Distribution of PC1

Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
2 | 0.30 | 0.30 | 01–30
3 | 0.28 | 0.58 | 31–58
4 | 0.25 | 0.83 | 59–83
5 | 0.17 | 1.00 | 84–100
Table 3-3 Service Distribution of PC2

Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
3 | 0.35 | 0.35 | 01–35
4 | 0.25 | 0.60 | 36–60
5 | 0.20 | 0.80 | 61–80
6 | 0.20 | 1.00 | 81–100
Table 3-4 Service Distribution of PC3

Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
4 | 0.37 | 0.37 | 01–37
5 | 0.28 | 0.65 | 38–65
6 | 0.25 | 0.90 | 66–90
7 | 0.10 | 1.00 | 91–100
Table 3-5 Service Distribution of PC4

Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
5 | 0.39 | 0.39 | 01–39
6 | 0.31 | 0.70 | 40–70
7 | 0.15 | 0.85 | 71–85
8 | 0.15 | 1.00 | 86–100
Table 3-6 Service Distribution of PC5

Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
6 | 0.40 | 0.40 | 01–40
7 | 0.29 | 0.69 | 41–69
8 | 0.13 | 0.82 | 70–82
9 | 0.18 | 1.00 | 83–100
The problem was to determine how well the five-PC system would work. To
estimate the system's measures of performance, a simulation of one hour of operation was
made. The simulation proceeds identically at each station: the PC operator receives an
e-mail from a client, then distributes and files the e-mail response. This cycle repeats at
PC1, PC2, PC3, PC4, and PC5.
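The procedure above is a table-driven, single-channel queue simulation at each station.
One hour at PC1 could be sketched as follows, assuming the Table 3-1 interarrival and
Table 3-2 service distributions; the helper names are our own and this is only an
illustration of the method, not the production model:

```python
import random

# Random-digit tables: (upper breakpoint, minutes).
INTERARRIVAL = [(25, 1), (55, 2), (80, 3), (100, 4)]   # Table 3-1
SERVICE_PC1 = [(30, 2), (58, 3), (83, 4), (100, 5)]    # Table 3-2

def draw(rng, table):
    d = rng.randint(1, 100)
    return next(m for upper, m in table if d <= upper)

def simulate_pc(rng, horizon=60):
    """Simulate one PC for `horizon` minutes; return per-e-mail queue waits."""
    clock, server_free_at, waits = 0, 0, []
    while True:
        clock += draw(rng, INTERARRIVAL)       # next e-mail arrives
        if clock > horizon:
            break
        start = max(clock, server_free_at)     # wait if operator is busy
        waits.append(start - clock)            # time spent in queue
        server_free_at = start + draw(rng, SERVICE_PC1)
    return waits

waits = simulate_pc(random.Random(1))
print(len(waits), sum(waits) / max(len(waits), 1))
```

Repeating this with the service tables for PC2 through PC5 gives the per-station counts
and average waits listed under the objectives below.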
Setting of Objectives and Overall Project Plan:
The simulation will show the following:
1. Number of e-mails received, distributed, and filed
2. Time between e-mails serviced
3. Arrival time of e-mails
4. Service time used towards evaluation and distribution of e-mails
5. Time e-mail waits in queue before serviced
6. Average time e-mail has to wait before being serviced
7. Average time between e-mail arrivals
The service time of each e-mail is very important, because the
consultants need to get the response information from the questionnaire so that they can
contact the party that filled the questionnaire out while it is fresh in their mind. The key to
marketing and sales is presenting the product in a manner that makes the product
appealing and desirable to the targeted audience, and following up with the potential
client as quickly as possible. That is why the wait time for e-mails in the queue must be
kept to a minimum. Therefore it was determined that performing a simulation would be
an appropriate methodology. The simulation would ultimately show whether the system is
balanced, whether another PC should be added, or whether a PC operator may need to be
replaced due to overextended service time. Programming the simulation in C++ is an
alternative method, because all of the details of the problem scenario can be programmed
directly. The results of the initial simulation model can then be compared with those of
the C++ simulation. A more direct indicator will be a comparison of average queue time
for in-service e-mails.
The number of people and the amount of money involved are five and $15,000,
respectively. Note that cost must be kept to a minimum due to the limited size of the
organization and the fact that this is a new organization that has not yet been established.
Model Conceptualization:
General Layout:
Data for this system model is limited because there is no actual system to model it
after. However, a general layout of the system is designed as follows:
1) There will be 5 PCs connected together by a LAN for intranet access, and also
connected to a WAN for internet access.
2) These five PCs will be operated by information system representatives who have a
strong knowledge base in the areas of industrial consulting and patent law.
Operations that Require PC Service Time:
1) The PC operators will be receiving e-mail responses to electronic questionnaires
(see Appendix A-1 and Appendix B-1) and evaluating them in regard to whether
they need to be responded to and which division specifically needs to respond to
them.
2) Distributing and filing the e-mail responses that need further correspondence with
the correct division (either the industrial consulting or patent law division of the
organization).
3) Sending out a secondary response after positive feedback is received in reference
to the questionnaire. The secondary response will be a second questionnaire with
more detailed questions that are more specific to that subject area, i.e.,
industrial consultation or patent law. Note that this secondary response has not
been developed yet.
4) The key to this type of system is that the customer is not personally waiting for
service, either in person or on the phone. The target customer receives a secondary
response via the internet within a 24-hour period. The customer is able to access
and respond to this response without engaging in and wasting valuable
communication time over the phone or face to face.
Model Objectives:
1) To calculate the average amount of service time.
2) To calculate the average amount of time an e-mail stays in the system before
being serviced by a PC operator.
3) To calculate the average time it takes for each individual server to file and
distribute the e-mail responses appropriately to the right department.
4) To find out whether the system is balanced, such that additional PCs will not be
needed to handle the influx of work.
Data Collection:
The main objective of this study is to verify that each PC operator can consistently
service about 50 e-mails per day, as calculated below. This is the most important data
because approximate sales, as shown in the implementation phase below, can be
predicted, as well as facets of the overhead budget that would be needed to operate and
sustain the system. As previously stated, the time an e-mail waits in the queue does not
have a direct effect on customer service or the sales generated.
Each PC operator will start with 70 e-mails to service in their queue at the
beginning of the day; this quantity is the amount that was prescribed by management
after their assessment of the following calculation:
An 8-hour work day, minus 30 minutes for lunch and two 15-minute breaks,
leaves 7 actual hours of work time per PC operator to service e-mails.
There are 7 x 60 minutes = 420 minutes in 7 hours.
Assuming that it can take a maximum service time of 8 minutes for an operator to
access, interpret, file, and service an e-mail, 420 minutes divided by 8 = 52.5.
Therefore it was assumed that each operator should be able to service 52 e-mails
in the course of one day. Five operators working five days a week
should be able to service 52 x 5 (days) x 5 (PC operators) = 1,300 e-mails per
week; 1,300 x 4 (weeks) = 5,200 e-mails per month; 5,200 x 12 (months) =
62,400 e-mails serviced per year.
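The management calculation above reduces to straightforward arithmetic, restated here
as a check with the figures taken directly from the text:

```python
# Workload check for the hand calculation above.
work_minutes = (8 * 60) - 30 - (2 * 15)     # 8-hour day minus lunch and breaks
assert work_minutes == 420

per_operator_per_day = work_minutes // 8    # 8 minutes maximum per e-mail
assert per_operator_per_day == 52           # 52.5 rounded down

per_week = per_operator_per_day * 5 * 5     # 5 operators, 5 days
per_year = per_week * 4 * 12                # 4 weeks/month, 12 months
print(per_week, per_year)                   # 1300 and 62400
```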
The hours of operation of the PCs are between 8 am EST and 4 pm EST. Bearing in mind
that the individuals who have the authority to review the initial questionnaires shown in
Appendices A-1 and B-1 would be upper management, who would normally send out
response e-mails after normal business hours, the data that the PC
operators service will be collected two days prior and placed into their queues one day
prior to their accessing it.
Model Translation:
This model will not be programmed in a general-purpose simulation language.
Instead, for the sake of simplicity and diminished development time, special-purpose
simulation software will be used. This class of software was selected because the problem
is amenable to solution with this type of software, which will result in reduced model
development time (Gagne, Galvin, and Silberschatz, 2004).
The Arena Business Edition software will be used to simulate this model.
Graphical representation and object-based development are some of the key attributes of
this software. Modules represented by icons will be used to
represent individual PCs and the flow of e-mails that enter the system to be evaluated.
Upon evaluation, the e-mails that need to be responded to will be sent to one of two
selected modules: either the module that contains e-mails from targeted industrial
consulting clients or the module that contains e-mails from targeted customers who need
patent attorney representation. The standard resources that this software provides for
queuing operations, process logic, and system data make the software an excellent fit.
The standard graphics used in this software will be used to show the average
number of e-mails serviced for the five-day week and the average time e-mails are left in
the queue. Graphical representation of all of these elements will be ideal for summarizing
data for presentation purposes. Investors can predict organizational income, growth, and
overhead expenditures. Random numbers will be generated to perform an analysis of the
input data given (assumed) in the problem formulation and implementation sections.
Hand calculations will be used to perform the analysis, which consist of the following
three steps:
1. Identifying the appropriate probability distribution
2. Estimating the parameters of the hypothesized distribution
3. Validating the assumed statistical model by a goodness-of-fit test
(Gagne, Galvin, and Silberschatz, 2004)
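Step 3 can be carried out with a chi-square goodness-of-fit test. The following sketch
computes the test statistic by hand for the Table 3-1 interarrival distribution; the observed
counts are illustrative, not measured data:

```python
# Chi-square statistic for observed interarrival counts vs. Table 3-1.
# The observed counts below are illustrative, not measured data.
expected_probs = [0.25, 0.30, 0.25, 0.20]   # Table 3-1 probabilities
observed = [27, 31, 22, 20]                  # hypothetical tally of 100 e-mails

n = sum(observed)
chi_sq = sum((o - n * p) ** 2 / (n * p)
             for o, p in zip(observed, expected_probs))
# Compare chi_sq against the chi-square critical value with
# k - 1 = 3 degrees of freedom (7.815 at the 0.05 level, from tables);
# a smaller statistic means the assumed distribution is not rejected.
print(round(chi_sq, 3))
```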
Verification:
The conceptual model must be realistically and correctly represented in the
computerized representation. The following guidelines will be used for verification
purposes:
1. Computerized representation will be checked by an outside vendor
2. The model output will be examined closely for reliability under a variety of
settings, inputs, and scenarios by the systems analyst and the PC operators
3. The computerized representation will print the parameters at the end of the
simulation to ensure that the parameter values have not been changed by mistake
4. The computerized representation shall be made as self-documenting as
possible; every variable will be given a precise definition
5. An interactive run controller (IRC) will be used to debug and correct errors in the
simulation (Gagne, Galvin, and Silberschatz, 2004)
Validation:
Model assumptions for this model fall under two categories: structural
assumptions and data assumptions. Structural assumptions refer to how the system works
in reality, which has been outlined in the problem formulation section above
(Banks, Carson, Barry, & Nicol, 2001). The only assumption that was not addressed is
that e-mails are serviced in the rotation in which they entered the queue; simply put, on a
first-come, first-served basis. The structural assumptions will be verified by actual
observation on a daily and weekly basis. Data assumptions, such as the number of
e-mails serviced per PC per hour, per day, and per month, and the amount of revenue that
they could possibly lead to, are important for generating an economic analysis from a
sales standpoint. Calculations have been performed in the problem formulation and
implementation sections to validate these assumptions.
Experimental Design:
The model will simulate the e-mail being accessed by the PC operator at which
point the operator will evaluate the responses and distribute and file the e-mail response
to the appropriate section of the organization. The number of runs may vary if
experimental data supports it; for example, if the number of e-mails expected to
be serviced by each operator was consistently exceeded, then the number placed in queue
to be serviced the previous day would be increased by an increment of 10 in order to
reduce the amount of possible PC operator idle time. The number of simulation runs will
be dictated by the consistency of the output after the simulation has been actively run for
a six-month period. However, note that this is a working simulation where all input and
output data will be used to procure clients and ultimately increase revenue for the parent
industrial consulting and patent law organization. During this six-month period PC
operators will be monitored on their production monthly. Additional training in regard to
interpretation, filing, and distribution will be given by the industrial consultant and the
patent attorney. This will serve two primary purposes: to increase the number of e-mails
serviced and, more importantly, to avoid discarding potential customers due to wrongful
interpretation of questionnaire responses by the PC operator. An output analysis for
steady-state simulations will be performed for a single long-run simulation. After six
months a managerial evaluation of all experimental data will be performed and the
system shall be tuned according to the permutations of experimental data collected.
Production Runs and Analysis:
The production run analysis shall be performed by output analysis based on
random variability; random number generation will be used to produce the
values of the input variables, where two different streams or sequences of random
numbers will produce two different sets of outputs which (probably) will differ (Banks,
Carson, Barry, & Nicol, 2001).
The performance of the system will be measured by a parameter θ; the result of a set of
simulation experiments will be an estimator θ1 of θ. The precision of the estimator will
be measured by the variance of θ1. The purpose of the statistical analysis will be to
estimate the variance or determine the number of observations required to achieve the
desired precision. Here Y is the total number of e-mails serviced per week by the
networked system of PCs; Y will be treated as a random variable with an unknown
distribution. The simulation shall be run for one week, which will provide a single
sample observation from the population of all possible observations on Y. The run length
of the sample size will be progressively increased to n observations, Y1, Y2, ..., Yn, based
on a run length of n weeks up to a maximum of 24 weeks. However, these observations
do not constitute a random sample, because they are not statistically independent.
Therefore, since statistical independence will not hold, methods that assume it will not be
directly applicable to the analysis of this data; other methods and assumptions shall be
applied during model conceptualization.
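Under the simplifying assumption that the weekly totals Y1, ..., Yn could be treated as
independent (which, as just noted, they are not), the estimator θ1 and its precision would
be computed as in the sketch below; the weekly figures are illustrative only:

```python
import statistics

# Hypothetical weekly totals of e-mails serviced (illustrative numbers only).
weekly_totals = [1280, 1310, 1295, 1305, 1288, 1322]

theta_hat = statistics.mean(weekly_totals)     # point estimator of θ
s2 = statistics.variance(weekly_totals)        # sample variance (n-1 divisor)
n = len(weekly_totals)

# Approximate 95% half-width using the normal quantile; with autocorrelated
# output, batch-means or replication methods would be required instead.
z = statistics.NormalDist().inv_cdf(0.975)
half_width = z * (s2 / n) ** 0.5
print(round(theta_hat, 1), round(half_width, 1))
```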
More Runs:
As long as the number of e-mails serviced does not decline by more than 20%,
additional simulations beyond six months will not be needed or performed, due to
the nature of the business and, more importantly, the shortage of start-up revenue
possessed by the organization.
Documentation and Reporting:
The only individual who will be running reports will be the manager of the
information system organization; the reports will not be modified by different analysts.
Progress documentation will be implemented for this model, due to the small, close-knit
environment of the organization. A chronological history of system performance, changes
made, and work done will better serve this organization.
Reports shall be generated and discussed each week to enable the PC
operators to see where their numbers are. This is important to the PC operators, since
their monthly incentive bonuses are based on these numbers. The only deliverables or
major accomplishments will be the number of e-mails serviced that resulted in new client
procurement. The logic behind this is simple: revenue is made by effort put forth on the
front end by the PC operator being prudent in interpreting, distributing, and filing the e-
mails that had positive responses. This has a direct correlation with the revenue earned by
the parent organization.
Production results before and after training sessions shall be documented for
every PC operator, as well as their feedback on the instruction given during the training.
This feedback is important and shall be merged into upcoming training sessions for
future as well as present employees.
A monthly or final report shall be distributed to the parent organization, primarily
the industrial consultant, patent attorney, chief marketing executive, and CEO. Decisions
in regards to adjustments of input variables, training changes, equipment changes or
additions, and management shall be based on these final reports.
Implementation Phase:
Rough Economic Analysis of Expected Simulation Results:
If 10% of the 62,400 e-mails serviced per year (as stated in the data collection
phase) resulted in client procurement for both divisions combined, and that 10% (6,240
e-mails) were divided evenly between the two divisions, there would be 3,120 probable
new customers for each division. Taking 5% of those to arrive at a more realistic number
yields approximately 156, rounded here to 160, new customers annually for each division.
This would break down as follows in terms of compensation:
160 industrial consulting clients x average charge of $4,000 = $640,000 annually
160 patent law consultations (which include representation for new patent submittals,
defense against infringement suits, prosecution of infringement suits, and contracts):
160 clients x average fee of $7,000 = $1,120,000 annually
In summary, if the guidelines previously mentioned were used to run a simulation
and the system was shown to be balanced, producing numbers similar to those in the
hand calculations, then the design of this system could be viewed by management as a
cost-effective investment.
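The rough economic analysis above can be checked arithmetically. Note that 10% of
62,400 split between the two divisions gives 3,120 prospects each, and 5% of that is 156,
which the analysis rounds to 160 when pricing the revenue:

```python
# Arithmetic check of the rough economic analysis (figures from the text).
emails_per_year = 62400
prospects_each = emails_per_year // 10 // 2         # 10%, split between divisions
realistic_each = prospects_each * 5 // 100          # 5%; rounded to 160 in text

clients = 160                                       # rounded figure used in text
consulting_revenue = clients * 4000                 # industrial consulting fees
patent_revenue = clients * 7000                     # patent law fees
print(prospects_each, realistic_each, consulting_revenue, patent_revenue)
```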
CHAPTER 4
SUMMARY AND CONCLUSION
In summary, this paper defined some of the business requirements,
explained how the systems analysis was performed, and stated what the results of the
analysis were. How the system was designed, which was based on the systems analysis,
was discussed as well. A problem was formulated that incorporated the functional aspects of
the system while operating and performing the required task during a specified time.
Hand calculations were performed to provide numerical results to management that
would validate and verify that the system design selected would in fact satisfy the
business requirements, i.e., whether the number of PCs and operators was sufficient to
handle the predicted influx of business. The option of running simulations was presented
along with the required inputs and parameters if that option was in fact selected as an
alternative form of verification and validation of the system design's functional results. Simulation can
prove to be a powerful tool in regards to its ability to provide a method of detailed
analysis that is not only formal and predictive, but is capable of predicting the
performance of complex systems (Harrell, Biman, and Bowden, 2004). Simulation can
provide management with the ability to accurately predict the outcomes of alternative
courses of action (Banks, Carson, Barry, & Nicol, 2001).
However, running a simulation would result in substantial upfront costs for the
parent organization. Given that its immediate capital expenditures are low
and its projected long-term net capital gain would be minimal, the available amount of
upfront capital investment is extremely limited. Therefore a simulation was not run.
In conclusion, based on the results generated from the hand calculations performed
in Chapter 3, management could realistically ascertain how the system would perform
and complete all business requirements, thus validating its design as a positive
investment and an asset to the parent organization as a whole.
APPENDIX A-1
Industrial Consultant Consultation Questionnaire
1. What is your company size in terms of number of employees?
2. How many facilities do you have, either nationwide or internationally?
3. What is the nature of your business?
4. What is your current level of production, and does it exceed or fail to meet the
projected company objectives in terms of production, gross sales, and net income?
5. What are your current overhead costs in the following areas:
A: Production:
1) Man hours:
2) Raw Materials:
3) Plant Operations Cost, which includes utilities, machine maintenance,
and preventative maintenance.
B: Marketing:
1) Man hours
2) Present Marketing proposals:
3) Future Marketing Ventures
APPENDIX B-1
Patent Rights Assistance Questionnaire
1. How and why did you think of this design?
2. What is your design? Describe its functional aspects.
3. Why do you think you need to procure patent rights for your design?
4. How do you know that your design is not a duplication of something already on the market?
5. Was your design directly derived as an extension of another design?
6. Does your design perform some of the same functional aspects that another
design does?
7. How do you plan to manufacture your product and what amount of overhead
cost will be involved?
8. Is a company backing you on this design or are you submitting it as an
independent agent?
9. How long does it take to manufacture one of the products?
10. Where will you obtain the raw material and machinery necessary for
production?
Note: The questionnaires listed above are generic and are used to gain general knowledge
about potential clients' organizations. The questionnaires will be detailed further by
the organization's proper personnel once they are in place.
REFERENCES
Banks, J., Carson, J., Barry, N., & Nicol, D. (2001). Simulation and Modeling I.
Upper Saddle River, NJ: Prentice Hall, a Pearson Education Co.
Burd, S., Jackson, R., & Satzinger, J. (2004). Systems Analysis & Design in a Changing
World. Boston, MA: Course Technology.
Burd, S. (2006). Systems Architecture (5th ed.). Boston, MA: Course Technology.
DeBetta, P. (2005). New T-SQL Features in SQL Server.
http://vnu.bitpipe.com/rlist/term/Database-Design.html
Gagne, G., Galvin, P., & Silberschatz, A. (2004). Computer Operating Systems.
Hoboken, NJ: John Wiley & Sons, Inc.
Harrell, B., Biman, G., & Bowden, R. (2004). Simulation and Modeling II.
McGraw-Hill, Inc.
Rob, P., & Coronel, C. (2004). Database Systems: Design, Implementation, and
Management (6th ed.). Course Technology, a division of Thomson Learning Co.
Turabian, K. (1996). A Manual for Writers of Term Papers, Theses, and Dissertations
(6th ed.). Chicago, IL: The University of Chicago Press.
Umar, A. (1997). Client/Server Computing (Custom ed.). Upper Saddle River, NJ:
Prentice Hall, Inc., a Pearson Education Company.
http://www.cioupdate.com/trends/article.php/3559636
http://en.wikipedia.org/wiki/Computer_security
http://www.computerworld.com/securitytopics/security/story/0,10801,106169,00.html
http://rwebs.net/micros/Imsai/multi.htm
Wilson, R. (1999-2003). Multiprocessors and Shared Memory for Ultimate Flexibility.
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/18972-238444-410636-12004-f51-
1140785.html (HP LaserJet 3390 All-in-One: overview and features)
ACKNOWLEDGEMENTS

Professor Martinez Del Rio, Dr. Otto, and Dr. Hayes deserve special thanks for their guidance and insight in helping me organize the specific subject areas of this paper, and in formatting it properly to ensure that it would be acceptable in accordance with the current DRP guidelines and specifications. I would also like to thank all of my other course instructors in the CIS Department at the Strayer University Newport News campus, as well as my online instructors, who provided me with the theoretical knowledge that enabled me to perform both a quantitative and a qualitative analysis and incorporate it into this paper.
TABLE OF CONTENTS

Abstract ii
Acknowledgements iii
List of Appendixes vi
List of Figures vii
List of Tables viii

Chapter 1 Introduction 1
1.1 Context and Statement of Problem 1
1.2 Significance of the Design (Study) 1
1.3 Specific Research Questions and Sub-questions 1
1.4 Statement of Problem 2
1.5 Research and Design Methodology (Use of Interviews, etc.) 2
1.6 Organization of Study 3

Chapter 2 Secondary Research and Design of System 5
2.1 System Analysis 5
2.2 Information System Strategic Planning 7
2.3 Design Issues 10
2.4 System Processors 11
2.5 Hard Drive 22
2.6 Output Devices 24
2.7 Network 27
2.8 Functional Aspects of the System 30
2.8.1 System Software 31
2.8.2 Application Development Software 31
2.9 Functional Databases 31
3.0 Security Introduction 44

Chapter 3 Primary Research: System Evaluation and Validation 49
3.1 Purpose and Justification for Using a Simulation Model 49

Chapter 4 Summary and Conclusion 65
APPENDIXES

Appendix A-1 67
Appendix B-1 68
LIST OF FIGURES

Figure 2-1 The Analyst Approach to Problem Solving 6
Figure 2-2 Company Organization Structure 8
Figure 2-3 Symmetric Multiprocessing Architecture 14
Figure 2-4 General Structure of a Client Server System 15
Figure 2-5 Interrelationships Between the Computing Model 16
Figure 2-6 Storage Device Hierarchy 23
Figure 2-7 Stages in the Requirements Analysis Process 32
Figure 2-8 Stage 3 - Requirements Evaluation 36
Figure 2-9 Employee Table Before Normalization 38
Figure 2-10 Employee and Project Tables in 1NF 38
Figure 2-11 The Project Table Before 2NF 39
Figure 2-12 Project and Department Tables in 2NF 40
Figure 2-13 The Project Table Before 3NF 41
Figure 2-14 Project and Priority Tables in 3NF 42
LIST OF TABLES

Table 2-1 HP L1506 Flat Panel Monitor Specifications 24
Table 2-2 Server Specifications 27
Table 2-3 Data Categories 34
Table 3-0 Problem Properties 51
Table 3-1 Interval Distribution of Incoming E-mails 52
Table 3-2 Service Distribution of PC1 51
Table 3-3 Service Distribution of PC2 53
Table 3-4 Service Distribution of PC3 53
Table 3-5 Service Distribution of PC4 54
Table 3-6 Service Distribution of PC5 54
CHAPTER 1
INTRODUCTION

1.1 Context of Problem:

A brand new information system organization has been created in Washington, DC. This information system organization is a separate division of, but a valued part of, its parent organization, which is brand new as well. The parent organization is an Industrial Consultation and Intellectual Property Law Consulting firm that provides industrial and intellectual property law consultation services to both national and international clients. The information system organization is the lifeblood of the parent company, because it provides web based marketing, sales, and customer service activities for the parent organization. Since the information system organization was a brand new entity, it did not have any computer system in place to assist it in performing the activities listed above.

1.2 Statement of the Problem:

A newly developed information organization that performs web based marketing and sales for a patent law and industrial consulting firm requires a newly designed information system that is capable of meeting the organization's requirements for storing, processing, and transmitting information, and of supporting the organization in meeting its objectives and goals.

1.3 Specific Research Questions and Sub-questions:

• How will the system primarily be used, and by whom?
• What role will the system play strategically in the organization?
• What are the functional requirements and capacity requirements of the system?
• How will the system be validated? Will the system provide the necessary functions, satisfy the organization's user requirements, and ultimately add value to the information system organization?

1.4 Significance of the Design (Study):

This paper is significant because it will serve as a functional aid and blueprint for the design, development, and maintenance of the new computer system that the information system organization definitely needs. Without the new computer system, the organization's ability to perform web based marketing and sales for the parent organization would be greatly crippled. If the information system organization cannot successfully perform its web based marketing and sales up to or beyond the parent organization's projected expectations, the parent company could lose money, lose investors, and possibly fold. The culmination of increased marketing and the procurement of new clients would contribute greatly to the success of the information system organization and ultimately solidify the parent organization's foothold in its industry. Ultimately, the role that this newly designed computer system plays in procuring, serving, and retaining new customers can make or break the parent organization.

1.5 Research and Design Methodology (Use of Interviews, etc.):

Triangulation, which is an integration of qualitative and quantitative analysis, was the DRP research design methodology used. Qualitative research data was obtained
through interviews with the CIO and other members of the organization, listed as follows:

Herman Miller, President, CIO, Senior Counselor Intellectual Property Law
Michael Taylor, VP Marketing & Sales & IS
Darold Thomas, Chief Industrial Project Consultant & Mechanical Project Engineer
Susan Jones, Executive Secretary & Paralegal
Victor Jones, Director of Industrial Research & Chief Electrical Project Engineer
Rick Hamilton, Chief Marketing & Sales Executive & IS Analyst

A quantitative analysis was performed by obtaining component statistical data and analyzing whether the components would meet the user requirements. A simulation was also run to validate that the system could operate efficiently enough to perform up to the capacity required by the organization.

1.6 Organization of Study:

The system design was based on system requirements specified by the CIO to the designer during a user/client interview process. An analysis was performed and reviewed by the CIO and the super users. The analysis specified details regarding all of the system hardware, software, and network requirements, as well as how the system would be integrated functionally into each respective department. How and why various components of the system were selected and implemented as part of the computer system design will be highlighted. Quantitative research, which included statistical analysis and a simulation, was performed to validate the system's efficiency and to verify that it would perform up to the organization's requirements. Ultimately, a summary of the design and a
conclusion was made on whether the system would be an asset to the information system organization and the parent consulting organization.
CHAPTER 2
SECONDARY RESEARCH OF SYSTEM DESIGN

This chapter will detail how secondary research was used to perform the system analysis and to define the requirements and functional aspects that were used to select the hardware, software, network components, and peripherals of the system.

2.1 System Analysis:

The first step in performing this analysis was developing a clear understanding of the information that would be processed by the system during a normal business day. The process used to obtain and organize this information is outlined in the analyst approach to problem solving shown in Figure 2-1 below (Burd, Jackson, and Satzinger, 2004).
Figure 2-1 The Analyst Approach to Problem Solving (Burd, Jackson, and Satzinger, 2004)

1. Research and understand the problem
2. Verify that the benefits of solving the problem outweigh the costs
3. Define the requirements for solving the problem
4. Develop a set of possible solutions (alternatives)
5. Decide which solution is best and make a recommendation
6. Define the details of the chosen solution
7. Implement the solution
8. Monitor to make sure that you obtain the desired results
2.2 Information System Strategic Planning:

The first step in designing a computer system for any organization is to define the information strategic plan and the basic objectives of the customer support system that is part of the plan. In many instances a consulting company is brought in to think carefully about the entire information technology infrastructure and formulate a strategic information systems plan (Burd, Jackson, and Satzinger, 2004). This particular organization is relatively small and has a small group of diverse employees (for example, one employee has a Bachelor of Science in Mechanical Engineering, a Juris Doctor in Law, a Masters in Information Systems, and a Masters in Business Administration; another has a Bachelor of Science in Computer Science and a marketing background) who perform the tasks that would normally be required of two or three individuals. The employees focus primarily on the following strategic objectives:

1. Customer relationship management
2. Supply chain management

"Customer relationship management (CRM) concerns processes that support marketing, sales, and service operations involving direct and indirect customer interaction" (Burd, Jackson, and Satzinger, 2004, pg 19).

"Supply chain management (SCM) concerns processes that seamlessly integrate product development (the products in this case are the industrial and legal consulting services that are provided by the parent organization), client acquisition, client procurement, and database inventory of potential and existing clients" (Burd, Jackson,
and Satzinger, 2004, pg 19). Focusing on both of these strategic objectives will help the business provide services to clients while promoting efficient operations.

The information systems strategic plan includes an application architecture plan, detailing the information systems projects that the organization still has to complete, as well as a technology architecture plan, detailing the technology infrastructure needed to support the systems. Both of these plans were based on the supply chain management and customer relationship objectives that are depicted later in this document (Burd, Jackson, and Satzinger, 2004). The organizational structure of the company is shown below in Figure 2-2.

Figure 2-2 Company Organization Structure

Herman Miller, President, CIO, Senior Counselor Intellectual Property Law
Michael Taylor, VP Marketing & Sales & IS
Darold Thomas, Chief Industrial Project Consultant & Mechanical Project Engineer
Susan Jones, Executive Secretary & Paralegal
Victor Jones, Director of Industrial Research & Chief Electrical Project Engineer
Rick Hamilton, Chief Marketing & Sales Executive & IS Analyst
The individuals and their respective positions are shown above in Figure 2-2. Note that all departments feed into the marketing and sales organization, which in turn feeds into the information systems (IS) organization. The IS organization is broken down into two distinct areas: system support and system development. System support is composed of the following functional areas: telemarketing, database administration, operations management, and user support. System development involves project management, system analysis, program analysis, and clerical support. These tasks are performed by Herman Miller, the CIO, and Rick Hamilton; see Figure 2-2 above (Burd, Jackson, and Satzinger, 2004).

The information system strategic plan that was developed includes a technology architecture plan and an application architecture plan. The main features of both plans are as follows:

Technology Architecture Plan:

1. Strategically move towards conducting business processes via the internet, first supporting new client acquisition, client service demands, and web innovation.
2. Anticipate and react to targeted customer demands in order to supply customers with company portfolio information and relevant data on their competitors.
3. Anticipate the move towards web based internet solutions for information systems.
4. Ensure the intranet hardware is in place and the technology is continuously updated to aid in the acquisition, transfer, and storage of information throughout the organization.
5. Ensure that adequate security software is in place on the system and updated frequently to negate fraudulent cyber activities.

Application Architecture Plan:

1. Supply Chain Management (SCM): implement systems that seamlessly integrate products (services in this case), development, product services acquisition, new client procurement, existing client retention, referral database information, distribution and storage database inventory, and inventory management in anticipation of rapid client growth.

2.3 Design Issues:

Making the multiplicity of processors and storage devices transparent to the users was the primary focus. The user interface of a transparent distributed system should not distinguish between local and remote resources. Ideally, users will be able to access databases from their PCs at any remote location within the facility. Users will be able to log onto any machine by using their roaming profile, which will contain an encrypted password and username. Another major form of transparency is the mobility of user address books: users will be able to access their address books on their home system or from the office. The system shall be fault tolerant; thus the system will still be able to perform even if some of its components fail. A DEC VAX cluster, which allows multiple computers to share multiple sets of disks, will be used to contribute to the fault tolerant aspects of the
system. The system will also have some degree of scalability, that is, the ability to adapt to an increased service load. Ultimately, the system resources will take longer to reach a saturated state, and the service demand from any component of the system will be bounded by a constant that is independent of the number of nodes in the system.

2.4 System Processors:

The system will utilize multiple processor boards (MPU-A) that will share the same physical memory by using a Shared Memory Facility (SMF). The SMF will allow up to six processors to communicate via a shared memory, and will allow any processor to access several shared memory systems. I/O interfaces will also be shared when selected as memory addresses instead of port numbers. Each processor is allowed up to 64K bytes of its own local memory, less the amount of memory in the shared block. The shared memory may be up to 64K bytes (Wilson, 2003). The SMF will include:

• MABP-3 Access Port (bus multiplexer) board, which is composed of three identical ports used to switch information between one of three processors and the Shared Memory Bus. Two MABP-3 boards will be used to make a six-way switch controlled by the MAPT-6 Shared Memory Controller board. The MABP-3 board will contain three complete sets of logic, each comprising an address decoder, request latch, and bus buffers. When two boards are used in a system, they will provide a six-way bus switch in which the six processor ports differ only in the priority assigned to them on the MAPT-6 board. Any processor will have access to information in shared memory, as well as access to any other memory space, except that it may need to wait while the SMF services
higher priority requests. An EXPM edge connector will be used to install each MABP-3 board (Wilson, 2003).

• MAPT-6 Shared Memory Controller board, which will perform the timing and control functions of the Shared Memory Facility. Its logic will include a latched priority encoder that generates the signal PORT SELECT I, where I is the processor presently with the highest priority. This signal enables the related bus drivers on the MABP-3 boards to perform bus switching. Other logic elements will form a sequential logic network that generates signals controlling the memory. These signals will include SYNC, which substitutes for the SYNC signal normally generated by the processor during a fetch cycle and which may be used for other system functions. Another EXPM edge connector will be used to install the MAPT-6 board. Both boards will be connected to a special Mother Board section (the Shared Memory Bus) in a standard IMSAI 8080 chassis. Memory boards shared by the processors will also be plugged into this section. The SMF is completed by Bus boards plugged into the Mother Board of each processor, and by cables connecting the Bus boards to the MABP-3 boards on the Shared Memory Bus. A separate front panel will be connected (through an extender board) to each processor. The processors may be contained in separate cabinets connected only by the Bus board and its associated cable.

Bus Boards:

Each processor sharing memory will require a Bus Board and a cable to connect its Bus to the MABP-3 board. Because of the required length of the runs, an 18-inch flat
cable with 50-pin card edge connectors will be used to connect a Bus board to a MABP-3 board (Wilson, 2003). The multiprocessors will share the computer bus, the clock, and sometimes memory and peripheral devices (Gage, Galvin, Silberschatz, 2004). Multiprocessor systems have the following three advantages:

1) Increased throughput: By increasing the number of processors, more work can get done in less time. "The speed-up ratio with N processors is not N; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all of the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of N programmers does not result in N times the amount of work being accomplished." (Gage, Galvin, Silberschatz, pg 12)

2) Economy of scale: Multiprocessor systems will save more money than multiple single-processor systems, because they can share peripherals, storage space, and power supplies. "They allow several programs to operate on the same set of data on one disk and have all the processors share them, as opposed to having many computers with local disks and many copies of the data."

3) Increased reliability: Functions can be distributed among several processors, which negates system failure or a decrease in speed due to a faulty processor within the network. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. "Systems designed for graceful degradation are also called fault tolerant." (Gage, Galvin, Silberschatz, pg 12)
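The claim that the speed-up ratio with N processors is less than N can be illustrated with a small back-of-the-envelope calculation. The sketch below is hypothetical and not from the cited texts; it assumes a fixed serial fraction of work, in the spirit of Amdahl's law, to stand in for the overhead and contention described above:

```python
# Hypothetical illustration (not from the paper): expected speed-up with N
# processors when a fraction `serial` of the work cannot be parallelized.
# This is Amdahl's law; the paper only notes that speed-up is less than N.

def speedup(n_processors: int, serial: float) -> float:
    """Return the ideal speed-up for n_processors given a serial fraction."""
    return 1.0 / (serial + (1.0 - serial) / n_processors)

if __name__ == "__main__":
    # Even with only 10% serial work, six processors fall well short of 6x.
    for n in (1, 2, 4, 6):
        print(f"{n} processors -> speed-up {speedup(n, serial=0.10):.2f}")
```

With a 10% serial fraction, six processors yield a speed-up of only 4.0, which matches the observation that contention and coordination overhead lower the expected gain from each additional processor.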
This multiple processor system will use symmetric multiprocessing (SMP), where each processor runs an identical copy of the operating system. SMP basically means that all processors work equally; there is no CPU hierarchy (see Figure 2-3 below). The advantage of this model is that many processes can run simultaneously: N processes can run if there are N CPUs, without causing a significant deterioration of performance.

Figure 2-3 Symmetric Multiprocessing Architecture (multiple CPUs sharing a common memory)

Client/Servers:

Server hardware capabilities depend upon the resources that are being shared and the number of simultaneous users (Burd, 2006). In this case, an average of 12 people was the design capacity specified for the server that could possibly be accessing the network at one time (even though at present there are six employees). A distributed client server system will be used, which will allow the clients, or PC users, to access data as if it were stored centrally. A distributed configuration over a Wide Area Network (WAN) will be used so that the servers can be synchronized regularly to ensure that they hold the same
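The "N processes on N CPUs" point can be sketched with a small, hypothetical example (not from the paper) that fans independent tasks out to a pool of worker processes; under SMP the operating system's scheduler is free to place each worker on any available CPU:

```python
# Hypothetical sketch (not from the paper): under SMP, any worker process can
# be scheduled on any CPU, so N independent tasks can proceed in parallel.
from multiprocessing import Pool
import os

def work(n: int) -> int:
    """A stand-in for one unit of independent work."""
    return n * n

if __name__ == "__main__":
    n_cpus = os.cpu_count() or 1
    with Pool(processes=n_cpus) as pool:
        # The scheduler distributes these tasks across the CPUs; no CPU is
        # special, which is the essence of symmetric multiprocessing.
        results = pool.map(work, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because all processors are peers, no master CPU has to dispatch work to the others; the same operating system image schedules ready processes onto whichever CPU is free.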
• 23. - 15- data. This system will be centralized, which will enable it to act as a server system that satisfies requests generated by client systems. The general layout for a client server system is depicted in Figure 2.4 below. Computer server systems are broadly categorized as compute servers and file servers. Compute-server systems provide an interface to which clients can send requests to perform an action, in response to which they execute the action and send back the results warranted by the user or client. File-server systems provide a file-system interface where clients can create, update, read, and delete files. Figure 2.4 General structure of a client server system (Gage, Galvin, Silberschatz, 2004) A distributed computer system (DCS) was selected that would allow the clients or PC users access to data as if it were stored centrally. A distributed computing system is interrelated to a client/server system. The DCS technically is a collection of autonomous computers interconnected through a communication network in order to carry out the business functions of the organization (Umar, 1997). Memory is not shared globally; none of the systems connected by the network share memory. Information is exchanged through messages on the network. In essence the computers
• 24. - 16- have to work together and cooperate with each other in order to meet organizational requirements (Umar, 1997). The client/server model is a special case of the distributed computing model. Figure 2-5 below shows the relationship between the two. Figure 2-5 Interrelationships Between the Computing Models (computing models shown: terminal-host model, distributed computing model, file transfer model, peer-to-peer model, client/server model) The fact that the network computers of this system do not share memory is what sets it apart from a multiprocessor system. The client/server (C/S) model, which is one of the three models that can be used to achieve distributed computing, was used in this system design. The C/S model allows processes at different locations to readily exchange information and is more interactive than the file transfer model (Umar, 1997). Client Processes: The client service process performs the application functions on the client's behalf. Client processes may range from simple interfaces, to producing spreadsheets, or
• 25. - 17- producing analytical data such as graphs. Client processes generally have some of the following characteristics: • It interacts with the user through a user interface, which is typically a graphical user interface or an object-oriented interface. • It performs some application functions if needed, i.e. user interface processing, spreadsheets, report generation, and object manipulation. • It interacts with the client middleware by forming queries and/or commands in an application program interface format, which is the format that is readily understood by the client middleware. • It receives responses from the server middleware, which it displays, if need be, on the user interface. Server Processes: The server performs the applications on the server's side, and has the following characteristics: • It provides services to the client, some of which may be as simple as providing the time of day. Other services may be more complex, such as processing electronic transfer of funds. • An ideal server will hide its activities so the client does not have knowledge of the details of the functions that have been carried out on its behalf. • It is invoked by the server middleware and returns the results back to the server middleware.
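The request/response interplay between client and server processes described above can be sketched as a minimal compute server. This is a sketch under stated assumptions: the loopback host, the ephemeral port, and the trivial "upper-case" service are illustrative placeholders, not part of the actual system design.

```python
# A minimal compute-server sketch: the client sends a request, the server
# executes the action and returns the result. Host/port and the upper-case
# "service" are illustrative assumptions only.
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Start a one-shot compute server in a thread; return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def handler():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)      # receive the client's request
            conn.sendall(request.upper())  # execute the action, return result
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return srv.getsockname()[1]

def client_request(port, text):
    """Client process: form a request, send it, display the response."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(text.encode())
        sock.shutdown(socket.SHUT_WR)
        return sock.recv(1024).decode()

port = serve_once()
print(client_request(port, "hello server"))
```

The client side forms the request and displays the response; the server side hides how the action was carried out, mirroring the characteristics listed above.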
• 26. - 18- • It may provide some limited scheduling services when multiple clients are being provided service concurrently. But generally speaking, scheduling is usually a service that the server middleware is responsible for. • It provides error-recovery and failure handling services (Umar, 1997). Application Architecture: A two-tier architecture with distributed application programs will be used. The application programs shall be split between the client and server machines, which will communicate with each other through remote-procedure-call (RPC) middleware. The remote data will be stored in an SQL server and accessed through ad hoc SQL statements sent over the network. This architecture was selected because of its ability to be supported by SQL Windows software (Umar, 1997). Middleware: The C/S middleware provides information exchange services that give the user the ability to access services like remote databases and real-time message exchanges between remote programs, supported by directory, security, and failure handling services. The middleware can be viewed as the backbone of the client server system. The following questions were addressed in designing the client server system and determining what middleware would be used: 1. What basic services would the C/S middleware have to provide, and who would provide them? 2. What type of information exchange services would be required of the middleware?
• 27. - 19- 3. What type of management and support services would be provided by the middleware? 4. What are the functional aspects of OSF DCE, and how will they provide the basic C/S middleware services (Umar, 1997)? Network Operating System: The network operating system (NOS) will provide all of the basic C/S middleware services, thus providing the user with transparent access to printers, files, and databases across the network. Windows 2000 was the NOS selected to support the RPC paradigm previously mentioned; it will provide the server with the capabilities to access remote files, printers, e-mail, directory, and backup recovery services. The Open Software Foundation's Distributed Computing Environment (OSF DCE) will be used to integrate remote procedure calls with security, directory, time, file, and print services. OSF DCE is a good choice in terms of providing an open C/S environment where clients and servers from different vendors can operate through the network with each other and with the users of the system at the subject information system's location. This middleware will provide the following: • Remote communication interfaces (RCIs) for sending and receiving information across the network. RCIs may use basic protocols such as remote procedure call (RPC), remote data access, and message-oriented middleware (MOM). • Global directories that show the location of resources (servers, databases, files, printers) in the network. • Print and file services (Umar, 1997).
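The RPC-style information exchange that this middleware provides can be sketched with Python's built-in XML-RPC modules. This is an illustrative sketch, not the system's actual middleware: the `add()` procedure, the loopback address, and the ephemeral port are all assumptions made for the example.

```python
# A sketch of RPC-style middleware using Python's built-in XML-RPC support.
# The add() procedure and loopback address are illustrative assumptions.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # Remotely invoked procedure: executes on the server; the result is
    # marshalled back to the client process as the RPC response.
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: invokes the remote procedure as if it were a local call.
# Each request/response pair is a self-contained unit of work.
proxy = ServerProxy("http://127.0.0.1:%d" % port)
print(proxy.add(2, 3))
server.shutdown()
```

Each call carries all the data the server needs to generate a response, matching the request/response unit-of-work property of the RPC paradigm discussed in the next section.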
• 28. - 20- The client side of middleware can be quite intense because it is responsible for providing the user access to printers, files, databases, and other software and hardware through network connections and network gateways that may convert network protocols. The Windows 2000 NOS was selected because of its ability to cement together C/S applications enterprise-wide via a network through gateways by converting protocols from one NOS to another. This would allow the information system to achieve system objectives and reduce failures in network operations capacity. This is a crucial element in regards to the parent organization's ability to market their product niche, transmit pertinent data to respective clients, and receive respective data from clients at real-time processing speed. This middleware will cost an estimated $400 per client PC (Umar, 1997). Utilization of the RPC Paradigm: Generally speaking, the RPC paradigm is one where the client process invokes a remotely located procedure, and in turn the remote procedure executes and sends the response back to the client process. Note that each request/response of an RPC is treated as a separate unit of work; therefore each request must carry enough information for the server to generate and send back a response. Because a portion of the information is at the client and a portion is at the server, an Interface Definition Language (IDL) will be used to define what will be passed back and forth between the client and server by the RPC. Remote Data Access (RDA) was not used due to the insignificant amount of variation in the type of data being sent and received. Gateways and Applets:
• 29. - 21- The two comparable types of gateways on the market are Java-based gateways and CGI-based gateways. The primary difference between the two is that CGI gateway programs (which reside on the web server) are executed on the server platform and perform defined specialized functions, as opposed to simply retrieving and displaying an existing HTML page (Umar, 1997). Java gateways, on the other hand, distribute the code for the target application and send it to the web client, where it is executed. This allows Java applets to be embedded in HTML pages and sent to the web browsers where they execute. This methodology allows for remote application access to databases directly from the browser, which is a powerful tool (Umar, 1997). In regards to database gateways, a Java applet can be invoked which asks the user to issue a query and then sends that query to a remote application or database. This is a great user option where the database functionality runs on the client side (Umar, 1997). Java-based applets can be called from an HTML document with parameters passed in either direction. JavaScript, which is a scripting language (along with Visual Basic Script), allows scripts to be embedded in HTML, ultimately providing gateways for program access without compilation or link editing (Burd, 2006). The Java applet, which executes in another program, i.e. a web browser, runs within a specified area known as a sandbox. The sandbox provides extensive security controls that prevent the applet from accessing unauthorized resources or damaging the hardware, operating system, or file system (Burd, 2006). As depicted above, the capabilities of Java gateways far exceed those of CGI gateways; therefore Java gateways were the selection of choice for this system in regards to bridging the gap between web browsers and corporate applications and databases.
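The contrast drawn above between retrieving a static HTML page and executing a specialized gateway function on the server can be sketched with a tiny web server. This is an illustrative sketch only: the URL paths, the static page, and the toy upper-casing function are hypothetical stand-ins for real gateway programs.

```python
# A sketch contrasting plain page retrieval with a CGI-style gateway path
# that executes a function on the server. Paths and logic are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

STATIC_PAGE = b"<html><body>Welcome</body></html>"

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/cgi/upper/"):
            # Gateway behavior: execute a specialized function server-side
            body = self.path.rsplit("/", 1)[-1].upper().encode()
        else:
            # Plain web serving: retrieve and display an existing HTML page
            body = STATIC_PAGE
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging noise

server = HTTPServer(("127.0.0.1", 0), GatewayHandler)
port = server.server_address[1]
# Daemon thread: the server runs for the life of the process in this sketch.
threading.Thread(target=server.serve_forever, daemon=True).start()

print(urllib.request.urlopen("http://127.0.0.1:%d/index.html" % port).read())
print(urllib.request.urlopen("http://127.0.0.1:%d/cgi/upper/query" % port).read())
```

A Java gateway would instead ship the application code to the browser for client-side execution; the server-side dispatch shown here corresponds to the CGI style the text describes.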
• 30. - 22- Web Site: In a teaming effort, the web site will be set up by the marketing and IT departments of the information systems organization. The information systems organization will buy the site on the web as opposed to leasing or renting it. The web site software will coexist with the other existing network software (as depicted above). Network configuration for the site has also been depicted above. Backup site and administration tasks will be designated once the system has been constructed and put into operation. 2.5 Hard Drive: One-hundred-gigabyte magnetic disks will provide the bulk of the secondary storage space for each PC. Each disk has a flat circular shape, like a CD. Common platter diameters range from 1.8 to 5.25 inches; 5.25-inch disks will be employed in these PCs. The disk arm will be capable of reading information off a disk being rotated at 200 times per second. In order to combat the issue of possible volatile storage losses during temporary power loss, the system will have an electronic-disk device containing a hidden magnetic hard drive and a battery for backup power. If external power is interrupted, the electronic-disk controller will copy the data from the RAM to the magnetic disk. After external power has been restored, the controller will copy the data back into the RAM (Gage, Galvin, Silberschatz, 2004). In the hierarchy shown below in Figure 2.6, the storage systems shown above the electronic disks are volatile, whereas those below are non-volatile.
• 31. - 23- Figure 2.6 Storage Device Hierarchy (Gage, Galvin, Silberschatz, 2004) (hierarchy, fastest to slowest: registers, cache, electronic disk, magnetic disk, optical disk, magnetic tapes)
• 32. - 24- 2.6 Output Devices Monitors: The monitor that was selected for the PC workstations and its specifications are depicted below in Table 2-1. Table 2-1 HP L1506 Flat Panel Monitor Specifications HP L1506 Flat Panel Monitor Viewable area: 15.0 in (38.1 cm) Pixel pitch: 0.297 mm Brightness: up to 250 nits Contrast ratio: up to 450:1 Response rate (typical): 16 ms Viewing angle - horizontal: 130° Viewing angle - vertical: 100° Native resolution: 1024 x 768 @ 60 Hz Interface (analog/digital): analog Ports: Warranty - year(s) (parts/labor/onsite): 3/3/3 Printers / Scanners / Copiers: With high regard in reference to its speed and durability, the HP LaserJet 3390 All-in-One (AiO) was the printer of choice. “This unit lets you print and copy complete, high-quality documents in no time, at speeds of up to 22 pages per minute (ppm) letter. Work confidently—this product won’t hold you up. Instant-on Technology delivers fast first-page-out speeds—less than 11 seconds copying and less than 8.5 seconds printing—so you won’t waste valuable time waiting for output. You can pick up your completed job before our competitors’ devices have even begun to warm up.
• 33. - 25- Enjoy superior quality with genuine HP print cartridges. With HP Smart printing technology, the HP print cartridge and the device work together to achieve consistent, outstanding print quality, enhanced reliability, and a system that is easy to use, manage, and maintain. Automatic alerts let you know when the cartridge is low or out, and you can depend on convenient online ordering with HP Sure Supply™. Be flexible with enhanced finishing features. Standard automatic two-sided printing helps you conserve paper. Plus, this compact desktop product multitasks—if someone is printing or copying, you can still receive that important fax you’ve been waiting for. Maintain your momentum. With an optional 250-sheet input tray, you can get a total input capacity of up to 500 sheets, which means you can print more pages with less intervention and depend on the product to be ready when you need it. Increase work team efficiency. The product effortlessly handles your volume demands with 64 MB memory (expandable to 192 MB), 4 MB fax memory, and a fast, reliable engine. Get comprehensive print language support. Print complex documents with built-in support for HP PCL5e, HP PCL6, and HP postscript level 3 emulation. Share the All-in-One capabilities. Easily and reliably connect multiple users with standard 10/100Base-T Ethernet networking, or take advantage of easy direct connections with the Hi-Speed USB 2.0 port. Manage your All-in-One simply. HP Toolbox FX lets you interact with your product from the comfort of your desk, with configuration, status, and support for every feature.”
• 34. - 26- Servers: The HP Integrity rx7640 Server achieves high levels of functionality and value compared with other server products in the mid-range class. Combined with the processor-enhancing capabilities of the HP Super-Scalable Processor Chipset sx2000, the rx7640 server delivers breakthrough performance, outstanding scalability, and simplified management at an exceptional value. In addition, multiple-operating-system support gives you the freedom to run the right solution for your IT requirements. The rx7640 server makes highly available enterprise computing an affordable reality. • 2-8 Intel® Itanium® 2 processors (1.6 GHz) • 64 GB memory capacity • Up to 2 hardware partitions (nPars) • Up to 15 hot-plug PCI-X slots (See Table 2-2 below for server specifications)
• 35. - 27- Table 2-2 Server Specifications
Microprocessor architecture: 8 Intel® Itanium® 2 processors (1.6 GHz with 6 MB L3 cache)
Main memory: bus bandwidth 34 GB/s (32) peak (sustained); capacity 64 GB max; memory slots: 32 DIMM slots
Operating systems: HP-UX 11i v2; Microsoft® Windows® Server 2003 Enterprise Edition; Microsoft Windows Server 2003 Datacenter Edition
Internal storage devices: internal HDD drive bays – 4; removable media – 1 open bay for 1 DVD-RW or DDS or 2 slimline DVD
Maximum HDD (internal): 1200 GB (4 x 300 GB)
Expansion slots: PCI-X slots available – 15 x 266 MHz; I/O bandwidth – 23 GB/s (16) peak (sustained)
Core I/O: 10/100/1000BT LAN; Ultra160 SCSI; 100BT management LAN; RS-232 serial ports
Rack-optimized design: rackmount solution offering allows server to fit into 10U (44.45 cm height) space in all HP racks (Rack System/E and 10000 series rack); J1530B: HP field rack kit for server and server expansion unit; J1530BZ: HP factory rack kit for server and server expansion unit; for a complete list of racks and rack accessories, refer to http://h30140.www3.hp.com/
2.7 Network: “In order to allow computers on a network to communicate, the information has to get from the user's application to the network interface card and across the network into the user's application at the destination. This gets more complex when there may be computers of completely different architectures on the network, and even more complex when there are networks of different types connected together. To allow this, the process …” A Collaborative Network, which is a web-based network that uses the World Wide Web to combine both distributed and centralized technologies, will be used to network the PCs in the information system with long-term client databases. This will allow information to be readily exchanged between the consultants, lawyers, and clients. Each computer will operate as a separate workstation, but will be able to receive resources from the server
• 36. - 28- and share its own applications and files. Each computer in this collaborative network will also have the capability to work as a client machine that can receive and process data, but not share its own resources. There will be a web browser installed on each client that will act as a terminal that sources information from the centralized server. This will give each computer (client) the power of centralized computing. These network computers will communicate with the web using a protocol called TCP/IP. Sections of the information system organization's intranet will be connected to sections of the global internet so that authorized clients can access specified databases within the company network. These extended intranets are known as extranets or virtual private networks. This collaborative network will operate similar to a mainframe computer. It will receive information requests from web browsers, receive structures, and deliver the requested data. These servers will have the ability to distribute data anywhere in the world, and provide an open network on which global users can exchange information. The browser will grant system capability to clients, such as paying for services over a secured internet line (via the secured server) with credit cards. Networking Protocols: The network protocols will be categorized according to the communicative tasks they perform. There are two categories of protocols, which are as follows (see figure 2.6 below): • Application layer or upper layer protocols operate at the application, presentation, and session layers of the OSI model. These are also called user-oriented protocols because they allow data to be exchanged between the network users and the
• 37. - 29- applications. These protocols will first establish connections between network users and applications and then initiate data transmission. • Transport layer protocols work at the transport layer of the OSI model. Network Architecture: Network Software: The ability for multiple users to use programs to allocate and compete for various resources simultaneously on the same system is the primary objective of the network system. Reducing the complexity of allocating resources that are not present locally or that reside at a remote location was a secondary network design objective. In order to meet this tall order, the system software has to perform the following (Burd, 2006): • Find the requested resources on the network • Negotiate resource access with distant resource allocation software • Receive and deliver the resources to the requesting user or program If the computer system happens to make its local resources available to other computers, then the system software must implement an additional set of functions: • Listen for resource requests • Validate resource requests • Deliver resources via the network (Burd, 2006) Basically the system software plays both the access and response roles in regards to network access. The Windows XP operating system will implement the intelligence needed to respond to external resource requests. Internet Explorer, a self-contained component packaged within the Windows XP operating system, will enable the system to access the internet (Burd, 2006).
• 38. - 30- 2.8 Functional Aspects of the System: The primary task of software is to translate the user's requirements into CPU instructions that the system can understand, process, and respond to with formatted information that can readily be understood by the user. Some software can be complex because it bridges the following two gaps during the transformation process: • Human language to machine language • High-level abstraction to low-level detail The definitions of the two primary types of software, which are System Software and Application Development Software, that are used in a system, and more importantly the software that was selected for this particular system, are depicted in the following subsections. 2.8.1 System Software System software's primary function is resource allocation to satisfy user requirements. In a nutshell, System Software can be divided into the following layers: • System management – usually defined as utility programs that have the ability to control and manage the computer resources for the user. • System services – denoted as utility programs that are used by system management and application programs to perform common actions. • Resource allocation – depicted as utility programs that allocate hardware and other resources among multiple users and programs. • Hardware interface – defined as utility programs that interact with and administer control over hardware devices (Burd, 2006).
• 39. - 31- The operating system, which is a collection of utility programs, supports the system users and application programs. It facilitates user and application access to programs and hardware. The primary functions of Windows 2000, which was the operating system of choice selected for this designed computer system, are as follows: • Program storage, loading, and execution • File manipulation and access • Secondary storage management • Network and interactive user interfaces Web server software will be an optional component of the operating system software that will be included on all clients' servers (see the client/server section above). 2.8.2 Application Development Software: Application Development Software is composed of binary instruction code. Compilers such as Java and Visual Basic are just a few examples of recognized development software. Program translators such as C++ translate instructions into CPU instructions. Java will be used to interact with the CPU in regards to requesting web-based functions. C++ will be used to translate and request the CPU to process technical data. 2.9 Functional Databases: This section provides an overview of the requirements analysis process that was implemented during the actual database design. The design team, composed of a business/systems analyst and a database designer, began the requirements analysis process by recording the requirements of the business. Once they recorded these requirements, they analyzed the information to determine whether they had identified the user, customer, and
  • 40. - 32- business needs correctly. After identifying the business requirements and goals, they began the database design process. In order to design a database for a business, the designers needed to understand and document the requirements of the business. This allowed them to create and implement a database system that satisfied the organization's business requirements and the needs of its customers and end users (Burd, Jackson, & Satzinger, 2004). The database system requirements analysis process involved determining, analyzing and evaluating the business requirements (see figure 2.7 below). Figure 2.7 Stages in the Requirements Analysis Process The fundamental elements that were determined included the rules, data, and processes that defined the business. These elements define what the business required of the database and how the database system should be built in reference to those requirements (Burd, Jackson, & Satzinger, 2004). Once the business requirements were gathered, the design team proceeded with analyzing those requirements. By analyzing the
• 41. - 33- requirements, they determined the system requirements, and used those requirements to generate a functional requirements specification. The functional requirements specification included the development of process models, Data Flow Diagrams (DFDs), and a high-level conceptual data model of the database called a conceptual Entity-Relationship Diagram (ERD). The business/systems analyst produced the process models and DFDs, and the database designer generated the conceptual ERD (Rob, Coronel, 2004). Once they analyzed and documented the requirements, the analysis was evaluated. This was a three-step iterative process. Had the analysis been incorrect, the design team would have been required to repeat the entire process until they defined a final workable solution, but this was not the case. Determining Business Requirements: Requirements of the business were acquired by collecting information from users in regards to their tasks, information in regards to how data was interchanged and exchanged in the organization, and what factors would affect the business processes. To find out what tasks users perform, the business was divided into departments, and the departments were then subdivided into the functions that each user performs. The users were interviewed, and information was gathered in reference to the work and tasks they performed daily. This was done to determine what information each department dealt with, the types of processes that would be involved in the tasks, and the business rules of the department. Because information and processes are interdependent, information needed to be determined in conjunction with the processes it would affect, which means that neither could be seen in isolation (Rob, Coronel, 2004). Once a determination was made in regards to the information and associated processes, the business rules associated
• 42. - 34- with these were documented. For example, a business rule may determine how information can be accessed or how information relates to other information. Business rules are often the determinant of database referential integrity (Rob, Coronel, 2004). Business Requirements Analysis: The primary reason for researching and analyzing the business requirements is to generate a high-level conceptual data model for the proposed database. This high-level conceptual data model is called a conceptual ERD. An ERD is a concise description of the business data requirements and includes detailed descriptions of data groupings or entities, the relationships between the entities, and some of the constraints imposed upon data defined by the entities (Rob, Coronel, 2004). Table 2-3 Data Categories: entity – a group or category of data or objects; relationship – a connection between objects in a database. No implementation details were defined at this stage, and an ERD was created to ensure that the data requirements of the business had been met. The ERD, which is in a nontechnical format, was used to communicate the structure and content of the proposed database to users. The ERD defined the structure of the data without being concerned with the storage or implementation details of the final database. It allowed the user to see the larger picture clearly and, once verified, to decompose the ERD into a lower-level detailed logical data model. This ERD and the data operations defined by the process models
• 43. - 35- and DFDs represented the requirements of the business and how the proposed system would be developed. At this point the process models and DFDs could be cross-checked against the ERD, and a determination was made in regards to whether the business requirements had been met and whether the resulting database system would be functional and complete (Rob, Coronel, 2004). A modification of the ERD could have been made had the user discovered any inconsistencies between the process models, DFDs, and ERD. Requirements Evaluation: In the final stage of the three-step requirements analysis process, the results were evaluated. Requirements evaluation is the process of determining whether the business requirements that are gathered and analyzed from end users and management meet the initial requirements of the users and business involved. During this phase, the design team needed to ensure that they had covered all the information and interpreted the information correctly. In addition, a check was made for any conflicts in the information that they collected and analyzed. No significant conflicts were found; therefore a revision to the requirements analysis process was not necessitated.
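The entity and relationship categories recorded during requirements analysis (Table 2-3) map directly onto a relational schema: entities become tables and relationships become foreign keys. A minimal sketch follows; the table names, columns, and sample data are hypothetical illustrations, not the project's actual schema.

```python
# A sketch of how ERD categories map onto a relational schema:
# entities -> tables, relationships -> foreign keys.
# Names and data below are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE department (              -- entity
        department_name TEXT PRIMARY KEY,
        department_location TEXT
    )""")
conn.execute("""
    CREATE TABLE employee (                -- entity
        employee_name TEXT PRIMARY KEY,
        -- relationship: employee belongs to a department
        department_name TEXT REFERENCES department(department_name)
    )""")
conn.execute("INSERT INTO department VALUES ('Legal', 'Newport News')")
conn.execute("INSERT INTO employee VALUES ('A. Sublett', 'Legal')")
row = conn.execute("""
    SELECT e.employee_name, d.department_location
    FROM employee e JOIN department d USING (department_name)
""").fetchone()
print(row)
```

The foreign key enforces the referential integrity that, as noted above, the business rules often determine.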
  • 44. - 36- Figure 2.8 Stage 3 – Requirements Evaluation Normalization in Database Design: Normalization optimizes database management by reducing or eliminating redundant data and ensuring that only related data is stored in a relational database table. Normalization is based on levels or rules called normal forms (DeBetta 2005). Currently, five normal forms – numbered one through five – have been defined. Every normal form builds on the previous one. When you normalize a database, you want to • arrange the data into logical entities that form part of the whole • minimize the amount of duplicate data that is stored in a database • design a database in which users can access and modify the data quickly • ensure the integrity of the data in the database • optimize query times (DeBetta 2005)
• 45. - 37- First Normal Form (1NF): The objective of 1NF is to divide the database data into logical entities or tables. After each entity has been designed, a primary key is assigned as a unique identifier (UID). All attributes of an entity must be single-valued attributes. A repeating or multi-valued attribute is an attribute or group of attributes with multiple values for one occurrence of the UID. The database designer needed to move a repeating attribute or attributes to a separate entity and create a relationship between the two decomposed entities (Rob, Coronel, 2004). When working with tables, repeating columns were moved to their own table. In the new table, a copy of the original table's UID was used as a foreign key. Let's say that the Employee table, which is one of the administrative tables that was designed and used by the customer, has multiple values to define the various projects that an employee may be involved in over a period of time. Although projects start and finish on a specific date, they may stop temporarily because of prioritization. The Employee_Name attribute is the UID for the entity. The Employee table prior to 1NF is shown here.
• 46. - 38- Figure 2.9 Employee Table Before Normalization For example, Project (1–3), Start-Date (1–3), and End-Date (1–3) are multi-valued attributes or repeating columns. This violates the 1NF normalization rule. So Project, Start_Date, and End_Date were placed in a new table called ‘Project’ (DeBetta 2005). Figure 2.10 Employee and Project Tables in 1NF
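The 1NF decomposition shown in Figure 2.10 can be sketched in SQL: the repeating Project/Start-Date/End-Date columns move to their own Project table, keyed back to Employee by a foreign key. The sample data below are hypothetical.

```python
# A sketch of the 1NF decomposition: repeating project columns move out of
# Employee into a Project table with a composite UID and a foreign key.
# Sample data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (employee_name TEXT PRIMARY KEY);
    CREATE TABLE project (
        project_name  TEXT,
        start_date    TEXT,
        end_date      TEXT,
        employee_name TEXT REFERENCES employee(employee_name),
        PRIMARY KEY (project_name, start_date)   -- composite UID
    );
    INSERT INTO employee VALUES ('Jones');
    INSERT INTO project VALUES ('Alpha', '2006-01-01', '2006-03-01', 'Jones');
    INSERT INTO project VALUES ('Beta',  '2006-02-01', NULL,         'Jones');
""")
rows = conn.execute(
    "SELECT project_name FROM project "
    "WHERE employee_name = 'Jones' ORDER BY project_name"
).fetchall()
print(rows)   # one row per project; no repeating columns in Employee
```

Each additional project now adds a row to Project rather than a repeating column group to Employee, which is exactly what 1NF requires.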
• 47. - 39- The UID for Project is a composite key comprising ‘Project_Name’ and ‘Start_Date’, and the foreign key for Project is ‘Employee_Name’, which creates a relationship between the Employee and Project tables. Second Normal Form (2NF): The aim of 2NF is to take data that is partly dependent on the primary key and place it in another table. If a table has a composite UID, all the non-UID attributes must be dependent on all the UID attributes – not just one or some of them. This is a requirement of 2NF. The designer had to move any attributes that were not dependent on an entire composite UID to another table and establish a relationship between the two tables (DeBetta 2005). Figure 2.11 The Project Table Before 2NF For example, the department details depend on ‘Project_Name’ only, instead of on the whole key. The solution for this violation of 2NF was to move ‘Department_Name’ and ‘Department_Location’ to a new table called Department:
• 48. - 40- Figure 2-12 Project and Department Tables in 2NF The Project table has a composite primary key comprising ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’ and ‘Department_Name’. And the table has an ‘End_Date’ column. The Department table has ‘Department_Name’ as its primary key, and a ‘Department_Location’ column. Third Normal Form (3NF): The aim of 3NF is to remove data in a table that is independent of the primary key (DeBetta 2005). To illustrate, a Priority attribute was added to the Project table. The Priority attribute has potential values such as ‘High’, ‘Medium’, and ‘Low’. Additionally, an attribute called ‘Priority_Level’ was added, which further qualified Priority into a more detailed subgroup with potential values of "1", "2", and "3".
• 49. - 41- The figure is shown here. Figure 2-13 The Project Table Before 3NF The Project table has a composite primary key consisting of ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’ and ‘Department_Name’. Other columns in the table are ‘End_Date’, ‘Priority’, and ‘Priority_Level’. In this example, ‘Priority_Level’ depends on Priority first; ‘Priority_Level’ depends on the ‘Project_Name’ and ‘Start_Date’ UID through Priority only. This transitive dependency of ‘Priority_Level’ is unacceptable in 3NF. So you need to move the ‘Priority’ and ‘Priority_Level’ attributes from the Project table to their own Priority table (DeBetta 2005).
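The 3NF decomposition just described, together with the lossless-join property discussed later under 5NF, can be sketched as follows. The project rows are hypothetical examples, not data from the actual system:

```python
# Project rows before 3NF: Priority_Level depends on Priority, which in turn
# depends on the (Project_Name, Start_Date) UID -- a transitive dependency
project_before = [
    {"Project_Name": "Alpha", "Start_Date": "2006-01-09",
     "Priority": "High", "Priority_Level": 1},
    {"Project_Name": "Beta", "Start_Date": "2006-02-13",
     "Priority": "High", "Priority_Level": 1},
    {"Project_Name": "Gamma", "Start_Date": "2006-03-06",
     "Priority": "Medium", "Priority_Level": 2},
]

# 3NF: move the Priority -> Priority_Level mapping into its own Priority table
priority_table = {row["Priority"]: row["Priority_Level"] for row in project_before}
project = [{k: v for k, v in row.items() if k != "Priority_Level"}
           for row in project_before]

# Lossless join: rebuilding the original rows loses nothing and adds nothing
rebuilt = [dict(row, Priority_Level=priority_table[row["Priority"]])
           for row in project]
assert rebuilt == project_before
```

Note how the two ‘High’ rows collapse into a single Priority-table entry after decomposition, which is exactly the redundancy 3NF removes.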
• 50. - 42- The tables in 3NF are shown here. Figure 2-14 Project and Priority Tables in 3NF The Project table has a composite primary key comprising ‘Project_Name’ and ‘Start_Date’. The foreign key comprises ‘Employee_Name’, ‘Department_Name’, and ‘Priority’. And the table has an ‘End_Date’ column. The Priority table has ‘Priority’ as its primary key, and it has a ‘Priority_Level’ column. Normalization of production systems usually stops at 3NF. Levels above 3NF are conceptually complicated and applicable only to situations that require specific solutions (DeBetta 2005). Boyce-Codd Normal Form (BCNF): Unlike 3NF, BCNF requires each attribute or combination of attributes upon which any attribute is fully or functionally dependent to be a candidate key. For example, if ‘Column_B’ is functionally dependent on ‘Column_A’, you can associate every value in ‘Column_A’ with only one value in ‘Column_B’ at a specific point in time. A specific value for ‘Column_A’ always has the same value in ‘Column_B’, but the reverse is not
• 51. - 43- true. A specific value for ‘Column_B’ can have multiple corresponding values in ‘Column_A’. Fourth Normal Form (4NF): For 4NF, a table needs to be in BCNF, and it should not have any multi-valued dependencies. Neither should the dependent attributes be a subset of the attributes they depend on, nor should the dependent attributes combined with the attributes they depend on constitute the entire entity. In a multi-valued dependency, two or more attributes depend on a determinant, and each dependent attribute has a particular set of values. The values in the dependent attributes are independent of each other (DeBetta 2005). Fifth Normal Form (5NF): For 5NF to be in effect, an entity should be in 4NF, and all join dependencies on the table should be related to its candidate keys. When you decompose tables through normalization, you should be able to reconstruct the original table by doing joins between the resulting tables without losing data and without generating extra rows. This is a lossless join. When you cannot decompose a table into smaller tables with different keys from the original without data losses, the table is in 5NF (DeBetta 2005). Summary of the Database Design: These Access databases were designed using the parent-child or network model. One of these databases was designated as the Industrial Consultant Client Database, which stored the industrial consultee's name (which was specified and used as the
• 52. - 44- primary field) and related information, i.e. organization, address of organization, phone number, fax, and e-mail (which were listed as records in the rows of this datasheet). A one-to-many relationship was used to construct this database, as well as all other client-related databases, including those used for patent law clients (Rob, Coronel, 2004). The customer's organization and ID (a composite key, where two or more attributes were used in combination as the primary designator key) was used as the designated foreign key, to which other databases are related or connected. 3.0 Security Introduction Many current computer systems have a very poor level of computer security, which has led to identity theft as well as theft of intellectual property. A single breach can be disastrous, and weak access control is the cause of much of the insecurity of current computer systems. For example, once an attacker gets into a system that is not well equipped with security features, the attacker usually has access to most or all of the features of that system. Because computer systems are very complex and cannot be guaranteed to be free of security defects, the security posture of a computer system tends to change readily. That is where computer security comes into play. Computer security comprises the measures used to keep a system's valuable information out of the wrong hands. Computer security involves specifying and implementing security policies, that is, defining and setting up the perimeter from lower-level to higher-level computer security access. Security requires consideration of the external environment as well as protection of the system. Both the data and the code in the system need to be protected, along with the physical resources of the system, from unauthorized intrusion and access.
• 53. - 45- Security software should be designed to deter some of the following forms of malicious access: • Unauthorized reading of data (i.e. theft of information) • Unauthorized destruction of data • Unauthorized modification of data • Illegitimate use of the system and its peripherals Authentication and Passwords Authentication, or the ability to protect the system from unauthorized access, is a major security problem. This problem will be remedied by user authentication, which is based on user knowledge, user possession, or a user fingerprint. Passwords are the most commonly used vehicle to authenticate the user's identity to the system. Passwords are a combination of a certain number of letter characters and numbers, as prescribed by the software security parameters (i.e. some mainframe systems may require the password to be 8 characters, of which the third character has to be numeric). The encryption that is set up by the UNIX system will allow the system to accept the password of the user and store it safely. However, passwords can accidentally be exposed or easily guessed. Remedies for authentication, besides the UNIX system, that will be incorporated are as follows: Programs and System Threats: An unauthorized user may fraudulently access the system or misuse its programs by the following methods: • Trojan horses are code segments that misuse their environment. This problem can be mitigated by avoiding long search paths, such as those common on UNIX systems, along
• 54. - 46- with a non-trappable key sequence, like the control-alt-delete combination that Windows NT uses. • Trap doors are instances where the software has been designed to check for a specific user ID that might circumvent the normal security procedures. A trap door can also be included in the compiler, where the compiler can generate standard object code as well as a trap door that is invisible in the source code. In order to detect trap doors, all of the source code of the system has to be analyzed. Preventive Security Measures: The following measures can be implemented to combat security issues: 1) Assessing and maintaining a risk management policy for hardware, software, and the people that use them. 2) Researching new security issues and adapting policies without degrading the performance of the computers and their networks. 3) Managing physical access to computers. 4) Doing all of the above without service disruption to those who rely on the network. Techniques for Creating Secure Systems: The following techniques will be used in engineering secure systems: 1) A simple microkernel will be written to minimize the chance of the system containing bugs. 2) A larger operating system (OS), capable of providing a standard API like POSIX, will be built on the microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers will not be affected.
• 55. - 47- 3) Cryptographic techniques will be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified. 4) Strong authentication techniques will be used to ensure that communication end-points are who they say they are. 5) Chain of trust techniques will be used in an attempt to ensure that all software loaded has been certified as authentic by the system's designers. 6) Mandatory access control will be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges. 7) Capability and access control list techniques will be used to ensure privilege separation and mandatory access control. Processes of Computer Security: User account access controls and cryptography can protect system files and data, respectively. Firewalls are by far the most common prevention systems from a network security perspective, as they can (if properly configured) filter certain packet types, preventing some kinds of attacks. Intrusion Detection Systems (IDSs) are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems. "Response" is not necessarily defined by the assessed security requirements of an individual system and may cover the range from a simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, complete destruction of the compromised system is favored. The manufacturers and specifications of the firewalls, virus protections, and other
• 56. - 48- devices that will be used during system implementation will be withheld at this point for security purposes. Management of risks and weakest links will minimize security breaches while maximizing productivity and performance. The most effective methodology in computer security will be to assert and maintain an intelligent policy for risk-managing workstation use and functionality without inflicting a denial of service on the people who rely on access to the computer or network.
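The password-storage approach described under Authentication and Passwords can be sketched as follows. The original design relied on UNIX password encryption; this is a modern stand-in using PBKDF2 from Python's standard library, not the mechanism the actual system used, and the password shown is purely illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash so the plaintext password is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("s3cret-example")  # hypothetical password
```

Because only the salt and digest are stored, an attacker who reads the password file still cannot recover the password directly, which addresses the accidental-exposure risk noted above.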
• 57. - 49- CHAPTER 3 PRIMARY RESEARCH OF SYSTEM EVALUATION AND VALIDATION Simulation and Modeling: 3.1 Purpose and Justification for Using a Simulation Model For all technical purposes, simulation is the imitation of a dynamic system using a computer model in order to evaluate and improve system performance. Discrete event simulation, which models the effects of the events in a system as they occur over time, was used to implement the simulation model that aided the analyst in the design of this system (Harrell, Biman, and Bowden, 2004). The emphasis in industry today has been placed on time-based competition; traditional trial-and-error methods of decision making are no longer adequate. The power of simulation lies in its ability to provide a method of detailed analysis that is not only formal and predictive, but is capable of predicting the performance of complex systems (Harrell, Biman, and Bowden, 2004). The key to sound management decisions in any industry lies in the ability to accurately predict the outcomes of alternative courses of action (Banks, Carson, Barry, & Nicol, 2001). This simulation will provide the designers of the system as well as management with this type of information. Some characteristics of simulation which make it such a powerful planning and decision making tool are summarized as follows (Banks, Carson, Barry, & Nicol, 2001): • Captures system interdependencies. • Accounts for variability in the system.
• 58. - 50- • Is versatile enough to model any system. • Shows behavior over time. • Is less costly, time consuming, and disruptive than experimenting on the actual system. • Provides information on multiple performance measures. • Is visually appealing and engages people’s interest. • Provides results that are easy to understand and to communicate. • Runs in compressed, real, or even delayed time. • Forces attention to detail in a design. Simulation results will be used to make organizational decisions in regards to overhead predictions, system layout, man-hours, and the number of targeted customers that the customer service representatives (servers) can interface with in a normal eight-hour work day; thus the accuracy of the results of the simulation is very important. In many instances simulations appear realistic on the surface because simulation models, as opposed to analytical models, can incorporate any level of detail about the real system. To avoid this illusion it is best to compare system data to model data, and to make the comparison using a wide variety of techniques, including an objective statistical test if possible (Banks, Carson, Barry, & Nicol, 2001). Description of System: The system consists of a group of five PCs connected by a LAN composed of optical-fiber-based FDDI networking, which runs at 100 megabits per second. The LAN will be connected to a server that is connected to the WAN, which provides access to the internet through cooperative IP address recognition and protocol.
• 59. - 51- Table 3-0 Problem Properties

Entity: Customers
  Attributes: industrial or legal consultation
  Activities: customer response received and read
  Events: electronic receipt of response
  State variables: number of responses received per PC per day

Entity: Targeted client's feedback
  Attributes: positive or negative response received from customer
  Activities: customer response inventoried in system to a specific target area (i.e. industrial or legal)
  Events: response filed and stored in the designated consultant's database
  State variables: number of positive responses versus number of negative responses

Entity: Customer service follow-up
  Attributes: delta in time between receipt of customer response and the consultant's follow-up
  Activities: response to customer
  Events: response sent to customer via electronic disbursement of the information system organization's response
  State variables: number of responses sent to and received by targeted customers

Problem Formulation: The Information System Organization will have five PCs that will receive and inventory responses that targeted customers send in electronically through the web in regards to electronic questionnaires that each customer has received at their organization (see Table 3-0). There were two questionnaires, one composed of five questions and the other composed of ten questions, shown in Appendixes A-1 and A-2 respectively. These questionnaires are composed of targeted questions that have been developed by the industrial consultant and the patent attorney to procure industrial consulting and patent law clients. Potential clients will then e-mail their responses back to the information
• 60. - 52- system organization. Customer responses will then be filed contingent on how they respond to the questionnaires. They will either be filed as potential industrial consulting clients or potential patent law clients. Correct inventory of, and expeditious feedback to, these client responses are essential to the parent organization's marketing and sales. There is more than one service channel in this simulation; in fact there are five PCs being used Monday through Friday. For the sake of simplicity we will refer to the PCs as PC1, PC2, PC3, PC4, and PC5. A key assumption is that each PC operator operates the same PC for eight hours per day Monday through Friday; they do not change PCs at any time. The operator of PC1 is a better operator than the operator of PC2, the operator of PC2 is better than the operator of PC3, the operator of PC3 is better than the operator of PC4, and the operator of PC4 is better than the operator of PC5. E-mail response arrivals are distributed as shown below in Table 3-1. The distributions of e-mail service times for PCs 1 through 5 are shown in Table 3-2 through Table 3-6.

Table 3-1 Interval Distribution of Incoming E-mails
Time between e-mail arrivals (minutes) | Probability | Cumulative probability | Random-digit assignment
1 | 0.25 | 0.25 | 01-25
2 | 0.30 | 0.55 | 26-55
3 | 0.25 | 0.80 | 56-80
4 | 0.20 | 1.00 | 81-100
• 61. - 53- Table 3-2 Service Distribution of PC1
Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
2 | 0.30 | 0.30 | 01-30
3 | 0.28 | 0.58 | 31-58
4 | 0.25 | 0.83 | 59-83
5 | 0.17 | 1.00 | 84-100

Table 3-3 Service Distribution of PC2
Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
3 | 0.35 | 0.35 | 01-35
4 | 0.25 | 0.60 | 36-60
5 | 0.20 | 0.80 | 61-80
6 | 0.20 | 1.00 | 81-100

Table 3-4 Service Distribution of PC3
Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
4 | 0.37 | 0.37 | 01-37
5 | 0.28 | 0.65 | 38-65
6 | 0.25 | 0.90 | 66-90
7 | 0.10 | 1.00 | 91-100
• 62. - 54- Table 3-5 Service Distribution of PC4
Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
5 | 0.39 | 0.39 | 01-39
6 | 0.31 | 0.70 | 40-70
7 | 0.15 | 0.85 | 71-85
8 | 0.15 | 1.00 | 86-100

Table 3-6 Service Distribution of PC5
Service time (minutes) | Probability | Cumulative probability | Random-digit assignment
6 | 0.40 | 0.40 | 01-40
7 | 0.29 | 0.69 | 41-69
8 | 0.13 | 0.82 | 70-82
9 | 0.18 | 1.00 | 83-100

The problem was to determine how well this five-PC system would work. To estimate the system's measures of performance, a simulation of 1 hour of operation was made. The simulation should proceed as follows: the PC1 operator receives an e-mail from a client, then distributes and files the e-mail response. The PC2 operator receives an e-mail from a client, then distributes and files the e-mail response. The PC3 operator receives an e-mail from a client, then distributes and files the e-mail response. The PC4 operator receives an e-mail from a client, then distributes and files the e-mail response. The PC5
• 63. - 55- operator receives an e-mail from a client, then distributes and files the e-mail response. Setting of Objectives and Overall Project Plan: The simulation will show the following: 1. Number of e-mails received, distributed, and filed 2. Time between e-mails serviced 3. Arrival time of e-mails 4. Service time used towards evaluation and distribution of e-mails 5. Time an e-mail waits in queue before being serviced 6. Average time an e-mail has to wait before being serviced 7. Average time between e-mail arrivals Consequently, the service time of each e-mail is very important, because the consultants need to get the response information so that they can contact the party that filled the questionnaire out while it is fresh in that party's mind. The key to marketing and sales is presenting the product in a manner that makes the product appealing and desirable to the targeted audience, and following up with the potential client as quickly as possible. That is why the wait time for e-mails in the queue must be kept to a minimum. Therefore it was determined that performing a simulation would be an appropriate methodology. The simulation would ultimately show whether the system is balanced, whether another PC should be added, or whether a PC operator may need to be replaced due to over-extended service time. Programming the simulation in C++ is an alternative method that could be used, because all of the details of the problem scenario can be programmed into the simulation. The results of the initial simulation
• 64. - 56- model can then be compared with those of the C++ simulation. A more direct indicator will be a comparison of the average queue time for in-service e-mails. The number of people and the amount of money involved are five and $15,000 respectively. Note that cost must be kept to a minimum due to the limited size of the organization and the fact that this is a new organization that has not yet been established. Model Conceptualization: General Layout: Data for this system model is limited because there is no actual system to model it after. However, a general layout of the system is designed as follows: 1) There will be 5 PCs connected together by a LAN for intranet access, and also connected to a WAN for internet access. 2) These five PCs will be operated by information system representatives that have a strong knowledge base in the areas of industrial consulting and patent law. Operations that Require PC Service Time 1) The PC operators will be receiving e-mail responses to electronic questionnaires (see Appendix A-1 and Appendix B-1) and evaluating them in regards to whether they need to be responded to and which division specifically needs to respond to them. 2) Distributing and filing the e-mail responses that need further correspondence with the correct division (either the industrial consulting or patent law division of the organization). 3) Sending out a secondary response after positive feedback is received in reference to the questionnaire. The secondary response will be a second questionnaire sent out with
• 65. - 57- more detailed questions that are more specific in regard to that subject area, i.e. industrial consultation or patent law. Note, this secondary response has not been developed yet. 4) The key to this type of system is that the customer is not personally waiting for service, either in person or on the phone. The target customer receives a secondary response via the internet within a 24-hour period. The customer is able to access and respond to this response without engaging in and wasting valuable communication time over the phone or face to face. Model Objectives: 1) To calculate the average amount of service time 2) To calculate the average amount of time an e-mail stays in the system before being serviced by a PC operator. 3) To calculate the average time it takes for each individual server to file and distribute the e-mail responses appropriately to the right department 4) To find out whether the system is balanced, such that additional PCs will not be necessitated to handle the influx of work. Data Collection: The main objective of this study is to verify that each PC operator can service approximately 50 e-mails, as calculated below, on a consistent basis. This is the most important data because approximate sales, as shown in the implementation phase below, can be predicted, as well as the facets of the overhead budget that would be necessitated to operate and sustain the system. As previously stated, the time an e-mail waits in queue does not have a direct effect on customer service or the sales generated.
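The random-digit assignments defined in Tables 3-1 and 3-2 above can be sketched as a small hand-simulation in code. The mapping function mirrors the cumulative-probability columns of the tables; the one-hour loop for PC1 is a simplified single-server illustration, not the full five-PC model:

```python
import random

# Table 3-1: time between e-mail arrivals (minutes) -> probability
interarrival = [(1, 0.25), (2, 0.30), (3, 0.25), (4, 0.20)]
# Table 3-2: PC1 service times (minutes) -> probability
service_pc1 = [(2, 0.30), (3, 0.28), (4, 0.25), (5, 0.17)]

def sample(dist, digit):
    """Map a random digit in 01-100 onto a value via the cumulative
    random-digit assignment used in the tables."""
    cumulative = 0.0
    for value, prob in dist:
        cumulative += prob
        if digit <= round(cumulative * 100):
            return value
    return dist[-1][0]

# Simulate one hour of arrivals at PC1 (first-come, first-served)
random.seed(2006)
clock = served = busy_until = total_wait = 0
while clock < 60:
    clock += sample(interarrival, random.randint(1, 100))  # next arrival
    start = max(clock, busy_until)                          # wait if PC1 is busy
    total_wait += start - clock
    busy_until = start + sample(service_pc1, random.randint(1, 100))
    served += 1
```

For example, a drawn digit of 26 maps to a 2-minute interarrival gap, matching the 26-55 range in Table 3-1; extending the sketch to all five PCs only requires swapping in the service distributions from Tables 3-3 through 3-6.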
• 66. - 58- Each PC operator will start with 70 e-mails to service in their queue at the beginning of the day; this quantity is the amount that was prescribed by management after assessing the following calculation: An 8-hour work day, minus 30 minutes for lunch and two 15-minute breaks, leaves 7 actual hours of work time per PC operator to service e-mails. There are (7 x 60 minutes = 420 minutes) 420 minutes in 7 hours. Assuming that it can take a maximum service time of 8 minutes for an operator to access, interpret, file, and service an e-mail, 420 minutes divided by 8 = 52.5. Therefore it was assumed that each operator should be able to service 52 e-mails in the course of one day. Therefore five operators working five days a week should be able to service 52 x 5 (days) x 5 (PC operators) = 1,300 e-mails for the week; 1,300 x 4 (weeks) = 5,200 e-mails per month; 5,200 x 12 (months) = 62,400 e-mails serviced per year. The hours of operation of the PCs are between 8 am EST and 4 pm EST. Bearing in mind that the individuals who have the authority to review the initial questionnaires shown in Appendix A-1 and B- would be upper management, who would normally send out response e-mails after normal business hours, the data that the PC operators service will be collected two days prior and placed into their queue one day prior to them accessing it. Model Translation: This model will not be programmed into a simulation language. Instead, for the sake of simplicity and diminished development time, special-purpose simulation software will be used: the ‘Pro Model’ simulation software. This software was selected because the problem is amenable
• 67. - 59- to solution with this type of software, which will result in reduced model development time (Gagne, Galvin, and Silberschatz, 2004). The Arena Business Edition software will be used to simulate this model. Graphical representation and object-based development are some of the key attributes of the Arena Standard Edition software. Modules represented by icons will be used to represent the individual PCs and the flow of e-mails that enter the system to be evaluated. Upon evaluation, the e-mails that need to be responded to will be sent to one of two selected modules: either the module that contains e-mails for targeted industrial consultees or the module that contains the e-mails for the targeted customers that need patent attorney representation. The standard resources that this software provides in regards to queuing operations, process logic, and system data make the software an excellent fit. The standard graphics that are used in this software will be used to show the average number of e-mails serviced for the five-day week and the average time e-mails are left in queue. Graphical representation of all of these elements will be ideal for summarizing data for presentation purposes. Investors can predict organizational income, growth, and overhead expenditures. Random numbers will be generated to perform an analysis on the input data given (assumed) in the problem formulation and implementation sections. Hand calculations will be used to perform the analysis, which consists of the following three steps: 1. Identifying the appropriate probability distribution 2. Estimating the parameters of the hypothesized distribution 3. Validating the assumed statistical model by a goodness-of-fit test (Gagne, Galvin, and Silberschatz, 2004)
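Step 3 of the hand-calculated input analysis above can be sketched as a chi-square goodness-of-fit statistic, comparing observed interarrival counts against the Table 3-1 probabilities. The observed counts below are hypothetical, for illustration only:

```python
# Table 3-1 interarrival probabilities (minutes -> probability)
expected_p = {1: 0.25, 2: 0.30, 3: 0.25, 4: 0.20}

def chi_square_stat(observed):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E
    over the distribution's categories."""
    n = sum(observed.values())
    return sum((observed.get(v, 0) - n * p) ** 2 / (n * p)
               for v, p in expected_p.items())

# Hypothetical sample of 100 observed interarrival times
observed = {1: 27, 2: 28, 3: 24, 4: 21}
stat = chi_square_stat(observed)
# Compare stat with the chi-square critical value for k - 1 = 3 degrees
# of freedom (7.81 at the 0.05 level); a smaller statistic gives no
# reason to reject the hypothesized distribution.
```

With these counts the statistic is about 0.38, well below 7.81, so the hypothesized Table 3-1 distribution would not be rejected for this sample.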
• 68. - 60- Verification: The conceptual model must be realistically and correctly represented in the computerized representation. The following guidelines will be used for verification purposes: 1. The computerized representation will be checked by an outside vendor 2. The model output will be examined closely for reliability under a variety of settings, inputs, and scenarios by the systems analyst and the PC operators 3. The computerized representation will print the parameters at the end of the simulation to ensure that the parameter values have not been changed by mistake 4. The computerized representation shall be made to be as self-documenting as possible. Every variable will be depicted by a precise definition 5. An interactive run controller (IRC) will be used to debug and correct errors in the simulation (Gagne, Galvin, and Silberschatz, 2004) Validation: Model assumptions for this model fall under both categories: structural assumptions and data assumptions. Structural assumptions refer to how the system works in reality, which has been outlined in the problem formulation section above (Banks, Carson, Barry, & Nicol, 2001). The only assumption that was not addressed is that e-mails are serviced in the order in which they entered the queue; simply put, on a first-come, first-served basis. The structural assumptions will be verified by actual observation on a daily and weekly basis. Data assumptions, such as the number of e-mails serviced per PC per hour, per day, and per month, and the amount of revenue that this could possibly lead to, are important in regards to generating an economic analysis from a sales
• 69. - 61- standpoint. Calculations have been performed in the problem formulation and implementation sections to validate these assumptions. Experimental Design: The model will simulate the e-mail being accessed by the PC operator, at which point the operator will evaluate the response and distribute and file the e-mail response to the appropriate section of the organization. The configuration of runs may vary if experimental data supports it; for example, if the number of e-mails that were expected to be serviced by each operator was consistently exceeded, then the number placed in queue to be serviced the previous day would be changed by an increment of 10 in order to reduce the amount of possible PC operator idle time. The number of simulation runs will be dictated by the consistency of the output after the simulation has been actively run for a 6-month period. However, note that this is a working simulation where all input and output data will be used to procure clients and ultimately increase revenue for the parent industrial consultant and patent law organization. During this six-month period, PC operators will be monitored on their production monthly. Additional training in regards to interpretation, filing, and distribution will be given by the industrial consultant and the patent attorney. This will serve two primary purposes: to increase the number of e-mails serviced and, more importantly, to avoid discarding potential customers due to wrongful interpretation of questionnaire responses by the PC operator. An output analysis for steady-state simulations will be performed for a single long-run simulation. After six months a managerial evaluation of all experimental data will be performed and the system shall be tweaked according to the permutations of the experimental data collected. Production Runs and Analysis:
• 70. - 62- The production run analysis shall be performed by output analysis based on the random variability of the random number generations that will be used to produce the values of the input variables; two different streams or sequences of random numbers will produce two different sets of outputs which (probably) will differ (Banks, Carson, Barry, & Nicol, 2001). The performance of the system will be measured by a parameter θ; the result of a set of simulation experiments will be an estimator θ1 of θ. The precision of the estimator will be measured by the variance of θ1. The purpose of the statistical analysis will be to estimate the variance or determine the number of observations required to achieve the desired precision. Here Y is the total number of e-mails serviced per week by the networked system of PCs; Y will be treated as a random variable with an unknown distribution. The simulation shall be run for 1 week, which will provide a single sample observation from the population of all possible observations on Y. The run length of the sample size will be progressively increased to n observations, Y1, Y2, ..., Yn, based on a run length of n weeks up to a maximum of 24 weeks. However, these observations do not constitute a random sample, because they are not statistically independent. Therefore, since statistical independence will not be directly applicable to the analysis of this data, other methods and assumptions shall be employed during model conceptualization. More Runs: As long as the number of e-mails serviced does not decline by more than 20%, additional simulations beyond six months will not be necessitated or performed, due to the nature of the business and, more importantly, the shortage of start-up revenue possessed by the organization.
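The point estimator and its precision, as described under Production Runs and Analysis, can be sketched as follows. The weekly totals are hypothetical observations Y1, ..., Yn, chosen to average the 1,300 e-mails per week calculated earlier:

```python
import statistics

# Hypothetical weekly totals of e-mails serviced (observations Y1..Yn)
weekly_totals = [1280, 1310, 1295, 1322, 1288, 1305]

theta_hat = statistics.mean(weekly_totals)         # point estimator of theta
var_hat = statistics.variance(weekly_totals)       # sample variance of the Y_i
std_error = (var_hat / len(weekly_totals)) ** 0.5  # precision of theta_hat
```

Because consecutive weekly totals drawn from one long run are not statistically independent, these classical formulas would understate the true variance; in practice a batch-means or replication approach would be substituted, as the text notes.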
• 71. - 63- Documentation and Reporting: The only individual that will be running reports will be the manager of the information systems organization; the model will not be modified by different analysts. Progress documentation will be implemented for this model, due to the small, close-knit environment of the organization. A chronological history of system performance, changes made, and work done will better serve this organization. Reports shall be generated and discussed each week to enable the PC operators to see where their numbers are. This is important to the PC operators, since their monthly incentive bonuses are based on these numbers. The only deliverables or major accomplishments will be the number of e-mails serviced that resulted in new client procurement. The logic behind this is simple: revenue is made by effort put forth on the front end, by the PC operator being prudent in interpreting, distributing, and filing the e-mails that had positive responses. This has a direct correlation with the revenue earned by the parent organization. Production results before and after training sessions shall be documented for every PC operator, as well as their feedback on the instruction given during the training. This feedback is important and shall be merged into upcoming training sessions for future as well as present employees. A monthly or final report shall be distributed to the parent organization, primarily the industrial consultant, patent attorney, chief marketing executive, and CEO. Decisions in regards to adjustments of input variables, training changes, equipment changes or additions, and management shall be based on these final reports.
Implementation Phase: Rough Economic Analysis of Expected Simulation Results: If 10% of the 62,400 e-mails serviced per year (previously stated in the Data Collection Phase) resulted in client procurement for both divisions combined, that would yield 6,240 prospects; divided evenly between the two divisions, there would be 3,120 probable new customers for each division. Taking 5% of those to arrive at a more realistic figure yields 156, or roughly 160, new customers annually for each division. This would break down as follows in terms of compensation:
160 Industrial Consultant clients x average charge of $4,000 = $640,000 annually
160 Patent Law consultations (which include representation for new patent submittals, defense against infringement suits, prosecution of infringement suits, and contracts): 160 clients x average fee of $7,000 = $1,120,000 annually
In summary, if the guidelines previously mentioned were used to run a simulation and the system was shown to be balanced, with results similar to the hand calculations above, then the design of this system could be viewed by management as a cost-effective investment.
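The rough economic analysis above can be checked with a few lines of arithmetic. Note that 10% of 62,400 is 6,240, giving 3,120 prospects per division and 156 clients at 5%, which the analysis rounds up to 160 before computing revenue:

```python
emails_per_year = 62_400

# 10% of serviced e-mails assumed to result in client procurement,
# split evenly between the two divisions.
prospects_total = emails_per_year * 10 // 100        # 6,240
prospects_per_division = prospects_total // 2        # 3,120

# 5% of those taken as the more realistic figure.
clients_per_division = prospects_per_division * 5 // 100   # 156

# The analysis rounds this up to 160 clients per division.
rounded_clients = 160
consulting_revenue = rounded_clients * 4_000   # $640,000 annually
patent_revenue = rounded_clients * 7_000       # $1,120,000 annually

print(clients_per_division, consulting_revenue, patent_revenue)
```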
CHAPTER 4 SUMMARY AND CONCLUSION In summary, this paper defined some of the business requirements, explained how the systems analysis was performed, and reported the results of that analysis. How the system was designed, based on the system analysis, was discussed as well. A problem was formulated that incorporated the functional aspects of the system while operating and performing the required tasks during a specified time. Hand calculations were performed to provide numerical results to management that would validate and verify that the selected system design would in fact satisfy the business requirements, i.e., whether the number of PCs and operators was sufficient to handle the predicted influx of business. The option of running simulations was presented, along with the required inputs and parameters if that option were selected as an alternative form of verification and validation of the system design's functional results. Simulation can prove to be a powerful tool in regards to its ability to provide a method of detailed analysis that is not only formal and predictive, but capable of predicting the performance of complex systems (Harrell, Biman, & Bowden, 2004). Simulation can provide management with the ability to accurately predict the outcomes of alternative courses of action (Banks, Carson, Barry, & Nicol, 2001). However, running a simulation would result in substantial upfront cost for the parent organization. Granted that their immediate capital expenditures are low
and their projected long-term net capital gain would be minimal, the available amount of upfront capital investment is extremely limited. Therefore a simulation was not run. In conclusion, based on the results generated from the hand calculations performed in Chapter 3, management could realistically ascertain how the system would perform and complete all business requirements, thus validating its design as a positive investment and an asset to the parent organization as a whole.
APPENDIX A-1 Industrial Consultant Consultation Questionnaire
1. What is your company size in terms of number of employees?
2. How many facilities do you have, either nationwide or internationally?
3. What is the nature of your business?
4. What is your current level of production, and does it exceed or fail to meet the projected company objectives in terms of production, gross sales, and net income?
5. What are your current overhead costs in the following areas:
A. Production:
1) Man hours
2) Raw materials
3) Plant operations cost, which includes utilities, machine maintenance, and preventative maintenance
B. Marketing:
1) Man hours
2) Present marketing proposals
3) Future marketing ventures
APPENDIX B-1 Patent Right Assistance Questionnaire
1. How and why did you think of this design?
2. What is your design, and what are its functional aspects?
3. Why do you think you need to procure patent rights for your design?
4. How do you know that your design does not duplicate one already on the market?
5. Was your design directly designed as an extension of another design?
6. Does your design perform some of the same functions that another design does?
7. How do you plan to manufacture your product, and what amount of overhead cost will be involved?
8. Is a company backing you on this design, or are you submitting it as an independent agent?
9. How long does it take to manufacture one of the products?
10. Where will you obtain the raw material and machinery necessary for production?
Note: The questionnaires listed above are generic and are used to gain general knowledge about potential clients' organizations. The questionnaires will be detailed further by the organization's proper personnel once they are in place.
REFERENCES
Banks, J., Carson, J., Barry, N., & Nicol, D. (2001). Simulation and Modeling I. Upper Saddle River, NJ: Prentice Hall, a Pearson Education Co.
Burd, S., Jackson, R., & Satzinger, J. (2004). Systems Analysis & Design in a Changing World. Boston, MA: Course Technology.
Burd, S. (2006). Systems Architecture (5th ed.). Boston, MA: Course Technology.
DeBetta, P. (2005). New T-SQL Features in SQL Server. http://vnu.bitpipe.com/rlist/term/Database-Design.html
Gagne, G., Galvin, P., & Silberschatz, A. (2004). Computer Operating Systems. Hoboken, NJ: John Wiley & Sons, Inc.
Harrell, B., Biman, G., & Bowden, R. (2004). Simulation and Modeling II. McGraw-Hill Inc.
Rob, P., & Coronel, C. (2004). Database Systems: Design, Implementation, and Management (6th ed.). Course Technology, a division of Thomson Learning Co.
Turabian, K. (1996). A Manual for Writers of Term Papers, Theses, and Dissertations (6th ed.). Chicago, IL: The University of Chicago Press.
Umar, A. (1997). Client/Server Computing (Custom ed.). Upper Saddle River, NJ: Prentice Hall Inc., a Pearson Education Company.
http://www.cioupdate.com/trends/article.php/3559636
http://en.wikipedia.org/wiki/Computer_security
http://www.computerworld.com/securitytopics/security/story/0,10801,106169,00.html
Wilson, R. (1999–2003). Multiprocessors and Shared Memory for Ultimate Flexibility. http://rwebs.net/micros/Imsai/multi.htm