ABSTRACT
The use of multiple sources of information to establish identity in a multimodal biometric system has been widely recognized, but computational models for multimodal biometric recognition have only recently received attention. In this paper, biometric images from several modalities, namely fingerprint, face, and iris, are processed individually and fused using a sparse fusion mechanism. A multimodal sparse representation method is proposed, which represents the test data as a sparse linear combination of training data while constraining the observations from different modalities of the test subject to share their sparse representations.
The images are pre-processed for feature extraction. Sobel, Canny, and Prewitt edge detection methods were applied, and the resulting image quality was measured using the PSNR, NAE, and NCC metrics. Based on the results obtained, Sobel edge detection was chosen for feature extraction. The extracted features were then fused across modalities using sparse representation. The fused template can be used for watermarking and person identification applications. The CASIA database was chosen for the biometric images.
CHAPTER – 1
INTRODUCTION
Unimodal biometric systems rely on a single source of information, such as a single iris, fingerprint, or face, for authentication. Unfortunately, these systems have to deal with some of the following inevitable problems. 1. Noisy data: poor lighting on a user's face or occlusion are examples of noisy data. 2. Non-universality: a biometric system based on a single source of evidence may not be able to capture meaningful data from some users; for instance, an iris biometric system may extract incorrect texture patterns from the irises of certain users due to the presence of contact lenses. 3. Intra-class variations: in the case of fingerprint recognition, the presence of wrinkles due to wetness can cause these variations; they often occur when a user incorrectly interacts with the sensor. 4. Spoof attacks: hand signature forgery is an example of this type of attack. Classification in multibiometric systems is done by fusing information from different biometric modalities.
Information fusion can be done at different levels, broadly divided into feature-level, score-level, and rank-/decision-level fusion. Because it preserves the raw information, feature-level fusion can be more discriminative than score- or decision-level fusion, but feature-level fusion methods have been explored in the biometric community only recently. This is because features extracted from different sensors differ in type and dimension, and the features often have large dimensions, which makes fusion at the feature level difficult. The prevalent method is feature concatenation, which has been used for different multibiometric settings; however, for high-dimensional feature vectors, simple feature concatenation may be inefficient and non-robust. A related line of work in the machine learning literature is multiple kernel learning (MKL), which aims to integrate information from different features by learning a weighted combination of the respective kernels; detailed surveys of MKL-based methods are available in the literature. However, for multimodal systems it is important to determine these weights during testing based on the quality of the modalities. Wright et al. proposed the seminal sparse representation-based classification (SRC) algorithm for face recognition. It was shown that by exploiting the inherent sparsity of data, one can obtain improved recognition performance over traditional methods, especially when the data are contaminated by various artifacts such as illumination variations, disguise, occlusion, and random pixel corruption. Pillai et al. extended this work to robust cancelable iris recognition. Nagesh and Li presented an expression-invariant face recognition method using distributed compressive sensing and joint sparsity models. Patel et al. proposed a dictionary-based method for face recognition under varying pose and illumination.
This paper makes the following contributions. We present a robust feature-level fusion algorithm for multibiometric recognition. Through the proposed joint sparse framework, we can easily handle unequal dimensions from different modalities by forcing the different features to interact through their sparse coefficients. Furthermore, the proposed algorithm can efficiently handle large-dimensional feature vectors. We make the classification robust to occlusion and noise by introducing an error term in the optimization framework. The algorithm is easily generalizable to handle multiple test inputs from a modality. We introduce a quality measure for multimodal fusion based on the joint sparse representation. Last, we kernelize the algorithm to handle nonlinearity in the data samples.
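As background for the sparse representation idea used throughout this report, a commonly stated form of SRC (the notation here is introduced for illustration and is not taken verbatim from this report) codes a test sample y over the dictionary X of all training samples and assigns it to the class with the smallest reconstruction error:
\hat{\alpha} = \arg\min_{\alpha} \tfrac{1}{2}\|y - X\alpha\|_2^2 + \lambda\|\alpha\|_1, \qquad \text{identity}(y) = \arg\min_{c} \|y - X_c\,\delta_c(\hat{\alpha})\|_2,
where \delta_c(\hat{\alpha}) keeps only the coefficients associated with class c and X_c the corresponding training samples.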
CHAPTER – 2
SYSTEM CONFIGURATION
2.1 HARDWARE SPECIFICATION
 Hard disk : 40 GB
 RAM : 512 MB
 Processor : Pentium IV
 Speed : 1.44 GHz
 General : Keyboard, Monitor, Mouse
2.2 SOFTWARE SPECIFICATION
 Front-End : Visual Studio 2008
 Coding language : C#.NET
 Operating System : Windows 7
 Back End : SQL Server 2005
CHAPTER – 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
The existing approach uses multimodal biometric systems, which integrate the evidence presented by multiple sources of information such as iris, fingerprint, and face. Such systems are less vulnerable to spoof attacks, as it would be difficult for an imposter to simultaneously spoof multiple biometric traits of a genuine user. Due to sufficient population coverage, these systems are also able to address the problem of non-universality.
DISADVANTAGES
 Noisy data: Poor lighting on a user’s face or occlusion are examples of
noisy data.
 Nonuniversality: The biometric system based on a single source of evidence may not be able to capture meaningful data from some users. For instance, an iris biometric system may extract incorrect texture patterns from the irises of certain users due to the presence of contact lenses.
 Intraclass variations: In the case of fingerprint recognition, the presence
of wrinkles due to wetness can cause these variations. These types of
variations often occur when a user incorrectly interacts with the sensor.
 Spoof attack: Hand signature forgery is an example of this type of attack.
3.2 PROPOSED SYSTEM
The proposed system uses a novel joint sparsity-based feature-level fusion algorithm for multimodal biometric recognition. The algorithm is robust as it explicitly includes both noise and occlusion terms. An efficient algorithm based on the alternating direction method is proposed for solving the optimization problem. We also propose a multimodal quality measure based on sparse representation. Furthermore, the algorithm is kernelized to handle nonlinear variations. Various experiments have shown that the method is robust and significantly improves the overall recognition accuracy.
ADVANTAGES
 We present a robust feature level fusion algorithm for multibiometric recognition.
Through the proposed joint sparse framework, we can easily handle unequal
dimensions from different modalities by forcing the different features to interact
through their sparse coefficients. Furthermore, the proposed algorithm can
efficiently handle large-dimensional feature vectors.
 We make the classification robust to occlusion and noise by introducing an error
term in the optimization framework.
 The algorithm is easily generalizable to handle multiple test inputs from a
modality.
 We introduce a quality measure for multimodal fusion based on the joint sparse
representation.
 Last, we kernelize the algorithm to handle nonlinearity in the data samples.
CHAPTER – 4
SOFTWARE DESCRIPTION
MICROSOFT .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet.
FRAMEWORK IS DESIGNED TO FULFILL THE FOLLOWING OBJECTIVES
 To provide a consistent object-oriented programming environment whether object
code is stored and executed locally, executed locally but Internet-distributed, or
executed remotely.
 To provide a code-execution environment that minimizes software deployment
and versioning conflicts.
 To provide a code-execution environment that guarantees safe execution of code,
including code created by an unknown or semi-trusted third party.
 To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.
 To build all communication on industry standards to ensure that code based on the
.NET Framework can integrate with any other code.
THE .NET FRAMEWORK HAS TWO MAIN COMPONENTS
 The common language runtime.
 The .NET Framework class library.
The common language runtime is the foundation of the .NET Framework. You
can think of the runtime as an agent that manages code at execution time, providing core
services such as memory management, thread management, and remoting, while also
enforcing strict type safety and other forms of code accuracy that ensure security and
robustness. In fact, the concept of code management is a fundamental principle of the
runtime. Code that targets the runtime is known as managed code, while code that does
not target the runtime is known as unmanaged code. The class library, the other main
component of the .NET Framework, is a comprehensive, object-oriented collection of
reusable types that you can use to develop applications ranging from traditional
command-line or graphical user interface (GUI) applications to applications based on the
latest innovations provided by ASP.NET, such as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of managed
code, thereby creating a software environment that can exploit both managed and
unmanaged features. The .NET Framework not only provides several runtime hosts, but
also supports the development of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side
environment for managed code. ASP.NET works directly with the runtime to enable Web
Forms applications and XML Web services, both of which are discussed later in this
topic.
Internet Explorer is an example of an unmanaged application that hosts the
runtime (in the form of a MIME type extension). Using Internet Explorer to host the
runtime enables you to embed managed components or Windows Forms controls in
HTML documents. Hosting the runtime in this way makes managed mobile code (similar
to Microsoft® ActiveX® controls) possible, but with significant improvements that only
managed code can offer, such as semi-trusted execution and secure isolated file storage.
The following illustration shows the relationship of the common language
runtime and the class library to your applications and to the overall system. The
illustration also shows how managed code operates within a larger architecture.
FEATURES OF THE COMMON LANGUAGE RUNTIME
The common language runtime manages memory, thread execution, code
execution, code safety verification, compilation, and other system services. These
features are intrinsic to the managed code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of
trust, depending on a number of factors that include their origin (such as the Internet,
enterprise network, or local computer). This means that a managed component might or
might not be able to perform file-access operations, registry-access operations, or other
sensitive functions, even if it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but
cannot access their personal data, file system, or network.
The security features of the runtime thus enable legitimate Internet-deployed
software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type- and
code-verification infrastructure called the common type system (CTS). The CTS ensures
that all managed code is self-describing. The various Microsoft and third-party language
compilers generate managed code that conforms to the CTS. This means that managed code can
consume other managed types and instances, while strictly enforcing type fidelity and
type safety.
In addition, the managed environment of the runtime eliminates many common
software issues. For example, the runtime automatically handles object layout and
manages references to objects, releasing them when they are no longer being used. This
automatic memory management resolves the two most common application errors,
memory leaks and invalid memory references. The runtime also accelerates developer
productivity.
For example, programmers can write applications in their development language
of choice, yet take full advantage of the runtime, the class library, and components
written in other languages by other developers. Any compiler vendor who chooses to
target the runtime can do so. Language compilers that target the .NET Framework make
the features of the .NET Framework available to existing code written in that language,
greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and unmanaged code
enables developers to continue to use necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common language
runtime provides many standard runtime services, managed code is never interpreted. A
feature called just-in-time (JIT) compiling enables all managed code to run in the native
machine language of the system on which it is executing. Meanwhile, the memory
manager removes the possibilities of fragmented memory and increases memory locality-
of-reference to further increase performance.
Finally, the runtime can be hosted by high-performance, server-side applications,
such as Microsoft® SQL Server™ and Internet Information Services (IIS). This
infrastructure enables you to use managed code to write your business logic, while still
enjoying the superior performance of the industry's best enterprise servers that support
runtime hosting.
.NET FRAMEWORK CLASS LIBRARY
The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object oriented,
providing types from which your own managed code can derive functionality. This not
only makes the .NET Framework types easy to use, but also reduces the time associated
with learning new features of the .NET Framework. In addition, third-party components
can integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of
interfaces that you can use to develop your own collection classes. Your collection
classes will blend seamlessly with the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework
types enable you to accomplish a range of common programming tasks, including tasks
such as string management, data collection, database connectivity, and file access. In
addition to these common tasks, the class library includes types that support a variety of
specialized development scenarios. For example, we can use the .NET Framework to
develop the following types of applications and services:
 Console applications.
 Scripted or hosted applications.
 Windows GUI applications (Windows Forms).
 ASP.NET applications.
 XML Web services.
 Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable
types that vastly simplify Windows GUI development. If you write an ASP.NET Web
Form application, you can use the Web Forms classes.
CLIENT APPLICATION DEVELOPMENT
Client applications are the closest to a traditional style of application in Windows-
based programming. These are the types of applications that display windows or forms on
the desktop, enabling a user to perform a task. Client applications include applications
such as word processors and spreadsheets, as well as custom business applications such
as data-entry tools, reporting tools, and so on. Client applications usually employ
windows, menus, buttons, and other GUI elements, and they likely access local resources
such as the file system and peripherals such as printers.
Another kind of client application is the traditional ActiveX control (now replaced
by the managed Windows Forms control) deployed over the Internet as a Web page. This
application is much like other client applications: it is executed natively, has access to
local resources, and includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with
the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®.
The .NET Framework incorporates aspects of these existing products into a
single, consistent development environment that drastically simplifies the development of
client applications.
The Windows Forms classes contained in the .NET Framework are designed to be
used for GUI development. You can easily create command windows, buttons, menus,
toolbars, and other screen elements with the flexibility necessary to accommodate
shifting business needs. For example, the .NET Framework provides simple properties to
adjust visual attributes associated with forms. In some cases the underlying operating
system does not support changing these attributes directly, and in these cases the .NET
Framework automatically recreates the forms. This is one of many ways in which the
.NET Framework integrates the developer interface, making coding simpler and more
consistent.
Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a
user's computer. This means that binary or natively executing code can access some of
the resources on the user's system (such as GUI elements and limited file access) without
being able to access or compromise other resources. Because of code access security,
many applications that once needed to be installed on a user's system can now be safely
deployed through the Web. Your applications can implement the features of a local
application while being deployed like a Web page.
INTRODUCTION TO C#.NET
C# (pronounced C-sharp) is a new language for Windows applications, intended as an alternative to the main previous languages, C++ and VB. Its purpose is twofold: it gives access to many of the facilities previously available only in C++, while retaining some of the ease of learning of VB, and it has been designed specifically with the .NET Framework in mind, and hence is very well structured for writing code that will be compiled for .NET. C# is a simple, modern, object-oriented language which aims to combine the high productivity of VB and the raw power of C++. C# is a programming language developed by Microsoft. Using C#, we can develop console applications, web applications, and Windows applications. In C#, Microsoft has taken care of C++ problems such as memory management, pointers, and so forth.
ACTIVE SERVER PAGES .NET (ASP.NET)
ASP.NET is a programming framework built on the common language runtime
that can be used on a server to build powerful Web applications. ASP.NET offers several
important advantages over previous Web development models.
Enhanced Performance: ASP.NET is compiled common language runtime code running
on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early
binding, just-in-time compilation, native optimization, and caching services right out of
the box. This amounts to dramatically better performance before you ever write a line of
code.
World-Class Tool Support: A rich toolbox and designer in the Visual Studio integrated
development environment complement the ASP.NET framework. WYSIWYG editing,
drag-and-drop server controls, and automatic deployment are just a few of the features
this powerful tool provides.
Power and Flexibility: Because ASP.NET is based on the common language runtime,
the power and flexibility of that entire platform is available to Web application
developers. The .NET Framework class library, Messaging, and Data Access solutions
are all seamlessly accessible from the Web. ASP.NET is also language-independent, so
you can choose the language that best applies to your application or partition your
application across many languages. Further, common language runtime interoperability
guarantees that your existing investment in COM-based development is preserved when
migrating to ASP.NET.
Simplicity: ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For example,
the ASP.NET page framework allows you to build user interfaces that cleanly separate
application logic from presentation code and to handle events in a simple, Visual Basic -
like forms processing model.
Manageability: ASP.NET employs a text-based, hierarchical configuration system,
which simplifies applying settings to your server environment and Web applications.
Because configuration information is stored as plain text, new settings may be applied
without the aid of local administration tools. This "zero local administration" philosophy
extends to deploying ASP.NET Framework applications as well. An ASP.NET
Framework application is deployed to a server simply by copying the necessary files to
the server. No server restart is required, even to deploy or replace running compiled code.
Scalability and Availability: ASP.NET has been designed with scalability in mind, with
features specifically tailored to improve performance in clustered and multiprocessor
environments. Further, processes are closely monitored and managed by the ASP.NET
runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its
place, which helps keep your application constantly available to handle requests.
Customizability and Extensibility: ASP.NET delivers a well-factored architecture that allows developers to "plug in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom-written component. Implementing custom authentication or state services has never been easier.
Security: With built-in Windows authentication and per-application configuration, you can be assured that your applications are secure.
LANGUAGE SUPPORT
The Microsoft .NET Platform currently offers built-in support for many
languages: C#, Visual Basic, Jscript etc.
WHAT IS ASP.NET WEB FORMS?
The ASP.NET Web Forms page framework is a scalable common language
runtime-programming model that can be used on the server to dynamically generate Web
pages.
Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility
with existing pages), the ASP.NET Web Forms framework has been specifically
designed to address a number of key deficiencies in the previous model. In particular, it
provides:
 The ability for developers to cleanly structure their page logic in an orderly
fashion (not "spaghetti code").
 The ability for development tools to provide strong WYSIWYG design support
for pages (existing ASP code is opaque to tools).
 The ability to create and use reusable UI controls that can encapsulate common
functionality and thus reduce the amount of code that a page developer has to
write.
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming requests. (Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests.)
An ASP.NET page can be created simply by taking an existing HTML file and
changing its file name extension to .aspx (no modification of code is required). For
example, the following sample demonstrates a simple HTML page that collects a user's
name and category preference and then performs a form post back to the originating page
when a button is clicked:
ASP.NET provides syntax compatibility with existing ASP pages. This includes
support for <% %> code render blocks that can be intermixed with HTML content within
an .aspx file. These code blocks execute in a top-down manner at page render time.
CODE-BEHIND WEB FORMS
ASP.NET supports two methods of authoring dynamic pages. The first is the
method shown in the preceding samples, where the page code is physically declared
within the originating .aspx file. An alternative approach is known as the code-behind
method where the page code can be more cleanly separated from the HTML content into
an entirely separate file.
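A minimal sketch of the code-behind idea (the page and control names here are illustrative and not taken from this project): the .aspx file holds the markup, while a separate class file holds the page logic.
// Greeting.aspx.cs - code-behind for a hypothetical Greeting.aspx page.
// requires: using System; using System.Web.UI; using System.Web.UI.WebControls;
public class GreetingPage : Page
{
    // Controls declared in the .aspx markup with runat="server" and matching IDs.
    protected TextBox txtName;
    protected Button btnSubmit;
    protected Label lblGreeting;
    protected void Page_Load(object sender, EventArgs e)
    {
        // Runs on every request; IsPostBack distinguishes the first visit from a form post back.
        if (!IsPostBack)
        {
            lblGreeting.Text = "Please enter your name.";
        }
    }
    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        // The event handler lives here, cleanly separated from the HTML content.
        lblGreeting.Text = "Hello, " + txtName.Text;
    }
}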
INTRODUCTION TO ASP.NET SERVER CONTROLS
In addition to (or instead of) using <% %> code blocks to program dynamic
content, ASP.NET page developers can use ASP.NET server controls to program Web
Pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round
trips to the server. This control state is not stored on the server (it is instead stored within
an <input type="hidden"> form field that is round-tripped between requests). Note also
that no client-side script is required.
In addition to supporting standard HTML input controls, ASP.NET enables
developers to utilize richer custom controls on their pages. For example, the following
sample demonstrates how the <asp:AdRotator> control can be used to dynamically
display rotating ads on a page.
 ASP.NET Web Forms provide an easy and powerful way to build dynamic Web
UI.
 ASP.NET Web Forms pages can target any browser client (there are no script
library or cookie requirements).
 ASP.NET Web Forms pages provide syntax compatibility with existing ASP
pages.
 ASP.NET server controls provide an easy way to encapsulate common
functionality.
 ASP.NET ships with 45 built-in server controls. Developers can also use controls
built by third parties.
 ASP.NET server controls can automatically project both up level and down-level
HTML.
ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the
web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command
objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous
data architectures is that there exists an object -- the DataSet -- that is separate and
distinct from any data stores. Because of that, the DataSet functions as a standalone
entity. You can think of the DataSet as an always disconnected record set that knows
nothing about the source or destination of the data it contains. Inside a DataSet, much like
in a database, there are tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet.
Then, it connects back to the database to update the data there, based on operations
performed while the DataSet held the data.
In the past, data processing has been primarily connection-based. Now, in an
effort to make multi-tiered apps more efficient, data processing is turning to a message-
based approach that revolves around chunks of information. At the center of this
approach is the DataAdapter, which provides a bridge to retrieve and save data between a
DataSet and its source data store.
It accomplishes this by means of requests to the appropriate SQL commands
made against the data store.
The XML-based DataSet object provides a consistent programming model that
works with all models of data storage: flat, relational, and hierarchical. It does this by
having no 'knowledge' of the source of its data, and by representing the data that it holds
as collections and data types. No matter what the source of the data within the DataSet is,
it is manipulated through the same set of standard APIs exposed through the DataSet and
its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed
provider has detailed and specific information. The role of the managed provider is to
connect, fill, and persist the DataSet to and from data stores. The OLE DB and SQL
Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are
part of the .Net Framework provide four basic objects: the Command, Connection, Data
Reader and DataAdapter. In the remaining sections of this document, we'll walk through
each part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining
what they are, and how to program against them. The following sections will introduce
you to some objects that have evolved, and some that are new. These objects are:
 Connections. For connecting to and managing transactions against a database.
 Commands. For issuing SQL commands against a database.
 Data Readers. For reading a forward-only stream of data records from a SQL
Server data source.
 Datasets. For storing, remoting and programming against flat data, XML data and
relational data.
 Data Adapters. For pushing data into a DataSet, and reconciling data against a
database.
CONNECTIONS
Connections are used to 'talk to' databases, and are represented by provider-
specific classes such as SQLConnection. Commands travel over connections and result
sets are returned in the form of streams which can be read by a Data Reader object, or
pushed into a DataSet object.
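A minimal sketch of opening a provider-specific connection (the connection string mirrors the one used in this project's source code):
// requires: using System.Data.SqlClient;
string connectionString = @"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;";
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Commands travel over this open connection; results come back as a DataReader stream
    // or are pushed into a DataSet by a DataAdapter.
}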
COMMANDS
Commands contain the information that is submitted to a database, and are
represented by provider-specific classes such as SQLCommand. A command can be a
stored procedure call, an UPDATE statement, or a statement that returns results. You can
also use input and output parameters, and return values as part of your command syntax.
The example below shows how to issue an INSERT statement against the Northwind
database.
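A minimal sketch of such an INSERT (assuming the standard Northwind Customers table; the connection string and values are placeholders):
// requires: using System.Data.SqlClient;
string northwind = @"Data Source=.\sqlexpress; Initial Catalog=Northwind; Integrated Security=true;";
using (SqlConnection connection = new SqlConnection(northwind))
using (SqlCommand command = new SqlCommand(
    "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", connection))
{
    command.Parameters.AddWithValue("@id", "NEWCO");
    command.Parameters.AddWithValue("@name", "New Company");
    connection.Open();
    int rowsAffected = command.ExecuteNonQuery();   // number of rows inserted
}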
DATA READERS
The Data Reader object is somewhat synonymous with a read-only/forward-only
cursor over data. The Data Reader API supports flat as well as hierarchical data. A Data
Reader object is returned after executing a command against a database. The format of
the returned DataReader object is different from a record set. For example, you might use
the DataReader to show the results of a search list in a web page.
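A minimal sketch of reading rows with a DataReader (the PatternInfo table and its columns are the ones used later in this project's source code):
// requires: using System; using System.Data.SqlClient;
using (SqlConnection connection = new SqlConnection(@"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;"))
using (SqlCommand command = new SqlCommand("SELECT ImagePath, Point1, Point2 FROM PatternInfo", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())   // forward-only: each row is read once while the connection stays open
        {
            Console.WriteLine("{0}: {1}, {2}", reader["ImagePath"], reader["Point1"], reader["Point2"]);
        }
    }
}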
DATA SETS AND DATA ADAPTERS
DATA SETS
The DataSet object is similar to the ADO Record set object, but more powerful,
and with one other important distinction: the DataSet is always disconnected. The
DataSet object represents a cache of data, with database-like structures such as tables,
columns, relationships, and constraints.
However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with databases, or
other source data. This allows the developer to work with a programming model that is
always consistent, regardless of where the source data resides.
Data coming from a database, an XML file, from code, or user input can all be
placed into DataSet objects. Then, as changes are made to the DataSet they can be
tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via Web Services. In fact, a DataSet with a schema can actually be
compiled for type safety and statement completion.
DATA ADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source
data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server database. For other OLE DB-supported databases, you would use the
OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection
objects.
The DataAdapter object uses commands to update the data source after changes
have been made to the DataSet. Using the Fill method of the DataAdapter calls the
SELECT command; using the Update method calls the INSERT, UPDATE or DELETE
command for each changed row. You can explicitly set these commands in order to
control the statements used at runtime to resolve changes, including the use of stored
procedures. For ad hoc scenarios, a CommandBuilder object can generate these at run time based upon a SELECT statement.
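A minimal sketch of the Fill/Update round trip (assuming the project's PatternInfo table has a primary key, so that a SqlCommandBuilder can derive the change commands):
// requires: using System.Data; using System.Data.SqlClient;
string connectionString = @"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;";
using (SqlConnection connection = new SqlConnection(connectionString))
{
    SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM PatternInfo", connection);
    SqlCommandBuilder builder = new SqlCommandBuilder(adapter);   // generates INSERT/UPDATE/DELETE at run time
    DataSet dataSet = new DataSet();
    adapter.Fill(dataSet, "PatternInfo");            // SELECT runs; the DataSet holds a disconnected copy
    DataTable table = dataSet.Tables["PatternInfo"];
    if (table.Rows.Count > 0)
    {
        table.Rows[0]["Point1"] = "128";             // edit the cached copy only
    }
    adapter.Update(dataSet, "PatternInfo");          // changed rows are reconciled with the database
}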
SQL SERVER
SQL (STRUCTURED QUERY LANGUAGE)
Structured Query Language (SQL) is a standard computer language for relational
database management and data manipulation. SQL is used to query, insert, update and
modify data. Most relational databases support SQL, which is an added benefit for
database administrators (DBAs), as they are often required to support databases across
several different platforms.
DATABASE
A database is a collection of information that is organized so that it can easily be
accessed, managed, and updated. In one view, databases can be classified according to
types of content: bibliographic, full-text, numeric, and images. In computing, databases
are sometimes classified according to their organizational approach. The most prevalent
approach is the relational database, a tabular database in which data is defined so that it
can be reorganized and accessed in a number of different ways. A distributed database is
one that can be dispersed or replicated among different points in a network. An object-
oriented programming database is one that is congruent with the data defined in object
classes and subclasses. Computer databases typically contain aggregations of data records
or files, such as sales transactions, product catalogs and inventories, and customer
profiles. Typically, a database manager provides users the capabilities of controlling
read/write access, specifying report generation, and analyzing usage. Databases and
database managers are prevalent in large mainframe systems, but are also present in
smaller distributed workstation and mid-range systems such as the AS/400 and on
personal computers. SQL (Structured Query Language) is a standard language for making
interactive queries from and updating a database such as IBM's DB2, Microsoft's SQL
Server, and database products from Oracle, Sybase, and Computer Associates.
DEFINING A DATABASE
Define a relational database by using the New Database Definition wizard in the
Data Definition view. A relational database is a set of tables that can be manipulated in
accordance with the relational model of data. A relational database contains a set of data
objects that are used to store, manage, and access data. Examples of such data objects are
tables, views, indexes, functions, and stored procedures.
DEFINING A SCHEMA
Define a schema to organize the tables and other data objects by using the New
Schema Definition wizard. A schema is a collection of named objects. In relational
database technology, schemas provide a logical classification of objects in the database.
Some of the objects that a schema might contain include tables, views, aliases, indexes,
triggers, and structured types. Define schemas to organize the tables and other data
objects
DEFINING A TABLE
Define a table by using the New Table Definition wizard. Tables are logical
structures that are maintained by the database manager. Tables consist of columns and
rows. You can define tables as part of your data definitions in the Data perspective. If you
are new to the Microsoft SQL Server environment, you probably encountered the
possibility to choose between Windows Authentication and SQL Authentication.
SQL AUTHENTICATION
SQL Authentication is the typical authentication used for various database
systems, composed of a username and a password. Obviously, an instance of SQL Server
can have multiple such user accounts (using SQL authentication) with different
usernames and passwords. In shared servers where different users should have access to
different databases, SQL authentication should be used. Also, when a client (a remote computer) connects to an instance of SQL Server on a computer other than the one on which the client is running, SQL Server authentication is needed. Even if you don't define any SQL Server user accounts, at the time of installation a root account, sa, is added with the password you provided. Just like any SQL Server account, this can be used to log in locally or remotely; however, if an application is the one that does the log-in and it should have access to only one database, it is strongly recommended that you don't use the sa account, but create a new one with limited access. Overall, SQL authentication is the main authentication method to be used, while the one we review below, Windows Authentication, is more of a convenience.
WINDOWS AUTHENTICATION
When you are accessing SQL Server from the same computer it is installed on, you shouldn't be prompted to type in a username and password, and you are not if you're using Windows Authentication. With Windows Authentication, the SQL Server service already knows that someone is logged in to the operating system with the correct credentials, and it uses these credentials to allow the user into its databases. Of course, this works as long as the client resides on the same computer as the SQL Server, or as long as the connecting client matches the Windows credentials of the server. Windows Authentication is often used as a more convenient way to log in to a SQL Server instance without typing a username and a password; however, when more users are involved, or remote connections are being established with the SQL Server, SQL authentication should be used.
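As a small illustration (the server name and credentials below are placeholders, not values from this project), the two modes differ only in the connection string supplied to the provider:
// requires: using System.Data.SqlClient;
// Windows Authentication: the Windows account of the connecting process is used.
string windowsAuth = @"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;";
// SQL Authentication: an explicit SQL Server login and password are supplied.
string sqlAuth = "Data Source=RemoteServer; Initial Catalog=JointSparse; User ID=appUser; Password=appPassword;";
using (SqlConnection connection = new SqlConnection(windowsAuth))
{
    connection.Open();   // succeeds when the logged-in Windows account has been granted access
}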
PRIMARY KEY
In a SQL database, the primary key is one or more columns that uniquely identify
each row in a table. The primary key is defined by using the PRIMARY KEY constraint
when either creating a table or altering a table. Each table can have only one primary key.
The column(s) defined as the primary key inherently have the NOT NULL
constraint, meaning they must contain a value. If a table is being altered to add a primary
key, any column being defined as the primary key must not contain blank, or NULL,
values. If the column does, the primary key constraint cannot be added. Also, in some
relational databases, adding a primary key also creates a table index, to help improve the
speed of finding specific rows of data in the table when SQL queries are run against that
table.
FOREIGN KEY
Foreign keys are used to reference unique columns in another table. So, for
example, a foreign key can be defined on one table A, and it can reference some unique
column(s) in another table B. Why would you want a foreign key? Well, whenever it
makes sense to have a relationship between columns in two different tables.
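A small sketch of both constraints, executed through ADO.NET as elsewhere in this project (table and column names are illustrative only):
// requires: using System.Data.SqlClient;
string ddl =
    "CREATE TABLE Customers (" +
    "  CustomerID INT NOT NULL PRIMARY KEY, " +          // unique, non-null identifier for each row
    "  Name VARCHAR(100) NOT NULL); " +
    "CREATE TABLE Orders (" +
    "  OrderID INT NOT NULL PRIMARY KEY, " +
    "  CustomerID INT NOT NULL, " +
    "  CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerID) " +
    "    REFERENCES Customers (CustomerID));";            // every order must reference an existing customer
using (SqlConnection connection = new SqlConnection(@"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;"))
using (SqlCommand command = new SqlCommand(ddl, connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}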
ROWS
In a database, a row (sometimes called a record) is the set of fields within a table
that are relevant to a specific entity. For example, in a table called customer contact
information, a row would likely contain fields such as: ID number, name, street address,
city, telephone number and so on.
COLUMNS
A record is made up of a collection of columns, or fields; a column is also called a single attribute of the row.
SQL COMMANDS
BASIC SQL
Each record has a unique identifier or primary key. SQL, which stands for
Structured Query Language, is used to communicate with a database. Through SQL one
can create and delete tables. Here are some commands:
 CREATE TABLE - creates a new database table
 ALTER TABLE - alters a database table
 DROP TABLE - deletes a database table
 CREATE INDEX - creates an index (search key)
 DROP INDEX - deletes an index
SQL also has syntax to update, insert, and delete records.
 SELECT - get data from a database table
 UPDATE - change data in a database table
 DELETE - remove data from a database table
 INSERT INTO - insert new data in a database table
CHAPTER – 5
PROJECT DESCRIPTION
5.1 MODULES
 Collect Multimodal Data
 Multimodal Multivariate test
 Joint Sparse Representation
 Reconstruction Error based Classification
MODULES DESCRIPTION
COLLECT MULTIMODAL DATA:
In the real world, single-biometric authentication was first used to provide security, and multimodal biometrics was later adopted for authentication. To use multimodal biometrics, the user has to register for each modality, in different variations; the collected data are used as the dataset for the further processing steps.
MULTIMODAL MULTIVARIATE TEST
This module represents how an attacker can compromise users in a social network. The admin maintains each node in the network. Servers can therefore blacklist anonymous users without knowledge of their IP addresses while allowing well-behaved users to connect anonymously. Although our work applies to anonymizing networks in general, we consider Tor for purposes of exposition. In fact, any number of anonymizing networks can rely on the same trustee-based social system, blacklisting anonymous users regardless of their anonymizing network(s) of choice.
JOINT SPARSE REPRESENTATION
In the joint sparse representation, the image is divided into blocks, and each block is represented taking into account correlations as well as coupling information among the biometric modalities. A multimodal quality measure is also proposed to weight each modality as it is fused and to handle nonlinear variations. It is shown that the method is robust and significantly improves the overall recognition accuracy.
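As a hedged sketch of the joint-sparsity idea (the symbols below are assumptions introduced for illustration): with D modalities, the test observations y^1, ..., y^D are coded over the corresponding training dictionaries X^1, ..., X^D, and an \ell_{1,2} penalty couples the coefficient vectors so that all modalities select training samples from the same few subjects:
\hat{\Gamma} = \arg\min_{\Gamma = [\gamma^1, \dots, \gamma^D]} \; \frac{1}{2}\sum_{d=1}^{D}\|y^d - X^d\gamma^d\|_2^2 + \lambda\,\|\Gamma\|_{1,2}, \qquad \|\Gamma\|_{1,2} = \sum_{k}\|\Gamma_{k,:}\|_2,
where \Gamma_{k,:} is the k-th row of the coefficient matrix; an additional error term can be added to absorb occlusion and noise, as described in the proposed system.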
RECONSTRUCTION ERROR BASED CLASSIFICATION
In the collection of sample images, the images are filtered based on the sparsity concentration index (SCI) and on the weight of each block. This helps the multimodal biometric system reconstruct the image. The joint sparse representation reduces processing time and makes recognition in the multimodal biometric system easier.
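For reference, the sparsity concentration index is commonly defined as follows (stated here as background, with C classes and \delta_c(\cdot) selecting the coefficients of class c):
\mathrm{SCI}(\hat{\alpha}) = \frac{C \cdot \max_{c}\|\delta_c(\hat{\alpha})\|_1 / \|\hat{\alpha}\|_1 - 1}{C - 1} \in [0, 1].
Values near 1 indicate that the representation concentrates on a single class (a good-quality sample), while values near 0 indicate that the coefficients are spread across many classes; the test sample is then assigned to the class with the smallest reconstruction error, \min_c \|y - X_c\,\delta_c(\hat{\alpha})\|_2.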
CHAPTER – 6
SYSTEM DESIGN
6.1 DATA FLOW DIAGRAM
(Data flow: Collect Multimodal Data → Multimodal Multivariate Test → Joint Sparse Representation → Reconstruction Error based Classification.)
6.2 SYSTEM ARCHITECTURE
6.3 UML DIAGRAMS
(Use case diagram: the user registers multimodal authentication data; the system collects the dataset, gets image samples from each modality, performs the multivariate test by calculating the SCI for the different modalities in the dictionary to identify image quality, computes the joint sparse representation by dividing the image into blocks and weighting them, reconstructs the image, and authenticates the user, providing more security to the user.)
6.4 CLASS DIAGRAM
6.5 SEQUENCE DIAGRAM
(Sequence diagram participants: Prepare dataset, Multivariate test, Joint sparse, Reconstruct. Messages: collect dataset from user; input image; calculate SCI and identify image quality; divide input image into blocks; calculate weight for each block; reconstruct image; provide more security to user.)
6.6 COLLABORATION DIAGRAM
(Collaboration diagram participants: Prepare dataset, Multivariate test, Joint sparse, Reconstruct. Messages: 1. collect dataset from user; 2. input image; 3. calculate SCI and identify image quality; 4. divide input image into blocks; 5. calculate weight for each block; 6. reconstruct image; 7. provide more security to user.)
CHAPTER – 7
SYSTEM IMPLEMENTATION
Implementation is the process of translating a design specification into source code. The primary goal of implementation is to write source code and internal documentation so that conformance of the code to its specification can be easily verified, and so that debugging, testing, and modification are eased. The source code is developed with clarity, simplicity, and elegance.
The coding is done in a modular fashion, giving importance even to minute details, so that when hardware or storage procedures are changed or new data is added, rewriting of the application programs is not necessary. To adapt or perfect the software, one must determine new requirements, redesign, generate code, and test the existing software/hardware. Traditionally, such tasks, when applied to an existing program, have been called maintenance.
7.1 SOURCE CODE
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Threading;
using System.Data.SqlClient;
using System.Collections;
namespace VariousRepresentation
{
public partial class JointSparse : Form
{
SqlConnection cn;
SqlCommand cmd;
string s;
SqlDataAdapter da;
DataTable dt;
DataSet ds;
SqlDataReader dr;
string[] files = new string[1000];
Bitmap img1;
Bitmap img2;
int i;
int l;
int distance;
Point p;
int sno;
ArrayList list5 = new ArrayList();
ArrayList list55 = new ArrayList();
ArrayList fi = new ArrayList();
public JointSparse()
{
InitializeComponent();
}
// Opens a connection to the local SQL Server Express instance hosting the JointSparse database.
public void getconnection()
{
cn = new SqlConnection(@"Data Source=.\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true; Max Pool Size=1000;");
cn.Open();
}
// Lets the user pick the input dataset folder and pre-computes eight sample points around a centre point.
private void button1_Click(object sender, EventArgs e)
{
DialogResult res = folderBrowserDialog1.ShowDialog();
if (res == DialogResult.OK)
{
textBox1.Text = folderBrowserDialog1.SelectedPath;
files = Directory.GetFiles(folderBrowserDialog1.SelectedPath);
label4.Text = files.Length.ToString();
}
Point centerPoint = new Point(100, 100);
Point result = new Point(0, 0);
double angle;
angle = 360 / 8;
for (int j = 0; j < 8; j++)
{
distance = 20;
result.Y = centerPoint.Y + (int)Math.Round(distance * Math.Sin(angle));
result.X = centerPoint.X + (int)Math.Round(distance * Math.Cos(angle));
angle = angle + 45;
listBox1.Items.Add(result);
}
}
// Loads the query image, rebuilds the PatternInfo table for the selected modality, and runs the match.
private void button2_Click(object sender, EventArgs e)
{
DialogResult res = openFileDialog1.ShowDialog();
if (res == DialogResult.OK)
{
textBox2.Text = openFileDialog1.FileName;
pictureBox1.Image = Image.FromFile(openFileDialog1.FileName);
}
try
{
getconnection();
s = "Drop table PatternInfo";
cmd = new SqlCommand(s, cn);
cmd.ExecuteNonQuery();
}
catch
{
}
string filename = textBox1.Text;
// The last folder name (Face, Finger, or Iris) decides which pattern table is copied into PatternInfo.
string filenam = filename.Substring(filename.LastIndexOf("\\") + 1);
if (filenam == "Face")
{
getconnection();
s = "Select * into PatternInfo from PatternFace";
cmd = new SqlCommand(s, cn);
cmd.ExecuteNonQuery();
cn.Close();
}
else if (filenam == "Finger")
{
getconnection();
s = "Select * into PatternInfo from PatternFinger";
cmd = new SqlCommand(s, cn);
cmd.ExecuteNonQuery();
cn.Close();
}
else if (filenam == "Iris")
{
getconnection();
s = "Select * into PatternInfo from PatternIris";
cmd = new SqlCommand(s, cn);
cmd.ExecuteNonQuery();
cn.Close();
}
else
{
}
pattern1();
select();
}
// Resizes every image in the input folder to 200 x 200 and saves it to the training dataset folder.
private void button3_Click(object sender, EventArgs e)
{
for (i = 0; i < files.Length; i++)
{
label8.Text = files.Length.ToString();
label8.Refresh();
progressBar1.Maximum = files.Length;
progressBar1.Minimum = 0;
int p = i;
p = p + 1;
label6.Text = p.ToString();
label6.Refresh();
string filename = files[i];
string filenam = filename.Substring(filename.LastIndexOf("\\") + 1);
img1 = new Bitmap(files[i]);
img2 = new Bitmap(img1, new Size(200, 200));
pictureBox1.Image = img2;
pictureBox1.Refresh();
progressBar1.Value = p;
string path;
path = textBox3.Text + "\\" + filenam;
img2.Save(path);
}
}
// Extracts the grayscale intensity at each of the eight sample points for every image in the dataset.
public void pattern()
{
for (l = 0; l < files.Length; l++)
{
img1 = new Bitmap(files[l]);
for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
{
var selection = listBox1.Items[k];
p = (Point)selection;
for (int i = 0; i < img1.Width; i++)
{
for (int j = 0; j < img1.Height; j++)
{
if (i == p.X && j == p.Y)
{
Color cr = new Color();
cr = img1.GetPixel(i, j);
listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
}
}
}
}
}
}
// Extracts the grayscale intensity at each of the eight sample points for the currently loaded query image.
public void pattern1()
{
listBox2.Items.Clear();
img1 = new Bitmap(pictureBox1.Image);
for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
{
var selection = listBox1.Items[k];
p = (Point)selection;
for (int i = 0; i < img1.Width; i++)
{
for (int j = 0; j < img1.Height; j++)
{
if (i == p.X && j == p.Y)
{
Color cr = new Color();
cr = img1.GetPixel(i, j);
listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
}
}
}
}
}
// Converts a colour to its grayscale equivalent using the standard luminance weights (0.3 R + 0.59 G + 0.11 B).
private Color grayscale(Color cr)
{
return Color.FromArgb(cr.A, (int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11),
(int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11),
(int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11));
}
// Lets the user pick the output folder for the resized training images.
private void button4_Click(object sender, EventArgs e)
{
DialogResult res = folderBrowserDialog1.ShowDialog();
if (res == DialogResult.OK)
{
textBox3.Text = folderBrowserDialog1.SelectedPath;
}
}
// Extracts the sample-point intensities for every training image, updating the progress bar as it goes.
public void pattern2()
{
progressBar1.Minimum = 0;
progressBar1.Maximum = Convert.ToInt32(listBox1.Items.Count);
for (l = 0; l < files.Length; l++)
{
img1 = new Bitmap(files[l]);
for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
{
var selection = listBox1.Items[k];
p = (Point)selection;
for (int i = 0; i < img1.Width; i++)
{
for (int j = 0; j < img1.Height; j++)
{
if (i == p.X && j == p.Y)
{
Color cr = new Color();
cr = img1.GetPixel(i, j);
listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
progressBar1.Value = k;
}
}
}
}
progressBar1.Value = progressBar1.Maximum;
}
}
// Groups the extracted values eight at a time and fills the result lists shown in the UI.
public void save2()
{
int i = 0;
int k = 0;
int countt = 1;
while (i < listBox2.Items.Count)
{
double one = Convert.ToDouble(listBox2.Items[i]);
i++;
double two = Convert.ToDouble(listBox2.Items[i]);
i++;
double three = Convert.ToDouble(listBox2.Items[i]);
i++;
double four = Convert.ToDouble(listBox2.Items[i]);
i++;
double five = Convert.ToDouble(listBox2.Items[i]);
i++;
double six = Convert.ToDouble(listBox2.Items[i]);
i++;
double seven = Convert.ToDouble(listBox2.Items[i]);
i++;
double eight = Convert.ToDouble(listBox2.Items[i]);
i++;
l = k;
if (l < 25)
{
listBox3.Items.Add(files[l]);
}
listBox6.Items.Add(one);
listBox7.Items.Add(two);
k++;
listBox4.Items.Clear();
foreach (string it in list5)
{
listBox4.Items.Add(it);
}
listBox5.Items.Clear();
foreach (string it1 in list55)
{
listBox5.Items.Add(it1);
}
}
}
// Runs pattern extraction over the training dataset and displays the grouped results.
private void button5_Click(object sender, EventArgs e)
{
pattern2();
save2();
}
// Stores the eight sample-point values of each image as one row of the PatternInfo table.
public void save()
{
int i = 0;
int countt = 1;
while (i < listBox2.Items.Count)
{
double one = Convert.ToDouble(listBox2.Items[i]);
i++;
double two = Convert.ToDouble(listBox2.Items[i]);
i++;
double three = Convert.ToDouble(listBox2.Items[i]);
i++;
double four = Convert.ToDouble(listBox2.Items[i]);
i++;
double five = Convert.ToDouble(listBox2.Items[i]);
i++;
double six = Convert.ToDouble(listBox2.Items[i]);
i++;
double seven = Convert.ToDouble(listBox2.Items[i]);
i++;
double eight = Convert.ToDouble(listBox2.Items[i]);
i++;
l = countt;
getconnection();
s = "insert into PatternInfo values('" + l + "','" + one + "','" + two + "','" + three + "','" +
four + "','" + five + "','" + six + "','" + seven + "','" + eight + "')";
cmd = new SqlCommand(s, cn);
cmd.ExecuteNonQuery();
cn.Close();
countt++;
}
}
// Looks up the query image's eight-point pattern in PatternInfo and collects the matching subject's rows.
public void select()
{
int i = 0;
double one = Convert.ToDouble(listBox2.Items[i]);
i++;
double two = Convert.ToDouble(listBox2.Items[i]);
i++;
double three = Convert.ToDouble(listBox2.Items[i]);
i++;
double four = Convert.ToDouble(listBox2.Items[i]);
i++;
double five = Convert.ToDouble(listBox2.Items[i]);
i++;
double six = Convert.ToDouble(listBox2.Items[i]);
i++;
double seven = Convert.ToDouble(listBox2.Items[i]);
i++;
double eight = Convert.ToDouble(listBox2.Items[i]);
i++;
getconnection();
s = "Select * from PatternInfo where Point1='" + one + "' and Point2='" + two + "' and
Point3='" + three + "' and Point4='" + four + "' and Point5='" + five + "' and Point6='" +
six + "' and Point7='" + seven + "' and Point8='" + eight + "'";
cmd = new SqlCommand(s, cn);
dr = cmd.ExecuteReader();
if (dr.HasRows)
{
while (dr.Read())
{
sno = Convert.ToInt32((dr["ImagePath"]));
// Snap the matched row to the first image of its five-image subject group.
if (sno <= 5)
{
sno = 1;
}
else if (sno <= 10)
{
sno = 6;
}
else if (sno <= 15)
{
sno = 11;
}
else if (sno <= 20)
{
sno = 16;
}
else if (sno <= 25)
{
sno = 21;
}
}
}
cn.Close();
getconnection();
int sno1 = sno + 4;
s = "Select * from PatternInfo where ImagePath>=" + sno + " and ImagePath<=" + sno1
+ "";
cmd = new SqlCommand(s, cn);
dr = cmd.ExecuteReader();
if (dr.HasRows)
{
while (dr.Read())
{
list5.Add(dr["Point1"]);
list55.Add(dr["Point2"]);
}
}
}
// Compares the query pattern with the stored training patterns and shows up to five matching images.
private void button6_Click(object sender, EventArgs e)
{
int count = 1;
for (int i = 0; i < listBox4.Items.Count; i++)
{
for (int j = 0; j < listBox6.Items.Count; j++)
{
int one = Convert.ToInt32(listBox4.Items[i]);
int two = Convert.ToInt32(listBox6.Items[j]);
int three = Convert.ToInt32(listBox5.Items[i]);
int four = Convert.ToInt32(listBox7.Items[j]);
if (one == two && three == four)
{
fi.Add(j);
if (count == 1)
{
pictureBox2.Image = Image.FromFile(listBox3.Items[j].ToString());
count++;
}
else if (count == 2)
{
pictureBox3.Image = Image.FromFile(listBox3.Items[j].ToString());
count++;
}
else if (count == 3)
{
pictureBox4.Image = Image.FromFile(listBox3.Items[j].ToString());
count++;
}
else if (count == 4)
{
pictureBox5.Image = Image.FromFile(listBox3.Items[j].ToString());
count++;
}
else if (count == 5)
{
pictureBox6.Image = Image.FromFile(listBox3.Items[j].ToString());
count++;
}
else
{
}
}
}
}
}
private void listBox7_SelectedIndexChanged(object sender, EventArgs e)
{
}
// Resets the form by hiding the current instance and opening a fresh one.
private void button7_Click(object sender, EventArgs e)
{
this.Hide();
JointSparse a = new JointSparse();
a.Show();
}
}
}
7.2 SCREEN SHOTS
Joint Sparse Representation:
Select Input Dataset:
Select Training Dataset:
Convert images of Input Format to Training Dataset Format:
View Pattern Results of Joint Sparse Representation:
Joint Sparse Representation for Face:
Joint Sparse Representation for Finger:
Joint Sparse Representation for Iris:
CHAPTER – 8
SYSTEM TESTING
SYSTEM TEST AND MAINTENANCE
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
TYPES OF TESTS
FUNCTIONAL TEST
Functional tests provide a systematic demonstration that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, and special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.
SYSTEM TEST
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test. System
testing is based on process descriptions and flows, emphasizing pre-driven process links
and integration points.
WHITE BOX TESTING
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds
of tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a
black box: the tester cannot “see” into it. The test provides inputs and responds to
outputs without considering how the software works.
UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
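A minimal unit test sketch is given below, using MSTest (the unit testing framework bundled with Visual Studio). It exercises a small normalized cross correlation (NCC) helper of the kind used for the image quality measurements in this work; the helper is written inline purely for illustration and is not the project's actual NCC implementation.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class NccTests
{
    // Normalized cross correlation of two equally sized pixel arrays:
    // sum(a*b) / sqrt(sum(a*a) * sum(b*b)).
    private static double Ncc(double[] a, double[] b)
    {
        double ab = 0, aa = 0, bb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            ab += a[i] * b[i];
            aa += a[i] * a[i];
            bb += b[i] * b[i];
        }
        return ab / Math.Sqrt(aa * bb);
    }

    [TestMethod]
    public void Ncc_IsOne_ForIdenticalImages()
    {
        double[] img = { 10, 20, 30, 40 };
        Assert.AreEqual(1.0, Ncc(img, img), 1e-9);
    }

    [TestMethod]
    public void Ncc_IsBelowOne_ForDifferentImages()
    {
        double[] a = { 10, 20, 30, 40 };
        double[] b = { 40, 30, 20, 10 };
        Assert.IsTrue(Ncc(a, b) < 1.0);
    }
}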
FIELD TESTING
Field testing will be performed manually and functional tests will be written in
detail.
TEST OBJECTIVES
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
FEATURES TO BE TESTED
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects. The task of the integration test is to check that components or
software applications (e.g., components in a software system or, one step up, software
applications at the company level) interact without error.
TEST RESULTS
All the test cases mentioned above passed successfully. No defects encountered.
ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Acceptance testing for this system covers the following:
 Users have separate roles to modify the database tables.
 Users should have the ability to modify the privilege for a screen.
TEST RESULTS
All the test cases mentioned above passed successfully. No defects encountered.
CONCLUSION
We proposed a novel joint sparsity-based feature-level fusion algorithm for
multimodal biometric recognition. The algorithm is robust, as it explicitly includes both
noise and occlusion terms. An efficient algorithm based on the alternating direction
method was proposed for solving the optimization problem. We also proposed a multimodal
quality measure based on sparse representation. Furthermore, the algorithm was kernelized
to handle nonlinear variations. Various experiments have shown that the method is robust
and significantly improves the overall recognition accuracy.
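For reference, the joint sparse formulation summarized above is commonly written in the following form; the notation is a standard sketch of the problem, given here only for clarity, and the exact objective and parameter choices of the implementation may differ. With $D$ modalities, training matrices $\mathbf{X}^{(d)}$, test features $\mathbf{y}^{(d)}$, coefficient matrix $\Gamma = [\boldsymbol{\gamma}^{(1)}, \dots, \boldsymbol{\gamma}^{(D)}]$, and sparse error terms $\mathbf{e}^{(d)}$ collected in $E$:

$$\min_{\Gamma,\,E}\;\; \frac{1}{2}\sum_{d=1}^{D}\bigl\| \mathbf{y}^{(d)} - \mathbf{X}^{(d)}\boldsymbol{\gamma}^{(d)} - \mathbf{e}^{(d)} \bigr\|_{2}^{2} \;+\; \lambda_{1}\,\|\Gamma\|_{1,2} \;+\; \lambda_{2}\,\|E\|_{1}$$

Here $\|\Gamma\|_{1,2}$, the sum of the $\ell_2$ norms of the rows of $\Gamma$, forces the modalities to share a common sparsity pattern over the training samples, while the $\ell_1$ penalty on $E$ absorbs occlusion and gross noise. The optimization is solved with an alternating direction method, and the test subject is assigned to the class whose training samples give the smallest joint reconstruction error.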

Más contenido relacionado

La actualidad más candente

IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
 
Paper multi-modal biometric system using fingerprint , face and speech
Paper   multi-modal biometric system using fingerprint , face and speechPaper   multi-modal biometric system using fingerprint , face and speech
Paper multi-modal biometric system using fingerprint , face and speechAalaa Khattab
 
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDA
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDAA NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDA
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDAcsandit
 
A novel approach to generate face biometric template using binary discriminat...
A novel approach to generate face biometric template using binary discriminat...A novel approach to generate face biometric template using binary discriminat...
A novel approach to generate face biometric template using binary discriminat...sipij
 
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...CSCJournals
 
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...Bimodal Biometric System using Multiple Transformation Features of Fingerprin...
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...IDES Editor
 
OVERVIEW OF MULTIBIOMETRIC SYSTEMS
OVERVIEW OF MULTIBIOMETRIC SYSTEMSOVERVIEW OF MULTIBIOMETRIC SYSTEMS
OVERVIEW OF MULTIBIOMETRIC SYSTEMSAM Publications
 
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTING
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTINGA SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTING
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTINGpharmaindexing
 
An Indexing Technique Based on Feature Level Fusion of Fingerprint Features
An Indexing Technique Based on Feature Level Fusion of Fingerprint FeaturesAn Indexing Technique Based on Feature Level Fusion of Fingerprint Features
An Indexing Technique Based on Feature Level Fusion of Fingerprint FeaturesIDES Editor
 
IRDO: Iris Recognition by fusion of DTCWT and OLBP
IRDO: Iris Recognition by fusion of DTCWT and OLBPIRDO: Iris Recognition by fusion of DTCWT and OLBP
IRDO: Iris Recognition by fusion of DTCWT and OLBPIJERA Editor
 
Highly Secured Bio-Metric Authentication Model with Palm Print Identification
Highly Secured Bio-Metric Authentication Model with Palm Print IdentificationHighly Secured Bio-Metric Authentication Model with Palm Print Identification
Highly Secured Bio-Metric Authentication Model with Palm Print IdentificationIJERA Editor
 
Feature Level Fusion of Multibiometric Cryptosystem in Distributed System
Feature Level Fusion of Multibiometric Cryptosystem in Distributed SystemFeature Level Fusion of Multibiometric Cryptosystem in Distributed System
Feature Level Fusion of Multibiometric Cryptosystem in Distributed SystemIJMER
 
An overview of face liveness detection
An overview of face liveness detectionAn overview of face liveness detection
An overview of face liveness detectionijitjournal
 
M phil-computer-science-biometric-system-projects
M phil-computer-science-biometric-system-projectsM phil-computer-science-biometric-system-projects
M phil-computer-science-biometric-system-projectsVijay Karan
 

La actualidad más candente (17)

IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
 
G0333946
G0333946G0333946
G0333946
 
L_3011_62.+1908
L_3011_62.+1908L_3011_62.+1908
L_3011_62.+1908
 
Paper multi-modal biometric system using fingerprint , face and speech
Paper   multi-modal biometric system using fingerprint , face and speechPaper   multi-modal biometric system using fingerprint , face and speech
Paper multi-modal biometric system using fingerprint , face and speech
 
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDA
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDAA NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDA
A NOVEL APPROACH FOR GENERATING FACE TEMPLATE USING BDA
 
A novel approach to generate face biometric template using binary discriminat...
A novel approach to generate face biometric template using binary discriminat...A novel approach to generate face biometric template using binary discriminat...
A novel approach to generate face biometric template using binary discriminat...
 
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...
International Journal of Biometrics and Bioinformatics(IJBB) Volume (2) Issue...
 
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...Bimodal Biometric System using Multiple Transformation Features of Fingerprin...
Bimodal Biometric System using Multiple Transformation Features of Fingerprin...
 
OVERVIEW OF MULTIBIOMETRIC SYSTEMS
OVERVIEW OF MULTIBIOMETRIC SYSTEMSOVERVIEW OF MULTIBIOMETRIC SYSTEMS
OVERVIEW OF MULTIBIOMETRIC SYSTEMS
 
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTING
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTINGA SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTING
A SURVEY ON MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM IN CLOUD COMPUTING
 
An Indexing Technique Based on Feature Level Fusion of Fingerprint Features
An Indexing Technique Based on Feature Level Fusion of Fingerprint FeaturesAn Indexing Technique Based on Feature Level Fusion of Fingerprint Features
An Indexing Technique Based on Feature Level Fusion of Fingerprint Features
 
IRDO: Iris Recognition by fusion of DTCWT and OLBP
IRDO: Iris Recognition by fusion of DTCWT and OLBPIRDO: Iris Recognition by fusion of DTCWT and OLBP
IRDO: Iris Recognition by fusion of DTCWT and OLBP
 
Highly Secured Bio-Metric Authentication Model with Palm Print Identification
Highly Secured Bio-Metric Authentication Model with Palm Print IdentificationHighly Secured Bio-Metric Authentication Model with Palm Print Identification
Highly Secured Bio-Metric Authentication Model with Palm Print Identification
 
Feature Level Fusion of Multibiometric Cryptosystem in Distributed System
Feature Level Fusion of Multibiometric Cryptosystem in Distributed SystemFeature Level Fusion of Multibiometric Cryptosystem in Distributed System
Feature Level Fusion of Multibiometric Cryptosystem in Distributed System
 
An overview of face liveness detection
An overview of face liveness detectionAn overview of face liveness detection
An overview of face liveness detection
 
M phil-computer-science-biometric-system-projects
M phil-computer-science-biometric-system-projectsM phil-computer-science-biometric-system-projects
M phil-computer-science-biometric-system-projects
 
Fingerprints recognition
Fingerprints recognitionFingerprints recognition
Fingerprints recognition
 

Destacado

Oracle sql quick reference
Oracle sql quick referenceOracle sql quick reference
Oracle sql quick referencemaddy9055
 
Each one, reach one pp
Each one, reach one ppEach one, reach one pp
Each one, reach one ppTish Calhamer
 
Customers Sentiment on Life Insurance Industry
Customers Sentiment on Life Insurance IndustryCustomers Sentiment on Life Insurance Industry
Customers Sentiment on Life Insurance Industryzhongshu zhao
 
Every person is a book every life tells a story
Every person is a book every life tells a story Every person is a book every life tells a story
Every person is a book every life tells a story Tish Calhamer
 
Wainwright WinterSpring2016Guide
Wainwright WinterSpring2016GuideWainwright WinterSpring2016Guide
Wainwright WinterSpring2016GuideLinda Parker
 
REVISED WH Late Spring Summer Guide 2015
REVISED WH Late Spring Summer Guide 2015REVISED WH Late Spring Summer Guide 2015
REVISED WH Late Spring Summer Guide 2015Linda Parker
 
2015 Fall Class Guide
2015 Fall Class Guide2015 Fall Class Guide
2015 Fall Class GuideLinda Parker
 
Dynamic digital assignments
Dynamic digital assignmentsDynamic digital assignments
Dynamic digital assignmentsJames Matechuk
 
Acht the role of hospital board members
Acht the role of hospital board membersAcht the role of hospital board members
Acht the role of hospital board membersDavid Levien
 
Final Fall Guide from John 8.13.14 916am email
Final Fall Guide from John 8.13.14 916am emailFinal Fall Guide from John 8.13.14 916am email
Final Fall Guide from John 8.13.14 916am emailLinda Parker
 

Destacado (20)

Oracle sql quick reference
Oracle sql quick referenceOracle sql quick reference
Oracle sql quick reference
 
Each one, reach one pp
Each one, reach one ppEach one, reach one pp
Each one, reach one pp
 
Muhammad taught us to fight
Muhammad taught us to fightMuhammad taught us to fight
Muhammad taught us to fight
 
Econs Tuition
Econs TuitionEcons Tuition
Econs Tuition
 
Economics Tuition
Economics TuitionEconomics Tuition
Economics Tuition
 
Singapore Economics Tuition
Singapore Economics TuitionSingapore Economics Tuition
Singapore Economics Tuition
 
Customers Sentiment on Life Insurance Industry
Customers Sentiment on Life Insurance IndustryCustomers Sentiment on Life Insurance Industry
Customers Sentiment on Life Insurance Industry
 
Every person is a book every life tells a story
Every person is a book every life tells a story Every person is a book every life tells a story
Every person is a book every life tells a story
 
Wainwright WinterSpring2016Guide
Wainwright WinterSpring2016GuideWainwright WinterSpring2016Guide
Wainwright WinterSpring2016Guide
 
Secure final
Secure finalSecure final
Secure final
 
Best Economics Tuition
Best Economics TuitionBest Economics Tuition
Best Economics Tuition
 
TargetPresentation
TargetPresentationTargetPresentation
TargetPresentation
 
REVISED WH Late Spring Summer Guide 2015
REVISED WH Late Spring Summer Guide 2015REVISED WH Late Spring Summer Guide 2015
REVISED WH Late Spring Summer Guide 2015
 
Best Economics Tuition
Best Economics TuitionBest Economics Tuition
Best Economics Tuition
 
2015 Fall Class Guide
2015 Fall Class Guide2015 Fall Class Guide
2015 Fall Class Guide
 
Dynamic digital assignments
Dynamic digital assignmentsDynamic digital assignments
Dynamic digital assignments
 
Sign verification
Sign verificationSign verification
Sign verification
 
Acht the role of hospital board members
Acht the role of hospital board membersAcht the role of hospital board members
Acht the role of hospital board members
 
finger prints
finger printsfinger prints
finger prints
 
Final Fall Guide from John 8.13.14 916am email
Final Fall Guide from John 8.13.14 916am emailFinal Fall Guide from John 8.13.14 916am email
Final Fall Guide from John 8.13.14 916am email
 

Similar a Full biometric eye tracking

Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...
Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...
Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...IJNSA Journal
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...IOSR Journals
 
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...SBGC
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid TechniqueBiometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Techniqueijsc
 
Automated attendance system using Face recognition
Automated attendance system using Face recognitionAutomated attendance system using Face recognition
Automated attendance system using Face recognitionIRJET Journal
 
DATI, AI E ROBOTICA @POLITO
DATI, AI E ROBOTICA @POLITODATI, AI E ROBOTICA @POLITO
DATI, AI E ROBOTICA @POLITOMarcoMellia
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique  Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique ijsc
 
smartwatch-user-identification
smartwatch-user-identificationsmartwatch-user-identification
smartwatch-user-identificationSebastian W. Cheah
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Editor IJCATR
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Editor IJCATR
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPIRJET Journal
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Editor IJCATR
 
Security for Identity Based Identification using Water Marking and Visual Cry...
Security for Identity Based Identification using Water Marking and Visual Cry...Security for Identity Based Identification using Water Marking and Visual Cry...
Security for Identity Based Identification using Water Marking and Visual Cry...IRJET Journal
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces StudsPlanet.com
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy  up...OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy  up...
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...feature software solutions pvt ltd
 
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...INFOGAIN PUBLICATION
 

Similar a Full biometric eye tracking (20)

Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...
Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...
Feature Extraction using Sparse SVD for Biometric Fusion in Multimodal Authen...
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
 
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...
Dotnet datamining ieee projects 2012 @ Seabirds ( Chennai, Pondicherry, Vello...
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid TechniqueBiometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique
 
Automated attendance system using Face recognition
Automated attendance system using Face recognitionAutomated attendance system using Face recognition
Automated attendance system using Face recognition
 
DATI, AI E ROBOTICA @POLITO
DATI, AI E ROBOTICA @POLITODATI, AI E ROBOTICA @POLITO
DATI, AI E ROBOTICA @POLITO
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique  Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique
 
smartwatch-user-identification
smartwatch-user-identificationsmartwatch-user-identification
smartwatch-user-identification
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
 
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
 
Security for Identity Based Identification using Water Marking and Visual Cry...
Security for Identity Based Identification using Water Marking and Visual Cry...Security for Identity Based Identification using Water Marking and Visual Cry...
Security for Identity Based Identification using Water Marking and Visual Cry...
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy  up...OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy  up...
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...
 
G041041047
G041041047G041041047
G041041047
 
Ko3618101814
Ko3618101814Ko3618101814
Ko3618101814
 
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
 

Último

Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdfSuman Jyoti
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptMsecMca
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
Unit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfUnit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfRagavanV2
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfRagavanV2
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxJuliansyahHarahap1
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
Intro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfIntro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfrs7054576148
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Standamitlee9823
 

Último (20)

Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.ppt
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
Unit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfUnit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdf
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
Intro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfIntro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdf
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
 

Full biometric eye tracking

  • 1. ABSTRACT Multimodal Biometric System using multiple sources of information for establishing the identity has been widely recognized. But the computational models for multimodal biometrics recognition have only recently received attention. In this paper multimodal biometric image such as fingerprint, face, and eye tracking are extracted individually and are fused together using a sparse fusion mechanism. A multimodal sparse representation method is proposed, which interprets the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. The images are pre-processed for feature extraction. In this process Sobel, canny, Prewitt edge detection methods were applied. The image quality was measured using PSNR, NAE, and NCC metrics. Based on the results obtained, Sobel edge detection was used for feature extraction. Extracted features were subjected to sparse representation for the fusion of different modalities. The fused template can be used for watermarking and person identification application. CASIA database is chosen for the biometric images.
  • 2. CHAPTER – 1 INTRODUCTION UNIMODAL biometric systems rely on a single source of information such as a single iris or fingerprint or face for authentication. Unfortunately, these systems have to deal with some of the following inevitable problems such as 1. Noisy data. Poor lighting on a user’s face or occlusion are examples of noisy data. 2. Nonuniversality. The biometric system based on a single source of evidence may not be able to capture meaningful data from some users. For instance, an iris biometric system may extract incorrect texture patterns from the iris of certain users due to thepresence of contact lenses. 3. Intraclass variations. In the case of fingerprint recognition, the presence of wrinkles due to wetness can cause these variations. These types of variations often occur when a user incorrectly interacts with the sensor. 4. Spoof attack. Hand signature forgery is an example of this type of attack. Classification in multi biometric systems is done by fusing information from different biometric modalities. Information fusion can be done at different levels, broadly divided into feature- level, score-level, and rank-/decision level fusion. Due to preservation of raw information, feature-level fusion can be more discriminative than score or decision-level fusion. But, feature-level fusion methods have been explored in the biometric community only recently. This is because of the differences in features extracted from different sensors in terms of types and dimensions. Often features have large dimensions, and fusion becomes difficult at the feature level. The prevalent method is feature concatenation, which has been used for different multi biometric settings. However, for high-dimensional feature vectors, simple feature concatenation may be inefficient and non-robust. A related work in the machine learning literature is multiple kernel learning (MKL), which aims to integrate information from different features by learning a weighted combination of respective kernels. A detailed survey of MKL-based methods can be found. However, for multimodal systems, weight determination during testing is important, based on the quality of modalities. The proposed the seminal sparse
  • 3. representation-based classification (SRC) algorithm for face recognition. It was shown that by exploiting the inherent sparsity of data, one can obtain improved recognition performance over traditional methods especially when data are contaminated by various artifacts such as illumination variations, disguise, occlusion, and random pixel corruption. Pillai et al. extended this work for robust cancelable eye tracking recognition. Nagesh and Li presented an expression-invariant face recognition method using distributed CS and joint sparsity models. Patel et al. proposed a dictionary-based method for face recognition under varying pose and illumination. The paper makes the following contributions. We present a robust feature level fusion algorithm for multi biometric recognition. Through the proposedjoint sparse framework, we can easily handle unequal dimensions from different modalities by forcing the different features to interact through their sparse coefficients. Furthermore, the proposed algorithm can efficiently handle large-dimensional feature vectors. We make the classification robust to occlusion and noise by introducing an error term in the optimization framework. The algorithm is easily generalizable to handle multiple test inputs from a modality. We introduce a quality measure for multimodal fusion based on the joint sparse representation. . Last, we kernelize the algorithm to handle nonlinearity in the data samples.
  • 4. CHAPTER – 2 SYSTEM CONFIGURATION 2.1 HARDWARE SPECIFICATION  Hard disk : 40 GB  RAM : 512mb  Processor : Pentium IV  Speed : 1.44 GHZ  General : Keyboard, Monitor, Mouse 2.2 SOFTWARE SPECIFICATION  Front-End : Visual studio 2008.  Coding language : C#.net.  Operating System : Windows 7  Back End : SQLSERVER 2005
  • 5. CHAPTER – 3 SYSTEM ANALYSIS 3.1 EXISTING SYSTEM In Existing, multimodal biometric systems that essentially integrate the evidence presented by multiple sources of information such as eye tracking, fingerprints, and face. Such systems are less vulnerable to spoof attacks, as it would be difficult for an imposter to simultaneously spoof multiple biometric traits of a genuine user. Due to sufficient population coverage, these systems are able to address the problem of non-universality. DISADVANTAGES  Noisy data: Poor lighting on a user’s face or occlusion are examples of noisy data.  Nonuniversality: The biometric system based on a single source of evidence may not be able to capture meaningful data from some users. For instance, an eye tracking biometric system may extract incorrect texture patterns from the eye tracking of certain users due to the presence of contact lenses.  Intraclass variations. In the case of fingerprint recognition, the presence of wrinkles due to wetness can cause these variations. These types of variations often occur when a user incorrectly interacts with the sensor.  Spoof attack: Hand signature forgery is an example of this type of attack.
  • 6. 3.2 PROPOSED SYSTEM The proposed of system a novel joint sparsity-based feature level fusion algorithm for multimodal biometrics recognition. The algorithm is robust as it explicitly includes both noise and occlusion terms. An efficient algorithm based on the alternative direction was proposed for solving the optimization problem. We also proposed a multimodal quality measure based on sparse representation. Furthermore, the algorithm was kernelized to handle nonlinear variations. Various experiments have shown that the method is robust and significantly improves the overall recognition accuracy methods. ADVANTAGES  We present a robust feature level fusion algorithm for multibiometric recognition. Through the proposed joint sparse framework, we can easily handle unequal dimensions from different modalities by forcing the different features to interact through their sparse coefficients. Furthermore, the proposed algorithm can efficiently handle large-dimensional feature vectors.  We make the classification robust to occlusion and noise by introducing an error term in the optimization framework.  The algorithm is easily generalizable to handle multiple test inputs from a modality.  We introduce a quality measure for multimodal fusion based on the joint sparse representation.  Last, we kernelize the algorithm to handle non- linearity in the data samples.
  • 7. CHAPTER – 4 SOFTWARE DESCRIPTION MICROSOFT .NET FRAMEWORK The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. FRAMEWORK IS DESIGNED TO FULFILL THE FOLLOWING OBJECTIVES  To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.  To provide a code-execution environment that minimizes software deployment and versioning conflicts.  To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.  To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.  To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.  To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code. THE .NET FRAMEWORK HAS TWO MAIN COMPONENTS  The common language runtime.  The .NET Framework class library.
  • 8. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services. The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic. Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage. The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.
  • 9. FEATURES OF THE COMMON LANGUAGE RUNTIME The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime. With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application. The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich. The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers Generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety. In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references. The runtime also accelerates developer productivity.
  • 10. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications. While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs. The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality- of-reference to further increase performance. Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting. .NET FRAMEWORK CLASS LIBRARY The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.
  • 11. For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework. As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, we can use the .NET Framework to develop the following types of applications and services:  Console applications.  Scripted or hosted applications.  Windows GUI applications (Windows Forms).  ASP.NET applications.  XML Web services.  Windows services. For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes. CLIENT APPLICATION DEVELOPMENT Client applications are the closest to a traditional style of application in Windows- based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers.
  • 12. Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements. In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications. The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs. For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent. Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.
  • 13. INTRODUCTION TO C#.NET C# (pronounced as C-sharp) is a new language for windows applications, intended as an alternative to the main previous languages, C++, VB. Its purpose is two folds: It gives access to many of the facilities previously available only in C++, while retaining some of the simplicity to learn of VB. It has been designed specifically with the .NET Framework in mind, and hence is very well structured for writing code that will be compiled for the .NET. C# is a simple, modern, object-oriented language which aims to combine the high productivity of VB and raw power of C++. C# is a new programming language developed by Microsoft. Using C# we can develop console applications, web applications and windows applications .In C#, Microsoft has taken care of C++ problems such as memory management, pointers, so forth. ACTIVE SERVER PAGES .NET (ASP.NET) ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET offers several important advantages over previous Web development models. Enhanced Performance:ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code. World-Class Tool Support: A rich toolbox and designer in the Visual Studio integrated development environment complement the ASP.NET framework. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides. Power and Flexibility: Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform is available to Web application
  • 14. developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application or partition your application across many languages. Further, common language runtime interoperability guarantees that your existing investment in COM-based development is preserved when migrating to ASP.NET. Simplicity:ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code and to handle events in a simple, Visual Basic - like forms processing model. Manageability:ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework application is deployed to a server simply by copying the necessary files to the server. No server restart is required, even to deploy or replace running compiled code. Scalability and Availability:ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your application constantly available to handle requests. Customizability and Extensibility:ASP.NET delivers a well-factored architecture that allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom- written component. Implementing custom authentication or state services has never been easier
• 15. Security: With built-in Windows authentication and per-application configuration, you can be assured that your applications are secure.
LANGUAGE SUPPORT
The Microsoft .NET platform currently offers built-in support for many languages: C#, Visual Basic, JScript, and others.
WHAT IS ASP.NET WEB FORMS?
The ASP.NET Web Forms page framework is a scalable common language runtime programming model that can be used on the server to dynamically generate Web pages. Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides:
 The ability for developers to cleanly structure their page logic in an orderly fashion (not "spaghetti code").
 The ability for development tools to provide strong WYSIWYG design support for pages (existing ASP code is opaque to tools).
 The ability to create and use reusable UI controls that can encapsulate common functionality and thus reduce the amount of code that a page developer has to write.
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming
• 16. requests. (Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests.)
An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx; no modification of code is required. A simple HTML page that collects a user's name and category preference and then performs a form post back to the originating page when a button is clicked can be converted in exactly this way. ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <% %> code render blocks that can be intermixed with HTML content within an .aspx file. These code blocks execute in a top-down manner at page render time.
CODE-BEHIND WEB FORMS
ASP.NET supports two methods of authoring dynamic pages. The first is the inline method, where the page code is physically declared within the originating .aspx file. An alternative approach, known as the code-behind method, separates the page code more cleanly from the HTML content into an entirely separate file.
INTRODUCTION TO ASP.NET SERVER CONTROLS
In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET page developers can use ASP.NET server controls to program Web pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to the server. This control state is not stored on the server (it is instead stored within
• 17. an <input type="hidden"> form field that is round-tripped between requests). Note also that no client-side script is required.
In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the <asp:AdRotator> control can be used to dynamically display rotating ads on a page.
 ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
 ASP.NET Web Forms pages can target any browser client (there are no script library or cookie requirements).
 ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
 ASP.NET server controls provide an easy way to encapsulate common functionality.
 ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third parties.
 ASP.NET server controls can automatically project both uplevel and downlevel HTML.
ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind. ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.
• 18. The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity. You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much like in a database, there are tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through the same set of standard APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide four basic objects: the Command, Connection, DataReader, and DataAdapter. In the remaining sections of this document, we'll walk through
• 19. each part of the DataSet and the OLE DB/SQL Server .NET Data Providers, explaining what they are and how to program against them.
The following sections will introduce you to some objects that have evolved, and some that are new. These objects are:
 Connections. For connecting to, and managing transactions against, a database.
 Commands. For issuing SQL commands against a database.
 DataReaders. For reading a forward-only stream of data records from a SQL Server data source.
 DataSets. For storing, remoting and programming against flat data, XML data and relational data.
 DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
CONNECTIONS
Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over connections, and result sets are returned in the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
COMMANDS
Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SqlCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values, as part of your command syntax. The example below shows how to issue an INSERT statement against the Northwind database.
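The original sample did not survive the conversion of this report into text, so the following is a representative sketch only. The Customers table and its columns are standard Northwind names; the connection string is an assumption for a local SQL Server Express instance.

using System.Data.SqlClient;

class InsertSample
{
    static void AddCustomer()
    {
        // Assumed connection string; adjust the server and authentication as needed.
        string connStr = "Data Source=.\\sqlexpress; Initial Catalog=Northwind; Integrated Security=true;";
        using (SqlConnection cn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", cn))
        {
            // Parameters keep the supplied values separate from the SQL text.
            cmd.Parameters.AddWithValue("@id", "DEMO1");
            cmd.Parameters.AddWithValue("@name", "Demo Company");
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}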
• 20. DATA READERS
The DataReader object is somewhat synonymous with a read-only, forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page.
DATA SETS AND DATA ADAPTERS
DATA SETS
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or from user input can all be placed into DataSet objects. Then, as changes are made to the DataSet, they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via Web Services. In fact, a DataSet with a schema can actually be compiled for type safety and statement completion.
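As an illustration of the forward-only reading model described in the DATA READERS paragraph above, the sketch below iterates over rows with a SqlDataReader. The PatternInfo table and its columns follow the project code in Chapter 7, but the query itself is only an example.

using System;
using System.Data.SqlClient;

class ReaderSample
{
    static void ListPatterns(string connStr)
    {
        using (SqlConnection cn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT ImagePath, Point1 FROM PatternInfo", cn))
        {
            cn.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                // Forward-only, read-only traversal of the result stream.
                while (dr.Read())
                {
                    Console.WriteLine("{0}: {1}", dr["ImagePath"], dr["Point1"]);
                }
            }
        }
    }
}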
• 21. DATA ADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server database. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at run time to resolve changes, including the use of stored procedures. For ad hoc scenarios, a CommandBuilder object can generate these at run time based upon a SELECT statement.
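A minimal sketch of the Fill/Update cycle described above is given below. It assumes the PatternInfo table from Chapter 7 and uses a CommandBuilder to generate the INSERT, UPDATE and DELETE commands at run time; it is an illustration, not code from the project.

using System.Data;
using System.Data.SqlClient;

class AdapterSample
{
    static void FillAndUpdate(string connStr)
    {
        using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM PatternInfo", connStr))
        using (SqlCommandBuilder cb = new SqlCommandBuilder(da))
        {
            // Note: the CommandBuilder needs a primary key column in the selected
            // table in order to build the UPDATE and DELETE commands.
            DataSet ds = new DataSet();
            da.Fill(ds, "PatternInfo");            // runs the SELECT command and loads the table

            // Work with the disconnected copy, then push the changes back.
            if (ds.Tables["PatternInfo"].Rows.Count > 0)
            {
                ds.Tables["PatternInfo"].Rows[0]["Point1"] = "128";
            }
            da.Update(ds, "PatternInfo");          // issues the generated commands for changed rows
        }
    }
}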
• 22. SQL SERVER
SQL (STRUCTURED QUERY LANGUAGE)
Structured Query Language (SQL) is a standard computer language for relational database management and data manipulation. SQL is used to query, insert, update and delete data. Most relational databases support SQL, which is an added benefit for database administrators (DBAs), as they are often required to support databases across several different platforms.
DATABASE
A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images. In computing, databases are sometimes classified according to their organizational approach. The most prevalent approach is the relational database, a tabular database in which data is defined so that it can be reorganized and accessed in a number of different ways. A distributed database is one that can be dispersed or replicated among different points in a network. An object-oriented programming database is one that is congruent with the data defined in object classes and subclasses.
Computer databases typically contain aggregations of data records or files, such as sales transactions, product catalogs and inventories, and customer profiles. Typically, a database manager provides users the capabilities of controlling read/write access, specifying report generation, and analyzing usage. Databases and database managers are prevalent in large mainframe systems, but are also present in smaller distributed workstation and mid-range systems such as the AS/400 and on personal computers. SQL is a standard language for making interactive queries from and updating a database such as IBM's DB2, Microsoft's SQL Server, and database products from Oracle, Sybase, and Computer Associates.
• 23. DEFINING A DATABASE
Define a relational database by using the New Database Definition wizard in the Data Definition view. A relational database is a set of tables that can be manipulated in accordance with the relational model of data. A relational database contains a set of data objects that are used to store, manage, and access data. Examples of such data objects are tables, views, indexes, functions, and stored procedures.
DEFINING A SCHEMA
Define a schema to organize the tables and other data objects by using the New Schema Definition wizard. A schema is a collection of named objects. In relational database technology, schemas provide a logical classification of objects in the database. Some of the objects that a schema might contain include tables, views, aliases, indexes, triggers, and structured types.
DEFINING A TABLE
Define a table by using the New Table Definition wizard. Tables are logical structures that are maintained by the database manager. Tables consist of columns and rows. You can define tables as part of your data definitions in the Data perspective.
If you are new to the Microsoft SQL Server environment, you have probably encountered the choice between Windows Authentication and SQL Authentication.
SQL AUTHENTICATION
SQL Authentication is the typical authentication used for various database systems, composed of a username and a password. Obviously, an instance of SQL Server can have multiple such user accounts (using SQL authentication) with different usernames and passwords. In shared servers where different users should have access to different databases, SQL authentication should be used. Also, when a client (remote computer) connects to an instance of SQL Server on a computer other than the one on
• 24. which the client is running, SQL Server authentication is needed. Even if you don't define any SQL Server user accounts, at the time of installation a root account, sa, is added with the password you provided. Just like any SQL Server account, it can be used to log in locally or remotely; however, if an application is the one that does the log-in, and it should have access to only one database, it is strongly recommended that you do not use the sa account but create a new one with limited access. Overall, SQL authentication is the main authentication method to be used, while the one we review below, Windows Authentication, is more of a convenience.
WINDOWS AUTHENTICATION
When you are accessing SQL Server from the same computer it is installed on, you shouldn't be prompted to type in a username and password, and you are not if you're using Windows Authentication. With Windows Authentication, the SQL Server service already knows that someone is logged in to the operating system with the correct credentials, and it uses these credentials to allow the user into its databases. Of course, this works as long as the client resides on the same computer as the SQL Server, or as long as the connecting client matches the Windows credentials of the server. Windows Authentication is often used as a more convenient way to log in to a SQL Server instance without typing a username and a password; however, when more users are involved, or remote connections are being established with the SQL Server, SQL authentication should be used.
PRIMARY KEY
In a SQL database, the primary key is one or more columns that uniquely identify each row in a table. The primary key is defined by using the PRIMARY KEY constraint when either creating or altering a table. Each table can have only one primary key. The column(s) defined as the primary key inherently have the NOT NULL constraint, meaning they must contain a value. If a table is being altered to add a primary key, any column being defined as the primary key must not contain blank, or NULL, values; if it does, the primary key constraint cannot be added. Also, in some
• 25. relational databases, adding a primary key also creates a table index, to help improve the speed of finding specific rows of data in the table when SQL queries are run against that table.
FOREIGN KEY
Foreign keys are used to reference unique columns in another table. For example, a foreign key can be defined on one table A and reference some unique column(s) in another table B. A foreign key is useful whenever it makes sense to enforce a relationship between columns in two different tables; a worked example appears after the SQL command list below.
ROWS
In a database, a row (sometimes called a record) is the set of fields within a table that are relevant to a specific entity. For example, in a table called customer contact information, a row would likely contain fields such as ID number, name, street address, city, telephone number, and so on.
COLUMNS
A column (also called a field) is a single attribute of a row; a record is made up of a collection of columns.
SQL COMMANDS
BASIC SQL
Each record has a unique identifier or primary key. SQL, which stands for Structured Query Language, is used to communicate with a database. Through SQL one can create and delete tables. Here are some commands:
 CREATE TABLE - creates a new database table
 ALTER TABLE - alters a database table
 DROP TABLE - deletes a database table
  • 26.  CREATE INDEX - creates an index (search key)  DROP INDEX - deletes an index SQL also has syntax to update, insert, and delete records.  SELECT - get data from a database table  UPDATE - change data in a database table  DELETE - remove data from a database table  INSERT INTO - insert new data in a database table
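The sketch below is an illustration only, not part of the project. It creates two hypothetical tables (Person and BiometricTemplate, which are not part of the project schema), linking them with a primary key and a foreign key, and then issues a parameterised INSERT and SELECT. The two connection strings show the Windows Authentication and SQL Authentication styles discussed earlier; the server, database and login names are assumptions.

using System;
using System.Data.SqlClient;

class BasicSqlSample
{
    // Windows Authentication: the current Windows login is used.
    const string WindowsAuth = "Data Source=.\\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true;";
    // SQL Authentication: an explicit SQL Server login and password (assumed names).
    const string SqlAuth = "Data Source=.\\sqlexpress; Initial Catalog=JointSparse; User ID=appUser; Password=appPassword;";

    static void Run()
    {
        string ddl =
            "CREATE TABLE Person (" +
            "  PersonID INT NOT NULL PRIMARY KEY, " +       // unique, non-null row identifier
            "  FullName VARCHAR(100) NOT NULL); " +
            "CREATE TABLE BiometricTemplate (" +
            "  TemplateID INT NOT NULL PRIMARY KEY, " +
            "  PersonID INT NOT NULL, " +                    // every template must reference an existing person
            "  CONSTRAINT FK_Template_Person FOREIGN KEY (PersonID) REFERENCES Person (PersonID));";

        using (SqlConnection cn = new SqlConnection(WindowsAuth))
        {
            cn.Open();
            using (SqlCommand create = new SqlCommand(ddl, cn))
            {
                create.ExecuteNonQuery();                    // CREATE TABLE statements
            }
            using (SqlCommand insert = new SqlCommand(
                "INSERT INTO Person (PersonID, FullName) VALUES (@id, @name)", cn))
            {
                insert.Parameters.AddWithValue("@id", 1);
                insert.Parameters.AddWithValue("@name", "Test User");
                insert.ExecuteNonQuery();                    // INSERT INTO
            }
            using (SqlCommand select = new SqlCommand(
                "SELECT FullName FROM Person WHERE PersonID = @id", cn))
            {
                select.Parameters.AddWithValue("@id", 1);
                Console.WriteLine(select.ExecuteScalar());   // SELECT
            }
        }
    }
}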
• 27. CHAPTER – 5 PROJECT DESCRIPTION
5.1 MODULES
 Collect Multimodal Data
 Multimodal Multivariate Test
 Joint Sparse Representation
 Reconstruction Error based Classification
MODULES DESCRIPTION
COLLECT MULTIMODAL DATA
In the real world, authentication based on a single biometric was used first to strengthen security; later, multimodal biometrics came to be used for authentication. To use multimodal biometrics, the user has to register samples for each modality, in different variations. The collected data are used as the dataset for the further steps of processing.
MULTIMODAL MULTIVARIATE TEST
The module represents how an attacker compromises the users in a social network. The admin maintains each node in the network. Servers can therefore blacklist anonymous users without knowledge of their IP addresses while allowing well-behaved users to connect anonymously. Although the approach applies to anonymizing networks in general, we consider Tor for purposes of exposition. In fact, any number of anonymizing networks can rely on the same trustee-based social system, blacklisting anonymous users regardless of their anonymizing network(s) of choice.
• 28. JOINT SPARSE REPRESENTATION
In joint sparse representation the image is divided into blocks, and each block takes into account correlations as well as coupling information among the biometric modalities. A multimodal quality measure is also proposed to weight each modality as it gets fused and to handle nonlinear variations. It is shown that the method is robust and significantly improves the overall recognition accuracy.
RECONSTRUCTION ERROR BASED CLASSIFICATION
In the collection of sample images, the image is filtered based on the sparsity concentration index (SCI) and on the weight of each block. This helps the multimodal biometric system to reconstruct the image. The joint sparse representation reduces the time-consuming processing and makes recognition in the multimodal biometric system easier.
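The sparsity concentration index (SCI) used above measures how strongly the sparse coefficients of a test sample concentrate on a single class: a value near 1 means one class dominates, a value near 0 means the coefficients are spread evenly. The sketch below follows the usual definition from the sparse-representation classification literature, SCI = (K * max_k ||x_k||1 / ||x||1 - 1) / (K - 1); it is offered as an illustration, not as the exact routine used in Chapter 7, and the atom-to-class mapping is an assumed layout.

using System;

static class SparsityConcentrationIndex
{
    // coeffs: sparse coefficient vector over all dictionary atoms.
    // classOfAtom: class label (0 .. numClasses-1) of each atom.
    public static double Compute(double[] coeffs, int[] classOfAtom, int numClasses)
    {
        double total = 0;
        double[] perClass = new double[numClasses];
        for (int i = 0; i < coeffs.Length; i++)
        {
            double a = Math.Abs(coeffs[i]);        // l1 contribution of atom i
            total += a;
            perClass[classOfAtom[i]] += a;
        }
        if (total == 0 || numClasses < 2) return 0;

        double maxFraction = 0;
        foreach (double c in perClass)
        {
            maxFraction = Math.Max(maxFraction, c / total);
        }
        // 1 when all weight falls in one class, 0 when spread evenly across classes.
        return (numClasses * maxFraction - 1) / (numClasses - 1);
    }
}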
• 29. CHAPTER – 6 SYSTEM DESIGN
6.1 DATA FLOW DIAGRAM
[Data flow diagram: Collect Multimodal Data → Multimodal Multivariate Test → Joint Sparse Representation → Reconstruction Error based Classification]
• 32. 6.3 UML DIAGRAMS
[Use case diagram. The user collects the dataset for each authentication and registers the multimodal authentication data; image samples are taken from each modality; the multivariate test calculates the SCI for each modality in the dictionary and identifies image quality; joint sparse representation divides the image into blocks, calculates a weight for each block, and recognizes the image based on the weights; the multimodal input is authenticated and the image is reconstructed, providing more security to the user.]
• 34. 6.5 SEQUENCE DIAGRAM
[Sequence diagram between the objects Prepare dataset, Multivariate test, Joint sparse, and Reconstruct: collect dataset from user; input image; calculate SCI and identify image quality; divide input image into blocks; calculate a weight for each block; reconstruct image; provide more security to the user.]
• 35. 6.6 COLLABORATION DIAGRAM
[Collaboration diagram between Prepare dataset, Multivariate test, Joint sparse, and Reconstruct, with the messages: 1. Collect dataset from user; 2. Input image; 3. Calculate SCI and identify image quality; 4. Divide input image into blocks; 5. Calculate weight for each block; 6. Reconstruct image; 7. Provide more security to user.]
• 36. CHAPTER – 7 SYSTEM IMPLEMENTATION
Implementation is the process of translating the design specification into source code. The primary goal of implementation is to write source code and internal documentation so that conformance of the code to its specification can easily be verified, and so that debugging, testing and modification are eased. The source code is developed with clarity, simplicity and elegance. The coding is done in a modular fashion, giving importance even to minute details, so that when hardware and storage procedures are changed or new data is added, rewriting of the application programs is not necessary. To adapt or perfect the system, we must determine new requirements, redesign, generate code and test the existing software/hardware. Traditionally, such tasks, when applied to an existing program, have been called maintenance.
• 37. 7.1 SOURCE CODE
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Threading;
using System.Data.SqlClient;
using System.Collections;

namespace VariousRepresentation
{
    public partial class JointSparse : Form
    {
        SqlConnection cn;
        SqlCommand cmd;
        string s;
        SqlDataAdapter da;
        DataTable dt;
        DataSet ds;
        SqlDataReader dr;
        string[] files = new string[1000];
        Bitmap img1;
        Bitmap img2;
        int i;
        int l;
        int distance;
• 38.
        Point p;
        int sno;
        ArrayList list5 = new ArrayList();
        ArrayList list55 = new ArrayList();
        ArrayList fi = new ArrayList();

        public JointSparse()
        {
            InitializeComponent();
        }

        // Opens a connection to the local SQL Server Express instance.
        public void getconnection()
        {
            cn = new SqlConnection("Data Source=.\\sqlexpress; Initial Catalog=JointSparse; Integrated Security=true; Max Pool Size=1000;");
            cn.Open();
        }

        // Selects the training image folder and builds the eight sampling points around a centre.
        private void button1_Click(object sender, EventArgs e)
        {
            DialogResult res = folderBrowserDialog1.ShowDialog();
            if (res == DialogResult.OK)
            {
                textBox1.Text = folderBrowserDialog1.SelectedPath;
                files = Directory.GetFiles(folderBrowserDialog1.SelectedPath);
                label4.Text = files.Length.ToString();
            }
            Point centerPoint = new Point(100, 100);
            Point result = new Point(0, 0);
            double angle;
            angle = 360 / 8;
            for (int j = 0; j < 8; j++)
            {
• 39.
                distance = 20;
                result.Y = centerPoint.Y + (int)Math.Round(distance * Math.Sin(angle));
                result.X = centerPoint.X + (int)Math.Round(distance * Math.Cos(angle));
                angle = angle + 45;
                listBox1.Items.Add(result);
            }
        }

        // Loads a test image and copies the matching modality's pattern table into PatternInfo.
        private void button2_Click(object sender, EventArgs e)
        {
            DialogResult res = openFileDialog1.ShowDialog();
            if (res == DialogResult.OK)
            {
                textBox2.Text = openFileDialog1.FileName;
                pictureBox1.Image = Image.FromFile(openFileDialog1.FileName);
            }
            try
            {
                getconnection();
                s = "Drop table PatternInfo";
                cmd = new SqlCommand(s, cn);
                cmd.ExecuteNonQuery();
            }
            catch
            {
            }
            string filename = textBox1.Text;
            string filenam = filename.Substring(filename.LastIndexOf("\\") + 1);
            if (filenam == "Face")
            {
                getconnection();
• 40.
                s = "Select * into PatternInfo from PatternFace";
                cmd = new SqlCommand(s, cn);
                cmd.ExecuteNonQuery();
                cn.Close();
            }
            else if (filenam == "Finger")
            {
                getconnection();
                s = "Select * into PatternInfo from PatternFinger";
                cmd = new SqlCommand(s, cn);
                cmd.ExecuteNonQuery();
                cn.Close();
            }
            else if (filenam == "Iris")
            {
                getconnection();
                s = "Select * into PatternInfo from PatternIris";
                cmd = new SqlCommand(s, cn);
                cmd.ExecuteNonQuery();
                cn.Close();
            }
            else
            {
            }
            pattern1();
            select();
        }

        private void button3_Click(object sender, EventArgs e)
        {
            for (i = 0; i < files.Length; i++)
• 41.
            {
                label8.Text = files.Length.ToString();
                label8.Refresh();
                progressBar1.Maximum = files.Length;
                progressBar1.Minimum = 0;
                int p = i;
                p = p + 1;
                label6.Text = p.ToString();
                label6.Refresh();
                string filename = files[i];
                string filenam = filename.Substring(filename.LastIndexOf("\\") + 1);
                img1 = new Bitmap(files[i]);
                img2 = new Bitmap(img1, new Size(200, 200));
                pictureBox1.Image = img2;
                pictureBox1.Refresh();
                progressBar1.Value = p;
                string path;
                path = textBox3.Text + "\\" + filenam;
                img2.Save(path);
            }
        }

        // Samples the grey level of every training image at the eight selected points.
        public void pattern()
        {
            for (l = 0; l < files.Length; l++)
            {
                img1 = new Bitmap(files[l]);
                for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
                {
                    var selection = listBox1.Items[k];
                    p = (Point)selection;
• 42.
                    for (int i = 0; i < img1.Width; i++)
                    {
                        for (int j = 0; j < img1.Height; j++)
                        {
                            if (i == p.X && j == p.Y)
                            {
                                Color cr = new Color();
                                cr = img1.GetPixel(i, j);
                                listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
                            }
                        }
                    }
                }
            }
        }

        public void pattern1()
        {
            listBox2.Items.Clear();
            img1 = new Bitmap(pictureBox1.Image);
            for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
            {
                var selection = listBox1.Items[k];
                p = (Point)selection;
                for (int i = 0; i < img1.Width; i++)
                {
                    for (int j = 0; j < img1.Height; j++)
                    {
                        if (i == p.X && j == p.Y)
                        {
                            Color cr = new Color();
• 43.
                            cr = img1.GetPixel(i, j);
                            listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
                        }
                    }
                }
            }
        }

        private Color grayscale(Color cr)
        {
            return Color.FromArgb(cr.A, (int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11), (int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11), (int)(cr.R * .3 + cr.G * .59 + cr.B * 0.11));
        }

        private void button4_Click(object sender, EventArgs e)
        {
            DialogResult res = folderBrowserDialog1.ShowDialog();
            if (res == DialogResult.OK)
            {
                textBox3.Text = folderBrowserDialog1.SelectedPath;
            }
        }

        public void pattern2()
        {
            progressBar1.Minimum = 0;
            progressBar1.Maximum = Convert.ToInt32(listBox1.Items.Count);
            for (l = 0; l < files.Length; l++)
            {
                img1 = new Bitmap(files[l]);
                for (int k = 0; k < Convert.ToInt32(listBox1.Items.Count); k++)
                {
• 44.
                    var selection = listBox1.Items[k];
                    p = (Point)selection;
                    for (int i = 0; i < img1.Width; i++)
                    {
                        for (int j = 0; j < img1.Height; j++)
                        {
                            if (i == p.X && j == p.Y)
                            {
                                Color cr = new Color();
                                cr = img1.GetPixel(i, j);
                                listBox2.Items.Add(cr.R * .3 + cr.G * .59 + cr.B * 0.11);
                                progressBar1.Value = k;
                            }
                        }
                    }
                }
                progressBar1.Value = progressBar1.Maximum;
            }
        }

        public void save2()
        {
            int i = 0;
            int k = 0;
            int countt = 1;
            while (i < listBox2.Items.Count)
            {
                double one = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double two = Convert.ToDouble(listBox2.Items[i]);
                i++;
• 45.
                double three = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double four = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double five = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double six = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double seven = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double eight = Convert.ToDouble(listBox2.Items[i]);
                i++;
                l = k;
                if (l < 25)
                {
                    listBox3.Items.Add(files[l]);
                }
                listBox6.Items.Add(one);
                listBox7.Items.Add(two);
                k++;
                listBox4.Items.Clear();
                foreach (string it in list5)
                {
                    listBox4.Items.Add(it);
                }
                listBox5.Items.Clear();
                foreach (string it1 in list55)
                {
                    listBox5.Items.Add(it1);
                }
• 46.
            }
        }

        private void button5_Click(object sender, EventArgs e)
        {
            pattern2();
            save2();
        }

        // Stores the eight-point pattern of every training image in the PatternInfo table.
        public void save()
        {
            int i = 0;
            int countt = 1;
            while (i < listBox2.Items.Count)
            {
                double one = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double two = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double three = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double four = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double five = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double six = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double seven = Convert.ToDouble(listBox2.Items[i]);
                i++;
                double eight = Convert.ToDouble(listBox2.Items[i]);
                i++;
                l = countt;
• 47.
                getconnection();
                s = "insert into PatternInfo values('" + l + "','" + one + "','" + two + "','" + three + "','" + four + "','" + five + "','" + six + "','" + seven + "','" + eight + "')";
                cmd = new SqlCommand(s, cn);
                cmd.ExecuteNonQuery();
                cn.Close();
                countt++;
            }
        }

        // Looks up the test pattern in PatternInfo and loads the matching subject's training patterns.
        public void select()
        {
            int i = 0;
            double one = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double two = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double three = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double four = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double five = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double six = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double seven = Convert.ToDouble(listBox2.Items[i]);
            i++;
            double eight = Convert.ToDouble(listBox2.Items[i]);
            i++;
            getconnection();
• 48.
            s = "Select * from PatternInfo where Point1='" + one + "' and Point2='" + two + "' and Point3='" + three + "' and Point4='" + four + "' and Point5='" + five + "' and Point6='" + six + "' and Point7='" + seven + "' and Point8='" + eight + "'";
            cmd = new SqlCommand(s, cn);
            dr = cmd.ExecuteReader();
            if (dr.HasRows)
            {
                while (dr.Read())
                {
                    sno = Convert.ToInt32((dr["ImagePath"]));
                    if (sno <= 5)
                    {
                        sno = 1;
                    }
                    else if (sno <= 10)
                    {
                        sno = 6;
                    }
                    else if (sno <= 15)
                    {
                        sno = 11;
                    }
                    else if (sno <= 20)
                    {
                        sno = 16;
                    }
                    else if (sno <= 25)
                    {
                        sno = 21;
                    }
• 49.
                }
            }
            cn.Close();
            getconnection();
            int sno1 = sno + 4;
            s = "Select * from PatternInfo where ImagePath>=" + sno + " and ImagePath<=" + sno1 + "";
            cmd = new SqlCommand(s, cn);
            dr = cmd.ExecuteReader();
            if (dr.HasRows)
            {
                while (dr.Read())
                {
                    list5.Add(dr["Point1"]);
                    list55.Add(dr["Point2"]);
                }
            }
        }

        // Displays the training images whose patterns match the recognised subject.
        private void button6_Click(object sender, EventArgs e)
        {
            int count = 1;
            for (int i = 0; i < listBox4.Items.Count; i++)
            {
                for (int j = 0; j < listBox6.Items.Count; j++)
                {
                    int one = Convert.ToInt32(listBox4.Items[i]);
                    int two = Convert.ToInt32(listBox6.Items[j]);
                    int three = Convert.ToInt32(listBox5.Items[i]);
                    int four = Convert.ToInt32(listBox7.Items[j]);
                    if (one == two && three == four)
• 50.
                    {
                        fi.Add(j);
                        if (count == 1)
                        {
                            pictureBox2.Image = Image.FromFile(listBox3.Items[j].ToString());
                            count++;
                        }
                        else if (count == 2)
                        {
                            pictureBox3.Image = Image.FromFile(listBox3.Items[j].ToString());
                            count++;
                        }
                        else if (count == 3)
                        {
                            pictureBox4.Image = Image.FromFile(listBox3.Items[j].ToString());
                            count++;
                        }
                        else if (count == 4)
                        {
                            pictureBox5.Image = Image.FromFile(listBox3.Items[j].ToString());
                            count++;
                        }
                        else if (count == 5)
                        {
                            pictureBox6.Image = Image.FromFile(listBox3.Items[j].ToString());
                            count++;
                        }
                        else
                        {
                        }
• 51.
                    }
                }
            }
        }

        private void listBox7_SelectedIndexChanged(object sender, EventArgs e)
        {
        }

        private void button7_Click(object sender, EventArgs e)
        {
            this.Hide();
            JointSparse a = new JointSparse();
            a.Show();
        }
    }
}
  • 52. 7.2 SCREEN SHOTS Joint Sparse Representation:
  • 55. Convert images of Input Format to Training Dataset Format:
  • 56. View Pattern Results of Joint Sparse Representation: Joint Sparse Representation for Face:
• 59. CHAPTER – 8 SYSTEM TESTING
SYSTEM TEST AND MAINTENANCE
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
TYPES OF TESTS
FUNCTIONAL TEST
Functional tests provide a systematic demonstration that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
• 60. Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.
SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
WHITE BOX TESTING
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
• 61. FIELD TESTING
Field testing will be performed manually and functional tests will be written in detail.
TEST OBJECTIVES
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
FEATURES TO BE TESTED
 Verify that the entries are of the correct format.
 No duplicate entries should be allowed.
 All links should take the user to the correct page.
INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
TEST RESULTS
All the test cases mentioned above passed successfully. No defects were encountered.
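As a small illustration of the unit-testing idea (not a test that was actually part of this project), the sketch below uses MSTest to exercise the SCI routine sketched in Chapter 5; both the test class and the routine it calls are assumptions introduced in this report for illustration.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SciTests
{
    [TestMethod]
    public void Sci_IsOne_WhenAllCoefficientsFallInOneClass()
    {
        double[] coeffs = { 0.5, 0.2, 0.0, 0.0 };
        int[] classOfAtom = { 0, 0, 1, 1 };          // two classes, two atoms each
        double sci = SparsityConcentrationIndex.Compute(coeffs, classOfAtom, 2);
        Assert.AreEqual(1.0, sci, 1e-9);             // all weight on class 0, so SCI = 1
    }

    [TestMethod]
    public void Sci_IsZero_WhenCoefficientsAreSpreadEvenly()
    {
        double[] coeffs = { 0.3, 0.3 };
        int[] classOfAtom = { 0, 1 };
        double sci = SparsityConcentrationIndex.Compute(coeffs, classOfAtom, 2);
        Assert.AreEqual(0.0, sci, 1e-9);             // even spread across classes, so SCI = 0
    }
}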
• 62. ACCEPTANCE TESTING
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements. Acceptance testing for the system covered the following:
 Users have separate roles to modify the database tables.
 Users should have the ability to modify the privileges for a screen.
TEST RESULTS
All the test cases mentioned above passed successfully. No defects were encountered.
PROJECT CREATION USING RATIONAL ADMINISTRATOR
• 65. CONCLUSION
We proposed a novel joint sparsity-based feature-level fusion algorithm for multimodal biometrics recognition. The algorithm is robust, as it explicitly includes both noise and occlusion terms. An efficient algorithm based on alternating direction optimization was proposed for solving the optimization problem. We also proposed a multimodal quality measure based on sparse representation. Furthermore, the algorithm was kernelized to handle nonlinear variations. Various experiments have shown that the method is robust and significantly improves the overall recognition accuracy.
  • 66. REFERENCES [1] C.J. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” June 1998. [2] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet, “SimpleMKL,” 2008. [3] R. Bolle, J. Connell, S. Pankanti, N. Ratha, and A. Senior, “The Relation between the ROC Curve and the CMC,” 2005. [4] K. Nandakumar, Y. Chen, S. Dass, and A. Jain, “Likelihood Ratio Based Biometric Score Fusion,” Feb. 2008. [5] X.F.M. Yang, L. Zhang, and D. Zhang, “Fisher Discrimination Dictionary Learning for Sparse Representation,” 2011. [6] A. Jain, S. Prabhakar, L. Hong, and S. Pankanti, “Filterbank-Based Fingerprint Matching,” IEEE Trans. Image Processing, vol. 9, no. 5, pp. 846-859, May 2000. [7] S.S.S. Crihalmeanu, A. Ross, and L. Hornak, “A Protocol for Multibiometric Data Acquisition, Storage and Dissemination,” technical report, Lane Dept. of Computer Science and Electrical Eng., West Virginia Univ., 2007. [8] V.M. Patel, R. Chellappa, and M. Tistarelli, “Sparse Representations and Random Projections for Robust and Cancelable Biometrics,” Proc. Int’l Conf. Control, Automation, Robotics, and Vision, pp. 1-6, Dec. 2010. [9] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan, “Sparse Representation for Computer Vision and Pattern Recognition,” Proc. IEEE, vol. 98, no. 6, pp. 1031- 1044, June 2010. [10] X.-T. Yuan and S. Yan, “Visual Classification with Multi-Task Joint Sparse Representation,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 3493- 3500, June 2010. [11] N.H. Nguyen, N.M. Nasrabadi, and T.D. Tran, “Robust MultiSensor Classification via Joint Sparse Representation,” Proc. Int’l Conf. Information Fusion, pp. 1-8, July 2011.
  • 67. [12] A. Jain, S. Prabhakar, L. Hong, and S. Pankanti, “Filterbank-Based Fingerprint Matching,” IEEE Trans. Image Processing, vol. 9, no. 5, pp. 846-859, May 2000.