1. IEEE BASED SOFTWARE PROJECTS
pFusion: A P2P Architecture for Internet-Scale Content-Based Search and Retrieval
Demetrios Zeinalipour-Yazti, Member, IEEE, Vana Kalogeraki, Member, IEEE, and
Dimitrios Gunopulos, Member, IEEE
Abstract
The emerging Peer-to-Peer (P2P) model has become a very powerful and
attractive paradigm for developing Internet-scale systems for sharing resources,
including files and documents. The distributed nature of these systems, where
nodes are typically located across different networks and domains, inherently
hinders the efficient retrieval of information.
In this paper, we consider the effects of topologically aware overlay construction
techniques on efficient P2P keyword search algorithms. We present the Peer
Fusion (pFusion) architecture that aims to efficiently integrate heterogeneous
information that is geographically scattered on peers of different networks.
Our approach builds on work in unstructured P2P systems and uses only local
knowledge. Our empirical results, using the pFusion middleware architecture and
data sets from Akamai’s Internet mapping infrastructure (AKAMAI), the Active
Measurement Project (NLANR), and the Text REtrieval Conference (TREC) show
that the architecture we propose is both efficient and practical.
Index Terms—Information retrieval, peer-to-peer, overlay construction algorithms.
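The overlay-construction idea above can be illustrated with a small sketch (hypothetical latency values; this is not the pFusion algorithm itself, only the topologically aware neighbor-selection principle it builds on):

```python
def build_overlay(latency, k=2):
    """Connect each peer to its k topologically closest peers,
    using pairwise latency estimates (e.g. from ping probes)."""
    peers = sorted(latency)
    overlay = {}
    for p in peers:
        others = [q for q in peers if q != p]
        others.sort(key=lambda q: latency[p][q])  # closest first
        overlay[p] = others[:k]
    return overlay

# Hypothetical latency matrix (ms) between four peers.
lat = {
    "A": {"B": 5, "C": 40, "D": 60},
    "B": {"A": 5, "C": 35, "D": 70},
    "C": {"A": 40, "B": 35, "D": 10},
    "D": {"A": 60, "B": 70, "C": 10},
}
print(build_overlay(lat, k=1))  # each peer links to its nearest neighbor
```

With k=1 the two nearby pairs (A, B) and (C, D) link to each other, so keyword queries stay within a network region instead of crossing slow inter-domain links.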
28235816, ncctchennai@gmail.com, www.ncct.in
2. IEEE BASED SOFTWARE PROJECTS
A/I Net: a network that integrates ATM and IP
Chin-Tau Lea, Chi-Ying Tsui, Bo Li, C.-Y. Kwan, S.K.-M. Chan, and A.H.-W. Chan
Hong Kong Univ. of Sci. & Technol.
This paper appears in: Network, IEEE
Volume: 13, Issue: 1
On page(s): 48-55
ISSN: 0890-8044
References Cited: 30
CODEN: IENEET
INSPEC Accession Number: 6213014
Digital Object Identifier: 10.1109/65.750449
Posted online: 2002-08-06
Abstract
Future networks need both connectionless and connection-oriented services. IP
and ATM are major examples of the two types.
Connectionless IP is more efficient for browsing, e-mail, and other non-real-time
services; but for services demanding quality and real-time delivery, connection-
oriented ATM is a much better candidate.
Given the popularity of the Internet and the established status of ATM as the
broadband transport standard, it is unlikely that one can replace the other.
Therefore, the challenge we face lies in finding an efficient way to integrate the
two. This article describes a research project reflecting this trend.
The project aims at efficient integration of the two to eliminate the deficiencies of a standalone ATM or IP network.
3. IEEE BASED SOFTWARE PROJECTS
Distributed Cache Updating for the Dynamic Source
Routing Protocol
Xin Yu
Department of Computer Science
New York University
xinyu@cs.nyu.edu
Abstract
On-demand routing protocols use route caches to make routing decisions. Due to
mobility, cached routes easily become stale. To address the cache staleness
issue, prior work in DSR used heuristics with ad hoc parameters to predict the
lifetime of a link or a route.
However, heuristics cannot accurately estimate timeouts because topology
changes are unpredictable. In this paper, we propose proactively disseminating the
broken link information to the nodes that have that link in their caches. We define a
new cache structure called a cache table and present a distributed cache update
algorithm.
Each node maintains in its cache table the information necessary for cache
updates. When a link failure is detected, the algorithm notifies all reachable nodes
that have cached the link in a distributed manner. The algorithm does not use any
ad hoc parameters, thus making route caches fully adaptive to topology changes.
We show that the algorithm outperforms DSR with path caches and with Link-
MaxLife, an adaptive timeout mechanism for link caches. We conclude that
proactive cache updating is key to the adaptation of on-demand routing protocols
to mobility.
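A minimal, centralized sketch of the cache-table idea, assuming routes are represented as tuples of node names (the paper's algorithm performs this notification in a distributed manner along the cached routes themselves):

```python
def notify_link_failure(caches, broken):
    """When link `broken` = (u, v) fails, remove every cached route that
    uses it, at every node whose cache table contains the link."""
    u, v = broken
    notified = []
    for node, routes in caches.items():
        stale = [r for r in routes if (u, v) in zip(r, r[1:])]
        if stale:
            notified.append(node)
            for r in stale:
                routes.remove(r)   # evict the stale route
    return notified

caches = {
    "n1": [("n1", "n2", "n3")],
    "n2": [("n2", "n3"), ("n2", "n4")],
    "n4": [("n4", "n2")],
}
print(notify_link_failure(caches, ("n2", "n3")))  # ['n1', 'n2']
```

Note that n4 keeps its route untouched: only nodes actually caching the broken link are disturbed, which is what makes the caches adaptive without ad hoc timeout parameters.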
4. IEEE BASED SOFTWARE PROJECTS
Distributed Data Mining in Credit Card Fraud Detection
Credit card transactions continue to grow in number, taking an ever-larger share of the US payment system and leading to a higher rate of stolen account numbers and subsequent losses by banks. Improved fraud detection has thus become essential to maintaining the viability of the US payment system.
Banks have used early fraud warning systems for some years, but large-scale data-mining techniques can improve on the state of the art in commercial practice. Developing scalable techniques that analyze massive amounts of transaction data and efficiently compute fraud detectors in a timely manner is an important problem, especially for e-commerce.
Besides scalability and efficiency, the fraud-detection task exhibits technical problems that include skewed distributions of training data and nonuniform cost per error, neither of which has been widely studied in the knowledge-discovery and data-mining community.
In this article, we survey and evaluate a number of techniques that address these three main
issues concurrently. Our proposed methods of combining multiple learned fraud detectors under a
“cost model” are general and demonstrably useful; our empirical results demonstrate that we can
significantly reduce loss due to fraud through distributed data mining of fraud models.
Our approach
In today's increasingly electronic society, and with the rapid advances of electronic commerce on the Internet, the use of credit cards for purchases has become convenient and necessary. Credit card transactions have become the de facto standard for Internet and Web-based e-commerce. The US government estimates that credit cards accounted for approximately US $13 billion in Internet sales during 1998.
This figure is expected to grow rapidly each year. However, the growing number of credit card
transactions provides more opportunity for thieves to steal credit card numbers and subsequently
commit fraud. When banks lose money because of credit card fraud, cardholders pay for all of that
loss through higher interest rates, higher fees, and reduced benefits.
It is in cardholders' interest to reduce illegitimate use of credit cards through early fraud detection. For many years, the credit card industry has studied computing models for automated detection systems; recently, these models have been the subject of academic research, especially with respect to e-commerce.
The credit card fraud-detection domain presents a number of challenging issues for data mining:
• There are millions of credit card transactions processed each day. Mining such
massive amounts of data requires highly efficient techniques that scale.
• The data are highly skewed—many more transactions are legitimate than fraudulent.
• Typical accuracy-based mining techniques can generate highly accurate fraud detectors, yet high accuracy alone need not minimize losses, because errors carry nonuniform costs.
This scalable black-box approach for building efficient fraud detectors can significantly reduce loss due to illegitimate behavior. In many cases, the authors' methods outperform a well-known, state-of-the-art commercial fraud-detection system.
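A toy sketch of evaluating detectors under a cost model with nonuniform error costs, assuming a fixed investigation overhead per alert (the overhead figure and threshold here are illustrative, not from the article):

```python
def total_cost(transactions, scores, threshold, overhead=50.0):
    """Cost of a fraud detector under a simple cost model: flagging a
    transaction costs a fixed `overhead`; missing a fraud costs its amount."""
    cost = 0.0
    for (amount, is_fraud), score in zip(transactions, scores):
        if score >= threshold:
            cost += overhead          # investigate (true or false alarm)
        elif is_fraud:
            cost += amount            # missed fraud: lose the full amount
    return cost

txns = [(500, True), (20, False), (900, True), (30, False)]
scores = [0.9, 0.8, 0.3, 0.1]
# flagging only high scores misses the 900 fraud
assert total_cost(txns, scores, threshold=0.85) == 50 + 900
# a lower threshold catches it at extra investigation overhead
assert total_cost(txns, scores, threshold=0.25) == 3 * 50
```

This is why the article argues that accuracy alone is the wrong objective: a detector that misses one large fraud can cost more than one that raises several cheap alarms.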
5. IEEE BASED SOFTWARE PROJECTS
A Distributed Database Architecture for Global Roaming in
Next-Generation Mobile Networks
Zuji Mao, Member, IEEE, and Christos Douligeris, Senior Member, IEEE
Abstract
The next-generation mobile network will support terminal mobility, personal
mobility, and service provider portability, making global roaming seamless. A
location-independent personal telecommunication number (PTN) scheme is
conducive to implementing such a global mobile system. However, the
nongeographic PTNs coupled with the anticipated large number of mobile users in
future mobile networks may introduce very large centralized databases.
This necessitates research into the design and performance of high-throughput
database technologies used in mobile systems to ensure that future systems will
be able to carry efficiently the anticipated loads.
This paper proposes a scalable, robust, and efficient location database architecture based on location-independent PTNs. The proposed multitree database architecture consists of a number of database subsystems, each of which is a three-level tree structure and is connected to the others only through its root.
By exploiting the localized nature of calling and mobility patterns, the proposed
architecture effectively reduces the database loads as well as the signaling traffic
incurred by the location registration and call delivery procedures.
In addition, two memory-resident database indices, memory-resident direct file and
T-tree, are proposed for the location databases to further improve their throughput.
An analysis model and numerical results are presented to evaluate the efficiency of the proposed database architecture.
The results reveal that the proposed database architecture for location management can effectively support the anticipated high user density in future mobile networks.
Index Terms—Database architecture, location management, location tracking,
mobile networks
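The localized-lookup idea can be sketched as follows, with two hypothetical subsystems standing in for the three-level trees (the actual signaling procedures and the memory-resident direct-file and T-tree indices are beyond this sketch):

```python
class Subsystem:
    """One database subsystem; in the paper each is a three-level tree,
    here flattened to a dict of leaf-level location records."""
    def __init__(self, name):
        self.name = name
        self.locations = {}          # PTN -> current cell

    def lookup(self, ptn):
        return self.locations.get(ptn)

def find_user(subsystems, caller_subsystem, ptn):
    """Try the caller's own subsystem first (exploiting localized calling
    and mobility patterns); only on a miss query the other roots."""
    hit = subsystems[caller_subsystem].lookup(ptn)
    if hit is not None:
        return caller_subsystem, hit
    for name, db in subsystems.items():
        if name != caller_subsystem:
            hit = db.lookup(ptn)
            if hit is not None:
                return name, hit
    return None

dbs = {"east": Subsystem("east"), "west": Subsystem("west")}
dbs["west"].locations["555-0100"] = "cell-17"
print(find_user(dbs, "east", "555-0100"))  # ('west', 'cell-17')
```

Because most calls are local, the common case resolves inside one subsystem and never generates inter-root signaling traffic.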
6. IEEE BASED SOFTWARE PROJECTS
A Software Defect Report and Tracking
System in an Intranet
Abstract
This paper describes a case study in which SofTrack, a Software Defect Report and Tracking System, was implemented using Internet technology in a geographically distributed organization. Four medium- to large-sized information systems with different levels of maturity are analyzed within the scope of this project.
They belong to the Portuguese Navy’s Information Systems Infrastructure and
were developed using typical legacy systems technology: COBOL with embedded
SQL for queries in a Relational Database environment.
This pilot project in empirical software engineering has allowed the development of techniques that help software managers better understand, control, and ultimately improve the software process. Among them are the introduction of automatic system documentation, module complexity assessment, and effort estimation for maintenance activities in the organization.
7. IEEE BASED SOFTWARE PROJECTS
Secure Electronic Data Interchange over the Internet
The Electronic Data Interchange over the Internet (EDI-INT) standards provide a
secure means of transporting EDI and XML business documents over the Internet.
EDI-INT includes different implementation protocols that work over the Internet’s
three major transports — SMTP, HTTP, and FTP.
Each uses Secure Multipurpose Internet Mail Extensions (S/MIME), digital signatures, encryption, and message-receipt validation to ensure the necessary security for business-to-business communications.
Numerous retailers, manufacturers, and other companies within business
supply chains are leveraging Applicability Statement #2 (AS2) and other standards
developed by the IETF’s Electronic Data Interchange over the Internet (EDI-INT)
working group (www.imc.org/ietf-ediint/). Founded in 1996 to develop a secure
transport service for EDI business documents, the EDI-INT WG later expanded its
focus to include XML and virtually any other electronic business-documentation
format.
It began by providing digital security and message-receipt validation for Internet communication via MIME (Multipurpose Internet Mail Extensions) packaging of EDI. EDI-INT has since become the leading means of business-to-business (B2B) transport for retail and other industries.
Although invisible to the consumer, standards for secure electronic communication
of purchase orders, invoices, and other business transactions are helping
enterprises drive down costs and offer flexibility in B2B relationships. EDI-INT
provides digital security of email, Web, and FTP payloads through authentication,
content-integrity, confidentiality, and receipt validation.
8. IEEE BASED SOFTWARE PROJECTS
Building Intelligent Shopping Assistants Using Individual
Consumer Models
Chad Cumby, Andrew Fano, Rayid Ghani, Marko Krema
Accenture Technology Labs, 161 N. Clark St, Chicago, IL, USA
chad.m.cumby,andrew.e.fano,rayid.ghani,marko.krema@accenture.com
ABSTRACT
This paper describes an Intelligent Shopping Assistant designed for a shopping-cart-mounted tablet PC that enables individual interactions with customers. We use machine learning algorithms to predict a shopping list for the customer's current trip and present this list on the device.
As they navigate through the store, personalized promotions are presented using
consumer models derived from loyalty card data for each individual.
In order for shopping assistant devices to be effective, we believe that they have to
be powered by algorithms that are tuned for individual customers and can make
accurate predictions about an individual's actions.
We formally frame shopping list prediction as a classification problem, describe the algorithms and methodology behind our system, and show that shopping list prediction can be done with high levels of accuracy, precision, and recall.
Beyond the prediction of shopping lists, we briefly introduce other aspects of the shopping assistant project, such as the use of consumer models to select appropriate promotional tactics and the development of promotion-planning simulation tools that enable retailers to plan personalized promotions delivered through such a shopping assistant.
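As a rough stand-in for the learned per-customer classifiers, a frequency-based sketch of shopping-list prediction (the item names and support threshold are illustrative, not from the paper):

```python
def predict_shopping_list(history, min_support=0.5):
    """Naive per-customer predictor: include an item if the customer
    bought it in at least `min_support` of past trips. The paper instead
    trains classifiers over loyalty-card data, but the framing is the
    same: one include/exclude decision per item, per trip."""
    counts = {}
    for trip in history:
        for item in trip:
            counts[item] = counts.get(item, 0) + 1
    n = len(history)
    return sorted(i for i, c in counts.items() if c / n >= min_support)

trips = [{"milk", "bread"}, {"milk", "eggs"}, {"milk", "bread", "coffee"}]
assert predict_shopping_list(trips) == ["bread", "milk"]
```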
Categories and Subject Descriptors: H.2.8 [Database Management]: Database Applications (Data Mining)
General Terms: Algorithms, Economics, Experimentation
Keywords: Retail applications, Machine learning, Classification
9. IEEE BASED SOFTWARE PROJECTS
ObjectRank: Authority-Based Keyword Search in
Databases
Andrey Balmin IBM Almaden Research Center San Jose, CA 95120
abalmin@us.ibm.com
Vagelis Hristidis School of Computer Science Florida International University
Miami, FL 33199 vagelis@cs.fiu.edu
Yannis Papakonstantinou Computer Science UC, San Diego La Jolla, CA 92093
yannis@cs.ucsd.edu
Abstract
The ObjectRank system applies authority-based ranking to keyword search in databases modeled as labeled graphs.
Conceptually, authority originates at the nodes (objects) containing the keywords
and flows to objects according to their semantic connections.
Each node is ranked according to its authority with respect to the particular
keywords.
One can adjust the weight of global importance, the weight of each keyword of the
query, the importance of a result actually containing the keywords versus being
referenced by nodes containing them, and the volume of authority flow via each
type of semantic connection.
Novel performance challenges and opportunities are addressed. First, schemas impose constraints on the graph, which are exploited for performance purposes. Second, to address authority ranking with respect to the given keywords (as opposed to Google's global PageRank), we precompute single-keyword ObjectRanks and combine them at run time.
We conducted user surveys and a set of performance experiments on multiple real
and synthetic datasets, to assess the semantic meaningfulness and performance
of ObjectRank.
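A minimal sketch of keyword-rooted authority flow in the spirit of ObjectRank, assuming a single authority-transfer weight per edge (the actual system uses per-connection-type rates and precomputed per-keyword rankings):

```python
def object_rank(edges, base, damping=0.85, iters=50):
    """Authority flows from keyword-containing `base` nodes along graph
    edges, as in personalized PageRank; each edge (src, dst, w) carries
    an authority-transfer rate w for that semantic connection."""
    nodes = {n for e in edges for n in e[:2]} | set(base)
    rank = {n: (1.0 if n in base else 0.0) for n in nodes}
    for _ in range(iters):
        # base nodes keep injecting authority; others only receive it
        nxt = {n: ((1 - damping) if n in base else 0.0) for n in nodes}
        for src, dst, w in edges:
            nxt[dst] += damping * w * rank[src]
        rank = nxt
    return rank

# Hypothetical bibliographic graph: paper1 contains the keyword and cites paper2.
edges = [("paper1", "paper2", 0.7), ("paper2", "paper3", 0.7)]
r = object_rank(edges, base={"paper1"})
assert r["paper2"] > r["paper3"] > 0  # authority decays with distance
```

The decay with semantic distance is the point: an object merely referenced by keyword-containing objects ranks lower than one containing the keywords, and the weights tune that trade-off.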
10. IEEE BASED SOFTWARE PROJECTS
An Acknowledgment-based Approach for the Detection of
Routing Misbehavior in MANETs
Kejun Liu, Jing Deng, Pramod K. Varshney, and Kashyap Balakrishnan
Abstract
We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper.
In general, routing protocols for MANETs are designed based on the assumption
that all participating nodes are fully cooperative.
However, due to the open structure and scarcely available battery-based energy, node misbehaviors may exist. One such routing misbehavior is that some selfish nodes will participate in the route discovery and maintenance processes but refuse to forward data packets.
In this paper, we propose the 2ACK scheme, which serves as an add-on technique for routing schemes to detect routing misbehavior and to mitigate its adverse effect.
The main idea of the 2ACK scheme is to send two-hop acknowledgment packets in
the opposite direction of the routing path. In order to reduce additional routing
overhead, only a fraction of the received data packets are acknowledged in the
2ACK scheme. Analytical and simulation results are presented to evaluate the
performance of the proposed scheme.
Index Terms—Mobile Ad Hoc Networks (MANETs), routing misbehavior, node misbehavior, network security, Dynamic Source Routing (DSR)
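A simulation sketch of the partial-acknowledgment idea, assuming an idealized channel where a cooperative two-hop neighbor always returns the 2ACK (the paper's scheme adds timeouts and a misbehavior-report threshold):

```python
import random

def run_2ack(path_ok, n_packets=1000, ack_ratio=0.1, seed=1):
    """Each hop's sender expects a two-hop ACK for a fraction `ack_ratio`
    of its data packets; a misbehaving next-next hop that drops data
    never returns them, so missing ACKs accumulate as evidence."""
    rng = random.Random(seed)
    expected = received = 0
    for _ in range(n_packets):
        if rng.random() < ack_ratio:        # only a fraction is ACKed
            expected += 1
            if path_ok:                     # cooperative two-hop neighbor
                received += 1
    return expected, received

exp, got = run_2ack(path_ok=False)
assert exp > 0 and got == 0      # every sampled ACK missing: flag misbehavior
exp, got = run_2ack(path_ok=True)
assert exp == got                # well-behaved route
```

Acknowledging only a fraction of packets is what keeps the detection overhead low: the routing overhead scales with `ack_ratio`, not with the data rate.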
11. IEEE BASED SOFTWARE PROJECTS
A Self-Repairing Tree Topology Enabling Content-Based
Routing in Mobile Ad Hoc Networks
Luca Mottola, Gianpaolo Cugola, and Gian Pietro Picco
Abstract
Content-based routing (CBR) provides a powerful and flexible foundation for
distributed applications.
Its communication model, based on implicit addressing, fosters decoupling among
the communicating components, therefore meeting the needs of many dynamic
scenarios, including mobile ad hoc networks (MANETs).
Unfortunately, the characteristics of the CBR model are only rarely met by
available systems, which typically assume that application-level routers are
organized in a tree-shaped network with a fixed topology.
In this paper we present COMAN, a protocol that organizes the nodes of a MANET into a tree-shaped network able to: i) self-repair to tolerate the frequent topological reconfigurations typical of MANETs; and ii) achieve this goal through repair strategies that minimize the changes that may impact the CBR layer exploiting the tree.
COMAN is implemented and publicly available.
Here we report on its performance in simulated scenarios as well as in real-world experiments.
The results confirm that its characteristics enable reliable and efficient CBR on
MANETs.
Index Terms—Content-based routing, publish-subscribe, query-advertise, mobile ad hoc networks
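A minimal sketch of single-edge tree repair that avoids loops, loosely inspired by the goal of minimizing the changes visible to the CBR layer (COMAN's actual repair strategies differ; node names here are hypothetical):

```python
def repair_tree(parent, broken_child, candidates):
    """When the link to `broken_child`'s parent fails, reattach the whole
    subtree by changing a single edge: pick the first candidate that is
    not a descendant of `broken_child` (which would create a loop)."""
    def descendants(node):
        kids = {c for c, p in parent.items() if p == node}
        return kids | {d for c in kids for d in descendants(c)}
    forbidden = descendants(broken_child) | {broken_child}
    for cand in candidates:
        if cand not in forbidden:
            parent[broken_child] = cand   # one edge changes; subtree intact
            return cand
    return None

parent = {"b": "a", "c": "b", "d": "b"}   # a is the root
# link a-b breaks; b can hear c and d (its own subtree) and e
assert repair_tree(parent, "b", ["c", "d", "e"]) == "e"
assert parent["b"] == "e"
```

Because only the broken edge is replaced, subscriptions routed through the rest of the subtree need not be re-propagated.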
12. IEEE BASED SOFTWARE PROJECTS
Continuous k-Means Monitoring over Moving Objects
Zhenjie Zhang, Yin Yang, Anthony K.H. Tung, and Dimitris Papadias
Abstract
Given a dataset P, a k-means query returns k points in space (called centers),
such that the average squared distance between each point in P and its nearest
center is minimized.
Since this problem is NP-hard, several approximate algorithms have been
proposed and used in practice. In this paper, we study continuous k-means
computation at a server that monitors a set of moving objects.
Re-evaluating k-means every time there is an object update imposes a heavy
burden on the server (for computing the centers from scratch) and the clients (for
continuously sending location updates).
We overcome these problems with a novel approach that significantly reduces the
computation and communication costs, while guaranteeing that the quality of the
solution, with respect to the re-evaluation approach, is bounded by a user-defined
tolerance.
The proposed method assigns each moving object a threshold (i.e., range) such
that the object sends a location update only when it crosses the range boundary.
First, we develop an efficient technique for maintaining the k-means. Then, we
present mathematical formulae and algorithms for deriving the individual
thresholds. Finally, we justify our performance claims with extensive experiments.
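The client-side threshold test is the simplest piece of this design and can be sketched directly (deriving the individual thresholds themselves is the paper's contribution):

```python
def needs_update(last_reported, current, radius):
    """An object re-reports its location only when it moves farther than
    `radius` from the last position it sent to the server; inside the
    range it stays silent, saving communication."""
    dx = current[0] - last_reported[0]
    dy = current[1] - last_reported[1]
    return (dx * dx + dy * dy) ** 0.5 > radius

assert not needs_update((0, 0), (1, 1), radius=2.0)   # small move: stay silent
assert needs_update((0, 0), (3, 0), radius=2.0)       # crossed the boundary
```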
13. IEEE BASED SOFTWARE PROJECTS
Bandwidth Estimation for IEEE 802.11-Based Ad Hoc
Networks
Cheikh Sarr, Claude Chaudet, Guillaume Chelius, and Isabelle Guérin Lassous
Abstract
Since 2005, IEEE 802.11-based networks have been able to provide a certain level
of quality of service (QoS) by the means of service differentiation, due to the IEEE
802.11e amendment.
However, no mechanism or method has been standardized to accurately evaluate
the amount of resources remaining on a given channel.
Such an evaluation would, however, be a valuable asset for bandwidth-constrained applications. In multihop ad hoc networks, such an evaluation becomes even more difficult.
Consequently, despite the various contributions around this research topic, the
estimation of the available bandwidth still represents one of the main issues in this
field.
In this paper, we propose an improved mechanism to estimate the available
bandwidth in IEEE 802.11-based ad hoc networks.
Through simulations, we compare the accuracy of the estimation we propose to the
estimation performed by other state-of-the-art QoS protocols, BRuIT, AAC, and
QoS-AODV.
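As a first-order sketch, available bandwidth can be estimated by combining the idle fractions observed at both ends of a link under an independence assumption; refining exactly this kind of estimate (collisions, synchronization of idle periods) is what such mechanisms address, so treat the formula as a baseline, not the paper's method:

```python
def available_bandwidth(capacity_mbps, busy_sender, busy_receiver):
    """The channel must be idle at both ends of a link simultaneously
    for a transmission to succeed; assuming independent busy periods,
    the usable idle overlap is the product of the two idle fractions."""
    idle = (1 - busy_sender) * (1 - busy_receiver)
    return capacity_mbps * idle

# 11 Mb/s channel, sender busy 40% of the time, receiver busy 30%
assert round(available_bandwidth(11, 0.4, 0.3), 2) == 4.62
```

In a multihop network each hop yields such an estimate, and the path's available bandwidth is bounded by its tightest link.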
14. IEEE BASED SOFTWARE PROJECTS
Dual-Link Failure Resiliency Through Backup Link Mutual
Exclusion
Srinivasan Ramasubramanian, Member, IEEE, and Amit Chandak
Abstract
Networks employ link protection to achieve fast recovery from link failures. While
the first link failure can be protected using link protection, there are several
alternatives for protecting against the second failure.
This paper formally classifies the approaches to dual-link failure resiliency. One of
the strategies to recover from dual-link failures is to employ link protection for the
two failed links independently, which requires that two links may not use each
other in their backup paths if they may fail simultaneously.
Such a requirement is referred to as Backup Link Mutual Exclusion (BLME)
constraint and the problem of identifying a backup path for every link that satisfies
the above requirement is referred to as the BLME problem.
This paper develops the necessary theory to establish sufficient conditions for the existence of a solution to the BLME problem. Solution methodologies for the BLME problem are developed using two approaches: (1) formulating the backup path selection as an integer linear program; and (2) developing a polynomial-time heuristic based on minimum-cost path routing.
The ILP formulation and heuristic are applied to six networks, and their performance is compared to approaches that assume precise knowledge of dual-link failures. It is observed that a solution exists for all six networks considered.
The heuristic approach is shown to obtain feasible solutions that are resilient to
most dual-link failures, although the backup path lengths may be significantly
higher than optimal.
In addition, the paper illustrates the significance of knowledge of the failure location by showing that a network with higher connectivity may require less capacity than one with lower connectivity to recover from dual-link failures.
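The BLME constraint itself is easy to state in code, assuming backup paths are given as lists of links (finding paths that satisfy it network-wide is the hard part solved by the ILP and heuristic):

```python
def violates_blme(backup_paths, link_a, link_b):
    """BLME: two links that may fail simultaneously must not use each
    other in their backup paths. `backup_paths` maps a link to the list
    of links its backup path traverses."""
    return (link_b in backup_paths.get(link_a, []) and
            link_a in backup_paths.get(link_b, []))

backups = {
    ("A", "B"): [("A", "C"), ("C", "B")],
    ("C", "B"): [("C", "A"), ("A", "B")],
}
# (A,B)'s backup uses (C,B) while (C,B)'s backup uses (A,B): mutual use,
# so if both fail together neither backup path survives
assert violates_blme(backups, ("A", "B"), ("C", "B"))
```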
15. IEEE BASED SOFTWARE PROJECTS
A Geometric Approach to Improving Active Packet Loss
Measurement
Joel Sommers, Paul Barford, Nick Duffield, and Amos Ron
Abstract
Measurement and estimation of packet loss characteristics are challenging due to
the relatively rare occurrence and typically short duration of packet loss episodes.
While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools or their impact on the network.
The objective of our study is to understand how to measure packet loss episodes
accurately with end-to-end probes. We begin by testing the capability of standard
Poisson-modulated end-to-end measurements of loss in a controlled laboratory
environment using IP routers and commodity end hosts.
Our tests show that loss characteristics reported from such Poisson-modulated
probe tools can be quite inaccurate over a range of traffic conditions. Motivated by
these observations, we introduce a new algorithm for packet loss measurement
that is designed to overcome the deficiencies in standard Poisson-based tools.
Specifically, our method entails probe experiments that follow a geometric
distribution to (1) enable an explicit trade-off between accuracy and impact on the
network, and (2) enable more accurate measurements than standard Poisson
probing at the same rate.
We evaluate the capabilities of our methodology experimentally by developing and
implementing a prototype tool, called BADABING. The experiments demonstrate
the trade-offs between impact on the network and measurement accuracy. We
show that BADABING reports loss characteristics far more accurately than traditional loss measurement tools.
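A sketch of a geometric probe schedule, which sends a probe in each discrete time slot with probability p so that inter-probe gaps are geometrically distributed (BADABING's full design adds probe structure and loss-episode estimation on top of this; the slot size and p are tuning knobs):

```python
import random

def geometric_probe_times(p, n_probes, seed=7):
    """Return probe send times (in slots) where each gap between probes
    is geometrically distributed with mean 1/p; compare Poisson probing,
    whose continuous-time gaps are exponential."""
    rng = random.Random(seed)
    t, times = 0, []
    while len(times) < n_probes:
        t += 1
        if rng.random() < p:   # probe this slot with probability p
            times.append(t)
    return times

times = geometric_probe_times(p=0.2, n_probes=1000)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap = sum(gaps) / len(gaps)
assert 3 < mean_gap < 7   # mean geometric gap is about 1/p = 5 slots
```

Choosing p trades measurement accuracy against the probe traffic injected into the network, which is exactly the trade-off the paper makes explicit.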
16. IEEE BASED SOFTWARE PROJECTS
Security Requirements Engineering: A Framework for Representation and Analysis
Charles B. Haley, Robin Laney, Jonathan D. Moffett, Member, IEEE, and
Bashar Nuseibeh, Member, IEEE Computer Society
Abstract
This paper presents a framework for security requirements elicitation and analysis.
The framework is based on constructing a context for the system, representing
security requirements as constraints, and developing satisfaction arguments for the
security requirements.
The system context is described using a problem-oriented notation and is then validated against the security requirements through the construction of a satisfaction argument.
The satisfaction argument consists of two parts: a formal argument that the system
can meet its security requirements and a structured informal argument supporting
the assumptions expressed in the formal argument.
The construction of the satisfaction argument may fail, revealing either that the
security requirement cannot be satisfied in the context or that the context does not
contain sufficient information to develop the argument.
In this case, designers and architects are asked to provide additional design
information to resolve the problems. We evaluate the framework by applying it to a
security requirements analysis within an air traffic control technology evaluation
project.
Index Terms—Requirements engineering, security engineering, security requirements, argumentation
17. IEEE BASED SOFTWARE PROJECTS
Logarithmic Store-Carry-Forward Routing in Mobile Ad
Hoc Networks
Jie Wu and Shuhui Yang
Department of Computer Science and Engineering
Florida Atlantic University
Boca Raton, FL 33431
Fei Dai
Department of Electrical and Computer Engineering
North Dakota State University
Fargo, ND 58105
Abstract
Two schools of thought exist in terms of handling mobility in mobile ad hoc
networks (MANETs). One is the traditional connection-based model, which views
node mobility as undesirable and tries to either remove (through recovery
schemes) or mask (through tolerant schemes) the effect of mobility.
The other is the mobility-assisted model, which considers mobility as a desirable
feature, where routing is based on the store-carry-forward paradigm with random
or controlled movement of mobile nodes (called ferries).
It is well known that mobility increases the capacity of MANETs by reducing the
number of relays in routing. Surprisingly, only two models, diameter-hop-count in
the connection-based model and constant-hop-count in the mobility-assisted
model, which correspond to two extremes of the spectrum, have been
systematically studied.
In this paper, we propose a new routing model that deals with message routing as
well as trajectory planning of the ferries that carry the message. A logarithmic
number of relays is enforced to achieve a good balance among several
contradictory goals, including increasing network capacity, increasing ferry sharing,
and reducing moving distance.
The model considers the dynamic control of ferries in terms of the number of
ferries, trajectory planning of ferries, and node communication and
synchronization. The effectiveness of the proposed model is evaluated analytically
as well as through simulation.
Keywords: MANETs, mobile nodes, network capacity, store-carry-forward,
trajectory planning.
18. IEEE BASED SOFTWARE PROJECTS
A New TCP for Persistent Packet Reordering
Stephan Bohacek, João P. Hespanha, Junsoo Lee, Chansook Lim, and Katia
Obraczka
Abstract
Most standard implementations of TCP perform poorly when packets are
reordered. In this paper, we propose a new version of TCP that maintains high
throughput when reordering occurs and yet, when packet reordering does not
occur, is friendly to other versions of TCP.
The proposed TCP variant, or TCP-PR, does not rely on duplicate
acknowledgments to detect a packet loss. Instead, timers are maintained to keep
track of how long ago a packet was transmitted.
In case the corresponding acknowledgment has not yet arrived and the elapsed
time since the packet was sent is larger than a given threshold, the packet is
assumed lost. Because TCP-PR does not rely on duplicate acknowledgments,
packet reordering (including out-of-order acknowledgments) has no effect on
TCP-PR's performance.
Through extensive simulations, we show that TCP-PR performs consistently better
than existing mechanisms that try to make TCP more robust to packet reordering.
In the case that packets are not reordered, we verify that TCP-PR maintains the
same throughput as typical implementations of TCP (specifically, TCP-SACK) and
shares network resources fairly. Furthermore, TCP-PR only requires changes to
the TCP sender side making it easier to deploy.
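The timer-based detection idea described above can be pictured with a short sketch. This is an illustrative sketch only, not the authors' implementation; the class name, clock source, and threshold handling are assumptions.

```python
import time

class TimerLossDetector:
    """Sketch: presume a packet lost when its ACK has not arrived within a
    threshold after transmission, never relying on duplicate ACKs."""

    def __init__(self, threshold):
        self.threshold = threshold   # seconds to wait before presuming loss
        self.in_flight = {}          # sequence number -> send timestamp

    def on_send(self, seq, now=None):
        self.in_flight[seq] = time.monotonic() if now is None else now

    def on_ack(self, seq):
        # Out-of-order ACKs are harmless: each ACK just clears its own entry.
        self.in_flight.pop(seq, None)

    def presumed_lost(self, now=None):
        now = time.monotonic() if now is None else now
        return [s for s, t in self.in_flight.items()
                if now - t > self.threshold]
```

Because reordered or delayed ACKs merely clear entries late rather than triggering retransmission, reordering never masquerades as loss under this scheme.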
19. IEEE BASED
SOFTWARE PROJECTS
Location-based Spatial Queries with Data Sharing in
Wireless Broadcast Environments
Abstract
Location-based spatial queries (LBSQs) refer to spatial queries whose answers
rely on the location of the inquirer. Efficient processing of LBSQs is of critical
importance with the ever-increasing deployment and use of mobile technologies.
We show that LBSQs have certain unique characteristics that traditional spatial
query processing in centralized databases does not address. For example, a
significant challenge is presented by wireless broadcasting environments, which
often exhibit high-latency database access.
In this paper, we present a novel query processing technique that, while
maintaining high scalability and accuracy, manages to reduce the latency
considerably in answering location-based spatial queries.
Our approach is based on peer-to-peer sharing, which enables us to process
queries without delay at a mobile host by using query results cached in its
neighboring mobile peers. We illustrate the appeal of our technique through
extensive simulation results.
20. IEEE BASED
SOFTWARE PROJECTS
Distributed Suffix Tree for Peer-to-Peer Search
Hai Zhuge and Liang Feng
China Knowledge Grid Research Group, Key Lab of Intelligent Information
Processing
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Abstract
Establishing an appropriate semantic overlay on Peer-to-Peer networks to obtain
both semantic ability and scalability is a challenge. Current DHT-based P2P
networks are limited in their ability to support semantic search.
This paper proposes the DST (Distributed Suffix Tree) overlay as the intermediate
layer between the DHT overlay and the semantic overlay. The DST overlay
supports search of keyword sequences. Its time cost is sub-linear with the length
of the keyword sequences. Using a common interface, the DST overlay is
independent of the variation of the underlying DHT overlays.
Analysis and experiments show that DST-based search is fast, load-balanced, and
useful in realizing accurate content search on large networks.
Key words: DHT, Peer-to-Peer, Search, Semantics, Suffix Tree, Load Balance.
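The keyword-sequence search a suffix tree supports can be illustrated with an in-memory suffix trie, a centralized stand-in for the distributed structure; the names and node representation below are assumptions, not the paper's design.

```python
class SuffixTrie:
    """Sketch: index every suffix of each keyword sequence so that any
    contiguous keyword subsequence is found in time linear in query length."""

    def __init__(self):
        self.root = {}

    def index(self, doc_id, keywords):
        for i in range(len(keywords)):            # insert every suffix
            node = self.root
            for word in keywords[i:]:
                node = node.setdefault(word, {"_docs": set()})
                node["_docs"].add(doc_id)

    def search(self, query):
        node = self.root
        for word in query:
            if word not in node:
                return set()
            node = node[word]
        return node.get("_docs", set())
```

In the DST overlay each node of such a structure would live on the peer responsible for its DHT key, so lookup cost depends on the query, not on the corpus size.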
21. IEEE BASED
SOFTWARE PROJECTS
Dual-link failure resiliency through backup link mutual
exclusion
Source: IEEE/ACM Transactions on Networking (TON) archive
Volume 16, Issue 1 (February 2008)
Year of Publication: 2008
ISSN: 1063-6692
Abstract
Networks employ link protection to achieve fast recovery from link failures. While
the first link failure can be protected using link protection, there are several
alternatives for protecting against the second failure.
This paper formally classifies the approaches to dual-link failure resiliency. One of
the strategies to recover from dual-link failures is to employ link protection for the
two failed links independently, which requires that two links may not use each
other in their backup paths if they may fail simultaneously.
Such a requirement is referred to as backup link mutual exclusion (BLME)
constraint and the problem of identifying a backup path for every link that satisfies
the above requirement is referred to as the BLME problem. This paper develops
the necessary theory to establish the sufficient conditions for existence of a
solution to the BLME problem.
Solution methodologies for the BLME problem are developed using two approaches:
1) formulating the backup path selection as an integer linear program; 2)
developing a polynomial time heuristic based on minimum cost path routing. The
ILP formulation and heuristic are applied to six networks and their performance is
compared with approaches that assume precise knowledge of dual-link failure.
It is observed that a solution exists for all of the six networks considered. The
heuristic approach is shown to obtain feasible solutions that are resilient to most
dual-link failures, although the backup path lengths may be significantly higher
than optimal.
In addition, the paper illustrates the significance of knowledge of the failure
location by showing that a network with higher connectivity may require less
capacity than one with lower connectivity to recover from dual-link failures.
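The BLME constraint itself is straightforward to check once backup paths are chosen. A toy sketch follows (plain BFS backup paths, not the paper's ILP or heuristic; all names are illustrative):

```python
from collections import deque

def backup_path(adj, link):
    """Shortest backup path for `link` by BFS, avoiding the link itself."""
    (src, dst), banned = link, {frozenset(link)}
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # reconstruct path back to src
            path, n = [], u
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                queue.append(v)
    return None

def satisfies_blme(adj, link_a, link_b):
    """BLME holds for a pair iff neither backup path uses the other link."""
    def uses(path, link):
        return any(frozenset(e) == frozenset(link)
                   for e in zip(path, path[1:]))
    pa, pb = backup_path(adj, link_a), backup_path(adj, link_b)
    return (pa is not None and pb is not None
            and not uses(pa, link_b) and not uses(pb, link_a))
```

On a 4-node ring the two backup paths necessarily traverse each other's links, so BLME fails; adding chords (as in a complete graph) restores it, consistent with the observation that higher connectivity helps.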
22. IEEE BASED
SOFTWARE PROJECTS
Solving the Package Router Control problem
Jon G. Hall Lucia Rapanotti Michael A. Jackson
Centre for Research in Computing
The Open University
{J.G.Hall,L.Rapanotti}@open.ac.uk, jacksonma@acm.org
Abstract
Problem Orientation is gaining interest as a way of approaching the development
of software-intensive systems, and yet a significant example that explores its use
is missing from the literature. In this paper, we present the basic elements of
Problem Oriented Software Engineering (POSE), which aims to bring both
non-formal and formal aspects of software development together in a single framework.
We provide an example of a detailed and systematic POSE development of a
software problem, that of designing the controller for a package router. The
problem is drawn from the literature, but the analysis presented here is new. The
aim of the example is twofold: to illustrate the main aspects of POSE and how it
supports software engineering design, and to demonstrate how a non-trivial
problem can be dealt with by the approach.
23. IEEE BASED
SOFTWARE PROJECTS
Protection of Database Security Via Collaborative
Inference Detection
Yu Chen and Wesley W. Chu
Computer Science Department,
University of California, USA
{chenyu, wwc}@cs.ucla.edu
Abstract
Malicious users can exploit the correlation among data to infer sensitive
information from a series of seemingly innocuous data accesses. Thus, we develop
an inference violation detection system to protect sensitive data content.
Based on data dependency, database schema and semantic knowledge, we
constructed a semantic inference model (SIM) that represents the possible
inference channels from any attribute to the pre-assigned sensitive attributes.
The SIM is then instantiated to a semantic inference graph (SIG) for query-time
inference violation detection. For a single user case, when a user poses a query,
the detection system will examine his/her past query log and calculate the
probability of inferring sensitive information.
The query request will be denied if the inference probability exceeds the pre-
specified threshold. For multi-user cases, the users may share their query answers
to increase the inference probability.
Therefore, we develop a model to evaluate collaborative inference based on the
query sequences of collaborators and their task-sensitive collaboration levels.
Experimental studies reveal that information authoritativeness and communication
fidelity are two key factors that affect the level of achievable collaboration.
An example is given to illustrate the use of the proposed technique to prevent
multiple collaborative users from deriving sensitive information via inference.
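The query-time check can be pictured with a minimal sketch: score each inference channel toward a sensitive attribute and deny the query once any channel's probability exceeds the threshold. The product rule below is an assumption for illustration, not the SIM's actual propagation model.

```python
def inference_probability(chain_probs):
    """Probability of inferring the sensitive attribute along one channel,
    here taken as the product of per-edge conditional probabilities."""
    p = 1.0
    for q in chain_probs:
        p *= q
    return p

def admit_query(channels, threshold):
    """Deny (return False) if any inference channel exceeds the threshold."""
    return all(inference_probability(c) <= threshold for c in channels)
```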
24. IEEE BASED
SOFTWARE PROJECTS
On the Performance Benefits of
Multihoming Route Control
Aditya Akella, Member, IEEE, Bruce Maggs, Srinivasan Seshan, Member, IEEE,
Anees Shaikh, Member, IEEE, and Ramesh Sitaraman, Member, IEEE
Abstract
Multihoming is increasingly being employed by large enterprises and data centers to
extract good performance and reliability from their ISP connections.
Multihomed end networks today can employ a variety of route control products to
optimize their Internet access performance and reliability. However, little is known about
the tangible benefits that such products can offer, the mechanisms they employ and
their trade-offs.
This paper makes two important contributions. First, we present a study of the
potential improvements in Internet round-trip times (RTTs) and transfer speeds
from employing multihoming route control. Our analysis shows that multihoming to
3 or more ISPs and cleverly scheduling traffic across the ISPs can improve
Internet RTTs and throughputs by up to 25% and 20%, respectively.
However, a careful selection of ISPs is important to realize the performance
improvements. Second, focusing on large enterprises, we propose a wide range of
route control mechanisms and evaluate their design trade-offs. We implement the
proposed schemes on a Linux-based Web proxy and perform a trace-based
evaluation of their performance.
We show that both passive and active measurement based techniques are equally
effective and could improve the Web response times of enterprise networks by up to
25% on average, compared to using a single ISP. We also outline several “best
common practices” for the design of route control products.
Index Terms - Multihoming, performance, reliability.
25. IEEE BASED
SOFTWARE PROJECTS
HBA: Distributed Metadata Management for Large Cluster-
based Storage Systems
Yifeng Zhu, Member, IEEE, Hong Jiang, Member, IEEE Jun Wang, Member, IEEE,
Feng Xian, Student Member, IEEE,
Abstract
An efficient and distributed scheme for file mapping or file lookup is critical in
decentralizing metadata management within a group of metadata servers. This
paper presents a novel technique called HBA (Hierarchical Bloom filter Arrays) to
map filenames to the metadata servers holding their metadata. Two levels of
probabilistic arrays, namely, Bloom filter arrays, with different levels of accuracy,
are used on each metadata server.
One array, with lower accuracy and representing the distribution of the entire
metadata, trades accuracy for significantly reduced memory overhead, while the
other array, with higher accuracy, caches partial distribution information and
exploits the temporal locality of file access patterns. Both arrays are replicated to
all metadata servers to support fast local lookups. We evaluate HBA through
extensive trace-driven simulations and implementation in Linux.
Simulation results show our HBA design to be highly effective and efficient in
improving performance and scalability of file systems in clusters with 1,000 to
10,000 nodes (or superclusters) and with the amount of data in the Petabyte scale
or higher. Our implementation indicates that HBA can reduce metadata operation
time of a single-metadata-server architecture by a factor of up to 43.9 when the
system is configured with 16 metadata servers.
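The array lookup at the heart of HBA can be sketched with one Bloom filter per metadata server: a filename is routed to whichever servers' filters report it, with false positives possible, which is exactly the accuracy-versus-memory trade-off described above. The filter sizes and hashing scheme here are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0

    def _hashes(self, key):
        for i in range(self.k):                      # k independent hashes
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for h in self._hashes(key):
            self.bits |= 1 << h

    def __contains__(self, key):
        return all(self.bits >> h & 1 for h in self._hashes(key))

class BloomFilterArray:
    """Sketch: one filter per metadata server; lookup returns the candidate
    servers whose filter reports the filename."""

    def __init__(self, n_servers, m=1 << 16, k=4):
        self.filters = [BloomFilter(m, k) for _ in range(n_servers)]

    def record(self, server, filename):
        self.filters[server].add(filename)

    def lookup(self, filename):
        return [s for s, f in enumerate(self.filters) if filename in f]
```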
26. IEEE BASED
SOFTWARE PROJECTS
Energy-Efficient Resource Allocation in Wireless Networks:
An overview of game- theoretic approaches
Farhad Meshkati, H. Vincent Poor, and Stuart C. Schwartz
Abstract
An overview of game-theoretic approaches to energy-efficient resource allocation
in wireless networks is presented. Focusing on multiple-access networks, it is
demonstrated that game theory can be used as an effective tool to study resource
allocation in wireless networks with quality-of-service (QoS) constraints.
A family of non-cooperative (distributed) games is presented in which each user
seeks to choose a strategy that maximizes its own utility while satisfying its QoS
requirements. The utility function considered here measures the number of reliable
bits that are transmitted per joule of energy consumed and, hence, is particularly
suitable for energy-constrained networks.
The actions available to each user in trying to maximize its own utility are at least
the choice of the transmit power and, depending on the situation, the user may
also be able to choose its transmission rate, modulation, packet size, multiuser
receiver, multi-antenna processing algorithm, or carrier allocation strategy.
The best-response strategy and Nash equilibrium for each game is presented.
Using this game-theoretic framework, the effects of power control, rate control,
modulation, temporal and spatial signal processing, carrier allocation strategy and
delay QoS constraints on energy efficiency and network capacity are quantified.
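The bits-per-joule utility lends itself to a small numeric sketch. The efficiency function, channel gain, and power grid below are assumed illustrative values, not the paper's parameters; the best response is found by a plain sweep over candidate transmit powers.

```python
import math

def utility(p, R=1e5, h=1e-3, noise=1e-7, M=80):
    """Reliable bits per joule: rate times packet success probability,
    divided by transmit power, with a sigmoidal efficiency function."""
    gamma = h * p / noise                 # received SIR at transmit power p
    f = (1 - math.exp(-gamma)) ** M       # probability an M-bit packet survives
    return R * f / p

# Best response of a single user: sweep candidate powers (watts).
powers = [i * 1e-4 for i in range(1, 2001)]
best = max(powers, key=utility)
```

The maximizer sits at an interior power: too little power makes packets unreliable, too much wastes energy, which is the trade-off the game's equilibrium balances.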
27. IEEE BASED
SOFTWARE PROJECTS
Building a Distributed E-Healthcare System Using SOA
March/April 2008 (vol. 10 no. 2) pp. 24-30
This article describes a distributed e-healthcare system that uses the service-
oriented architecture as a means of designing, implementing, and managing
healthcare services.
Index Terms:
Atom, RSS, e-healthcare, electronic health record, e-prescription, healthcare
standards, interoperability, medical devices, service-oriented architecture, SOA,
speech software, Web services
Citation: Firat Kart, Louise E. Moser, P. Michael Melliar-Smith, "Building a
Distributed E-Healthcare System Using SOA," IT Professional, vol. 10, no. 2, pp.
24-30, Mar/Apr, 2008
28. IEEE BASED
SOFTWARE PROJECTS
Impact of user participation on Web-based information
system: The Hong Kong experience
Quaddus, M. Lau, A.
Grad. Sch. of Bus., Curtin Univ. of Technol., Perth, WA;
This paper appears in: Computer and Information Technology, 2008. ICCIT 2007.
10th International Conference on
Publication Date: 27-29 Dec. 2007
Location: Dhaka
ISBN: 978-1-4244-1550-2
INSPEC Accession Number: 10114576
Digital Object Identifier: 10.1109/ICCITECHN.2007.4579419
Current Version Published: 2008-07-25
Abstract
The rapid growth of highly sophisticated computers and Web-based information
systems (WIS) as integral components of business operations has led to an
increased interest in the role of user participation during WIS implementation and
its influence on end-user satisfaction and, ultimately, organisational success.
The primary purpose of this research is, therefore, to investigate the significance of
user-characteristics during WIS implementation. The research is conducted by
collecting data via survey among organizations in Hong Kong.
The important findings of this study demonstrate that user participation is positively
related to user satisfaction and organisational effectiveness. In addition, user
satisfaction can be largely applied to mediate the relationship between user
participation (through user training, career stage, and empowerment) and
organisational effectiveness.
A deeper understanding of these concepts will provide organisations in Hong Kong
with a richer view of the role of user participation during Web based information
system implementation, which in turn has the potential to contribute towards
improved business performance.
29. IEEE BASED
SOFTWARE PROJECTS
Dual-Link Failure Resiliency
Through Backup Link Mutual Exclusion
Srinivasan Ramasubramanian, Member, IEEE, and Amit Chandak
Abstract
Networks employ link protection to achieve fast recovery from link failures. While
the first link failure can be protected using link protection, there are several
alternatives for protecting against the second failure. This paper formally classifies
the approaches to dual-link failure resiliency. One of the strategies to recover from
dual-link failures is to employ link protection for the two failed links independently,
which requires that two links may not use each other in their backup paths if they
may fail simultaneously.
Such a requirement is referred to as Backup Link Mutual Exclusion (BLME)
constraint and the problem of identifying a backup path for every link that satisfies
the above requirement is referred to as the BLME problem. This paper develops
the necessary theory to establish the sufficient conditions for existence of a
solution to the BLME problem.
Solution methodologies for the BLME problem are developed using two approaches:
(1) formulating the backup path selection as an integer linear program; and (2)
developing a polynomial time heuristic based on minimum cost path routing. The
ILP formulation and heuristic are applied to six networks and their performance is
compared to approaches that assume precise knowledge of dual-link failure.
It is observed that a solution exists for all the six networks considered. The
heuristic approach is shown to obtain feasible solutions that are resilient to most
dual-link failures, although the backup path lengths may be significantly higher
than optimal.
In addition, the paper illustrates the significance of knowledge of the failure
location by showing that a network with higher connectivity may require less
capacity than one with lower connectivity to recover from dual-link failures.
30. IEEE BASED
SOFTWARE PROJECTS
Dual-resource TCP/AQM for
processing-constrained networks
Abstract
This paper examines congestion control issues for TCP flows that require in-
network processing on the fly in network elements such as gateways, proxies,
firewalls and even routers.
Applications of these flows will become increasingly abundant as the Internet
evolves. Since these flows require the use of CPUs in network elements, both
bandwidth and CPU resources can be a bottleneck, and thus congestion control
must deal with “congestion” on both of these resources.
In this paper, we show that conventional TCP/AQM schemes can significantly lose
throughput and suffer harmful unfairness in this environment, particularly when
CPU cycles become more scarce (which is likely the trend given the recent
explosive growth rate of bandwidth).
As a solution to this problem, we establish a notion of dual-resource proportional
fairness and propose an AQM scheme, called Dual-Resource Queue (DRQ), that
can closely approximate proportional fairness for TCP Reno sources with
in-network processing requirements.
DRQ is scalable because it does not maintain per-flow state while minimizing
communication among different resource queues, and it is also incrementally
deployable because it requires no change to TCP stacks. The simulation study
shows that DRQ approximates proportional fairness without much implementation
cost and even an incremental deployment of DRQ at the edge of the Internet
improves the fairness and throughput of these TCP flows. Our work is at its early
stage and might lead to an interesting development in congestion control research.
31. IEEE BASED
SOFTWARE PROJECTS
Dynamic Signature Verification: A Stroke-Based Algorithm
for Dynamic Signature Verification
Tong Qu; El Saddik, A.; Adler, A.
Electrical and Computer Engineering, 2004. Canadian Conference on
Volume 1, 2-5 May 2004, Page(s): 461 - 464 Vol. 1
Summary:
Dynamic signature verification (DSV) uses the behavioral biometrics of a hand-
written signature to confirm the identity of a computer user. This paper presents a
novel stroke-based algorithm for DSV. An algorithm is developed to convert
sample signatures to a template by considering their spatial and time domain
characteristics, and by extracting features in terms of individual strokes.
Individual strokes are identified by finding the points where there is: 1) a decrease
in pen tip pressure; 2) a decrease in pen velocity; and 3) a rapid change in pen angle.
A significant stroke is discriminated by the maximum correlation with respect to the
reference signatures.
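The stroke-boundary cues above can be approximated on sampled pen data. The rule below is a hypothetical simplification for illustration: it uses only pressure minima gated by low velocity, ignoring pen angle, and the thresholds are assumed inputs.

```python
def stroke_boundaries(pressure, velocity, p_thresh, v_thresh):
    """Sketch: mark sample indices where pressure hits a local minimum below
    p_thresh while velocity is also below v_thresh."""
    bounds = []
    for i in range(1, len(pressure) - 1):
        local_min = (pressure[i] <= pressure[i - 1]
                     and pressure[i] <= pressure[i + 1])
        if local_min and pressure[i] < p_thresh and velocity[i] < v_thresh:
            bounds.append(i)
    return bounds
```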
Between each pair of signatures, the local correlation comparisons are computed
between portions of pressure and velocity signals using segment alignment by
elastic matching.
Experimental results were obtained for signatures from 10 volunteers over a
four-month period. The results show that stroke-based features contain robust
dynamic information and offer greater accuracy for dynamic signature verification,
in comparison to results obtained without stroke features.
32. IEEE BASED
SOFTWARE PROJECTS
TCP-LP: Low-Priority Service via
End-Point Congestion Control
Aleksandar Kuzmanovic and Edward W. Knightly
Abstract
Service prioritization among different traffic classes is an important goal for the
Internet. Conventional approaches to solving this problem consider the existing
best-effort class as the low-priority class, and attempt to develop mechanisms that
provide “better-than-best-effort” service.
In this paper, we explore the opposite approach, and devise a new distributed
algorithm to realize a low-priority service (as compared to the existing best effort)
from the network endpoints. To this end, we develop TCP Low Priority (TCP-LP), a
distributed algorithm whose goal is to utilize only the excess network bandwidth as
compared to the “fair share” of bandwidth as targeted by TCP.
The key mechanisms unique to TCP-LP congestion control are the use of one-way
packet delays for early congestion indications and a TCP-transparent congestion
avoidance policy.
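The one-way-delay signal can be sketched as follows; the smoothing weight and the threshold fraction between the minimum and maximum observed delays are illustrative values, not TCP-LP's published constants.

```python
class EarlyCongestionIndicator:
    """Sketch: flag congestion early, before loss, when the smoothed one-way
    delay crosses a threshold between the min and max delays seen so far."""

    def __init__(self, gamma=0.2, alpha=0.125):
        self.gamma, self.alpha = gamma, alpha
        self.d_min = self.d_max = self.d_smooth = None

    def update(self, one_way_delay):
        if self.d_min is None:             # first sample initializes state
            self.d_min = self.d_max = self.d_smooth = one_way_delay
        self.d_min = min(self.d_min, one_way_delay)
        self.d_max = max(self.d_max, one_way_delay)
        self.d_smooth = ((1 - self.alpha) * self.d_smooth
                         + self.alpha * one_way_delay)
        threshold = self.d_min + self.gamma * (self.d_max - self.d_min)
        return self.d_smooth > threshold   # True: back off before packet loss
```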
The results of our simulation and Internet experiments show that:
(1) TCP-LP is largely non-intrusive to TCP traffic; (2) both single and aggregate
TCP-LP flows are able to successfully utilize excess network bandwidth; moreover,
multiple TCP-LP flows share excess bandwidth fairly; (3) substantial amounts of
excess bandwidth are available to the low-priority class, even in the presence of
“greedy” TCP flows; (4) the response times of web connections in the best-effort
class decrease by up to 90% when long-lived bulk data transfers use TCP-LP
rather than TCP; (5) despite their low-priority nature, TCP-LP flows are able to
utilize significant amounts of available bandwidth in a wide-area network
environment.
Keywords
TCP-LP, TCP, available bandwidth, service prioritization, TCP-transparency.
33. IEEE BASED
SOFTWARE PROJECTS
Dynamic Load Balancing in Distributed
Systems in the Presence of Delays:
A Regeneration-Theory Approach
Source: IEEE Transactions on Parallel and Distributed Systems archive
Volume 18, Issue 4 (April 2007), Pages 485-497
Year of Publication: 2007
ISSN: 1045-9219
Authors: Sagar Dhakal, Jorge E. Pezoa, Cundong Yang, Majeed M. Hayat (Senior
Member, IEEE), David A. Bader (Senior Member, IEEE)
Publisher: IEEE Press, Piscataway, NJ, USA
Abstract
A regeneration-theory approach is undertaken to analytically characterize the
average overall completion time in a distributed system. The approach considers
the heterogeneity in the processing rates of the nodes as well as the randomness
in the delays imposed by the communication medium.
The optimal one-shot load balancing policy is developed and subsequently
extended to develop an autonomous and distributed load-balancing policy that can
dynamically reallocate incoming external loads at each node. This adaptive and
dynamic load balancing policy is implemented and evaluated in a two-node
distributed system.
The performance of the proposed dynamic load-balancing policy is compared to
that of static policies as well as existing dynamic load-balancing policies by
considering the average completion time per task and the system processing rate
in the presence of random arrivals of the external loads.
34. IEEE BASED
SOFTWARE PROJECTS
Controlling IP Spoofing
through Interdomain Packet Filters
Source IEEE Transactions on Dependable and Secure Computing archive
Volume 5, Issue 1 (January 2008)
Year of Publication: 2008
ISSN: 1545-5971
Abstract
The Distributed Denial of Service (DDoS) attack is a serious threat to the
legitimate use of the Internet. Prevention mechanisms are thwarted by the ability of
attackers to forge, or spoof, the source addresses in IP packets.
By employing IP spoofing, attackers can evade detection and put a substantial
burden on the destination network for policing attack packets. In this paper, we
propose an inter-domain packet filter (IDPF) architecture that can mitigate the level
of IP spoofing on the Internet.
A key feature of our scheme is that it does not require global routing information.
IDPFs are constructed from the information implicit in BGP route updates and are
deployed in network border routers. We establish the conditions under which the
IDPF framework works correctly in that it does not discard packets with valid
source addresses.
Based on extensive simulation studies, we show that even with partial deployment
on the Internet, IDPFs can proactively limit the spoofing capability of attackers. In
addition, they can help localize the origin of an attack packet to a small number of
candidate networks.
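The filtering rule itself is simple once the feasible-neighbor sets have been derived from BGP updates; that derivation is the substance of the paper and is assumed given here, and all names are illustrative.

```python
class InterdomainPacketFilter:
    """Sketch: drop a packet claiming source AS s when it arrives through a
    neighbor that lies on no feasible route from s."""

    def __init__(self, feasible_neighbors):
        # source AS -> set of neighbor ASes its packets may arrive through
        self.feasible = feasible_neighbors

    def accept(self, src_as, arriving_neighbor):
        return arriving_neighbor in self.feasible.get(src_as, set())
```

A spoofed packet is delivered only if the attacker happens to sit behind a feasible neighbor for the forged source, which is why even partial deployment narrows an attack's possible origins to a few candidate networks.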
35. IEEE BASED
SOFTWARE PROJECTS
C-TREND: Temporal Cluster Graphs for Identifying and
Visualizing Trends in Multiattribute Transactional Data
Adomavicius, G.; Bockstedt, J.
Knowledge and Data Engineering, IEEE Transactions on
Volume 20, Issue 6, June 2008 Page(s):721 - 735
Digital Object Identifier 10.1109/TKDE.2008.31
Summary:
Organizations and firms are capturing increasingly more data about their
customers, suppliers, competitors, and business environment.
Most of this data is multiattribute (multidimensional) and temporal in nature. Data
mining and business intelligence techniques are often used to discover patterns in
such data; however, mining temporal relationships is typically a complex task.
We propose a new data analysis and visualization technique for representing
trends in multiattribute temporal data using a clustering- based approach.
We introduce Cluster-based Temporal Representation of EveNt Data (C-TREND),
a system that implements the temporal cluster graph construct, which maps
multiattribute temporal data to a two-dimensional directed graph that identifies
trends in dominant data types over time.
In this paper, we present our temporal clustering-based technique, discuss its
algorithmic implementation and performance, demonstrate applications of the
technique by analyzing data on wireless networking technologies and baseball
batting statistics, and introduce a set of metrics for further analysis of discovered
trends.
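The temporal cluster graph construct can be sketched as follows: cluster each period's transactions separately (centroids are assumed given here), then connect clusters in consecutive periods whose centroids are close. The distance-threshold linking rule is an illustrative assumption, not C-TREND's exact criterion.

```python
def temporal_cluster_graph(period_centroids, link_dist):
    """Sketch: edges between clusters of consecutive periods whose centroids
    lie within link_dist; nodes are (period, cluster-index) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    edges = []
    for t in range(len(period_centroids) - 1):
        for i, c in enumerate(period_centroids[t]):
            for j, d in enumerate(period_centroids[t + 1]):
                if dist(c, d) <= link_dist:
                    edges.append(((t, i), (t + 1, j)))
    return edges
```

Chains of linked clusters across periods are the "trends in dominant data types over time" the abstract refers to.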
36. IEEE BASED
SOFTWARE PROJECTS
Dynamic signature verification
using discriminative training
Russell, G.F.; Jianying Hu; Biem, A.; Heilper, A.; Markman, D.
IBM T.J. Watson Res. Center, Yorktown Heights, NY, USA
This paper appears in: Document Analysis and Recognition, 2005. Proceedings.
Eighth International Conference on
Publication Date: 29 Aug.-1 Sept. 2005
On page(s): 1260 - 1264 Vol. 2
Number of Pages: xxv+1290
ISSN: 1520-5263
Digital Object Identifier: 10.1109/ICDAR.2005.95
Posted online: 2006-01-16 09:05:15.0
Abstract
In this paper we describe a new approach to dynamic signature verification using
the discriminative training framework. The authentic and forgery samples are
represented by two separate Gaussian Mixture models and discriminative training
is used to achieve optimal separation between the two models.
An enrollment sample clustering and screening procedure is described which
improves the robustness of the system. We also introduce a method to estimate
and apply subject norms representing the "typical" variation of the subject's
signatures.
The subject norm functions are parameterized, and the parameters are trained as
an integral part of the discriminative training. The system was evaluated using 480
authentic signature samples and 260 skilled forgery samples from 44 accounts and
achieved an equal error rate of 2.25%.
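The two-model decision rule can be sketched with one-dimensional Gaussian mixtures: accept a signature when the authentic model out-scores the forgery model. The feature representation and margin here are illustrative assumptions, and the discriminative training that separates the two models is not shown.

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a 1-D Gaussian."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def verify(features, authentic, forgery, margin=0.0):
    """Accept iff the authentic mixture's log-likelihood beats the forgery
    mixture's by more than margin. Models: lists of (weight, mu, sigma)."""
    def score(model):
        return sum(math.log(sum(w * math.exp(log_gauss(x, m, s))
                                for w, m, s in model))
                   for x in features)
    return score(authentic) - score(forgery) > margin
```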
37. IEEE BASED
SOFTWARE PROJECTS
An Augmented Lagrangian Approach for Distributed
Supply Chain Planning for Multiple Companies
Nishi, T.; Shinozaki, R.; Konishi, M.
Automation Science and Engineering, IEEE Transactions on
Volume 5, Issue 2, April 2008 Page(s):259 - 274
Digital Object Identifier 10.1109/TASE.2007.894727
Summary:
Planning coordination for multiple companies has received much attention from
viewpoints of global supply chain management. In practical situations, a plausible
plan for multiple companies should be created by mutual negotiation and
coordination without sharing such confidential information as inventory costs,
setup costs, and due date penalties for each company.
In this paper, we propose a framework for distributed optimization of supply chain
planning using an augmented Lagrangian decomposition and coordination
approach. A feature of the proposed method is that it can derive a near-optimal
solution without requiring all of the information.
The proposed method is applied to supply chain planning problems for a
petroleum complex, and a midterm planning problem for multiple companies.
Computational experiments demonstrate that the average gap between a solution
derived by the proposed method and the optimal solution is within 3% of the
performance index, even though only local information is used to derive a
solution for each company.
38. IEEE BASED
SOFTWARE PROJECTS
An Assessment of Dynamic Signature Forgery and
Perception of Signature Strength
Elliott, S. Hunt, A.
Dept. of Ind. Technol., Purdue Univ., West Lafayette, IN
This paper appears in: Carnahan Conferences Security Technology, Proceedings
2006 40th Annual IEEE International
Publication Date: Oct. 2006
On page(s): 186 - 190
Location: Lexington, KY
Digital Object Identifier: 10.1109/CCST.2006.313448
Posted online: 2007-02-20 06:36:18.0
Abstract
Dynamic signature verification has many challenges associated with the creation of
the impostor dataset.
The literature discusses several ways of determining the impostor signature
provider, but this paper takes a different approach - that of the opportunistic forger
and his or her relationship to the genuine signature holder.
The paper examines the accuracy with which an opportunistic forger assesses the
various traits of the genuine signature, and whether the genuine signature holder
believes that his or her signature is easy to forge.
39. IEEE BASED
SOFTWARE PROJECTS
Continuous Delivery Message Dissemination Problems
under the Multicasting Communication Mode
Gonzalez, T.F.
Parallel and Distributed Systems, IEEE Transactions on
Volume 19, Issue 8, Aug. 2008 Page(s): 1034 - 1043
Digital Object Identifier 10.1109/TPDS.2007.70801
Summary:
We consider the continuous delivery message dissemination (CDMD) problem
over the n-processor single-port complete (all links are present and bidirectional)
static network with the multicasting communication primitive.
This problem has been shown to be NP-complete even when all messages have
equal length. For the CDMD problem we present an efficient approximation
algorithm to construct a message routing schedule with total communication time
at most 3.5d, where d is the total length of the messages that each processor
needs to send or receive.
The algorithm takes O(qn) time, where n is the number of processors and q is the
total number of messages that the processors receive.
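As a hedged sketch (not the paper's algorithm), the load bound d in the summary above can be computed directly; the message format (source, destination list, length) is an assumption for illustration:

```python
def degree_bound(messages):
    """Compute d: the maximum total message length that any single
    processor must send or receive.  With the multicasting primitive a
    source transmits each message once, so its length is charged once
    to the sender and once to every receiver.  Assumed message format:
    (source, [destinations], length)."""
    load = {}
    for src, dests, length in messages:
        load[src] = load.get(src, 0) + length      # sending cost
        for dst in dests:
            load[dst] = load.get(dst, 0) + length  # receiving cost
    return max(load.values(), default=0)
```

Per the summary, the schedule the algorithm produces then has total communication time at most 3.5 * degree_bound(messages).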
40. IEEE BASED SOFTWARE PROJECTS
An agent-based testing approach for Web applications
Qi, Y.; Kung, D.; Wong, E.
Computer Software and Applications Conference, 2005. COMPSAC 2005. 29th
Annual International
Volume 2, 26-28 July 2005, Page(s): 45 - 50
Digital Object Identifier 10.1109/COMPSAC.2005.42
Summary:
In recent years, Web applications have grown so quickly that they have already
become crucial to the success of businesses. However, since they are built on the Internet and open-standard technologies, Web applications bring new challenges to researchers, such as dynamic behaviors, heterogeneous representations, and novel control-flow and data-flow mechanisms.
In this paper, we propose an agent-based approach for Web application testing.
The agent-based framework greatly reduces the complexity of Web applications, so a four-level data-flow test approach can be employed to perform structural testing on them.
In this approach, data-flow analysis is performed at the function level, function-cluster level, object level, and Web-application level, from low to high abstraction. Each test agent in the framework handles the testing at one abstraction level for a particular type of Web document or object.
41. IEEE BASED SOFTWARE PROJECTS
Dynamic signature verification system using stroke-based features
Tong Qu; Abdulmotaleb El Saddik; Adler, A.
VIVA Lab., Univ. of Ottawa, Ont., Canada
This paper appears in: Haptic, Audio and Visual Environments and Their
Applications, 2003. HAVE 2003. Proceedings. The 2nd IEEE International Workshop
on
Publication Date: 20-21 Sept. 2003
On page(s): 83 - 88
Number of Pages: viii+124
Posted online: 2003-11-10 09:46:00.0
Abstract
This paper presents a novel feature-based dynamic signature verification system.
Data is acquired from a Patriot digital pad, using the Windows Pen API. The
signatures are analyzed dynamically by considering their spatial and time domain
characteristics.
A stroke-based feature extraction method is studied, in which strokes are
separated by the zero pressure points. Between each pair of signatures, the
correlation comparisons are conducted for strokes.
A significant stroke is discriminated by the maximum correlation with respect to the
reference signatures. The correlation value and stroke length for the significant
strokes are extracted as features for identifying genuine signatures against
forgeries.
The membership function and classifier are modeled based on the probabilistic
distribution of selected features. Experimental results were obtained for signatures
from 20 volunteers. The current 6-feature based signature verification system was
calculated to have a false accept rate of 1.67% and false reject rate of 6.67%.
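A minimal sketch of the stroke segmentation and correlation steps described above; the sample format and the use of Pearson correlation are assumptions for illustration, not the authors' exact implementation:

```python
def split_strokes(samples):
    """Split a signature, given as (x, y, pressure) samples, into
    strokes separated by zero-pressure points."""
    strokes, current = [], []
    for x, y, p in samples:
        if p > 0:
            current.append((x, y))
        elif current:
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes

def pearson(a, b):
    """Pearson correlation of two equal-length scalar sequences,
    used here to compare a stroke against a reference stroke."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0
```

A significant stroke would then be the one whose correlation against the reference signatures is maximal, with its correlation value and length used as features.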
42. IEEE BASED SOFTWARE PROJECTS
QoS-aware connection resilience for network-aware grid computing fault tolerance
Valcarenghi, L. Castoldi, P.
Center of Excellence for Commun. Networks Eng., Scuola Superiore Sant' Anna,
Pisa, Italy
This paper appears in: Transparent Optical Networks, 2005, Proceedings of 2005
7th International Conference
Publication Date: 3-7 July 2005
Volume: 1
On page(s): 417 - 422 Vol. 1
Number of Pages: 2 vol. (x+448)
Digital Object Identifier: 10.1109/ICTON.2005.1505834
Posted online: 2005-09-12 09:08:00.0
Abstract
Current grid computing fault tolerance leverages IP dynamic rerouting and
schemes implemented in the application or in the middleware to overcome both
software and hardware failures. Despite the flexibility of current grid computing fault-tolerant schemes in recovering inter-service connectivity from an almost comprehensive set of failures, they might not be able to restore connection QoS guarantees as well, such as minimum bandwidth and maximum delay.
This phenomenon is exacerbated when, as in global grid computing, the grid
computing sites are not connected by dedicated network resources but share the
same network infrastructure with other Internet services. This paper aims at showing the advantages of integrating grid computing fault-tolerance schemes with next-generation network (NGN) resilience schemes.
Indeed, by combining the utilization of generalized multi-protocol label switching
(GMPLS) resilient schemes, such as path restoration, and application or
middleware layer fault tolerant schemes, such as service migration or replication, it
is possible to guarantee the necessary QoS to the connections between grid
computing sites while limiting the required network and computational resources.
43. IEEE BASED SOFTWARE PROJECTS
Performance Analysis of a P2P-Based VoIP Software
Gao Lisha; Luo Junzhou
Southeast University, Nanjing, China
This paper appears in: Telecommunications, 2006. AICT-ICIW '06. International
Conference on Internet and Web Applications and Services/Advanced International
Conference on
Publication Date: 19-25 Feb. 2006
On page(s): 11 - 11
Digital Object Identifier: 10.1109/AICT-ICIW.2006.147
Posted online: 2006-04-03 15:44:59.0
Abstract
With the development of networks, multimedia will be the main application in the next-generation network, and voice is one of its most important applications. Recently, a P2P-based VoIP application, Skype, has been receiving increasing attention in both academia and industry.
Skype claims that it is better than other VoIP software because of its high call-completion rate and superior sound quality. This paper reveals Skype's techniques and compares the performance of Skype with that of MSN Messenger, which uses a traditional VoIP protocol.
The results indicate that Skype's voice quality is no better than that of traditional VoIP software, and that the great benefit of combining P2P with VoIP is that it can solve NAT and firewall traversal problems.
44. IEEE BASED SOFTWARE PROJECTS
A model-based approach to evaluation of the efficacy of
FEC coding in combating network packet losses
Source IEEE/ACM Transactions on Networking (TON) archive
Volume 16, Issue 3 (June 2008)
Year of Publication: 2008
ISSN: 1063-6692
Abstract
We propose a model-based analytic approach for evaluating the overall efficacy of
FEC coding combined with interleaving in combating packet losses in IP networks.
In particular, by modeling the network path in terms of a single bottleneck node,
described as a G/M/1/K queue, we develop a recursive procedure for the exact
evaluation of the packet-loss statistics for general arrival processes, based on the
framework originally introduced by Cidon et al., 1993.
To include the effects of interleaving, we incorporate a discrete-time Markov chain
(DTMC) into our analytic framework. We study both single-session and multiple-
session scenarios, and provide a simple algorithm for the more complicated
multiple-session scenario. We show that the unified approach provides an
integrated framework for exploring the tradeoffs between the key coding
parameters; specifically, interleaving depths, channel coding rates and block
lengths.
The approach facilitates the selection of optimal coding strategies for different
multimedia applications with various user quality-of-service (QoS) requirements
and system constraints. We also provide an information-theoretic bound on the
performance achievable with FEC coding in IP networks.
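The interleaving the authors model can be illustrated with a plain block interleaver (a generic sketch, not the paper's analysis): FEC codewords are written row-by-row and transmitted column-by-column, so a burst of consecutive packet losses on the channel is spread across many codewords.

```python
def interleave(codewords):
    """codewords: equal-length lists of symbols, one FEC block per row.
    Transmit column-by-column so consecutive channel losses fall in
    different codewords; the interleaving depth is the number of rows."""
    depth, n = len(codewords), len(codewords[0])
    return [codewords[r][c] for c in range(n) for r in range(depth)]

def deinterleave(stream, depth):
    """Inverse operation at the receiver, restoring the codeword rows."""
    n = len(stream) // depth
    return [[stream[c * depth + r] for c in range(n)] for r in range(depth)]
```

With depth d, a burst of up to d consecutive losses removes at most one symbol from each codeword, which is exactly the regime in which per-block FEC can recover the data.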
45. IEEE BASED SOFTWARE PROJECTS
Multicast communication in grid computing
networks with background traffic
Kouvatsos, D.D. Mkwawa, I.M.
Dept. of Comput., Univ. of Bradford, UK
This paper appears in: Software, IEE Proceedings-
Publication Date: 26 Aug. 2003
Volume: 150
On page(s): 257 - 264
ISSN: 1462-5970
Digital Object Identifier: 10.1049/ip-sen:20030810
Posted online: 2003-10-27 09:52:26.0
Abstract
Grid computing is a computational concept based on an infrastructure that integrates and coordinates the use of high-end computers, networks, databases and scientific instruments owned and managed by several organisations. It involves large amounts of data and computing, which require secure and reliable resource sharing across organisational domains. Despite its high-performance-computing orientation, communication delays between grid computing nodes are a major hurdle due to geographical separation in a realistic grid computing environment.
Communication schemes such as broadcasting, multicasting and routing should,
therefore, take communication delay into consideration. Such communication
schemes in a grid computing environment pose a great challenge due to the
arbitrary nature of its topology. In this context, a heuristic algorithm for multicast
communication is proposed for grid computing networks with finite capacity and
bursty background traffic. The scheme facilitates inter-node communication for grid
computing networks and it is applicable to a single-port mode of message passing
communication.
The scheme utilises a queue-by-queue decomposition algorithm for arbitrary open
queueing network models, based on the principle of maximum entropy, in
conjunction with an information theoretic decomposition criterion and graph
theoretic concepts. Evidence based on empirical studies indicates the suitability of
the scheme for achieving an optimal multicast communication cost, subject to
system decomposition constraints.
46. IEEE BASED SOFTWARE PROJECTS
A Signature-Based Indexing Method for Efficient Content-
Based Retrieval of Relative Temporal Patterns
June 2008 (vol. 20 no. 6) pp. 825-835
A number of algorithms have been proposed for the discovery of temporal
patterns. However, since the number of generated patterns can be large, selecting
which patterns to analyze can be non-trivial.
There is thus a need for algorithms and tools that can assist in the selection of
discovered patterns so that subsequent analysis can be performed in an efficient
and, ideally, interactive manner.
In this paper, we propose a signature-based indexing method to optimise the storage and retrieval of a large collection of relative temporal patterns.
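The general idea behind signature files can be sketched as follows (an illustrative superimposed-coding filter, not the paper's method): each pattern is condensed into a bit signature, and a query signature prunes non-matching patterns before any expensive comparison.

```python
def _bit(item, bits=16):
    """Deterministic hash of an item to a bit position."""
    v = 0
    for ch in str(item):
        v = (v * 131 + ord(ch)) % bits
    return v

def signature(pattern, bits=16):
    """Superimpose (OR) one bit per item into the pattern's signature."""
    s = 0
    for item in pattern:
        s |= 1 << _bit(item, bits)
    return s

def candidates(query, index, bits=16):
    """Keep only patterns whose signature covers every query bit.
    This may admit false positives but never drops a true match."""
    q = signature(query, bits)
    return [p for p, s in index if s & q == q]
```

The surviving candidates are then checked exactly, so retrieval cost is dominated by cheap bitwise tests rather than full pattern comparisons.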
47. IEEE BASED SOFTWARE PROJECTS
Concurrent Negotiations for Agent-Based Grid Computing
Xiong Li; Yujin Wu; Kai Wang; Zongchang Xu
Dept. of Command & Adm., Acad. of Armored Force Eng., Beijing
This paper appears in: Cognitive Informatics, 2006. ICCI 2006. 5th IEEE
International Conference on
Publication Date: 17-19 July 2006
Volume: 1
On page(s): 31 - 36
Number of Pages: 6
Location: Beijing
Digital Object Identifier: 10.1109/COGINF.2006.365673
Posted online: 2007-09-10 09:36:29.0
Abstract
Since the grid and agent communities both develop concepts and mechanisms for open distributed systems, agent-based grid computing has been put forward. However, effective load balancing remains a challenge for grid computing because of highly heterogeneous and complex computing environments, even when agents and agent-based grid computing approaches are used.
To solve these problems, this paper presents a concurrent-negotiations model in which an auction is mapped into a one-to-many negotiation between one seller agent and many buyer agents in service-oriented contexts.
The mechanism and process of concurrent negotiations are then studied: an agent negotiates with many other agents and coordinates the balance of grid computing resources.
The results of an exploratory evaluation show that this concurrent-negotiations model has an advantage over other models in reaching more, higher-utility agreements to optimize the allocation of computing resources.
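One round of the one-to-many negotiation described above can be sketched as a sealed-bid auction (an illustrative reduction; the reserve price and offer format are assumptions, not the paper's protocol):

```python
def negotiate_round(reserve, offers):
    """Seller agent collects concurrent offers from buyer agents and
    accepts the highest one that meets its reserve price.  Returns
    (buyer, price) on agreement, or None if no offer qualifies."""
    if not offers:
        return None
    buyer = max(offers, key=offers.get)  # best concurrent offer
    return (buyer, offers[buyer]) if offers[buyer] >= reserve else None
```

Running such rounds concurrently against many buyers is what lets a resource-holding agent steer load toward the bidders that value it most.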
48. IEEE BASED SOFTWARE PROJECTS
Optimal multicast routing: modeling and discussion
Yue Liu; Bao-Xian Zhang; Chang-Jia Chen
Sch. of Electron. & Inf. Eng., Northern Jiaotong Univ., Beijing, China
This paper appears in: Communication Technology Proceedings, 2000. WCC -
ICCT 2000. International Conference on
Publication Date: 21-25 Aug. 2000
Volume: 2
On page(s): 1449 - 1452 vol.2
Number of Pages: 2 vol. 1788
Meeting Date: 08/21/2000 - 08/25/2000
Location: Beijing
Digital Object Identifier: 10.1109/ICCT.2000.890933
Posted online: 2002-08-06 23:40:04.0
Abstract
Routing is an important issue in multicast and has great influence on system
performance and network resource usage. To make maximum use of network resources, the total cost in the system should be minimized; this corresponds to the optimal multicast routing (OMR) problem.
Until now, there has been little work done on the modeling and theoretical analysis
of this problem. The purpose of this paper is to present a theoretical framework for
the OMR problem.
A system-optimal multicast routing (SOMR) model is proposed and several conclusions are derived from it, which give insight into the OMR problem: 1) in the presence of the block effect, the OMR problem has a unique link-flow solution; 2) optimal multicast routing is achieved only if traffic is distributed on the minimal first-derivative-cost (MFDC) trees; and 3) even if the minimal tree (the Steiner tree) is built for each group, it usually does not yield the optimal solution.