Simulation and Hardware Implementation of NLMS algorithm on
TMS320C6713 Digital Signal Processor

A Dissertation
submitted in partial fulfilment
for the award of the Degree of
Master of Technology
in the Department of Electronics & Communication Engineering
(with specialization in Digital Communication)

Supervisor:                          Submitted by:
S.K. Agrawal                         Raj Kumar Thenua
Associate Professor                  Enrolment No.: 07E2SODCM30P611

Department of Electronics & Communication Engineering
Sobhasaria Engineering College, Sikar
Rajasthan Technical University
April 2011
Candidate’s Declaration
I hereby declare that the work, which is being presented in the Dissertation, entitled “Simulation and Hardware Implementation of NLMS algorithm on TMS320C6713 Digital Signal Processor”, in partial fulfilment for the award of the Degree of “Master of Technology” in the Department of Electronics & Communication Engineering with specialization in Digital Communication, and submitted to the Department of Electronics & Communication Engineering, Sobhasaria Engineering College, Sikar, Rajasthan Technical University, is a record of my own investigations carried out under the guidance of Shri Surendra Kumar Agrawal, Department of Electronics & Communication Engineering, Sobhasaria Engineering College, Sikar, Rajasthan.

I have not submitted the matter presented in this Dissertation anywhere for the award of any other Degree.

(Raj Kumar Thenua)
Digital Communication
Enrolment No.: 07E2SODCM30P611
Sobhasaria Engineering College, Sikar

Counter Signed by
Name(s) of Supervisor(s)

(S.K. Agrawal)
ACKNOWLEDGEMENT
First of all, I would like to express my profound gratitude to my dissertation guide,
Mr. S.K. Agrawal (Head of the Department), for his outstanding guidance and
support during my dissertation work. I benefited greatly from working under his
guidance. His encouragement, motivation and support have been invaluable
throughout my studies at Sobhasaria Engineering College, Sikar.
I would like to thank Mohd. Sabir Khan (M.Tech coordinator) for his excellent
guidance and kind co-operation during the entire study at Sobhasaria Engineering
College, Sikar.
I would also like to thank all the faculty members of the ECE department who
co-operated with and encouraged me during the course of study.
I would also like to thank all the staff (technical and non-technical) and librarians of
Sobhasaria Engineering College, Sikar who have directly or indirectly helped during
the course of my study.
Finally, I would like to thank my family & friends for their constant love and support
and for providing me with the opportunity and the encouragement to pursue my goals.

Raj Kumar Thenua

CONTENTS

Candidate’s Declaration  ii
Acknowledgement  iii
Contents  iv-vi
List of Tables  vii
List of Figures  viii-x
List of Abbreviations  xi-xii
List of Symbols  xiii
ABSTRACT  1

CHAPTER 1: INTRODUCTION  2
1.1  Overview  2
1.2  Motivation  3
1.3  Scope of the Work  4
1.4  Objectives of the Thesis  5
1.5  Organization of the Thesis  5

CHAPTER 2: LITERATURE SURVEY  7

CHAPTER 3: ADAPTIVE FILTERS  12
3.1  Introduction  12
     3.1.1  Adaptive Filter Configuration  13
     3.1.2  Adaptive Noise Canceller (ANC)  16
3.2  Approaches to Adaptive Filtering Algorithms  19
     3.2.1  Least Mean Square (LMS) Algorithm  20
            3.2.1.1  Derivation of the LMS Algorithm  20
            3.2.1.2  Implementation of the LMS Algorithm  21
     3.2.2  Normalized Least Mean Square (NLMS) Algorithm  22
            3.2.2.1  Derivation of the NLMS Algorithm  23
            3.2.2.2  Implementation of the NLMS Algorithm  24
     3.2.3  Recursive Least Square (RLS) Algorithm  24
            3.2.3.1  Derivation of the RLS Algorithm  25
            3.2.3.2  Implementation of the RLS Algorithm  27
3.3  Adaptive Filtering using MATLAB  28

CHAPTER 4: SIMULINK MODEL DESIGN FOR HARDWARE IMPLEMENTATION  31
4.1  Introduction to Simulink  31
4.2  Model Design  32
     4.2.1  Common Blocks used in Building the Model  32
            4.2.1.1  C6713 DSK ADC Block  32
            4.2.1.2  C6713 DSK DAC Block  33
            4.2.1.3  C6713 DSK Target Preferences Block  33
            4.2.1.4  C6713 DSK Reset Block  33
            4.2.1.5  NLMS Filter Block  34
            4.2.1.6  C6713 DSK LED Block  34
            4.2.1.7  C6713 DSK DIP Switch Block  34
     4.2.2  Building the Model  34
4.3  Model Reconfiguration  37
     4.3.1  The ADC Settings  38
     4.3.2  The DAC Settings  39
     4.3.3  Setting the NLMS Filter Parameters  40
     4.3.4  Setting the Delay Parameters  41
     4.3.5  DIP Switch Settings  41
     4.3.6  Setting the Constant Value  42
     4.3.7  Setting the Constant Data Type  43
     4.3.8  Setting the Relational Operator Type  43
     4.3.9  Setting the Relational Operator Data Type  43
     4.3.10 Switch Settings  44

CHAPTER 5: REAL TIME IMPLEMENTATION ON DSP PROCESSOR  45
5.1  Introduction to Digital Signal Processor (TMS320C6713)  45
     5.1.1  Central Processing Unit Architecture  48
     5.1.2  General Purpose Registers Overview  49
     5.1.3  Interrupts  49
     5.1.4  Audio Interface Codec  50
     5.1.5  DSP/BIOS & RTDX  52
5.2  Code Composer Studio as Integrated Development Environment  54
5.3  MATLAB Interfacing with CCS and DSP Processor  58
5.4  Real-time Experimental Setup using DSP Processor  58

CHAPTER 6: RESULTS AND DISCUSSION  63
6.1  MATLAB Simulation Results for Adaptive Algorithms  63
     6.1.1  LMS Algorithm Simulation Results  64
     6.1.2  NLMS Algorithm Simulation Results  66
     6.1.3  RLS Algorithm Simulation Results  67
     6.1.4  Performance Comparison of Adaptive Algorithms  67
6.2  Hardware Implementation Results using TMS320C6713 Processor  71
     6.2.1  Tone Signal Analysis using NLMS Algorithm  71
            6.2.1.1  Effect on Filter Performance at Various Frequencies  73
            6.2.1.2  Effect on Filter Performance at Various Amplitudes  75
     6.2.2  ECG Signal Analysis using NLMS and LMS Algorithms and their
            Performance Comparison  78

CHAPTER 7: CONCLUSIONS  85
7.1  Conclusion  85
7.2  Future Work  86

REFERENCES  88
APPENDIX-I   LIST OF PUBLICATIONS  93
APPENDIX-II  MATLAB COMMANDS  94
LIST OF TABLES

Table No.   Title                                                      Page No.
Table 6.1   Mean Squared Error (MSE) versus Step Size (µ)              65
Table 6.2   Mean Squared Error versus Filter Order (N)                 69
Table 6.3   Performance comparison of various adaptive algorithms      70
Table 6.4   Comparison of various parameters for adaptive algorithms   70
Table 6.5   SNR improvement versus voltage and frequency               78
Table 6.6   SNR improvement versus noise level for a tone signal       78
Table 6.7   SNR improvement versus noise variance for an ECG signal    84
LIST OF FIGURES

Figure No.    Title                                                       Page No.
Fig.3.1       General adaptive filter configuration                       14
Fig.3.2       Transversal FIR filter architecture                         15
Fig.3.3       Block diagram for Adaptive Noise Canceller                  16
Fig.3.4       MATLAB versatility diagram                                  29
Fig.4.1       Simulink applications                                       32
Fig.4.2       Adaptive noise cancellation Simulink model                  33
Fig.4.3       Simulink library browser                                    35
Fig.4.4       Blank new model window                                      36
Fig.4.5       Model window with ADC block                                 37
Fig.4.6       Model illustration before connections                       38
Fig.4.7       Setting up the ADC for mono microphone input                39
Fig.4.8       Setting the DAC parameters                                  39
Fig.4.9       Setting the NLMS filter parameters                          40
Fig.4.10      Setting the delay unit                                      41
Fig.4.11      Setting up the DIP switch values                            42
Fig.4.12      Setting the constant parameters                             42
Fig.4.13      Data type conversion to 16-bit integer                      43
Fig.4.14      Changing the output data type                               44
Fig.5.1       Block diagram of TMS320C6713 processor                      47
Fig.5.2       Physical overview of the TMS320C6713 processor              47
Fig.5.3       Functional block diagram of TMS320C6713 CPU                 48
Fig.5.4       Interrupt priority diagram                                  49
Fig.5.5       Interrupt handling procedure                                50
Fig.5.6       Audio connection illustrating control and data signals      51
Fig.5.7       AIC23 codec interface                                       52
Fig.5.8       DSP/BIOS and RTDX                                           53
Fig.5.9       Code Composer Studio platform                               54
Fig.5.10      Embedded software development                               54
Fig.5.11      Typical 67xx efficiency vs. effort level for different codes 55
Fig.5.12      Code generation                                             55
Fig.5.13      Cross development environment                               56
Fig.5.14      Signal flow during processing                               56
Fig.5.15      Real-time analysis and data visualization                   57
Fig.5.16      MATLAB interfacing with CCS and TI target processor         58
Fig.5.17      Experimental setup using Texas Instruments processor        59
Fig.5.18      Real-time setup using Texas Instruments processor           59
Fig.5.19      Model building using RTW                                    60
Fig.5.20      Code generation using RTDX link                             60
Fig.5.21      Target processor in running status                          61
Fig.5.22(a)   Switch at position 0                                        62
Fig.5.22(b)   Switch at position 1 for NLMS noise reduction               62
Fig.6.1(a)    Clean tone (sinusoidal) signal s(n)                         63
Fig.6.1(b)    Noise signal x(n)                                           63
Fig.6.1(c)    Delayed noise signal x1(n)                                  64
Fig.6.1(d)    Desired signal d(n)                                         64
Fig.6.2       MATLAB simulation for LMS algorithm; N=19, step size=0.001  64
Fig.6.3       MATLAB simulation for NLMS algorithm; N=19, step size=0.001 66
Fig.6.4       MATLAB simulation for RLS algorithm; N=19, λ=1              67
Fig.6.5       MSE versus step size (µ) for LMS algorithm                  67
Fig.6.6       MSE versus filter order (N)                                 68
Fig.6.7       Clean tone signal of 1 kHz                                  72
Fig.6.8       Noise-corrupted tone signal                                 72
Fig.6.9       Filtered tone signal                                        73
Fig.6.10      Time delay in filtered signal                               73
Fig.6.11(a)   Filtered output signal at 2 kHz frequency                   74
Fig.6.11(b)   Filtered output signal at 3 kHz frequency                   74
Fig.6.11(c)   Filtered output signal at 4 kHz frequency                   75
Fig.6.11(d)   Filtered output signal at 5 kHz frequency                   75
Fig.6.12(a)   Filtered output signal at 3 V                               76
Fig.6.12(b)   Filtered output signal at 4 V                               76
Fig.6.12(c)   Filtered output signal at 5 V                               77
Fig.6.13      Filtered signal at high noise                               77
Fig.6.14      ECG waveform                                                79
Fig.6.15      Clean ECG signal                                            80
Fig.6.16(a)   NLMS filtered output for low-level noisy ECG signal         81
Fig.6.16(b)   LMS filtered output for low-level noisy ECG signal          81
Fig.6.17(a)   NLMS filtered output for medium-level noisy ECG signal      82
Fig.6.17(b)   LMS filtered output for medium-level noisy ECG signal       82
Fig.6.18(a)   NLMS filtered output for high-level noisy ECG signal        83
Fig.6.18(b)   LMS filtered output for high-level noisy ECG signal         83
LIST OF ABBREVIATIONS

ANC       Adaptive Noise Cancellation
API       Application Program Interface
AWGN      Additive White Gaussian Noise
BSL       Board Support Library
BIOS      Basic Input Output System
CSL       Chip Support Library
CCS       Code Composer Studio
CODEC     Coder Decoder
COFF      Common Object File Format
COM       Component Object Model
CPLD      Complex Programmable Logic Device
CSV       Comma Separated Value
DIP       Dual Inline Package
DSK       Digital Signal Processor Starter Kit
DSO       Digital Storage Oscilloscope
DSP       Digital Signal Processor
ECG       Electrocardiogram
EDMA      Enhanced Direct Memory Access
EMIF      External Memory Interface
FIR       Finite Impulse Response
FPGA      Field Programmable Gate Array
FTRLS     Fast Transversal Recursive Least Square
GEL       General Extension Language
GPIO      General Purpose Input Output
GUI       Graphical User Interface
HPI       Host Port Interface
IDE       Integrated Development Environment
IIR       Infinite Impulse Response
JTAG      Joint Test Action Group
LMS       Least Mean Square
LSE       Least Square Error
MA        Moving Average
McBSP     Multichannel Buffered Serial Port
McASP     Multichannel Audio Serial Port
MSE       Mean Square Error
MMSE      Minimum Mean Square Error
NLMS      Normalized Least Mean Square
RLS       Recursive Least Squares
RTDX      Real Time Data Exchange
RTW       Real Time Workshop
SNR       Signal to Noise Ratio
TI        Texas Instruments
TVLMS     Time Varying Least Mean Squared
VLIW      Very Long Instruction Word
VSLMS     Variable Step-size Least Mean Square
VSSNLMS   Variable Step Size Normalized Least Mean Square
LIST OF SYMBOLS

s(n)       Source signal
x(n)       Noise signal or reference signal
x1(n)      Delayed noise signal
w(n)       Filter weights
d(n)       Desired signal
y(n)       FIR filter output
e(n)       Error signal
e+(n)      Advance samples of error signal
ê(n)       Error estimation
n          Sample number
i          Iteration
N          Filter order
E          Ensemble
Z⁻¹        Unit delay
wᵀ         Transpose of weight vector
µ          Step size
∇          Gradient
ξ          Cost function
‖x(n)‖²    Squared Euclidean norm of the input vector x(n) at iteration n
c          Constant term for normalization
α          NLMS adaption constant
λ          Small positive constant
Λ̃(n)       Diagonal matrix vector
k(n)       Gain vector
ψ̃(n)       Intermediate matrix
θλ         Intermediate vector
ŵ(n)       Estimation of filter weight vector
ŷ(n)       Estimation of FIR filter output
ABSTRACT
Adaptive filtering constitutes one of the core technologies in the field of digital signal
processing and finds numerous applications in science and technology, viz. echo
cancellation, channel equalization, adaptive noise cancellation, adaptive beam-forming and
biomedical signal processing.
Noise problems in the environment have gained attention due to the tremendous growth of
technologies that bring with them spurious outcomes such as noisy engines, heavy machinery,
high electromagnetic radiation devices and other noise sources. Therefore, the problem of
controlling the noise level in the area of signal processing has become the focus of a vast
amount of research over the years.
In this work an attempt has been made to explore adaptive filtering techniques
for noise cancellation using the Least Mean Square (LMS), Normalized Least Mean Square
(NLMS) and Recursive Least Squares (RLS) algorithms. These algorithms
have been simulated in MATLAB and compared for the best performance in terms
of Mean Squared Error (MSE), convergence rate, percentage noise removal, computational
complexity and stability.
In the specific example of a tone signal, LMS showed a slow convergence rate with low
computational complexity, while RLS showed a fast convergence rate and the best performance,
but at the cost of large computational complexity and memory requirement. The
NLMS algorithm, however, provides a trade-off between convergence rate and computational
complexity, which makes it more suitable for hardware implementation.
For the hardware implementation of the NLMS algorithm, a Simulink model is
designed to automatically generate C code for the DSP processor. The generated C code is loaded on
the DSP processor hardware and real-time noise cancellation is performed for two
types of signals, i.e. a tone signal and a biomedical ECG signal. For both types of signals, three
noisy signals of different noise levels are used to judge the performance of the designed
system. The output results are analysed using a Digital Storage Oscilloscope (DSO) in terms of
filtered-signal SNR improvement. The results have also been compared with the LMS
algorithm to establish the superiority of the NLMS algorithm.

Chapter-1

INTRODUCTION
In the process of transmission of information from the source to the receiver, noise from
the surroundings automatically gets added to the signal. The noisy signal contains two
components: one carries the information of interest, i.e. the useful signal; the other consists of
random errors or noise superimposed on the useful signal. These random errors or
noise are unwanted because they diminish the accuracy and precision of the measured signal.
Therefore, the effective removal or reduction of noise in the field of signal processing is an
active area of research.

1.1  Overview

The use of adaptive filters [1] is one of the most popular proposed solutions to reduce
the signal corruption caused by predictable and unpredictable noise. An adaptive filter has
the property of self-modifying its frequency response to change its behaviour with time,
allowing the filter to adapt its response as the input signal characteristics change. Due to this
capability and their construction flexibility, adaptive filters have been employed in many
different applications such as telephonic echo cancellation, radar signal processing, navigation
systems, communications, channel equalization, and biomedical & biometric signal
processing.
In the field of adaptive filtering, there are mainly two families of algorithms used to force
the filter to adapt its coefficients: stochastic-gradient-based algorithms and Recursive Least
Squares based algorithms. Their implementations and adaptation properties are the determining
factors in the choice of application. The main requirements and performance parameters for
adaptive filters are the convergence speed and the asymptotic error. The convergence speed is
the primary property of an adaptive filter; it measures how quickly the filter
converges to the desired value. It is a major requirement as well as a limiting factor for
most applications of adaptive filters.
The asymptotic error represents the amount of error that the filter introduces at steady
state after it has converged to the desired value. The RLS filters, due to their computational
structure, have considerably better properties than the LMS filters both in terms of
convergence speed and asymptotic error. The RLS filters, which outperform the LMS
filters, obtain their solution for the weight updates directly from the Mean Square Error
(MSE) [2]. However, they are computationally very demanding and also very dependent
upon the precision of the input signal. Their computational requirements are significant and
imply the use of expensive and power-demanding high-speed processors. Also, for
systems lacking the appropriate dynamic range, the adaptation algorithms can become
unstable. To meet these computational requirements, a DSP processor can be a
good substitute.

1.2  Motivation

In the field of signal processing there is a significant need for a special class of digital
filters known as adaptive filters. Adaptive filters are commonly used in many different
configurations for different applications. These filters have various advantages over
standard digital filters: they can adapt their filter coefficients to the environment
according to preset rules, and they are capable of learning from the statistics of current
conditions and changing their coefficients in order to achieve a certain goal. Designing a
filter requires prior knowledge of the desired response. When such knowledge is not
available, due to the changing nature of the filter’s requirements, it is impossible to design a
standard digital filter. In such situations, adaptive filters are desirable.
The algorithms used to perform the adaptation and the configuration of the filter
depend directly on the application of the filter. However, the basic computational engine that
performs the adaptation of the filter coefficients can be the same for different algorithms, and
it is based on the statistics of the input signals to the system. The two classes of adaptive
filtering algorithms, namely Recursive Least Squares (RLS) and Least Mean Squared (LMS),
are capable of performing the adaptation of the filter coefficients.
In a real scenario, where the information generated at the source
gets contaminated by a noise signal, the situation demands an adaptive filtering
algorithm that provides fast convergence while being numerically stable and without
requiring much memory.

Hence, the motivation for the thesis is to search for an adaptive algorithm which has
reduced computational complexity, reasonable convergence speed and good stability without
degrading the performance of the adaptive filter, and then to realize the algorithm on efficient
hardware, which makes it more practical in real-time applications.

1.3  Scope of the Work

In numerous application areas, including biomedical engineering, radar & sonar
engineering and digital communications, the goal is to extract a useful signal corrupted by
interference and noise. In this work an adaptive noise canceller will be designed that is
more effective than available ones. To achieve an effective adaptive noise canceller,
various adaptive algorithms will be simulated in MATLAB. The most suitable
algorithm obtained will be implemented on the TMS320C6713 DSK hardware. The designed system
will be tested on the filtering of a noisy ECG signal and a tone signal, and the system
performance will be compared with earlier available systems. The designed
system may be useful for cancelling interference in ECG signals, periodic interference in
audio signals and broad-band interference in the side-lobes of an antenna array.
For the simulation in this work, MATLAB version 7.4.0.287 (R2007a) is used, though
LabVIEW version 7 may also be applicable. For the hardware implementation, a Texas
Instruments (TI) TMS320C6713 digital signal processor is used; however, a Field
Programmable Gate Array (FPGA) may also be suitable. To assist the hardware
implementation, Simulink version 6.6 is appropriate to generate C code for the DSP hardware.
To communicate with the DSP processor, the Integrated Development Environment (IDE) software
Code Composer Studio V3.1 is essential. A function generator and noise generator, or any other
audio device, can be used as an input source for signal analysis. For the analysis of output data
a DSO is essentially required; however, a CRO may also be used.
Current adaptive noise cancellation models [5], [9], [11] work at relatively low
processing speeds that are not suitable for real-time signals, which results in delay at the output.
In this direction, to increase the processing speed and to improve the signal-to-noise ratio, a DSP
processor can be useful because it is a fast special-purpose microprocessor with a specialized
architecture and an instruction set appropriate for signal processing. It is also well
suited for numerically intensive calculations.

1.4  Objectives of the Thesis

The core of this thesis is to analyze and filter noisy signals (real-time as well as
non-real-time) by various adaptive filtering techniques in software as well as in hardware,
using MATLAB and a DSP processor respectively.
The basic objective is to focus on the hardware implementation of adaptive algorithms
for filtering, so a DSP processor is employed in this work as it can deal efficiently
with real-time as well as non-real-time signals.
The objectives of the thesis are as follows:
(a) To perform the MATLAB simulation of the Least Mean Squared (LMS), Normalized
Least Mean Squared (NLMS) and Recursive Least Squares (RLS) algorithms and to
compare their relative performance with a tone signal.
(b) To design a Simulink model to generate auto C code for the hardware implementation
of the NLMS and LMS algorithms.
(c) To implement the NLMS and LMS algorithms in hardware and perform the analysis
of an ECG signal and a tone signal.
(d) To compare the performance of the NLMS and LMS algorithms in terms of SNR
improvement for an ECG signal.

1.5  Organization of the Thesis

The work emphasizes the implementation of various adaptive filtering algorithms
using MATLAB, Simulink and a DSP processor. In this regard the thesis is divided into seven
chapters as follows:
Chapter 2 deals with the literature survey for the presented work, in which papers
from IEEE and other refereed journals or proceedings are reviewed; these relate the
present work to recent research going on worldwide and assure the consistency of the
work.
Chapter 3 presents a detailed introduction to adaptive filter theory and various
adaptive filtering algorithms, along with the problem definition.
Chapter 4 presents a brief introduction to Simulink. An adaptive noise cancellation
model is designed, with the capability of C code generation for
implementation on the DSP processor.
Chapter 5 illustrates the experimental setup for the real-time implementation of an
adaptive noise canceller on a DSK. A brief introduction to the TMS320C6713
processor and Code Composer Studio (CCS) with the Real-Time Workshop facility is also presented.
Chapter 6 shows the experimental outcomes for the various algorithms. This chapter
is divided into two parts: the first part shows the MATLAB simulation results for a sinusoidal tone
signal, and the second part illustrates the real-time DSP processor implementation results for the
sinusoidal tone signal and an ECG signal. The results from the DSP processor are analyzed with the
help of a DSO.
Chapter-7 summarizes the work and provides suggestions for future research.

Chapter-2

LITERATURE SURVEY
In the last thirty years significant contributions have been made in the field of signal
processing. The advances in digital circuit design have been the key technological
development that sparked a growing interest in the field of digital signal processing. The
resulting digital signal processing systems are attractive due to their low cost, reliability,
accuracy, small physical sizes and flexibility.
In numerous applications of signal processing, communications and biomedical
engineering we face the necessity to remove noise and distortion from signals. These
phenomena are due to time-varying physical processes which are sometimes unknown. One
such situation arises during the transmission of a signal from one point to another. The
channel, which may consist of wires, fibers, microwave beams etc., introduces noise and
distortion due to the variations of its properties. These variations may be slow or fast. Since
most of the time the variations are unknown, filters are required that can work effectively in
such unknown environments. The adaptive filter is the right choice, as it diminishes and
sometimes completely eliminates the signal distortion.
The most common adaptive filters used during the adaptation process are of the
finite impulse response (FIR) type. These are preferable because they are stable, and no
special adjustments are needed for their implementation. In adaptive filters, the filter weights
need to be updated continuously according to certain rules. These rules are presented in the
form of algorithms. There are mainly two types of algorithms used for adaptive
filtering: the first is the stochastic-gradient-based algorithm known as the Least Mean Squared
(LMS) algorithm, and the second is based on least squares estimation and known as the Recursive
Least Squares (RLS) algorithm. A great deal of research [1]-[5], [14], [15] has been carried
out in subsequent years to find new variants of these algorithms that achieve better
performance in noise cancellation applications.
Bernard Widrow et al. [1] in 1975 described adaptive noise cancelling as an
alternative method of estimating signals corrupted by additive noise or interference,
employing the LMS algorithm. The method uses a “primary” input containing the corrupted
signal and a “reference” input containing noise correlated in some unknown way with the
primary noise. The reference input is adaptively filtered and subtracted from the primary
input to obtain the signal estimate. Widrow [1] focused on the usefulness of the adaptive
noise cancellation technique in a variety of practical applications that included the cancelling
of various forms of periodic interference in electrocardiography, the cancelling of periodic
interference in speech signals, and the cancelling of broad-band interference in the side-lobes
of an antenna array.
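The primary/reference structure described above can be sketched in a few lines. The following is a minimal illustrative sketch (not the thesis code, which uses MATLAB and C): an LMS-based adaptive noise canceller in Python/NumPy, where the filter length, step size and the assumed FIR noise path are arbitrary choices made for demonstration only.

```python
import numpy as np

def lms_anc(d, x, N=19, mu=0.005):
    """LMS adaptive noise canceller.

    d : primary input d(n) = signal + noise correlated with x
    x : reference noise input x(n)
    Returns e(n), the error signal, which serves as the signal estimate."""
    w = np.zeros(N)
    e = np.zeros(len(d))
    for n in range(N - 1, len(d)):
        xv = x[n - N + 1:n + 1][::-1]   # most recent N reference samples
        y = w @ xv                      # filter output: estimate of the noise in d
        e[n] = d[n] - y                 # error = signal estimate
        w += 2 * mu * e[n] * xv         # stochastic-gradient (LMS) weight update
    return e

# Demonstration: a clean tone s(n), with a filtered version of the
# reference noise acting as the interference in the primary input.
rng = np.random.default_rng(0)
n = np.arange(4000)
s = np.sin(2 * np.pi * 0.05 * n)                      # clean tone s(n)
x = rng.standard_normal(n.size)                       # reference noise x(n)
noise_in_d = np.convolve(x, [0.6, 0.3, 0.1])[:n.size]
d = s + noise_in_d                                    # primary input d(n)
e = lms_anc(d, x)
```

After the initial adaptation, e(n) tracks s(n): in this toy setup the residual noise power at the output falls well below the noise power in the primary input.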
In 1988, Ahmed S. Abutaleb [2] introduced a new principle, the Pontryagin minimum
principle, to reduce the computation time of the LMS algorithm. The proposed method reduces
the computation time drastically without degrading the accuracy of the system. When
compared to the LMS-based Widrow [1] model, it was shown to have superior performance.
The LMS-based algorithms are simple and easy to implement, but their convergence speed is
slow. Abhishek Tandon et al. [3] introduced an efficient, low-complexity Normalized Least
Mean Squared (NLMS) algorithm for echo cancellation in multiple audio channels. The
performance of the proposed algorithm was compared with other adaptive algorithms for
acoustic echo cancellation. It was shown that the proposed algorithm has reduced complexity
while providing good overall performance.
In the NLMS algorithm, all the filter coefficients are updated for each input sample. Dong
Hang et al. [4] presented a multi-rate algorithm which can dynamically change the update
rate of the filter coefficients by analyzing the actual application environment: when the
environment is varying the rate increases, while it decreases when the environment is stable.
The noise cancellation results indicate that the new method has faster convergence speed,
low computational complexity, and the same minimum error as the traditional method.
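The per-sample NLMS coefficient update mentioned above differs from LMS only in that the step size is divided by the instantaneous input power. A minimal sketch in Python/NumPy (illustrative, not the thesis code; the values of the adaptation constant α and the normalization constant c are arbitrary):

```python
import numpy as np

def nlms_step(w, xv, d_n, alpha=0.5, c=1e-6):
    """One NLMS coefficient update.

    The step size alpha is normalized by the squared Euclidean norm of the
    tap-input vector xv (plus a small constant c to avoid division by zero),
    so the adaptation speed is insensitive to the input signal power."""
    y = w @ xv                              # FIR filter output y(n)
    e = d_n - y                             # error e(n)
    w = w + (alpha / (c + xv @ xv)) * e * xv
    return w, e

# Identifying a short FIR system: with NLMS the convergence behaviour is
# essentially the same whether the input is tiny or huge.
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2])              # unknown system (hypothetical)
results = {}
for scale in (0.01, 100.0):
    x = scale * rng.standard_normal(2000)
    w = np.zeros(3)
    for n in range(2, x.size):
        xv = x[n - 2:n + 1][::-1]
        w, _ = nlms_step(w, xv, h @ xv)
    results[scale] = w
```

In both cases the weights converge to h; with a plain LMS update, a fixed step size that is stable for the large-amplitude input would adapt extremely slowly for the small-amplitude one.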
Ying He et al. [5] presented the MATLAB simulation of the RLS algorithm, and its
performance was compared with the LMS algorithm. The convergence speed of the RLS algorithm
is much faster and it produces the minimum mean squared error (MSE) among all available LMS-based
algorithms, but at the cost of increased computational complexity, which makes its
implementation difficult on hardware.
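For comparison, the RLS recursion maintains a running estimate P of the inverse input autocorrelation matrix and a gain vector k(n), which is where the extra computation and memory come from: roughly O(N²) operations per sample versus O(N) for LMS/NLMS. A minimal illustrative sketch (not the thesis code; the forgetting factor, initialization and test system are arbitrary):

```python
import numpy as np

def rls(d, x, N=4, lam=0.99, delta=100.0):
    """Exponentially weighted RLS.

    P approximates the inverse of the weighted input autocorrelation
    matrix; k is the gain vector k(n). Cost per sample is O(N^2)."""
    w = np.zeros(N)
    P = delta * np.eye(N)
    e = np.zeros(len(d))
    for n in range(N - 1, len(d)):
        xv = x[n - N + 1:n + 1][::-1]
        Px = P @ xv
        k = Px / (lam + xv @ Px)           # gain vector k(n)
        e[n] = d[n] - w @ xv               # a priori error
        w = w + k * e[n]                   # weight update
        P = (P - np.outer(k, Px)) / lam    # update inverse correlation matrix
    return w, e

# Noiseless system identification: RLS locks onto h within a few tens
# of samples, far faster than LMS would with a comparable setup.
rng = np.random.default_rng(2)
h = np.array([0.4, -0.2, 0.1, 0.05])       # unknown system (hypothetical)
x = rng.standard_normal(1500)
d = np.convolve(x, h)[:x.size]
w, e = rls(d, x)
```

The N×N matrix P must be stored and updated every sample, which illustrates the memory and computation cost the survey attributes to RLS.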
Nowadays the availability of high-speed digital signal processors has attracted the
attention of researchers towards the real-time implementation of the available
algorithms on hardware platforms. Digital signal processors are fast special-purpose
microprocessors with a specialized architecture and an instruction set appropriate for
signal processing. The architecture of the digital signal processor is very well suited for
numerically intensive calculations. DSP techniques have been very successful because of the
development of low-cost software and hardware support. DSP processors are concerned
primarily with real-time signal processing and exploit the advantages of
microprocessors: they are easy to use, flexible, economical and can be reprogrammed easily.
Real-time hardware implementation was begun by Edgar Andrei [6],
initially on the Motorola DSP56307 in 2000. Later, in 2002, Michail D. Galanis et al. [7]
presented a DSP course for real-time systems design and implementation based on the
TMS320C6211. This course emphasized the transition from an advanced design and
simulation environment like MATLAB to a DSP software environment like Code Composer
Studio.
Boo-Shik Ryu et al. [8] implemented and investigated the performance of a noise
canceller on a DSP processor (TMS320C6713) using the LMS, NLMS
and VSS-NLMS algorithms. Results showed that the proposed combination of hardware and
the VSS-NLMS algorithm has not only a faster convergence rate but also lower distortion when
compared with the fixed-step-size LMS and NLMS algorithms in real-time
environments.
In 2009, J. Gerardo Avalos et al. [9] implemented a digital
adaptive filter on the TMS320C6713 digital signal processor using a variant of the LMS
algorithm based on error codification. The speed of convergence is increased and the
design complexity of its implementation in digital adaptive filters is reduced because the
resulting codified error is composed of integer values. The LMS algorithm with codified error
(ECLMS) was tested in an environmental noise canceller, and the results demonstrate an
increase in convergence speed and a reduction in processing time.
C.A. Duran et al. [10] presented an implementation of the LMS, NLMS and other
LMS-based algorithms on the DSK TMS320C6713 with the intention of comparing their
performance and analyzing their time and frequency behaviour along with the processing speed of the
algorithms. The objective of the NLMS algorithm is to obtain the best convergence factor
considering the input signal power, in order to improve the filter convergence time. The
obtained results show that the NLMS has better performance than the LMS; unfortunately,
the computational complexity increases, which means more processing time.
The real-time implementation work discussed so far was carried out on
DSP processors by writing either assembly or C programs directly in the editor of Code
Composer Studio (CCS). Writing an assembly program requires considerable effort, so
only professionals can do it; similarly, C programming is not simple as far as
hardware implementation is concerned.
There is a simpler way to create C code automatically, which requires less effort and is
more efficient. Presently only a few researchers [11]-[13] are aware of this facility, which is
provided by MATLAB version 7.1 and higher versions, using the embedded target and Real-Time
Workshop (RTW). Gaurav Saxena et al. [11] used this auto code generation facility
and reported better results than conventional C code writing.
Gaurav Saxena et al. [11] discussed the real-time implementation of adaptive noise
cancellation based on an improved adaptive Wiener filter on the Texas Instruments
TMS320C6713 DSK. Its performance was then compared with Lee's adaptive Wiener
filter. Furthermore, a model based design of adaptive noise cancellation based on an LMS filter
using Simulink was implemented on the TI C6713. The auto-code generated by Real-Time
Workshop for the Simulink model of the LMS filter was compared with the 'C' implementation
of the LMS filter on the C6713 in terms of code length and computation time. It was found to give a
large improvement in computation time but at the cost of increased code length.
S.K. Daruwalla et al. [12] focused on the development and the real-time
implementation of various audio effects using Simulink blocks, employing an audio signal
as input. This system has helped sound engineers to easily configure/capture various
audio effects in advance by simply varying the values of predefined Simulink blocks. A
digital signal processor is used to implement the designs; this broadens the versatility of the
system by allowing the user to employ the processor for any audio input in real time. The
work is enriched with the real-time concepts of controlling the various audio effects via on-board DIP switches on the C6713 DSK.

In Nov-2009, Yaghoub Mollaei [13] designed an adaptive FIR filter with the normalized
LMS algorithm to cancel noise. A Simulink model was created and linked to the TMS320C6711
digital signal processor through the Embedded Target for C6000 Simulink toolbox and Real-Time
Workshop to perform hardware adaptive noise cancellation. Three noises with different
powers were used to test and judge the system performance in software and hardware. The
background noises for speech and music tracks were eliminated adequately at a reasonable
rate for all the tested noises.
The outcomes of the literature survey can be summarized as follows:
(i) Adaptive filters are attractive for working in an unknown environment and are suitable
for noise cancellation applications in the field of digital signal processing.
(ii) To update the adaptive filter weights, two types of algorithms, LMS and RLS, are used.
RLS-based algorithms have better performance but at the cost of larger computational
complexity; therefore comparatively little work [5], [15] is going on in this direction. On the
other hand, LMS-based algorithms are simple to implement, and a few variants such as
NLMS have performance comparable with the RLS algorithm. So a large amount of
research [1]-[5] through simulation has been carried out in this regard to improve the
performance of LMS-based algorithms.
(iii) Simulation can be carried out on non-real-time signals only. Therefore, for real-time
applications there is a need for hardware implementation of LMS-based algorithms.
The DSP processor has been found to be suitable hardware for signal processing
applications.
(iv) Hence, there is a requirement to find the easiest way for the hardware
implementation of adaptive filter algorithms on a particular DSP processor. The use
of a Simulink model [11]-[13] with the embedded target and Real-Time Workshop has proved
to be helpful for the same.
(v) Therefore the Simulink-based hardware implementation of the NLMS algorithm for ECG
signal analysis can be a good contribution in the field of adaptive filtering.

Chapter-3

ADAPTIVE FILTERS

3.1

Introduction
Filtering is a signal processing operation. Its objective is to process a signal in order to
manipulate the information contained in the signal. In other words, a filter is a device that
maps its input signal to another output signal facilitating the extraction of the desired
information contained in the input signal. A digital filter is one that processes discrete-time
signals represented in digital format. For time-invariant filters the internal parameters
and the structure of the filter are fixed, and if the filter is linear the output signal is a linear
function of the input signal. Once the prescribed specifications are given, the design of
time-invariant linear filters entails three basic steps, namely: the approximation of the
specifications by a rational transfer function, the choice of an appropriate structure defining
the algorithm, and the choice of the form of implementation for the algorithm.
An adaptive filter [1], [2] is required when either the fixed specifications are unknown
or the specifications cannot be satisfied by time-invariant filters. Strictly speaking, an
adaptive filter is a nonlinear filter since its characteristics are dependent on the input signal
and consequently the homogeneity and additivity conditions are not satisfied. However, if we
freeze the filter parameters at a given instant of time, most adaptive filters are linear in the
sense that their output signals are linear functions of their input signals.
The adaptive filters are time-varying since their parameters are continuously changing
in order to meet a performance requirement. In this sense, we can interpret an adaptive filter
as a filter that performs the approximation step on-line. Usually, the definition of the
performance criterion requires the existence of a reference signal that is usually hidden in the
approximation step of fixed-filter design.
Adaptive filters are considered nonlinear systems; therefore their behaviour analysis is
more complicated than that of fixed filters. On the other hand, since adaptive filters are
self-designing filters from the practitioner's point of view, their design can be considered less
involved than that of digital filters with fixed coefficients.

Adaptive filters work on the principle of minimizing the mean squared difference
(or error) between the filter output and a target (or desired) signal. Adaptive filters are used
for estimation of non-stationary signals and systems, or in applications where a
sample-by-sample adaptation of a process and a low processing delay are required.
Adaptive filters are used in applications [26]-[29] that involve a combination of three
broad signal processing problems:
(1) De-noising and channel equalization – filtering a time-varying noisy signal to remove the
effect of noise and channel distortions.
(2) Trajectory estimation – tracking and prediction of the trajectory of a non-stationary signal
or parameter observed in noise.
(3) System identification – adaptive estimation of the parameters of a time-varying system
from a related observation.
Adaptive linear filters work on the principle that the desired signal or parameters can
be extracted from the input through a filtering or estimation operation. The adaptation of the
filter parameters is based on minimizing the mean squared error between the filter output and
a target (or desired) signal. The use of the Least Square Estimation (LSE) criterion is
equivalent to the principle of orthogonality, in which at any discrete time m the estimator is
expected to use all the available information such that any estimation error at time m is
orthogonal to all the information available up to time m.

3.1.1 Adaptive Filter Configuration
The general set up of an adaptive-filtering environment is illustrated in Fig.3.1 [43],
where n is the iteration number, x(n) denotes the input signal, y(n) is the adaptive-filter output
signal, and d(n) defines the desired signal. The error signal e(n) is calculated as d(n) − y(n).
The error signal is then used to form a performance function that is required by the adaptation
algorithm in order to determine the appropriate updating of the filter coefficients. The
minimization of the objective function implies that the adaptive-filter output signal is
matching the desired signal in some sense. At each sampling time, an adaptation algorithm
adjusts the filter coefficients w(n) = [w0(n), w1(n), …, wN−1(n)] to minimize the difference
between the filter output and a desired or target signal.
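In code, the configuration described above reduces to a short per-sample loop. The thesis implements these algorithms in MATLAB and in C on the C6713 DSK; the following Python/NumPy sketch is only an illustration of the loop structure, using an LMS-style update (detailed in Section 3.2.1) as a placeholder for the adaptation algorithm, with an arbitrary filter length N and step size mu.

```python
import numpy as np

def adaptive_filter(x, d, N=4, mu=0.05):
    """Generic adaptive transversal filter loop (structure of Fig. 3.1).

    x : input signal, d : desired signal, N : number of taps,
    mu : step size. The LMS-style update below is a placeholder;
    the specific algorithms are given in Section 3.2.
    """
    w = np.zeros(N)                      # filter coefficients w(n)
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]   # tap-input vector [x(n) ... x(n-N+1)]
        y = w @ x_n                      # filter output y(n)
        e[n] = d[n] - y                  # error e(n) = d(n) - y(n)
        w = w + 2 * mu * e[n] * x_n      # coefficient update driven by e(n)
    return w, e
```

Run against a signal d(n) produced by a known FIR system, the loop drives e(n) toward zero and w(n) toward that system's coefficients.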

[Figure: the input x(n) feeds the adaptive filter, producing y(n); y(n) is subtracted from the desired signal d(n) to form the error e(n), which drives the adaptive algorithm.]

Fig.3.1. General Adaptive filter configuration

The complete specification of an adaptive system, as shown in Fig. 3.1, consists of
three things:
(a) Input: The type of application is defined by the choice of the signals acquired
from the environment to be the input and desired-output signals. The number of different
applications in which adaptive techniques are being successfully used has increased
enormously during the last two decades. Some examples are echo cancellation, equalization
of dispersive channels, system identification, signal enhancement, adaptive beam-forming,
noise cancelling and control.
(b) Adaptive-filter structure: The adaptive filter can be implemented in a number of
different structures or realizations. The choice of the structure can influence the
computational complexity (amount of arithmetic operations per iteration) of the process and
also the necessary number of iterations to achieve a desired performance level. Basically,
there are two major classes of adaptive digital filter realization, distinguished by the form of
the impulse response, namely the finite-duration impulse response (FIR) filter and the
infinite-duration impulse response (IIR) filters. FIR filters are usually implemented with nonrecursive structures, whereas IIR filters utilize recursive realizations.
Adaptive FIR filter realizations: The most widely used adaptive FIR filter structure
is the transversal filter, also called tapped delay line, that implements an all-zero
transfer function with a canonic direct form realization without feedback. For this
realization, the output signal y(n) is a linear combination of the filter coefficients, that

yields a quadratic mean-square error (MSE = E[|e(n)|²]) function with a unique
optimal solution. Other alternative adaptive FIR realizations are also used in order to
obtain improvements as compared to the transversal filter structure, in terms of
computational complexity, speed of convergence and finite word-length properties.
Adaptive IIR filter realizations: The most widely used realization of adaptive IIR
filters is the canonic direct form realization [42], due to its simple implementation and
analysis. However, there are some inherent problems related to recursive adaptive
filters which are structure dependent such as pole-stability monitoring requirement
and slow speed of convergence. To address these problems, different realizations
were proposed attempting to overcome the limitations of the direct form structure.
(c) Algorithm: The algorithm is the procedure used to adjust the adaptive filter
coefficients in order to minimize a prescribed criterion. The algorithm is determined by
defining the search method (or minimization algorithm), the objective function and the nature
of error signal. The choice of the algorithm determines several crucial aspects of the overall
adaptive process, such as existence of sub-optimal solutions, biased optimal solution and
computational complexity.

[Figure: tapped delay line producing x(n), x(n−1), …, x(n−N+1); each tap is weighted by w0, w1, …, wN−1 and the weighted taps are summed to give y(n), which is subtracted from d(n) to form e(n).]

Fig.3.2. Transversal FIR filter architecture
3.1.2 Adaptive Noise Canceller (ANC)
The goal of adaptive noise cancellation system is to reduce the noise portion and to
obtain the uncorrupted desired signal. In order to achieve this task, a reference of the noise
signal is needed. That reference is fed to the system, and it is called a reference signal x(n).
However, the reference signal is typically not the same signal as the noise portion of the
primary signal; it can vary in amplitude, phase or time. Therefore, the reference signal cannot
be simply subtracted from the primary signal to obtain the desired portion at the output.

[Figure: a signal source s(n) and a noise source form the primary input d(n); a correlated noise reference x1(n) provides the reference input x(n) to the adaptive filter, whose output y(n) is subtracted from d(n) to give the ANC output e(n).]

Fig.3.3. Block diagram for Adaptive Noise Canceller

Consider the Adaptive Noise Canceller (ANC) shown in Fig.3.3 [1]. The ANC has
two inputs: the primary input d(n), which represents the desired signal corrupted with
undesired noise and the reference input x(n), which is the undesired noise to be filtered out of
the system. The primary input therefore comprises two portions: first, the desired signal,
and second, the noise signal corrupting the desired portion of the primary signal.
The basic idea for the adaptive filter is to predict the amount of noise in the primary
signal and then subtract that noise from it. The prediction is based on filtering the reference
signal x(n), which contains a solid reference of the noise present in the primary signal. The
noise in the reference signal is filtered to compensate for the amplitude, phase and time delay
and then subtracted from the primary signal. The filtered noise represented by y(n) is the
system’s prediction of the noise portion of the primary signal and is subtracted from desired
signal d(n) resulting in a signal called error signal e(n), and it presents the output of the
system. Ideally, the resulting error signal should be only the desired portion of the primary
signal.

In practice, it is difficult to achieve this, but it is possible to significantly reduce the
amount of noise in the primary signal. This is the overall goal of the adaptive filters. This
goal is achieved by constantly changing (or adapting) the filter coefficients (weights). The
adaptation rules determine their performance and the requirements of the system used to
implement the filters.
A good example to illustrate the principles of adaptive noise cancellation is the noise
removal from the pilot’s microphone in the airplane. Due to the high environmental noise
produced by the airplane engine, the pilot’s voice in the microphone gets distorted with a
high amount of noise and is very difficult to comprehend. In order to overcome this problem,
an adaptive filter can be used. In this particular case, the desired signal is the pilot’s voice.
This signal is corrupted with the noise from the airplane’s engine. Here, the pilot’s voice and
the engine noise constitute primary signal d(n). Reference signal for the application would be
a signal containing only the engine noise, which can be easily obtained from the microphone
placed near the engine. This signal would not contain the pilot’s voice, and for this
application it is the reference signal x(n).
Adaptive filter shown in Fig.3.3 can be used for this application. The filter output y(n)
is the system’s estimate of the engine noise as received in the pilot’s microphone. This
estimate is subtracted from the primary signal (pilot’s voice plus engine noise), and at the
output of the system e(n) should contain only the pilot’s voice without any noise from the
airplane’s engine. It is not possible to subtract the engine noise from the pilot’s microphone
directly, since the engine noise received in the pilot’s microphone and the engine noise
received in the reference microphone are not the same signal. There are differences in
amplitude and time delay. Also, these differences are not fixed. They change in time with
pilot’s microphone position with respect to the airplane engine, and many other factors.
Therefore, designing a fixed filter to perform the task would not obtain the desired results.
The application requires an adaptive solution.
There are many forms of the adaptive filters and their performance depends on the
objective set forth in the design. Theoretically, the major goal of any noise cancelling system
is to reduce the undesired portion of the primary signal as much as possible, while preserving
the integrity of the desired portion of the primary signal.

As noted above, the filter produces estimate of the noise in the primary signal
adjusted for magnitude, phase and time delay. This estimate is then subtracted from the noise
corrupted primary signal to obtain the desired signal. For the filter to work well, the adaptive
algorithm has to adjust the filter coefficients such that output of the filter is a good estimate
of the noise present in the primary signal.
To determine the amount by which noise in the primary signal is reduced, the mean
squared error technique is used. The Minimum Mean Squared Error (MMSE) is defined as
[42]:
min E[(d(n) − XᵀW)²] = min E[(d(n) − y(n))²]                         (3.1)
where d is the desired signal, and X and W are the vectors of the input reference signal and
the filter coefficients respectively. This represents a measure of how well the newly
constructed filter (whose output is the product y(n) = XᵀW) estimates the noise present in
the primary signal. The goal is to reduce this error to a minimum. Therefore, the algorithms
that perform adaptive noise cancellation are constantly searching for a coefficient vector W,
which produces the minimum mean squared error.
Minimizing the mean square of the error signal minimizes the noise portion of the
primary signal but not the desired portion. To understand this principle, recall that the
primary signal is made of the desired portion and the noise portion. The filtered reference
signal y(n) is a reference of the noise portion of the primary signal and therefore is correlated
with it. However, the reference signal is not correlated with the desired portion of the primary
signal. Therefore, minimizing the mean square of the error signal minimizes only the noise
in the primary signal. This principle can be mathematically described as follows:
If we denote the desired portion of primary signal with s(n), and the noise portion of
desired signal as x1(n), it follows that d(n) = s(n) + x1(n). As shown in Fig.3.3, the output of
the system can be written as [43]:
e(n) = d(n) − y(n)                                                   (3.2)

e(n) = s(n) + x1(n) − y(n)

e(n)² = s(n)² + (x1(n) − y(n))² + 2s(n)(x1(n) − y(n))

E[e(n)²] = E[s(n)²] + E[(x1(n) − y(n))²] + 2E[s(n)(x1(n) − y(n))]    (3.3)

Since s(n) is uncorrelated with both x1(n) and y(n), as noted earlier, the
last term is equal to zero, so we have
E[e(n)²] = E[s(n)²] + E[(x1(n) − y(n))²]

min E[e(n)²] = min E[s(n)²] + min E[(x1(n) − y(n))²]                 (3.4)

and since s(n) is independent of W, we have

min E[e(n)²] = E[s(n)²] + min E[(x1(n) − y(n))²]                     (3.5)

Therefore, minimizing the error signal minimizes the mean square of the difference
between the noise portion of the primary signal x1(n) and the filter output y(n).
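This decomposition is easy to verify numerically. The sketch below uses hypothetical Gaussian signals (illustrative choices, not data from the thesis): s(n) is the desired portion, x1(n) an uncorrelated noise portion, and y(n) a filter output correlated only with x1(n); the cross term then vanishes and E[e²] splits exactly as in Eqs. (3.3)–(3.5).

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000
s  = rng.standard_normal(M)          # desired portion s(n)
x1 = 0.8 * rng.standard_normal(M)    # noise portion x1(n), uncorrelated with s(n)
y  = 0.5 * x1                        # a filter output correlated with x1(n) only

e = s + x1 - y                       # e(n) = d(n) - y(n), with d(n) = s(n) + x1(n)

lhs = np.mean(e ** 2)                            # sample estimate of E[e(n)^2]
rhs = np.mean(s ** 2) + np.mean((x1 - y) ** 2)   # E[s^2] + E[(x1 - y)^2]
# the cross term 2*E[s*(x1 - y)] is (statistically) zero, so lhs ≈ rhs
assert abs(lhs - rhs) < 0.01
```

Only the E[(x1 − y)²] term depends on the filter, which is why minimizing E[e²] shrinks the noise residual while leaving E[s²] untouched.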

3.2

Approaches to Adaptive Filtering Algorithms
Basically two approaches can be defined for deriving the recursive formula for the
operation of adaptive filters. They are as follows:
(i) Stochastic Gradient Approach: In this approach to develop a recursive algorithm for

updating the tap weights of the adaptive transversal filter, the process is carried out in two
stages. First we use an iterative procedure to find the optimum Wiener solution [43]. The
iterative procedure is based on the method of steepest descent. This method requires the
use of a gradient vector, the value of which depends on two parameters: the correlation
matrix of the tap inputs in the transversal filter and the cross-correlation vector between
the desired response and the same tap inputs. Secondly, instantaneous values for these
correlations are used to derive an estimate for the gradient vector. Least Mean Squared
(LMS) and Normalized Least Mean Squared (NLMS) algorithms lie under this approach
and are discussed in subsequent sections.
(ii) Least Square Estimation: This approach is based on the method of least squares.

According to this method, a cost function is minimized that is defined as the sum of
weighted error squares, where the error is the difference between some desired response
and actual filter output. This method is formulated with block estimation in mind. In
block estimation, the input data stream is arranged in the form of blocks of equal length
(duration) and the filtering of input data proceeds on a block by block basis, which
requires a large memory for computation. The Recursive Least Square (RLS) algorithm

falls under this approach and is discussed in a subsequent section.

3.2.1

Least Mean Square (LMS) Algorithm
The Least Mean Square (LMS) algorithm [1] was first developed by Widrow and
Hoff in 1959 through their studies of pattern recognition [42]. Since then it has become one of
the most widely used algorithms in adaptive filtering. The LMS algorithm is a type of adaptive
filter known as stochastic gradient-based algorithm as it utilizes the gradient vector of the
filter tap weights to converge on the optimal wiener solution. It is well known and widely
used due to its computational simplicity. With each iteration of the LMS algorithm, the filter
tap weights of the adaptive filter are updated according to the following formula:

w(n + 1) = w(n) + 2μe(n)x(n)                                         (3.6)

where x(n) is the input vector of time-delayed input values, given by

x(n) = [x(n) x(n − 1) x(n − 2) … x(n − N + 1)]ᵀ                      (3.7)

w(n) = [w0(n) w1(n) w2(n) … wN−1(n)]ᵀ represents the coefficients of the adaptive FIR
filter tap weight vector at time n, and μ is known as the step size parameter, a small
positive constant.
The step size parameter controls the influence of the updating factor. Selection of a
suitable value for μ is imperative to the performance of the LMS algorithm. If the value of μ
is too small, the time an adaptive filter takes to converge on the optimal solution will be too
long; if the value of μ is too large the adaptive filter becomes unstable and its output diverges
[14], [15], [22].
3.2.1.1 Derivation of the LMS Algorithm

The derivation of the LMS algorithm builds upon the theory of the wiener solution for
the optimal filter tap weights, w0, as outlined above. It also depends on the steepest descent
algorithm that gives a formula which updates the filter coefficients using the current tap
weight vector and the current gradient of the cost function with respect to the filter tap weight
coefficient vector, ξ(n).
w(n + 1) = w(n) − μ∇ξ(n)                                             (3.8)

where

ξ(n) = E[e²(n)]                                                      (3.9)

As the negative gradient vector points in the direction of steepest descent for the N
dimensional quadratic cost function each recursion shifts the value of the filter coefficients
closer towards their optimum value which corresponds to the minimum achievable value of
the cost function, ξ(n). The LMS algorithm is a random-process implementation of the
steepest descent algorithm of Eq. (3.8). Here the expectation of the error signal is not
known, so the instantaneous value is used as an estimate. The gradient of the cost function,
ξ(n), can alternatively be expressed in the following form:
∇ξ(n) = ∇(e²(n))
      = ∂e²(n)/∂w
      = 2e(n) ∂e(n)/∂w
      = 2e(n) ∂[d(n) − y(n)]/∂w
      = −2e(n) ∂[wᵀ(n)x(n)]/∂w
      = −2e(n)x(n)                                                   (3.10)

Substituting this into the steepest descent algorithm of Eq. (3.8), we arrive at the
recursion for the LMS adaptive algorithm:

w(n + 1) = w(n) + 2μe(n)x(n)                                         (3.11)

3.2.1.2 Implementation of the LMS Algorithm

The implementation of each iteration of the LMS algorithm requires three distinct
steps in the following order:
1. The output of the FIR filter, y(n), is calculated using Eq. (3.12):

y(n) = Σ_{i=0}^{N−1} w_i(n) x(n − i) = wᵀ(n)x(n)                     (3.12)

2. The value of the error estimation is calculated using Eq. (3.13).

e(n) = d(n) − y(n)                                                   (3.13)

3. The tap weights of the FIR vector are updated in preparation for the next iteration, by
Eq. (3.14):

w(n + 1) = w(n) + 2μe(n)x(n)                                         (3.14)

The main reason for the popularity of the LMS algorithm in adaptive filtering is its
computational simplicity, which makes its implementation easier than that of all other
commonly used adaptive algorithms. For each iteration, the LMS algorithm requires 2N
additions and 2N+1 multiplications (N for calculating the output y(n), one for 2μe(n), and
an additional N for the scalar-by-vector multiplication).
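The three steps above can be collected into a single-iteration routine. The following Python/NumPy sketch is illustrative only (the thesis implementations are in MATLAB and C); the comments note where the N and N+1 multiplications of the operation count arise.

```python
import numpy as np

def lms_iteration(w, x_n, d_n, mu):
    """One LMS iteration following Eqs. (3.12)-(3.14).

    w   : current tap-weight vector w(n), length N
    x_n : tap-input vector [x(n), x(n-1), ..., x(n-N+1)]
    d_n : desired sample d(n)
    mu  : step size
    """
    y_n = w @ x_n                     # step 1, Eq. (3.12): N multiplications
    e_n = d_n - y_n                   # step 2, Eq. (3.13)
    w_next = w + 2 * mu * e_n * x_n   # step 3, Eq. (3.14): N + 1 multiplications
    return w_next, y_n, e_n
```

Calling this once per sample and feeding the returned weights back in reproduces the full LMS recursion.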

3.2.2

Normalized Least Mean Square (NLMS) Algorithm
In the standard LMS algorithm, when the convergence factor μ is large the algorithm
experiences a gradient noise amplification problem. In order to solve this difficulty we can
experiences a gradient noise amplification problem. In order to solve this difficulty we can
use the NLMS algorithm [14]-[17]. The correction applied to the weight vector w(n) at
iteration n+1 is “normalized” with respect to the squared Euclidian norm of the input vector
x(n) at iteration n. We may view the NLMS algorithm as a time-varying step-size algorithm,
calculating the convergence factor μ as in Eq. (3.15)[10].

μ(n) = α / (c + ‖x(n)‖²)                                             (3.15)

where α is the NLMS adaptation constant, which optimizes the convergence rate of the
algorithm and should satisfy the condition 0 < α < 2, and c is the constant term for
normalization, which is always less than 1.
The filter weights are updated by Eq. (3.16):

w(n + 1) = w(n) + [α / (c + ‖x(n)‖²)] e(n)x(n)                       (3.16)

It is important to note that given an input data (at time n) represented by the input
vector x(n) and desired response d(n), the NLMS algorithm updates the weight vector in such
a way that the value w(n+1) computed at time n+1 exhibits the minimum change with respect

to the known value w(n) at time n. Hence, the NLMS is a manifestation of the principle of
minimum disturbance [3].
3.2.2.1 Derivation of the NLMS Algorithm

This derivation of the normalized least mean square algorithm is based on
Farhang-Boroujeny and Diniz [43]. To derive the NLMS algorithm we consider the standard LMS
recursion, in which we select a variable step size parameter, μ(n). This parameter is selected
so that the error value, e⁺(n), will be minimized using the updated filter tap weights, w(n+1),
and the current input vector, x(n).

w(n + 1) = w(n) + 2μ(n)e(n)x(n)

e⁺(n) = d(n) − wᵀ(n + 1)x(n)
      = (1 − 2μ(n)xᵀ(n)x(n)) e(n)                                    (3.17)

Next we minimize (e+(n))2, with respect to μ(n). Using this we can then find a value
for µ(n) which forces e+(n) to zero.

μ(n) = 1 / (2xᵀ(n)x(n))                                              (3.18)

This μ(n) is then substituted into the standard LMS recursion replacing μ, resulting in
the following:

w(n + 1) = w(n) + 2μ(n)e(n)x(n)

w(n + 1) = w(n) + [1 / (xᵀ(n)x(n))] e(n)x(n)                         (3.19)

w(n + 1) = w(n) + μ(n)e(n)x(n),  where μ(n) = α / (xᵀ(n)x(n) + c)    (3.20)

The NLMS algorithm as expressed in Eq. (3.20) is a slight modification of the
standard NLMS recursion detailed above. Here the value of c is a small positive constant
included in order to avoid division by zero when the values of the input vector are zero. This was not
implemented in real time, as in practice the input signal is never allowed to reach zero due
to noise from the microphone and from the ADC on the Texas Instruments DSK. The
parameter α is a constant step size value used to alter the convergence rate of the NLMS
algorithm; it lies within the range 0 < α < 2, usually being equal to 1.

3.2.2.2 Implementation of the NLMS Algorithm

The NLMS algorithm is implemented in MATLAB as outlined later in Chapter 6. It is
essentially an improvement over the LMS algorithm, with the added calculation of the step size
parameter for each iteration.
1. The output of the adaptive filter is calculated as:
y(n) = Σ_{i=0}^{N−1} w_i(n) x(n − i) = wᵀ(n)x(n)                     (3.21)

2. The error signal is calculated as the difference between the desired output and the filter
output given by:
e(n) = d(n) − y(n)                                                   (3.22)

3. The step size and the filter tap weight vector are updated using the following equations in
preparation for the next iteration:

μ(n) = α / (c + ‖x(n)‖²)                                             (3.23)

w(n + 1) = w(n) + μ(n)e(n)x(n)                                       (3.24)

where α is the NLMS adaptation constant and c is the constant term for normalization.
With α = 0.02 and c = 0.001, each iteration of the NLMS algorithm requires 3N + 1
multiplication operations.
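Steps 1–3, Eqs. (3.21)–(3.24), can be sketched compactly. The Python/NumPy code below is illustrative only (the thesis uses MATLAB and C), and the default α and c are arbitrary demonstration values rather than the α = 0.02, c = 0.001 setting quoted above.

```python
import numpy as np

def nlms(x, d, N=4, alpha=0.5, c=1e-3):
    """NLMS adaptive filter following Eqs. (3.21)-(3.24).

    alpha : adaptation constant, 0 < alpha < 2
    c     : small positive constant preventing division by zero
    """
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]   # [x(n), ..., x(n-N+1)]
        y = w @ x_n                      # Eq. (3.21): filter output
        e[n] = d[n] - y                  # Eq. (3.22): error signal
        mu = alpha / (c + x_n @ x_n)     # Eq. (3.23): normalized step size
        w = w + mu * e[n] * x_n          # Eq. (3.24): weight update
    return w, e
```

Because μ(n) is scaled by the instantaneous input power, the same α works across inputs of very different levels, which is the advantage over fixed-step LMS.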

3.2.3

Recursive Least Square (RLS) Algorithm
The other class of adaptive filtering techniques studied in this thesis is known as the
Recursive Least Squares (RLS) algorithm [42]-[44]. This algorithm attempts to minimize the
cost function in Eq. (3.25) where k=1 is the time at which the RLS algorithm commences and
λ is a small positive constant very close to, but smaller than 1. With values of λ<1, more
importance is given to the most recent error estimates and thus the more recent input samples,
that results in a scheme which emphasizes on recent samples of observed data and tends to
forget the past values.

ξ(n) = Σ_{k=1}^{n} λ^(n−k) e_n²(k)                                   (3.25)

Unlike the LMS algorithm and NLMS algorithm, the RLS algorithm directly
considers the values of previous error estimations. RLS algorithm is known for excellent
performance when working in time varying environments. These advantages come at the cost
of an increased computational complexity and some stability problems.
3.2.3.1 Derivation of the RLS Algorithm

The RLS cost function of Eq. (3.25) shows that at time n, all previous values of the
estimation error since the commencement of the RLS algorithm are required. Clearly, as time
progresses the amount of data required to process this algorithm increases. Limited memory
and computation capabilities make the RLS algorithm a practical impossibility in its purest
form. However, the derivation still assumes that all data values are processed. In practice
only a finite number of previous values are considered, this number corresponds to the order
of the RLS FIR filter, N.
First we define yn(k) as the output of the FIR filter, at n, using the current tap weight
vector and the input vector of a previous time k. The estimation error value en(k) is the
difference between the desired output value at time k and the corresponding value of yn(k).
These and other appropriate definitions are expressed below, for k = 1, 2, 3, …, n.
y_n(k) = wᵀ(n)x(k)
e_n(k) = d(k) − y_n(k)
d(n) = [d(1), d(2), …, d(n)]ᵀ
y(n) = [y_n(1), y_n(2), …, y_n(n)]ᵀ
e(n) = [e_n(1), e_n(2), …, e_n(n)]ᵀ
e(n) = d(n) − y(n)                                                   (3.26)

If we define X(n) as the matrix consisting of the n previous input column vectors up to
the present time, then y(n) can also be expressed as in Eq. (3.27):

X(n) = [x(1), x(2), …, x(n)]

y(n) = Xᵀ(n)w(n)                                                     (3.27)

The cost function can be expressed in matrix vector form using a diagonal matrix,
Λ(n) consisting of the weighting factors.
ξ(n) = Σ_{k=1}^{n} λ^(n−k) e_n²(k) = eᵀ(n)Λ̃(n)e(n)

where Λ̃(n) = diag(λ^(n−1), λ^(n−2), λ^(n−3), …, 1)                   (3.28)

Substituting values from Eq. (3.26) and (3.27), the cost function can be expanded and
then reduced as in Eq. (3.29). (Temporarily dropping (n) notation for clarity).

ξ(n) = eᵀ(n)Λ̃(n)e(n)
     = dᵀΛ̃d − dᵀΛ̃y − yᵀΛ̃d + yᵀΛ̃y
     = dᵀΛ̃d − dᵀΛ̃(Xᵀw) − (Xᵀw)ᵀΛ̃d + (Xᵀw)ᵀΛ̃(Xᵀw)
     = dᵀΛ̃d − 2θ̃λᵀw + wᵀψ̃λw                                          (3.29)

where

ψ̃λ = X(n)Λ̃(n)Xᵀ(n)
θ̃λ = X(n)Λ̃(n)d(n)
We derive the gradient of the above expression for the cost function with respect to
the filter tap weights. By forcing this to zero we find the coefficients for the filter w(n), which
minimizes the cost function.
ψ̃λ(n)w(n) = θ̃λ(n)

w(n) = ψ̃λ⁻¹(n)θ̃λ(n)                                                 (3.30)

The matrix ψ̃λ(n) in the above equation can be expanded and rearranged in recursive
form. We can use the special form of the matrix inversion lemma to find an inverse for this
matrix, which is required to calculate the tap weight vector update. The vector k(n) is known
as the gain vector and is included in order to simplify the calculation.

    Ψ_λ(n) = λ Ψ_λ(n−1) + x(n) x^T(n)

    Ψ_λ^{−1}(n) = λ^{−1} Ψ_λ^{−1}(n−1)
                − [λ^{−2} Ψ_λ^{−1}(n−1) x(n) x^T(n) Ψ_λ^{−1}(n−1)] / [1 + λ^{−1} x^T(n) Ψ_λ^{−1}(n−1) x(n)]
                = λ^{−1} (Ψ_λ^{−1}(n−1) − k(n) x^T(n) Ψ_λ^{−1}(n−1))

    where k(n) = [λ^{−1} Ψ_λ^{−1}(n−1) x(n)] / [1 + λ^{−1} x^T(n) Ψ_λ^{−1}(n−1) x(n)]
               = Ψ_λ^{−1}(n) x(n)                              (3.31)

The vector θ_λ(n) of Eq. (3.29) can also be expressed in recursive form. Using this
and substituting Ψ_λ^{−1}(n) from Eq. (3.31) into Eq. (3.30), we finally arrive at the filter
weight update equation for the RLS algorithm, as in Eq. (3.32).
    θ_λ(n) = λ θ_λ(n−1) + x(n) d(n)

    w(n) = Ψ_λ^{−1}(n) θ_λ(n)
         = Ψ_λ^{−1}(n−1) θ_λ(n−1) − k(n) x^T(n) Ψ_λ^{−1}(n−1) θ_λ(n−1) + k(n) d(n)
         = w(n−1) − k(n) x^T(n) w(n−1) + k(n) d(n)
         = w(n−1) + k(n) (d(n) − w^T(n−1) x(n))
    w(n) = w(n−1) + k(n) e_{n−1}(n)                            (3.32)

    where e_{n−1}(n) = d(n) − w^T(n−1) x(n)
3.2.3.2 Implementation of the RLS Algorithm:

As stated previously, the memory of the RLS algorithm is confined to a finite number
of values corresponding to the order of the filter tap weight vector. Two aspects of the RLS
implementation must be noted. First, although matrix inversion is essential for the derivation
of the RLS algorithm, no matrix inversion calculations are required for the implementation,
which greatly reduces the computational complexity of the algorithm. Secondly, unlike the
LMS-based algorithms, current variables are updated within the iteration in which they are to
be used, using values from the previous iteration.
To implement the RLS algorithm, the following steps are executed in order:
1. The filter output is calculated using the filter tap weights from the previous iteration and
the current input vector.

    y_{n−1}(n) = w^T(n−1) x(n)                                 (3.33)

2. The intermediate gain vector is calculated using Eq. (3.34).

    u(n) = Ψ_λ^{−1}(n−1) x(n)
    k(n) = u(n) / (λ + x^T(n) u(n))                            (3.34)

3. The estimation error value is calculated using Eq. (3.35).

    e_{n−1}(n) = d(n) − y_{n−1}(n)                             (3.35)

4. The filter tap weight vector is updated using Eq. (3.36) and the gain vector calculated
in Eq. (3.34).

    w(n) = w(n−1) + k(n) e_{n−1}(n)                            (3.36)

5. The inverse matrix is updated using Eq. (3.37).

    Ψ_λ^{−1}(n) = λ^{−1} [Ψ_λ^{−1}(n−1) − k(n) x^T(n) Ψ_λ^{−1}(n−1)]   (3.37)

Each iteration of the RLS algorithm requires 4N² multiplication and 3N² addition
operations.
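The five steps above map one-to-one onto a few lines of code. The following Python sketch (an illustration written for this summary, not the thesis MATLAB program; here P stands for Ψ_λ^{−1} and all names are my own) performs one complete RLS iteration:

```python
def rls_iteration(w, P, x, d, lam=0.99):
    """One RLS iteration. w: tap weights (length N), P: inverse weighted
    correlation matrix (N x N list of lists), x: current input vector,
    d: desired sample, lam: forgetting factor."""
    N = len(w)
    # Step 1: filter output with previous weights, y = w^T x   (3.33)
    y = sum(w[i] * x[i] for i in range(N))
    # Step 2: intermediate gain vector, u = P x ; k = u/(lam + x^T u)  (3.34)
    u = [sum(P[i][j] * x[j] for j in range(N)) for i in range(N)]
    denom = lam + sum(x[i] * u[i] for i in range(N))
    k = [u[i] / denom for i in range(N)]
    # Step 3: a priori estimation error   (3.35)
    e = d - y
    # Step 4: tap weight update, w = w + k e   (3.36)
    w = [w[i] + k[i] * e for i in range(N)]
    # Step 5: inverse matrix update, P = (P - k x^T P) / lam   (3.37)
    xTP = [sum(x[i] * P[i][j] for i in range(N)) for j in range(N)]
    P = [[(P[i][j] - k[i] * xTP[j]) / lam for j in range(N)]
         for i in range(N)]
    return w, P, e
```

Starting from w = 0 and P = δ⁻¹I (δ a small positive constant), repeating this call once per sample drives the a priori error toward zero without ever inverting a matrix explicitly.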

3.3

Adaptive Filtering using MATLAB

MATLAB, an acronym for Matrix Laboratory, was originally designed to serve as
the interactive link to the numerical computation libraries LINPACK and EISPACK that
were used by engineers and scientists when they were dealing with sets of equations. The
MATLAB software was originally developed at the University of New Mexico and Stanford
University in the late 1970s. By 1984, Jack Little and Cleve Moler had established a company
named MathWorks with the clear objective of commercializing MATLAB. Over a million
engineers and scientists use MATLAB today in well over 3000 universities worldwide and it
is considered a standard tool in education, business, and industry.
The basic element in MATLAB is the matrix and unlike other computer languages it
does not have to be dimensioned or declared. MATLAB’s main objective was to solve
mathematical problems in linear algebra, numerical analysis, and optimization but it quickly
evolved as the preferred tool for data analysis, statistics, signal processing, control systems,
economics, weather forecast, and many other applications. Over the years, MATLAB has
evolved creating an extended library of specialized built-in functions that are used to generate
among other things two-dimensional (2-D) and 3-D graphics and animation and offers

numerous supplemental packages called toolboxes that provide additional software power in
special areas of interest, such as:
• Curve fitting
• Optimization
• Signal processing
• Image processing
• Filter design
• Neural network design
• Control systems

Fig.3.4. MATLAB versatility diagram

MATLAB is an intuitive language and offers a technical computing environment.
It provides core mathematics and advanced graphical tools for data analysis, visualization,
and algorithm and application development. MATLAB is becoming a standard in industry,
education, and business because its environment is user-friendly and the objective of the
software is to let the user spend time learning the physical and mathematical principles of a
problem, not the software itself. The term friendly is used in the following sense: the
MATLAB software executes one instruction at a time. By analyzing the partial results, new
instructions can be executed that interact with the information already stored in computer
memory, without the formal compiling required by other competing high-level computer
languages.

Major Software Characteristics:
i. Matrix-based numeric computation.
ii. High-level programming language.
iii. Toolboxes provide application-specific functionality.
iv. Multiple platform support.
v. Open and extensible system architecture.
vi. Interfaces to other languages (C, FORTRAN, etc.).
For the simulation of the algorithms discussed in sec. 3.2, MATLAB Version
7.4.0.287 (R2007a) software is used. In the experimental setup, first of all, high-level
MATLAB programs [5],[20] are written for the LMS, NLMS and RLS algorithms as per the
implementation steps described in sec. 3.2.1.2, sec. 3.2.2.2 and sec. 3.2.3.2 respectively [44].
Then the above algorithms are simulated with a noisy tone signal generated through
MATLAB commands (refer sec. 6.1). The inputs to the programs are the tone signal as
primary input s(n), a random noise signal as reference input x(n), the filter order (N), the
step-size value (µ) and the number of iterations (refer Fig. 6.1), whereas the outputs are the
filtered output and the MSE, which can be seen in the graphical results obtained after the
simulation is over (refer Fig. 6.2).
The output results for the MATLAB simulation of the LMS, NLMS and RLS
algorithms are presented and discussed later in chapter 6.
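For illustration, the experiment described above can be rendered as the following Python sketch (my own rendering, not the thesis MATLAB code; the tone frequency, noise path coefficients and step size are arbitrary choices). It builds a noisy tone, runs an LMS adaptive noise canceller with the reference noise as filter input, and computes the MSE of the cleaned output:

```python
import math
import random

def lms_anc(primary, reference, N=8, mu=0.005):
    """LMS adaptive noise canceller: the filter predicts the noise
    component of the primary input from the reference input; the
    error e(n) is the cleaned output signal."""
    w = [0.0] * N           # filter tap weights
    x = [0.0] * N           # reference tap-delay line
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n]] + x[:-1]
        y = sum(w[i] * x[i] for i in range(N))     # noise estimate
        e = primary[n] - y                         # cleaned sample
        w = [w[i] + 2.0 * mu * e * x[i] for i in range(N)]
        cleaned.append(e)
    return cleaned

random.seed(1)
n_samples = 4000
tone = [math.sin(2.0 * math.pi * 0.05 * n) for n in range(n_samples)]
noise = [random.uniform(-1.0, 1.0) for _ in range(n_samples)]
# Noise reaching the primary sensor is a short FIR-filtered copy of the reference
primary = [tone[n] + 0.9 * noise[n] - 0.4 * (noise[n - 1] if n > 0 else 0.0)
           for n in range(n_samples)]
cleaned = lms_anc(primary, noise)
# MSE over the second half, after the filter has converged
mse = sum((cleaned[n] - tone[n]) ** 2 for n in range(2000, n_samples)) / 2000.0
```

After convergence the residual MSE falls well below the injected noise power; replacing the weight update with the NLMS or RLS recursion changes only the inner loop.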

Chapter-4

SIMULINK MODEL DESIGN FOR HARDWARE IMPLEMENTATION
4.1

Introduction to Simulink

Simulink is a software package for modeling, simulating and analyzing dynamic
systems [46]. It supports linear and nonlinear systems modeled in continuous time, sampled
time, or a hybrid of the two. Systems can also be multirate, i.e. have different parts that are
sampled or updated at different rates. For modeling, simulink provides a graphical user
interface (GUI) for building models as block diagrams, using click-and-drag mouse
operations. With this interface, we can draw the models just as we would with pencil and
paper (or as most textbooks depict them). Simulink includes a comprehensive block library of
sinks, sources, linear and nonlinear components, and connectors. We can also customize and
create our own blocks.
Models are hierarchical, so we can build models using both top-down and bottom-up
approaches. We can view the system at a high level and then double-click blocks to go down
through the levels and thus visualize the model details. This approach provides insight into
how a model is organized and how its parts interact. After we define a model, we can
simulate it using a choice of integration methods either from the simulink menu or by
entering commands in the MATLAB command window.
In simulink, the menu is particularly convenient for interactive work. The command
line approach is very useful for running a batch of simulations (for example, if we want to
sweep a parameter across a range of values). Using scopes and other display blocks, we can
see the simulation results while the simulation is running. In addition, we can change many
parameters and see what happens. The simulation results can be put in the MATLAB
workspace for post processing and visualization.
The simulink model can be applied for modeling various time-varying systems that
include control systems, signal processing systems, video processing systems, image
processing systems, communication and satellite systems, ship systems, automotive systems,
monetary systems, aircraft & spacecraft dynamics systems, and biological systems as
illustrated in Fig.4.1.

Fig.4.1. Simulink Applications

4.2

Model Design

In the experimental setup for noise cancellation, the simulink toolbox has been used,
which provides the capability to model a system and to analyze its behavior. Its library is
enriched with various functions which mimic real systems. The designed model for
Adaptive Noise Cancellation (ANC) using simulink toolbox is shown in Fig.4.2.

4.2.1 Common Blocks used in Building Model
4.2.1.1 C6713 DSK ADC Block

This block is used to capture and digitize analog signals from external sources such as
signal generators, frequency generators or audio devices. Dragging and dropping C6713 DSK
ADC block in simulink block diagram allows audio coder-decoder module (codec) on the
C6713 DSK to convert an analog input signal to a digital signal for the digital signal
processing. Most of the configuration options in the block affect the codec. However, the
output data type, samples per frame and scaling options are related to the model that we are
using in simulink.

Fig.4.2. Adaptive Noise Cancellation Simulink model

4.2.1.2 C6713 DSK DAC Block

Simulink model provides the means to generate output of an analog signal through the
analog output jack on the C6713 DSK. When C6713 DSK DAC block is added to the model,
the digital signal received by the codec is converted to an analog signal. Codec sends signal
to the output jack after converting the digital signal to analog form using digital-to-analog
conversion (D/A).
4.2.1.3 C6713 DSK Target Preferences Block

This block provides access to the processor hardware settings that need to be
configured for generating the code from Real-Time Workshop (RTW) to run on the target. It
is mandatory to add this block to the simulink model for the embedded target C6713. This
block is located in the Target Preferences in Embedded Target for TI C6000 DSP for TI DSP
library.
4.2.1.4 C6713 DSK Reset Block

This block is used to reset the C6713 DSK to initial conditions from the simulink
model. Double-clicking this block in a simulink model window resets the C6713 DSK that is
running the executable code built from the model. When we double-click the Reset block, the

block runs the software reset function provided by CCS that resets the processor on C6713
DSK. Applications running on the board stop and the signal processor returns to the initial
conditions that we defined.
4.2.1.5 NLMS Filter Block

This block adapts the filter weights based on the NLMS algorithm for filtering the
input signal. We select the adapt port check box to create an adapt port on the block. When
the input to this port is nonzero, the block continuously updates the filter weights. When the
input to this port is zero, the filter weights remain constant. If the reset port is enabled and a
reset event occurs, the block resets the filter weights to their initial values.
4.2.1.6 C6713 DSK LED Block

This block triggers the user LEDs located on the C6713 DSK. When we
add this block to a model and send a real scalar to the block input, the block sets the LED
state based on the input value it receives: When the block receives an input value equal to 0,
the LEDs are turned OFF. When the block receives a nonzero input value, the LEDs are
turned ON.
4.2.1.7 C6713 DSK DIP Switch Block

Outputs state of user switches located on C6713 DSK board. In boolean mode,
output is a vector of 4 Boolean values, with the least-significant bit (LSB) first. In Integer
mode, output is an integer from 0 to 15. For simulation, checkboxes in the block dialog are
used in place of the physical switches.

4.2.2 Building the Model
To create the model, first type simulink in the MATLAB command window or
directly click on the shortcut icon. On Microsoft Windows, the simulink library browser
appears as shown in Fig. 4.3.

Fig.4.3. Simulink library browser

To create a new model, select Model from the New submenu of the simulink library
window's File menu. To create a new model on Windows, select the New Model button on
the Library Browser's toolbar.
Simulink opens a new model window like Fig. 4.4.

Fig.4.4. Blank new model window

To create Adaptive Noise Cancellation (ANC) model, we will need to copy blocks
into the model from the following simulink block libraries:
• Target for TI C6700 library (ADC, DAC, DIP, and LED blocks)
• Signal processing library (NLMS filter block)
• Commonly used blocks library (Constant block, Switch block and Relational block)
• Discrete library (Delay block)
To copy the ADC block from the Library Browser, first expand the Library Browser
tree to display the blocks in the Target for TI C6700 library. Do this by clicking on the library
node to display the library blocks. Then select the C6713 DSK board support sub library and
finally, click on the respective block to select it.
Now drag the ADC block from the browser and drop it in the model window.
Simulink creates a copy of the blocks at the point where you dropped the node icon as
illustrated in Fig.4.5.

Fig.4.5. Model window with ADC block

Copy the rest of the blocks in a similar manner from their respective libraries into the
model window. We can move a block from one place to another place by dragging the block
in the model window. We can move a block a short distance by selecting the block and then
pressing the arrow keys. With all the blocks copied into the model window, the model should
look something like Fig.4.6.
If we examine the block icons, we see an angle bracket on the right of the ADC block
and two on the left of the NLMS filter block. The > symbol pointing out of a block is an
output port; if the symbol points to a block, it is an input port. A signal travels out of an
output port and into an input port of another block through a connecting line. When the
blocks are connected, the port symbols disappear.
Now it's time to connect the blocks. Position the pointer over the output port on the
right side of the ADC block and connect it to the input port of delay, NLMS filter and switch
block. Similarly make all connection as in Fig.4.2.

4.3

Model Reconfiguration

Once the model is designed, we have to reconfigure the model as per the requirement
of the desired application. The simulink block parameters are adjusted as per the input/output
devices used. The input device may be a function generator or a microphone, and the output
device may be a DSO or headphones respectively. This section explains and illustrates
the reconfiguration settings of each simulink block, such as the ADC, DAC, adaptive filter, DIP,

LED, relational operator, switch block, and all that are used in the design of adaptive noise
canceller.

Fig.4.6. Model illustration before connections

4.3.1 The ADC Settings
This block can be reconfigured to receive the input either from microphone or
function generator. Input is applied through microphone when ADC source is kept at “Mic
In” and through function generator when ADC source is kept at “Line In” as shown in
Fig.4.7. The other settings are as follows:
Double-click on the blue box to the left marked “DSK6713 ADC”.
The screen as shown in Fig.4.7 will appear.
Change the “ADC source” to “Line In” or “Mic In”.
If we have a quiet microphone, select “+20dB Mic gain boost”.
Set the “Sampling rate (Hz)” to “48 kHz”.
Set the “Samples per frame” to 64.
When done, click on “OK”.
Important: Make sure the “Stereo” box is empty.

4.3.2 The DAC Settings
The DAC setting needs to be matched to those of the ADC. The major parameter is
the sampling rate that is kept at the same rate of ADC i.e. 48 kHz as shown in Fig.4.8.

Fig.4.7. Setting up the ADC for mono microphone input

Fig.4.8. Setting the DAC parameters

4.3.3 NLMS Filter Parameters Settings
The most critical variable in an NLMS filter is the initial setup of “Step size (mu)”. If
“mu” is too small, the filter has very fine resolution but reacts too slowly to the input signal.
If “mu” is too large, the filter reacts very quickly but the error also remains large. The major
parameter values that we have to change for the designed model are (shown in Fig.4.9): Step
size (mu) = 0.001, Filter length = 19.
Select the Adapt port check box to create an Adapt port on the block. When the input
to this port is nonzero, the block continuously updates the filter weights. When the input to
this port is zero, the filter weights remain constant.
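The normalized update that the block carries out can be sketched as follows (an illustrative Python fragment of the NLMS recursion, not MathWorks code; eps is a small constant I added to guard against division by zero):

```python
def nlms_update(w, x, d, mu=0.001, eps=1e-10):
    """One NLMS iteration: filter output y, error e, and the weight
    update w <- w + (mu / (eps + ||x||^2)) * e * x.  Normalizing the
    step by the input power makes the effective adaptation speed
    independent of the signal level, unlike plain LMS."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    power = eps + sum(xi * xi for xi in x)
    w = [wi + (mu / power) * e * xi for wi, xi in zip(w, x)]
    return w, y, e
```

With mu = 0.001 each call corrects only a 0.1% fraction of the normalized error, which is the slow-but-fine behaviour described above; the Adapt port simply gates whether this update is executed at all.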

Fig.4.9. Setting the NLMS filter parameters

4.3.4 Delay Parameters Settings
Delay parameter is required to delay the discrete-time input signal by a specified
number of samples or frames. Because we are working with frames of 64 samples, it is
convenient to configure the delay using frames. The steps for setting are described below and
are illustrated in Fig. 4.10.
Double-click on the “Delay” block.
Change the “Delay units” to Frames.
Set the “Delay (frames)” to 1. This makes the delay 64 samples.

Fig.4.10. Setting the delay unit

4.3.5 DIP Switches Settings
DIP switches are manual electric switches that are packaged in a group in a standard
dual in-line package (DIP). These switches can work in two modes: Boolean mode and Integer
mode. In Boolean mode, the output is a vector of 4 Boolean values with the least-significant bit
(LSB) first. In Integer mode, the output is an integer from 0 to 15. The DIP switches need to
be configured as shown in Fig. 4.11.

The “Sample time” should be set to “–1”.
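The relation between the two modes is a simple LSB-first binary weighting, which can be sketched in one line of Python (illustrative only):

```python
def dip_to_int(switches):
    """Convert the Boolean-mode reading (list of 4 switch states,
    LSB first) to the Integer-mode reading (0..15)."""
    return sum(int(bit) << i for i, bit in enumerate(switches))
```

For example, with only switch 0 on, dip_to_int([True, False, False, False]) gives 1; with all four switches on it gives 15.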

Fig.4.11. Setting up the DIP switch values

4.3.6 Constant Value Settings
The switch values lie between 0 and 15. We will use switch values 0 and 1. For
settings, Double-click on the “Constant” block. Set the “Constant value” to 1 and the
“Sample time” to “inf” as shown in Fig.4.12.

Fig.4.12. Setting the constant parameters

4.3.7 Constant Data Type Settings
The signal data type for the constant used in ANC model is set to “int16” as shown in
Fig. 4.13. The setting of parameter can be done as follows:
Click on the “Signal Data Types” tab.
Set the “Output data type mode” to “int16”.
This is compatible with the DAC on the DSK6713.

Fig.4.13. Data type conversion to 16-bit integer

4.3.8 Relational Operator Type Settings
Relational operator is used to check the given condition for the input signal. The
relational operator setting for the designed model can be done as follows:
Double click on the “Relational Operator” block.
Change the “Relational operator” to “==”.
Click on the “Signal Data Types” tab.

4.3.9 Relational Operator Data Type Settings
Set the “Output data type mode” to “Boolean”.
Click on “OK”. ( refer Fig.4.14)

Fig.4.14. Changing the output data type

4.3.10 Switch Settings
The switch used in this model has three inputs, viz. input 1, input 2 and input
3, numbered from top to bottom (refer Fig 4.2). Inputs 1 and 3 are data inputs and
input 2 is the control input. When input 2 satisfies the selection criterion, input 1 is passed to
the output port; otherwise input 3 is passed. The switch is configured as:
Double click on the “switch”.
Set the criteria for passing first input to “u2>=Threshold”.
Click “OK”.
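Taken together, the DIP switch, constant, relational operator and switch blocks implement a simple selection rule, which can be sketched as follows (a Python illustration of the control path described above, not generated code; the function name and argument order are my own):

```python
def anc_output(filtered, raw, dip_value, constant=1):
    """Mimic the model's control path: the relational operator ('==')
    compares the DIP reading with the constant; the 3-input switch
    passes input 1 (the filtered signal) when the control input
    satisfies u2 >= Threshold (Threshold = 1), otherwise input 3
    (the unprocessed signal)."""
    control = 1 if dip_value == constant else 0   # relational operator
    return filtered if control >= 1 else raw      # switch block
```

Setting the DIP switches to 1 thus routes the noise-cancelled signal to the DAC, while any other setting passes the unprocessed input through.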

The simulink model for the hardware implementation of the NLMS algorithm has been
designed successfully, and the designed model is reconfigured to meet the requirements of the
TMS320C6713 DSP processor environment. The reconfigured model shown in Fig.4.2 is
ready to connect with Code Composer Studio [50] and the DSP processor with the help of the
RTDX link and Real-Time Workshop [47]. This is presented in chapter 5.

Chapter-5

REAL-TIME IMPLEMENTATION ON DSP PROCESSOR
Digital signal processors are fast special-purpose microprocessors with a specialized
type of architecture and an instruction set appropriate for signal processing [45]. The
architecture of the digital signal processor is very well suited for numerically intensive
calculations. Digital signal processors are used for a wide range of applications which
includes communication, control, speech processing, image processing etc. These processors
have become the products of choice for a number of consumer applications, because they are
very cost-effective and can be reprogrammed easily for different applications.
DSP techniques have been very successful because of the development of low-cost
software and hardware support [48]. DSP processors are concerned primarily with real-time
signal processing. Real-time processing requires the processing to keep pace with some
external event, whereas non-real-time processing has no such timing constraint. The external
event is usually the analog input. Analog-based systems with discrete electronic components
such as resistors can be more sensitive to temperature changes whereas DSP-based systems
are less affected by environmental conditions.
In this chapter we will learn how we can realize or implement an adaptive filter on
hardware for real-time experiments. The model which was designed in previous chapter will
be linked to the DSP processor with help of Real Time Data Exchange (RTDX) utility
provided in simulink.

5.1

Introduction to Digital Signal Processor (TMS320C6713)

The TMS320C6713 DSK is a low-cost board designed to allow the user to evaluate the
capabilities of the C6713 DSP and develop C6713-based products [49]. It demonstrates how
the DSP can be interfaced with various kinds of memories, peripherals, Joint Test Action
Group (JTAG) and parallel peripheral interfaces.
The board is approximately 5 inches wide and 8 inches long as shown in Fig.5.2 and
is designed to sit on the desktop external to a host PC. It connects to the host PC through a
USB port. The processor board includes a C6713 floating-point digital signal processor and a

32-bit stereo codec TLV320AIC23 (AIC23) for input and output. The onboard codec AIC23
uses a sigma–delta technology that provides ADC and DAC. It connects to a 12-MHz system
clock. Variable sampling rates from 8 to 96 kHz can be set readily [51].
A daughter card expansion is also provided on the DSK board. Two 80-pin connectors
provide for external peripheral and external memory interfaces. The external memory
interface (EMIF) performs the task of interfacing with the other memory subsystems. Light-emitting
diodes (LEDs) and liquid-crystal displays (LCDs) are used for spectrum display.
The DSK board includes 16 MB (megabytes) of synchronous dynamic random access
memory (SDRAM) and 256 kB (kilobytes) of flash memory.
Four connectors on the board provide inputs and outputs: MIC IN for microphone
input, LINE IN for line input, LINE OUT for line output, and HEADPHONE for a
headphone output (multiplexed with line output). The status of the four user DIP switches on
the DSK board can be read from a program and provides the user with a feedback control
interface (refer Fig.5.1 & Fig.5.2). The DSK operates at 225 MHz. Also onboard are the
voltage regulators that provide 1.26 V for the C6713 core and 3.3 V for its memory and
peripherals.
The major DSK hardware features are:
A C6713 DSP operating at 225 MHz.
An AIC23 stereo codec with Line In, Line Out, MIC, and headphone stereo jacks.
16 Mbytes of synchronous DRAM (SDRAM).
512 Kbytes of non-volatile Flash memory (256 Kbytes usable in default
configuration).
Four user accessible LEDs and DIP switches.
Software board configuration through registers implemented in complex logic device.
Configurable boot options.
Expansion connectors for daughter cards.
JTAG emulation through onboard JTAG emulator with USB host interface or external
Emulator.
Single voltage power supply (+5V).

Fig.5.1. Block diagram of TMS320C6713 processor

Fig.5.2. Physical overview of the TMS320C6713 processor

5.1.1

Central Processing Unit Architecture

The CPU has a Very Large Instruction Word (VLIW) architecture [53]. The CPU
always fetches eight 32-bit instructions at once and there is a 256-bit bus to the internal
program memory. Each group of eight instructions is called a fetch packet. The CPU has
eight functional units that can operate in parallel and are equally split into two halves, A and
B. All eight units do not have to be given instruction words if they are not ready. Therefore,
instructions are dispatched to the functional units as execution packets with a variable
number of 32-bit instruction words. The functional block diagram of Texas Instrument (TI)
processor architecture is shown below in Fig.5.3.
Fig.5.3. Functional block diagram of TMS320C6713 CPU

The eight functional units include:
Four ALUs that can perform fixed- and floating-point operations (.L1, .L2, .S1, .S2).
Two ALUs that perform only fixed-point operations (.D1, .D2).
Two multipliers that can perform fixed- or floating-point multiplications (.M1, .M2).

5.1.2 General Purpose Registers Overview
The CPU has thirty two 32-bit general purpose registers split equally between the A
and B sides. The CPU has a load/store architecture in which all instructions operate on
registers. The data-addressing units .D1 and .D2 are in charge of all data transfers between the
register files and memory. The four functional units on a side freely share the 16 registers on
that side. Each side has a single data bus connected to all the registers on the other side so
that functional units on one side can access data in the registers on the other side. Access to a
register on the same side uses one clock cycle while access to a register on the other side
requires two clock cycles i.e. read and write cycle.

5.1.3 Interrupts
The C6000 CPUs contain a vectored priority interrupt controller. The highest priority
interrupt is RESET which is connected to the hardware reset pin and cannot be masked. The
next priority interrupt is the NMI which is generally used to alert the CPU of a serious
hardware problem like a power failure. Then, there are twelve lower priority maskable
interrupts INT4–INT15 with INT4 having the highest and INT15 the lowest priority.

Fig.5.4. Interrupt priority diagram

Fig. 5.5 depicts how the processor handles an interrupt when it arrives. The
interrupt handling mechanism is a vital feature of a microprocessor.

Fig.5.5. Interrupt handling procedure

These maskable interrupts can be selected from up to 32 sources (C6000 family). The
sources vary between family members. For the C6713, they include external interrupt pins
selected by the GPIO unit, and interrupts from internal peripherals such as timers, McBSP
serial ports, McASP serial ports, EDMA channels, and the host port interface. The CPUs
have a multiplexer called the interrupt selector that allows the user to select and connect
interrupt sources to INT4 through INT15. As soon as the interrupt is serviced, the processor
resumes the operation that was being processed prior to the interrupt request.

5.1.4 Audio Interface Codec
The C6713 uses a Texas AIC23 codec. In the default configuration, the codec is
connected to the two serial ports, McBSP0 and McBSP1. McBSP0 is used as a unidirectional
channel to control the codec's internal configuration registers. It should be programmed to
send a 16-bit control word to the AIC23 in SPI format. The top 7 bits of the control word
specify the register to be modified and the lower 9 bits contain the register value. Once the

codec is configured, the control channel is normally idle while audio data is being
transmitted. McBSP1 is used as the bi-directional data channel for ADC input and DAC
output samples. The codec supports a variety of sample formats. For the experiments in this
work, the codec should be configured to use 16-bit samples in two’s complement signed
format.
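The control-word layout described above can be sketched as a small packing helper (an illustration of the 7-bit-address, 9-bit-value format, not BSL code):

```python
def aic23_control_word(reg, value):
    """Pack a 16-bit AIC23 control word: the top 7 bits select the
    register to be modified, the lower 9 bits carry the register
    value."""
    if not (0 <= reg < 128 and 0 <= value < 512):
        raise ValueError("register address is 7 bits, value is 9 bits")
    return (reg << 9) | value
```

For example, writing value 0x01 to register 8 (the sample-rate register) yields the word (8 << 9) | 1 = 0x1001.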
The codec should be set to operate in master mode so as to supply the frame
synchronization and bit clocks at the correct sample rate to McBSP1. The preferred serial
format is DSP mode which is designed specifically to operate with the McBSP ports on TI
DSPs. The codec has a 12 MHz system clock, which is the same frequency used in many
USB systems. The AIC23 can divide down the 12 MHz clock frequency to provide sampling
rates of 8000 Hz, 16000 Hz, 24000 Hz, 32000 Hz, 44100 Hz, 48000 Hz, and 96000 Hz.

Fig.5.6. Audio connection illustrating control and data signal

The DSK uses two McBSPs to communicate with the AIC23 codec, one for control,
another for data. The C6713 supplies a 12 MHz clock to the AIC23 codec which is divided
down internally in the AIC23 to give the sampling rates. The codec can be set to these
sampling rates by using the function DSK6713_AIC23_setFreq (handle,freq ID) from the
BSL. This function puts the quantity “Value” into AIC23 control register 8. Some of the
AIC23 analog interface properties are:
The ADC for the line inputs has a full-scale range of 1.0 V RMS.

The microphone input is a high-impedance, low-capacitance input compatible with a
wide range of microphones.
The DAC for the line outputs has a full-scale output voltage range of 1.0 V RMS.
The stereo headphone outputs are designed to drive 16 or 32-ohm headphones.
The AIC23 has an analog bypass mode that directly connects the analog line inputs to
the analog line outputs.
The AIC23 has a side tone insertion mode where the microphone input is routed to the
line and headphone outputs.
Fig.5.7. AIC23 codec interface

5.1.5 DSP/BIOS & RTDX
The DSP/BIOS facilities utilize the Real-Time Data Exchange (RTDX) link to obtain
and monitor target data in real-time [47]. I utilized the RTDX link to create my own
customized interfaces to the DSP target by using the RTDX API Library. The RTDX
transfers data between a host computer and target devices without interfering with the target
application. This bi-directional communication path provides data collection by the host as
well as host interaction while running target application. RTDX also enables host systems to
provide data stimulation to the target application and algorithms.

Data transfer to the host occurs in real time while the target application is running. On
the host platform, an RTDX host library operates in conjunction with the Code Composer Studio
IDE. Data visualization and analysis tools communicate with RTDX through COM APIs to
obtain the target data and/or to send data to the DSP application. The host library supports
two modes of receiving data from a target application: continuous and non-continuous.
[Figure: MATLAB, together with the Embedded Target for Texas Instruments DSP and Real-Time Workshop, builds a Simulink model and downloads it to the Texas Instruments DSP, where it runs as an application on the DSP/BIOS kernel; RTDX links the running target back to Code Composer Studio (CCS) and the DSP/BIOS tools on the host.]

Fig.5.8. DSP/BIOS and RTDX

In continuous mode, the data is simply buffered by the RTDX host library and is not
written to a log file. Continuous mode should be used when the developer wants to
continuously obtain and display data from a target application and does not need to store
it in a log file.
Such an interface is made possible by RTDX, which transfers data between a host computer and
target devices without interfering with the target application. The data can be analyzed and
visualized on the host using the COM interface provided by RTDX. Clients such as Visual Basic,
Visual C++, Excel, LabVIEW, MATLAB, and others can readily use this COM interface.

 
5.2 Code Composer Studio as Integrated Development Environment
Code Composer Studio is the DSP industry's first fully integrated development
environment (IDE) [50] with DSP-specific functionality. With a familiar environment like
Microsoft Visual C++™, Code Composer lets you edit, build, debug, profile and manage projects
from a single unified environment. Other unique features include graphical signal analysis,
injection/extraction of data signals via file I/O, multi-processor debugging, automated testing,
customization via a C-interpretive scripting language, and much more.

Fig.5.9. Code Composer Studio platform
Real-time analysis can be performed using real-time data exchange (RTDX). RTDX allows
data exchange between the host PC and the target DSK, as well as analysis in real time, without
stopping the target. Key statistics and performance can be monitored in real time. Communication
with on-chip emulation support, to control and monitor program execution, occurs through the
Joint Test Action Group (JTAG) interface. The C6713 DSK board provides this JTAG interface through its USB port.

Fig.5.10. Embedded software development

 
Simulation and Hardware Implementation of NLMS Algorithm for Noise Cancellation

Más contenido relacionado

La actualidad más candente

Fault detection and test minimization methods
Fault detection and test minimization methodsFault detection and test minimization methods
Fault detection and test minimization methodspraveenkaundal
 
105926921 cmos-digital-integrated-circuits-solution-manual-1
105926921 cmos-digital-integrated-circuits-solution-manual-1105926921 cmos-digital-integrated-circuits-solution-manual-1
105926921 cmos-digital-integrated-circuits-solution-manual-1Savvas Dimopoulos
 
Synchronous and asynchronous reset
Synchronous and asynchronous resetSynchronous and asynchronous reset
Synchronous and asynchronous resetNallapati Anindra
 
Pulse modulation
Pulse modulationPulse modulation
Pulse modulationmpsrekha83
 
ECE HOD PPT NEW_04.07.23_SSK.ppt
ECE HOD PPT NEW_04.07.23_SSK.pptECE HOD PPT NEW_04.07.23_SSK.ppt
ECE HOD PPT NEW_04.07.23_SSK.pptNuthalSrinivasan1
 
Design & implementation of high speed carry select adder
Design & implementation of high speed carry select adderDesign & implementation of high speed carry select adder
Design & implementation of high speed carry select adderssingh7603
 
2019 2 testing and verification of vlsi design_verification
2019 2 testing and verification of vlsi design_verification2019 2 testing and verification of vlsi design_verification
2019 2 testing and verification of vlsi design_verificationUsha Mehta
 
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...Rahul Borthakur
 
Design and Simulation of Local Area Network Using Cisco Packet Tracer
Design and Simulation of Local Area Network Using Cisco Packet TracerDesign and Simulation of Local Area Network Using Cisco Packet Tracer
Design and Simulation of Local Area Network Using Cisco Packet TracerAbhi abhishek
 
LPC for Speech Recognition
LPC for Speech RecognitionLPC for Speech Recognition
LPC for Speech RecognitionDr. Uday Saikia
 
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...Vincent Claes
 
Implementation of Soft-core processor on FPGA (Final Presentation)
Implementation of Soft-core processor on FPGA (Final Presentation)Implementation of Soft-core processor on FPGA (Final Presentation)
Implementation of Soft-core processor on FPGA (Final Presentation)Deepak Kumar
 

La actualidad más candente (20)

Fault detection and test minimization methods
Fault detection and test minimization methodsFault detection and test minimization methods
Fault detection and test minimization methods
 
105926921 cmos-digital-integrated-circuits-solution-manual-1
105926921 cmos-digital-integrated-circuits-solution-manual-1105926921 cmos-digital-integrated-circuits-solution-manual-1
105926921 cmos-digital-integrated-circuits-solution-manual-1
 
Synchronous and asynchronous reset
Synchronous and asynchronous resetSynchronous and asynchronous reset
Synchronous and asynchronous reset
 
Pulse modulation
Pulse modulationPulse modulation
Pulse modulation
 
ECE HOD PPT NEW_04.07.23_SSK.ppt
ECE HOD PPT NEW_04.07.23_SSK.pptECE HOD PPT NEW_04.07.23_SSK.ppt
ECE HOD PPT NEW_04.07.23_SSK.ppt
 
8 Bit ALU
8 Bit ALU8 Bit ALU
8 Bit ALU
 
Design & implementation of high speed carry select adder
Design & implementation of high speed carry select adderDesign & implementation of high speed carry select adder
Design & implementation of high speed carry select adder
 
2019 2 testing and verification of vlsi design_verification
2019 2 testing and verification of vlsi design_verification2019 2 testing and verification of vlsi design_verification
2019 2 testing and verification of vlsi design_verification
 
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...
Designing of 8 BIT Arithmetic and Logical Unit and implementing on Xilinx Ver...
 
Multiplexing : FDM
Multiplexing : FDMMultiplexing : FDM
Multiplexing : FDM
 
Design and Simulation of Local Area Network Using Cisco Packet Tracer
Design and Simulation of Local Area Network Using Cisco Packet TracerDesign and Simulation of Local Area Network Using Cisco Packet Tracer
Design and Simulation of Local Area Network Using Cisco Packet Tracer
 
FinFET design
FinFET design FinFET design
FinFET design
 
Spyglass dft
Spyglass dftSpyglass dft
Spyglass dft
 
Projeto integração.
Projeto integração.Projeto integração.
Projeto integração.
 
LPC for Speech Recognition
LPC for Speech RecognitionLPC for Speech Recognition
LPC for Speech Recognition
 
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...
Integrating a custom AXI IP Core in Vivado for Xilinx Zynq FPGA based embedde...
 
Array multiplier
Array multiplierArray multiplier
Array multiplier
 
Implementation of Soft-core processor on FPGA (Final Presentation)
Implementation of Soft-core processor on FPGA (Final Presentation)Implementation of Soft-core processor on FPGA (Final Presentation)
Implementation of Soft-core processor on FPGA (Final Presentation)
 
Python Programming Essentials - M24 - math module
Python Programming Essentials - M24 - math modulePython Programming Essentials - M24 - math module
Python Programming Essentials - M24 - math module
 
Lecture6[1]
Lecture6[1]Lecture6[1]
Lecture6[1]
 

Destacado

Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...Brati Sundar Nanda
 
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...Raj Kumar Thenua
 
Nlms algorithm for adaptive filter
Nlms algorithm for adaptive filterNlms algorithm for adaptive filter
Nlms algorithm for adaptive filterchintanajoshi
 
Real-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition ToolboxReal-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition ToolboxIDES Editor
 
Adaptive filter
Adaptive filterAdaptive filter
Adaptive filterA. Shamel
 
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713CSCJournals
 
Active noise control
Active noise controlActive noise control
Active noise controlRishikesh .
 
design of cabin noise cancellation
design of cabin noise cancellationdesign of cabin noise cancellation
design of cabin noise cancellationmohamud mire
 
Smart antenna algorithm and application
Smart antenna algorithm and applicationSmart antenna algorithm and application
Smart antenna algorithm and applicationVirak Sou
 
What Is Noise Cancellation? | Phiaton
What Is Noise Cancellation? | PhiatonWhat Is Noise Cancellation? | Phiaton
What Is Noise Cancellation? | PhiatonPhiaton
 
M.Tech_Thesis _surendra_singh
M.Tech_Thesis _surendra_singhM.Tech_Thesis _surendra_singh
M.Tech_Thesis _surendra_singhsurendra singh
 

Destacado (20)

Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
 
Adaptive filter
Adaptive filterAdaptive filter
Adaptive filter
 
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
 
Nlms algorithm for adaptive filter
Nlms algorithm for adaptive filterNlms algorithm for adaptive filter
Nlms algorithm for adaptive filter
 
Real-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition ToolboxReal-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
 
Adaptive filter
Adaptive filterAdaptive filter
Adaptive filter
 
Echo Cancellation Paper
Echo Cancellation Paper Echo Cancellation Paper
Echo Cancellation Paper
 
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
 
Adaptive filters
Adaptive filtersAdaptive filters
Adaptive filters
 
Active noise control
Active noise controlActive noise control
Active noise control
 
ANC Tutorial (2013)
ANC Tutorial (2013)ANC Tutorial (2013)
ANC Tutorial (2013)
 
Fixed-point Multi-Core DSP Application Examples
Fixed-point Multi-Core DSP Application ExamplesFixed-point Multi-Core DSP Application Examples
Fixed-point Multi-Core DSP Application Examples
 
design of cabin noise cancellation
design of cabin noise cancellationdesign of cabin noise cancellation
design of cabin noise cancellation
 
Smart antenna algorithm and application
Smart antenna algorithm and applicationSmart antenna algorithm and application
Smart antenna algorithm and application
 
M.Tech Thesis
M.Tech ThesisM.Tech Thesis
M.Tech Thesis
 
What Is Noise Cancellation? | Phiaton
What Is Noise Cancellation? | PhiatonWhat Is Noise Cancellation? | Phiaton
What Is Noise Cancellation? | Phiaton
 
Multidimensional Approaches for Noise Cancellation of ECG signal
Multidimensional Approaches for Noise Cancellation of ECG signalMultidimensional Approaches for Noise Cancellation of ECG signal
Multidimensional Approaches for Noise Cancellation of ECG signal
 
VEDA Climate Change Solutions Ltd - Improving Rural Livelihoods Through Carbo...
VEDA Climate Change Solutions Ltd - Improving Rural Livelihoods Through Carbo...VEDA Climate Change Solutions Ltd - Improving Rural Livelihoods Through Carbo...
VEDA Climate Change Solutions Ltd - Improving Rural Livelihoods Through Carbo...
 
Introduction to tms320c6745 dsp
Introduction to tms320c6745 dspIntroduction to tms320c6745 dsp
Introduction to tms320c6745 dsp
 
M.Tech_Thesis _surendra_singh
M.Tech_Thesis _surendra_singhM.Tech_Thesis _surendra_singh
M.Tech_Thesis _surendra_singh
 

Similar a Simulation and Hardware Implementation of NLMS Algorithm for Noise Cancellation

Certificates for bist including index
Certificates for bist including indexCertificates for bist including index
Certificates for bist including indexPrabhu Kiran
 
Design and implementation of 32 bit alu using verilog
Design and implementation of 32 bit alu using verilogDesign and implementation of 32 bit alu using verilog
Design and implementation of 32 bit alu using verilogSTEPHEN MOIRANGTHEM
 
Prof Chethan Raj C, Final year Project Report Format
Prof Chethan Raj C, Final year Project Report FormatProf Chethan Raj C, Final year Project Report Format
Prof Chethan Raj C, Final year Project Report FormatProf Chethan Raj C
 
Report star topology using noc router
Report star topology using noc router Report star topology using noc router
Report star topology using noc router Vikas Tiwari
 
Embedded System -Lyla B Das.pdf
Embedded System -Lyla B Das.pdfEmbedded System -Lyla B Das.pdf
Embedded System -Lyla B Das.pdfJohnMcClaine2
 
(R18) B.Tech. CSE Syllabus.pdf
(R18) B.Tech. CSE Syllabus.pdf(R18) B.Tech. CSE Syllabus.pdf
(R18) B.Tech. CSE Syllabus.pdffisdfg
 
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdfMarshalsubash
 
Major project report
Major project reportMajor project report
Major project reportPraveen Singh
 
Summary Of Academic Projects
Summary Of Academic ProjectsSummary Of Academic Projects
Summary Of Academic Projectsawan2008
 
Semi-custom Layout Design and Simulation of CMOS NAND Gate
Semi-custom Layout Design and Simulation of CMOS NAND GateSemi-custom Layout Design and Simulation of CMOS NAND Gate
Semi-custom Layout Design and Simulation of CMOS NAND GateIJEEE
 
Optimized Layout Design of Priority Encoder using 65nm Technology
Optimized Layout Design of Priority Encoder using 65nm TechnologyOptimized Layout Design of Priority Encoder using 65nm Technology
Optimized Layout Design of Priority Encoder using 65nm TechnologyIJEEE
 
Netlist Optimization for CMOS Place and Route in MICROWIND
Netlist Optimization for CMOS Place and Route in MICROWINDNetlist Optimization for CMOS Place and Route in MICROWIND
Netlist Optimization for CMOS Place and Route in MICROWINDIRJET Journal
 
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...RISC-V International
 

Similar a Simulation and Hardware Implementation of NLMS Algorithm for Noise Cancellation (20)

Certificates for bist including index
Certificates for bist including indexCertificates for bist including index
Certificates for bist including index
 
Design and implementation of 32 bit alu using verilog
Design and implementation of 32 bit alu using verilogDesign and implementation of 32 bit alu using verilog
Design and implementation of 32 bit alu using verilog
 
Prof Chethan Raj C, Final year Project Report Format
Prof Chethan Raj C, Final year Project Report FormatProf Chethan Raj C, Final year Project Report Format
Prof Chethan Raj C, Final year Project Report Format
 
Front Pages_pdf_format
Front Pages_pdf_formatFront Pages_pdf_format
Front Pages_pdf_format
 
Thesis_Final
Thesis_FinalThesis_Final
Thesis_Final
 
report.pdf
report.pdfreport.pdf
report.pdf
 
3rd sem atm basics for wcdma networks M.TECH ( PDF FILE )
3rd sem atm basics for wcdma networks M.TECH ( PDF FILE )3rd sem atm basics for wcdma networks M.TECH ( PDF FILE )
3rd sem atm basics for wcdma networks M.TECH ( PDF FILE )
 
3rd sem atm basics for wcdma networks M.TECH ( M S WORD FILE )
3rd sem atm basics for wcdma networks M.TECH ( M S WORD FILE )3rd sem atm basics for wcdma networks M.TECH ( M S WORD FILE )
3rd sem atm basics for wcdma networks M.TECH ( M S WORD FILE )
 
Report star topology using noc router
Report star topology using noc router Report star topology using noc router
Report star topology using noc router
 
Embedded System -Lyla B Das.pdf
Embedded System -Lyla B Das.pdfEmbedded System -Lyla B Das.pdf
Embedded System -Lyla B Das.pdf
 
(R18) B.Tech. CSE Syllabus.pdf
(R18) B.Tech. CSE Syllabus.pdf(R18) B.Tech. CSE Syllabus.pdf
(R18) B.Tech. CSE Syllabus.pdf
 
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf
2016-17_BE Electronics Engineering-Course Book 2016 RCOEM.pdf
 
Major project report
Major project reportMajor project report
Major project report
 
Summary Of Academic Projects
Summary Of Academic ProjectsSummary Of Academic Projects
Summary Of Academic Projects
 
My project
My projectMy project
My project
 
Semi-custom Layout Design and Simulation of CMOS NAND Gate
Semi-custom Layout Design and Simulation of CMOS NAND GateSemi-custom Layout Design and Simulation of CMOS NAND Gate
Semi-custom Layout Design and Simulation of CMOS NAND Gate
 
Optimized Layout Design of Priority Encoder using 65nm Technology
Optimized Layout Design of Priority Encoder using 65nm TechnologyOptimized Layout Design of Priority Encoder using 65nm Technology
Optimized Layout Design of Priority Encoder using 65nm Technology
 
Netlist Optimization for CMOS Place and Route in MICROWIND
Netlist Optimization for CMOS Place and Route in MICROWINDNetlist Optimization for CMOS Place and Route in MICROWIND
Netlist Optimization for CMOS Place and Route in MICROWIND
 
Dsp lab manual 15 11-2016
Dsp lab manual 15 11-2016Dsp lab manual 15 11-2016
Dsp lab manual 15 11-2016
 
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...
Klessydra-T: Designing Configurable Vector Co-Processors for Multi-Threaded E...
 

Último

Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmStan Meyer
 
ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfVanessa Camilleri
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptxmary850239
 
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITW
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITWQ-Factor HISPOL Quiz-6th April 2024, Quiz Club NITW
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITWQuiz Club NITW
 
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptx
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptxBIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptx
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptxSayali Powar
 
Narcotic and Non Narcotic Analgesic..pdf
Narcotic and Non Narcotic Analgesic..pdfNarcotic and Non Narcotic Analgesic..pdf
Narcotic and Non Narcotic Analgesic..pdfPrerana Jadhav
 
Daily Lesson Plan in Mathematics Quarter 4
Daily Lesson Plan in Mathematics Quarter 4Daily Lesson Plan in Mathematics Quarter 4
Daily Lesson Plan in Mathematics Quarter 4JOYLYNSAMANIEGO
 
Reading and Writing Skills 11 quarter 4 melc 1
Reading and Writing Skills 11 quarter 4 melc 1Reading and Writing Skills 11 quarter 4 melc 1
Reading and Writing Skills 11 quarter 4 melc 1GloryAnnCastre1
 
Grade Three -ELLNA-REVIEWER-ENGLISH.pptx
Grade Three -ELLNA-REVIEWER-ENGLISH.pptxGrade Three -ELLNA-REVIEWER-ENGLISH.pptx
Grade Three -ELLNA-REVIEWER-ENGLISH.pptxkarenfajardo43
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptxmary850239
 
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDecoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDhatriParmar
 
4.11.24 Mass Incarceration and the New Jim Crow.pptx
4.11.24 Mass Incarceration and the New Jim Crow.pptx4.11.24 Mass Incarceration and the New Jim Crow.pptx
4.11.24 Mass Incarceration and the New Jim Crow.pptxmary850239
 
How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17Celine George
 
Concurrency Control in Database Management system
Concurrency Control in Database Management systemConcurrency Control in Database Management system
Concurrency Control in Database Management systemChristalin Nelson
 
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
Unraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptxUnraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptx
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptxDhatriParmar
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...DhatriParmar
 
ARTERIAL BLOOD GAS ANALYSIS........pptx
ARTERIAL BLOOD  GAS ANALYSIS........pptxARTERIAL BLOOD  GAS ANALYSIS........pptx
ARTERIAL BLOOD GAS ANALYSIS........pptxAneriPatwari
 
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxQ4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxlancelewisportillo
 

Último (20)

Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and Film
 
ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdf
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx
 
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITW
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITWQ-Factor HISPOL Quiz-6th April 2024, Quiz Club NITW
Q-Factor HISPOL Quiz-6th April 2024, Quiz Club NITW
 
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptx
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptxBIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptx
BIOCHEMISTRY-CARBOHYDRATE METABOLISM CHAPTER 2.pptx
 
Narcotic and Non Narcotic Analgesic..pdf
Narcotic and Non Narcotic Analgesic..pdfNarcotic and Non Narcotic Analgesic..pdf
Narcotic and Non Narcotic Analgesic..pdf
 
Daily Lesson Plan in Mathematics Quarter 4
Daily Lesson Plan in Mathematics Quarter 4Daily Lesson Plan in Mathematics Quarter 4
Daily Lesson Plan in Mathematics Quarter 4
 
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptxINCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
 
Reading and Writing Skills 11 quarter 4 melc 1
Reading and Writing Skills 11 quarter 4 melc 1Reading and Writing Skills 11 quarter 4 melc 1
Reading and Writing Skills 11 quarter 4 melc 1
 
Grade Three -ELLNA-REVIEWER-ENGLISH.pptx
Grade Three -ELLNA-REVIEWER-ENGLISH.pptxGrade Three -ELLNA-REVIEWER-ENGLISH.pptx
Grade Three -ELLNA-REVIEWER-ENGLISH.pptx
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx
 
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDecoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
 
4.11.24 Mass Incarceration and the New Jim Crow.pptx
4.11.24 Mass Incarceration and the New Jim Crow.pptx4.11.24 Mass Incarceration and the New Jim Crow.pptx
4.11.24 Mass Incarceration and the New Jim Crow.pptx
 
How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17
 
Concurrency Control in Database Management system
Concurrency Control in Database Management systemConcurrency Control in Database Management system
Concurrency Control in Database Management system
 
prashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Professionprashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Profession
 
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
Unraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptxUnraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptx
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
 
ARTERIAL BLOOD GAS ANALYSIS........pptx
ARTERIAL BLOOD  GAS ANALYSIS........pptxARTERIAL BLOOD  GAS ANALYSIS........pptx
ARTERIAL BLOOD GAS ANALYSIS........pptx
 
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxQ4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
 

Simulation and Hardware Implementation of NLMS Algorithm for Noise Cancellation

  • 3. ACKNOWLEDGEMENT
First of all, I would like to express my profound gratitude to my dissertation guide, Mr. S.K. Agrawal (Head of the Department), for his outstanding guidance and support during my dissertation work. I benefited greatly from working under his guidance. His encouragement, motivation and support have been invaluable throughout my studies at Sobhasaria Engineering College, Sikar. I would like to thank Mohd. Sabir Khan (M.Tech coordinator) for his excellent guidance and kind co-operation during the entire study at Sobhasaria Engineering College, Sikar. I would also like to thank all the faculty members of the ECE department who have co-operated with and encouraged me during the course of study. I would also like to thank all the staff (technical and non-technical) and librarians of Sobhasaria Engineering College, Sikar, who have directly or indirectly helped during the course of my study. Finally, I would like to thank my family and friends for their constant love and support and for providing me with the opportunity and the encouragement to pursue my goals.
Raj Kumar Thenua
  • 4. CONTENTS
Candidate’s Declaration ii
Acknowledgement iii
Contents iv-vi
List of Tables vii
List of Figures viii-x
List of Abbreviations xi-xii
List of Symbols xiii
ABSTRACT 1
CHAPTER 1: INTRODUCTION 2
1.1 Overview 2
1.2 Motivation 3
1.3 Scope of the Work 4
1.4 Objectives of the Thesis 5
1.5 Organization of the Thesis 5
CHAPTER 2: LITERATURE SURVEY 7
CHAPTER 3: ADAPTIVE FILTERS 12
3.1 Introduction 12
3.1.1 Adaptive Filter Configuration 13
3.1.2 Adaptive Noise Canceller (ANC) 16
3.2 Approaches to Adaptive Filtering Algorithms 19
3.2.1 Least Mean Square (LMS) Algorithm 20
3.2.1.1 Derivation of the LMS Algorithm 20
3.2.1.2 Implementation of the LMS Algorithm 21
3.2.2 Normalized Least Mean Square (NLMS) Algorithm 22
3.2.2.1 Derivation of the NLMS Algorithm 23
3.2.2.2 Implementation of the NLMS Algorithm 24
3.2.3 Recursive Least Square (RLS) Algorithm 24
  • 5. 3.2.3.1 Derivation of the RLS Algorithm 25
3.2.3.2 Implementation of the RLS Algorithm 27
3.3 Adaptive Filtering using MATLAB 28
CHAPTER 4: SIMULINK MODEL DESIGN FOR HARDWARE IMPLEMENTATION 31
4.1 Introduction to Simulink 31
4.2 Model Design 32
4.2.1 Common Blocks used in Building Model 32
4.2.1.1 C6713 DSK ADC Block 32
4.2.1.2 C6713 DSK DAC Block 33
4.2.1.3 C6713 DSK Target Preferences Block 33
4.2.1.4 C6713 DSK Reset Block 33
4.2.1.5 NLMS Filter Block 34
4.2.1.6 C6713 DSK LED Block 34
4.2.1.7 C6713 DSK DIP Switch Block 34
4.2.2 Building the Model 34
4.3 Model Reconfiguration 37
4.3.1 The ADC Settings 38
4.3.2 The DAC Settings 39
4.3.3 Setting the NLMS Filter Parameters 40
4.3.4 Setting the Delay Parameters 41
4.3.5 DIP Switch Settings 41
4.3.6 Setting the Constant Value 42
4.3.7 Setting the Constant Data Type 43
4.3.8 Setting the Relational Operator Type 43
4.3.9 Setting the Relational Operator Data Type 43
4.3.10 Switch Setting 44
CHAPTER 5: REAL TIME IMPLEMENTATION ON DSP PROCESSOR 45
5.1 Introduction to Digital Signal Processor (TMS320C6713) 45
5.1.1 Central Processing Unit Architecture 48
5.1.2 General Purpose Registers Overview 49
  • 6. 5.1.3 Interrupts 49
5.1.4 Audio Interface Codec 50
5.1.5 DSP/BIOS & RTDX 52
5.2 Code Composer Studio as Integrated Development Environment 54
5.3 MATLAB Interfacing with CCS and DSP Processor 58
5.4 Real-time Experimental Setup using DSP Processor 58
CHAPTER 6: RESULTS AND DISCUSSION 63
6.1 MATLAB Simulation Results for Adaptive Algorithms 63
6.1.1 LMS Algorithm Simulation Results 64
6.1.2 NLMS Algorithm Simulation Results 66
6.1.3 RLS Algorithm Simulation Results 67
6.1.4 Performance Comparison of Adaptive Algorithms 67
6.2 Hardware Implementation Results using TMS320C6713 Processor 71
6.2.1 Tone Signal Analysis using NLMS Algorithm 71
6.2.1.1 Effect on Filter Performance at Various Frequencies 73
6.2.1.2 Effect on Filter Performance at Various Amplitudes 75
6.2.2 ECG Signal Analysis using NLMS and LMS Algorithms and their Performance Comparison 78
CHAPTER 7: CONCLUSIONS 85
7.1 Conclusion 85
7.2 Future Work 86
REFERENCES 88
APPENDIX-I LIST OF PUBLICATIONS 93
APPENDIX-II MATLAB COMMANDS 94
  • 7. LIST OF TABLES
Table No. Title Page No.
Table 6.1 Mean Squared Error (MSE) versus Step Size (µ) 65
Table 6.2 Mean Squared Error versus Filter Order (N) 69
Table 6.3 Performance comparison of various adaptive algorithms 70
Table 6.4 Comparison of various parameters for adaptive algorithms 70
Table 6.5 SNR improvement versus voltage and frequency 78
Table 6.6 SNR improvement versus noise level for a tone signal 78
Table 6.7 SNR improvement versus noise variance for an ECG signal 84
  • 8. LIST OF FIGURES
Figure No. Title Page No.
Fig.3.1 General adaptive filter configuration 14
Fig.3.2 Transversal FIR filter architecture 15
Fig.3.3 Block diagram for Adaptive Noise Canceller 16
Fig.3.4 MATLAB versatility diagram 29
Fig.4.1 Simulink applications 32
Fig.4.2 Adaptive noise cancellation Simulink model 33
Fig.4.3 Simulink library browser 35
Fig.4.4 Blank new model window 36
Fig.4.5 Model window with ADC block 37
Fig.4.6 Model illustration before connections 38
Fig.4.7 Setting up the ADC for mono microphone input 39
Fig.4.8 Setting the DAC parameters 39
Fig.4.9 Setting the NLMS filter parameters 40
Fig.4.10 Setting the delay unit 41
Fig.4.11 Setting up the DIP switch values 42
Fig.4.12 Setting the constant parameters 42
Fig.4.13 Data type conversion to 16-bit integer 43
Fig.4.14 Changing the output data type 44
Fig.5.1 Block diagram of TMS320C6713 processor 47
Fig.5.2 Physical overview of the TMS320C6713 processor 47
Fig.5.3 Functional block diagram of TMS320C6713 CPU 48
Fig.5.4 Interrupt priority diagram 49
Fig.5.5 Interrupt handling procedure 50
  • 9. Fig.5.6 Audio connection illustrating control and data signal 51
Fig.5.7 AIC23 codec interface 52
Fig.5.8 DSP BIOS and RTDX 53
Fig.5.9 Code Composer Studio platform 54
Fig.5.10 Embedded software development 54
Fig.5.11 Typical 67xx efficiency vs. effort level for different codes 55
Fig.5.12 Code generation 55
Fig.5.13 Cross development environment 56
Fig.5.14 Signal flow during processing 56
Fig.5.15 Real-time analysis and data visualization 57
Fig.5.16 MATLAB interfacing with CCS and TI target processor 58
Fig.5.17 Experimental setup using Texas Instruments processor 59
Fig.5.18 Real-time setup using Texas Instruments processor 59
Fig.5.19 Model building using RTW 60
Fig.5.20 Code generation using RTDX link 60
Fig.5.21 Target processor in running status 61
Fig.5.22(a) Switch at position 0 62
Fig.5.22(b) Switch at position 1 for NLMS noise reduction 62
Fig.6.1(a) Clean tone (sinusoidal) signal s(n) 63
Fig.6.1(b) Noise signal x(n) 63
Fig.6.1(c) Delayed noise signal x1(n) 64
Fig.6.1(d) Desired signal d(n) 64
Fig.6.2 MATLAB simulation for LMS algorithm; N=19, step size=0.001 64
Fig.6.3 MATLAB simulation for NLMS algorithm; N=19, step size=0.001 66
  • 10. Fig.6.4 MATLAB simulation for RLS algorithm; N=19, λ=1 67
Fig.6.5 MSE versus step-size (µ) for LMS algorithm 67
Fig.6.6 MSE versus filter order (N) 68
Fig.6.7 Clean tone signal of 1 kHz 72
Fig.6.8 Noise corrupted tone signal 72
Fig.6.9 Filtered tone signal 73
Fig.6.10 Time delay in filtered signal 73
Fig.6.11(a) Filtered output signal at 2 kHz frequency 74
Fig.6.11(b) Filtered output signal at 3 kHz frequency 74
Fig.6.11(c) Filtered output signal at 4 kHz frequency 75
Fig.6.11(d) Filtered output signal at 5 kHz frequency 75
Fig.6.12(a) Filtered output signal at 3 V 76
Fig.6.12(b) Filtered output signal at 4 V 76
Fig.6.12(c) Filtered output signal at 5 V 77
Fig.6.13 Filtered signal at high noise 77
Fig.6.14 ECG waveform 79
Fig.6.15 Clean ECG signal 80
Fig.6.16(a) NLMS filtered output for low level noisy ECG signal 81
Fig.6.16(b) LMS filtered output for low level noisy ECG signal 81
Fig.6.17(a) NLMS filtered output for medium level noisy ECG signal 82
Fig.6.17(b) LMS filtered output for medium level noisy ECG signal 82
Fig.6.18(a) NLMS filtered output for high level noisy ECG signal 83
Fig.6.18(b) LMS filtered output for high level noisy ECG signal 83
  • 11. LIST OF ABBREVIATIONS
ANC Adaptive Noise Cancellation
API Application Program Interface
AWGN Additive White Gaussian Noise
BSL Board Support Library
BIOS Basic Input Output System
CSL Chip Support Library
CCS Code Composer Studio
CODEC Coder Decoder
COFF Common Object File Format
COM Component Object Model
CPLD Complex Programmable Logic Device
CSV Comma Separated Value
DIP Dual Inline Package
DSK Digital Signal Processor Starter Kit
DSO Digital Storage Oscilloscope
DSP Digital Signal Processor
ECG Electrocardiogram
EDMA Enhanced Direct Memory Access
EMIF External Memory Interface
FIR Finite Impulse Response
FPGA Field Programmable Gate Array
FTRLS Fast Transversal Recursive Least Square
GEL General Extension Language
GPIO General Purpose Input Output
GUI Graphical User Interface
HPI Host Port Interface
IDE Integrated Development Environment
IIR Infinite Impulse Response
JTAG Joint Test Action Group
LMS Least Mean Square
  • 12. LSE Least Square Error
MA Moving Average
McBSP Multichannel Buffered Serial Port
McASP Multichannel Audio Serial Port
MSE Mean Square Error
MMSE Minimum Mean Square Error
NLMS Normalized Least Mean Square
RLS Recursive Least Squares
RTDX Real Time Data Exchange
RTW Real Time Workshop
SNR Signal to Noise Ratio
TI Texas Instruments
TVLMS Time Varying Least Mean Square
VLIW Very Long Instruction Word
VSLMS Variable Step-size Least Mean Square
VSSNLMS Variable Step Size Normalized Least Mean Square
  • 13. LIST OF SYMBOLS
s(n) Source signal
x(n) Noise signal or reference signal
x1(n) Delayed noise signal
w(n) Filter weights
d(n) Desired signal
y(n) FIR filter output
e(n) Error signal
e+(n) Advance samples of error signal
ê(n) Error estimation
n Sample number
i Iteration
N Filter order
E Ensemble average
Z−1 Unit delay
wT Transpose of weight vector
µ Step size
∇ Gradient
ξ Cost function
‖x(n)‖2 Squared Euclidean norm of the input vector x(n) at iteration n
c Constant term for normalization
α NLMS adaptation constant
λ Small positive constant
Λ̃(n) Diagonal matrix
k(n) Gain vector
ψ̃(n) Intermediate matrix
θ̃λ Intermediate vector
ŵ(n) Estimation of filter weight vector
ŷ(n) Estimation of FIR filter output
  • 14. ABSTRACT
Adaptive filtering constitutes one of the core technologies in the field of digital signal processing and finds numerous applications in science and engineering, e.g. echo cancellation, channel equalization, adaptive noise cancellation, adaptive beam-forming and biomedical signal processing. Noise problems in the environment have gained attention due to the tremendous growth of technologies that bring with them noise sources such as noisy engines, heavy machinery and devices with high electromagnetic radiation. Therefore, the problem of controlling the noise level in the area of signal processing has become the focus of a vast amount of research over the years. In this work an attempt has been made to explore adaptive filtering techniques for noise cancellation using the Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Squares (RLS) algorithms. These algorithms have been simulated in MATLAB and compared in terms of Mean Squared Error (MSE), convergence rate, percentage noise removal, computational complexity and stability. In the specific example of a tone signal, LMS shows a slow convergence rate with low computational complexity, while RLS converges fastest and performs best, but at the cost of large computational complexity and memory requirements. The NLMS algorithm, however, provides a trade-off between convergence rate and computational complexity, which makes it more suitable for hardware implementation. For the hardware implementation of the NLMS algorithm, a Simulink model is designed that automatically generates C code for the DSP processor. The generated C code is loaded on the DSP processor and real-time noise cancellation is performed for two types of signals: a tone signal and a biomedical ECG signal.
For both types of signals, three noisy signals of different noise levels are used to judge the performance of the designed system. The output results are analysed using a Digital Storage Oscilloscope (DSO) in terms of SNR improvement of the filtered signal. The results have also been compared with the LMS algorithm to demonstrate the superiority of the NLMS algorithm.
  • 15. Chapter-1 INTRODUCTION
In the process of transmission of information from the source to the receiver, noise from the surroundings automatically gets added to the signal. The noisy signal contains two components: one carries the information of interest, i.e. the useful signal; the other consists of random errors or noise superimposed on the useful signal. These random errors are unwanted because they diminish the accuracy and precision of the measured signal. Therefore, the effective removal or reduction of noise is an active area of research in the field of signal processing.
1.1 Overview
The use of adaptive filters [1] is one of the most popular solutions to reduce the signal corruption caused by predictable and unpredictable noise. An adaptive filter has the property of self-modifying its frequency response to change its behavior with time, which allows the filter to adapt as the input signal characteristics change. Due to this capability and their construction flexibility, adaptive filters have been employed in many different applications such as telephonic echo cancellation, radar signal processing, navigation systems, communications, channel equalization, and biomedical & biometric signal processing. In the field of adaptive filtering, there are mainly two classes of algorithms used to force the filter to adapt its coefficients: stochastic-gradient-based algorithms and recursive-least-squares-based algorithms. Their implementations and adaptation properties are the determining factors for the choice of application. The main performance parameters for adaptive filters are the convergence speed and the asymptotic error. The convergence speed measures how quickly the filter converges to the desired value. It is a major requirement as well as a limiting factor for most applications of adaptive filters.
The asymptotic error represents the amount of error that the filter introduces at steady state after it has converged to the desired value. The RLS filters, due to their computational structure, have considerably better properties than the LMS filters both in terms of the
  • 16. convergence speed and the asymptotic error. The RLS filters, which outperform the LMS filters, obtain their solution for the weight update directly from the Mean Square Error (MSE) [2]. However, they are computationally very demanding and also very dependent upon the precision of the input signal. Their computational requirements are significant and imply the use of expensive and power-demanding high-speed processors. Also, for systems lacking the appropriate dynamic range, the adaptation algorithms can become unstable. To meet these computational requirements, a DSP processor is a suitable substitute.
1.2 Motivation
In the field of signal processing there is a significant need for a special class of digital filters known as adaptive filters. Adaptive filters are commonly used in many different configurations for different applications, and they have various advantages over standard digital filters. They can adapt their filter coefficients to the environment according to preset rules. The filters are capable of learning from the statistics of current conditions and change their coefficients in order to achieve a certain goal. To design a standard digital filter, prior knowledge of the desired response is required. When such knowledge is not available, due to the changing nature of the filter’s requirements, it is impossible to design a standard digital filter; in such situations, adaptive filters are desirable. The algorithms used to perform the adaptation and the configuration of the filter depend directly on the application of the filter. However, the basic computational engine that performs the adaptation of the filter coefficients can be the same for different algorithms, and it is based on the statistics of the input signals to the system. The two classes of adaptive filtering algorithms, namely Recursive Least Squares (RLS) and Least Mean Square (LMS), are capable of performing the adaptation of the filter coefficients.
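As an illustration of the stochastic-gradient class, one LMS iteration computes the FIR output, the instantaneous error, and the weight update w(n+1) = w(n) + µ·e(n)·x(n). The sketch below is my own illustrative Python, not code from the thesis; the step size µ and the toy one-tap system are arbitrary choices:

```python
def lms_step(w, x_buf, d, mu=0.01):
    """One LMS iteration: y = w·x, e = d - y, w <- w + mu*e*x.
    w: filter weights; x_buf: the most recent input samples (newest
    first); d: desired sample; mu: step size (illustrative value)."""
    y = sum(wi * xi for wi, xi in zip(w, x_buf))   # FIR filter output
    e = d - y                                      # instantaneous error
    w = [wi + mu * e * xi for wi, xi in zip(w, x_buf)]
    return w, e

# Toy run: identify a one-tap system d(n) = 0.5 * x(n)
w = [0.0]
for _ in range(200):
    w, e = lms_step(w, [1.0], 0.5, mu=0.1)
```

With a constant unit input the weight contracts geometrically toward 0.5, which is the convergence behavior the step size µ controls.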
In a real scenario, where the information generated at the source gets contaminated by noise, the situation demands an adaptive filtering algorithm that provides fast convergence while being numerically stable and without requiring much memory.
  • 17. Hence, the motivation for this thesis is to search for an adaptive algorithm which has reduced computational complexity, reasonable convergence speed and good stability without degrading the performance of the adaptive filter, and then to realize the algorithm on efficient hardware which makes it more practical in real-time applications.
1.3 Scope of the Work
In numerous application areas, including biomedical engineering, radar & sonar engineering and digital communications, the goal is to extract a useful signal corrupted by interference and noise. In this work an adaptive noise canceller is designed that is more effective than available ones. To achieve an effective adaptive noise canceller, various adaptive algorithms are simulated in MATLAB. The most suitable algorithm is then implemented on the TMS320C6713 DSK hardware. The designed system is tested on a noisy ECG signal and a tone signal, and its performance is compared with earlier available systems. The designed system may be useful for cancelling interference in ECG signals, periodic interference in audio signals and broad-band interference in the side-lobes of an antenna array. For the simulation, MATLAB version 7.4.0.287 (R2007a) is used, though LabVIEW version 7 may also be applicable. For the hardware implementation, the Texas Instruments (TI) TMS320C6713 digital signal processor is used; however, a Field Programmable Gate Array (FPGA) may also be suitable. To assist the hardware implementation, Simulink version 6.6 is appropriate to generate C code for the DSP hardware. To communicate with the DSP processor, the Integrated Development Environment (IDE) software Code Composer Studio V3.1 is essential. A function generator and noise generator, or any other audio device, can be used as an input source for signal analysis. For the analysis of output data a DSO is essentially required; however, a CRO may also be used.
Current adaptive noise cancellation models [5], [9], [11] work at relatively low processing speeds, which is not suitable for real-time signals and results in output delay. In this direction, to increase the processing speed and to improve the signal-to-noise ratio, a DSP processor can be useful because it is a fast special-purpose microprocessor with a specialized architecture and an instruction set appropriate for signal processing, and it is well suited for numerically intensive calculations.
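Since SNR improvement is the figure of merit used throughout, it helps to fix the definition: the output SNR minus the input SNR, both in dB. A small helper in plain Python (my own sketch; the function names are illustrative, not from the thesis):

```python
import math

def snr_db(signal, noise):
    """SNR in dB: 10*log10(mean signal power / mean noise power)."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(v * v for v in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

def snr_improvement_db(signal, noise_in, noise_out):
    """SNR improvement: SNR after filtering minus SNR before."""
    return snr_db(signal, noise_out) - snr_db(signal, noise_in)
```

For example, a unit-amplitude signal with residual noise of amplitude 0.1 gives 20 dB; reducing the residual to 0.01 yields a 20 dB improvement.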
  • 18. 1.4 Objectives of the Thesis
The core of this thesis is to analyze and filter noisy signals (real-time as well as non-real-time) by various adaptive filtering techniques in software as well as in hardware, using MATLAB and a DSP processor respectively. The basic objective is to focus on the hardware implementation of adaptive filtering algorithms, so a DSP processor is employed in this work as it can deal efficiently with real-time as well as non-real-time signals. The objectives of the thesis are as follows:
(a) To perform the MATLAB simulation of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Squares (RLS) algorithms and to compare their relative performance with a tone signal.
(b) To design a Simulink model to generate auto C code for the hardware implementation of the NLMS and LMS algorithms.
(c) To perform the hardware implementation of the NLMS and LMS algorithms for the analysis of an ECG signal and a tone signal.
(d) To compare the performance of the NLMS and LMS algorithms in terms of SNR improvement for an ECG signal.
1.5 Organization of the Thesis
The work emphasizes the implementation of various adaptive filtering algorithms using MATLAB, Simulink and a DSP processor. In this regard the thesis is divided into seven chapters as follows:
Chapter-2 deals with the literature survey for the presented work, where papers from IEEE and other refereed journals or proceedings are reviewed to relate the present work to recent research going on worldwide and to assure the consistency of the work.
Chapter-3 presents a detailed introduction to adaptive filter theory and various adaptive filtering algorithms, with the problem definition.
  • 19. Chapter-4 presents a brief introduction to Simulink. An adaptive noise cancellation model is designed with the capability of C code generation for implementation on the DSP processor.
Chapter-5 illustrates the experimental setup for the real-time implementation of an adaptive noise canceller on a DSK; a brief introduction to the TMS320C6713 processor and Code Composer Studio (CCS) with the Real-Time Workshop facility is also presented.
Chapter-6 shows the experimental outcomes for the various algorithms. This chapter is divided in two parts: the first part shows the MATLAB simulation results for a sinusoidal tone signal, and the second part illustrates the real-time DSP processor implementation results for the sinusoidal tone signal and the ECG signal. The results from the DSP processor are analyzed with the help of a DSO.
Chapter-7 summarizes the work and provides suggestions for future research.
  • 20. Chapter-2 LITERATURE SURVEY
In the last thirty years significant contributions have been made in the field of signal processing. Advances in digital circuit design have been the key technological development that sparked a growing interest in digital signal processing. The resulting digital signal processing systems are attractive due to their low cost, reliability, accuracy, small physical size and flexibility. In numerous applications of signal processing, communications and biomedicine we face the necessity to remove noise and distortion from signals. These phenomena are due to time-varying physical processes which are sometimes unknown. One of these situations arises during the transmission of a signal from one point to another. The channel, which may consist of wires, fibers, microwave beams etc., introduces noise and distortion due to variations of its properties. These variations may be slow or fast. Since most of the time the variations are unknown, there is a requirement for filters that can work effectively in such an unknown environment. The adaptive filter is the right choice: it diminishes and sometimes completely eliminates the signal distortion. The most common adaptive filters used during the adaptation process are of the finite impulse response (FIR) type. These are preferable because they are stable, and no special adjustments are needed for their implementation. In adaptive filters, the filter weights need to be updated continuously according to certain rules, which are presented in the form of algorithms. There are mainly two types of algorithms used for adaptive filtering: the first is the stochastic-gradient-based algorithm known as the Least Mean Square (LMS) algorithm, and the second is based on least-squares estimation, known as the Recursive Least Squares (RLS) algorithm.
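For concreteness, the RLS recursion propagates a gain vector k(n) and an inverse-correlation matrix P(n) alongside the weights, discounted by a forgetting factor λ. The following is an illustrative pure-Python sketch (not the thesis code); λ = 0.99 and the large initial P are conventional but arbitrary choices:

```python
def rls_step(w, P, x, d, lam=0.99):
    """One RLS iteration with forgetting factor lam.
    w: weights (length N); P: N x N inverse-correlation matrix;
    x: current input vector; d: desired sample."""
    N = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(N)) for i in range(N)]
    denom = lam + sum(x[j] * Px[j] for j in range(N))
    k = [Px[i] / denom for i in range(N)]          # gain vector k(n)
    e = d - sum(w[i] * x[i] for i in range(N))     # a-priori error
    w = [w[i] + k[i] * e for i in range(N)]        # weight update
    xP = [sum(x[i] * P[i][j] for i in range(N)) for j in range(N)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(N)]
         for i in range(N)]                        # P(n) update
    return w, P, e

# Toy run: identify d(n) = 0.5 * x(n) with a one-tap filter
w, P = [0.0], [[100.0]]
for _ in range(300):
    w, P, e = rls_step(w, P, [1.0], 0.5)
```

The per-iteration cost is O(N²) because of the P update, versus O(N) for LMS, which is the complexity gap the survey below keeps returning to.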
A great deal of research [1]-[5], [14], [15] has been carried out in subsequent years to find new variants of these algorithms that achieve better performance in noise cancellation applications. Bernard Widrow et al. [1], in 1975, described adaptive noise cancelling as an alternative method of estimating signals corrupted by additive noise or interference, employing the LMS algorithm. The method uses a “primary” input containing the corrupted signal and a “reference” input containing noise correlated in some unknown way with the
  • 21. primary noise. The reference input is adaptively filtered and subtracted from the primary input to obtain the signal estimate. Widrow [1] focused on the usefulness of the adaptive noise cancellation technique in a variety of practical applications that included the cancelling of various forms of periodic interference in electrocardiography, periodic interference in speech signals, and broad-band interference in the side-lobes of an antenna array. In 1988, Ahmed S. Abutaleb [2] introduced a new principle, the Pontryagin minimum principle, to reduce the computational time of the LMS algorithm. The proposed method reduces the computation time drastically without degrading the accuracy of the system. When compared to the LMS-based Widrow [1] model, it was shown to have superior performance. The LMS-based algorithms are simple and easy to implement, but their convergence speed is slow. Abhishek Tandon et al. [3] introduced an efficient, low-complexity Normalized Least Mean Square (NLMS) algorithm for echo cancellation in multiple audio channels. The performance of the proposed algorithm was compared with other adaptive algorithms for acoustic echo cancellation. It was shown that the proposed algorithm has reduced complexity while providing good overall performance. In the NLMS algorithm, all the filter coefficients are updated for each input sample. Dong Hang et al. [4] presented a multi-rate algorithm which can dynamically change the update rate of the filter coefficients by analyzing the actual application environment. When the environment is varying, the rate increases, while it decreases when the environment is stable. The results of noise cancellation indicate that the new method has faster convergence speed, low computational complexity, and the same minimum error as the traditional method. Ying He et al. [5] presented the MATLAB simulation of the RLS algorithm, and its performance was compared with the LMS algorithm.
The convergence speed of the RLS algorithm is much faster and it produces the minimum mean squared error (MSE) among all available LMS-based algorithms, but at the cost of increased computational complexity, which makes its implementation on hardware difficult. Nowadays the availability of high-speed digital signal processors has attracted the attention of research scholars towards the real-time implementation of the available algorithms on hardware platforms. Digital signal processors are fast special-purpose
  • 22. microprocessors with a specialized type of architecture and an instruction set appropriate for signal processing. The architecture of the digital signal processor is very well suited for numerically intensive calculations. DSP techniques have been very successful because of the development of low-cost software and hardware support. DSP processors are concerned primarily with real-time signal processing. They exploit the advantages of microprocessors: they are easy to use, flexible, economical and can be reprogrammed easily. Real-time hardware implementation was started by Edgar Andrei [6], initially on the Motorola DSP56307 in 2000. Later, in 2002, Michail D. Galanis et al. [7] presented a DSP course for real-time systems design and implementation based on the TMS320C6211. This course emphasized the issue of transition from an advanced design and simulation environment like MATLAB to a DSP software environment like Code Composer Studio. Boo-Shik Ryu et al. [8] implemented and investigated the performance of a noise canceller on a DSP processor (TMS320C6713) using the LMS, NLMS and VSS-NLMS algorithms. Results showed that the proposed combination of hardware and the VSS-NLMS algorithm has not only a faster convergence rate but also lower distortion when compared with the fixed-step-size LMS and NLMS algorithms in real-time environments. In 2009, J. Gerardo Avalos et al. [9] implemented a digital adaptive filter on the digital signal processor TMS320C6713 using a variant of the LMS algorithm based on error codification. The speed of convergence is increased and the design complexity for implementation in digital adaptive filters is reduced because the resulting codified error is composed of integer values.
The LMS algorithm with codified error (ECLMS) was tested in an environmental noise canceller, and the results demonstrate an increase in convergence speed and a reduction of processing time. C.A. Duran et al. [10] presented an implementation of the LMS, NLMS and other LMS-based algorithms on the DSK TMS320C6713 with the intention of comparing their performance and analyzing their time and frequency behavior along with the processing speed of the algorithms. The objective of the NLMS algorithm is to obtain the best convergence factor considering the input signal power in order to improve the filter convergence time. The
  • 23. obtained results show that the NLMS has better performance than the LMS. Unfortunately, the computational complexity increases, which means more processing time. The work related to real-time implementation discussed so far was implemented on DSP processors by writing either assembly or C programs directly in the editor of Code Composer Studio (CCS). Writing an assembly program requires considerable effort and expertise; similarly, C programming is not simple as far as hardware implementation is concerned. There is a simpler way to create C code automatically, which requires less effort and is more efficient. Presently only a few researchers [11]-[13] are aware of this facility, which is provided by MATLAB version 7.1 and higher versions using the embedded target and Real-Time Workshop (RTW). Gaurav Saxena et al. [11] used this auto code generation facility and presented better results than conventional C code writing. They discussed the real-time implementation of adaptive noise cancellation based on an improved adaptive Wiener filter on the Texas Instruments TMS320C6713 DSK, and its performance was compared with Lee’s adaptive Wiener filter. Furthermore, a model-based design of adaptive noise cancellation based on an LMS filter using Simulink was implemented on the TI C6713. The auto-code generated by the Real-Time Workshop for the Simulink model of the LMS filter was compared with the ‘C’ implementation of the LMS filter on the C6713 in terms of code length and computation time. It was found to give a large improvement in computation time, but at the cost of increased code length. S.K. Daruwalla et al. [12] focused on the development and real-time implementation of various audio effects using Simulink blocks, employing an audio signal as input. This system has helped sound engineers to easily configure/capture various audio effects in advance by simply varying the values of predefined Simulink blocks.
A digital signal processor is used to implement the designs; this broadens the versatility of the system by allowing the user to employ the processor for any audio input in real time. The work is enriched with the real-time concept of controlling the various audio effects via the onboard DIP switches of the C6713 DSK.
In November 2009, Yaghoub Mollaei [13] designed an adaptive FIR filter with the normalized LMS algorithm to cancel noise. A Simulink model was created and linked to the TMS320C6711 digital signal processor through the Embedded Target for C6000 Simulink toolbox and Real-Time Workshop to perform hardware adaptive noise cancellation. Three noises with different powers were used to test and judge the system performance in software and hardware. The background noise for speech and music tracks was eliminated adequately, at a reasonable rate, for all the tested noises. The outcomes of the literature survey can be summarized as follows: Adaptive filters are attractive for working in an unknown environment and are suitable for noise cancellation applications in the field of digital signal processing. To update the adaptive filter weights, two classes of algorithms, LMS and RLS, are used. RLS-based algorithms have better performance, but at the cost of larger computational complexity; therefore comparatively little work [5], [15] is going on in this direction. On the other hand, LMS-based algorithms are simple to implement, and a few of their variants, such as NLMS, have performance comparable with the RLS algorithm. Consequently, a large amount of research [1]-[5] through simulation has been carried out to improve the performance of LMS-based algorithms. Simulation can be carried out on non-real-time signals only; therefore, for real-time applications there is a need for hardware implementation of LMS-based algorithms. The DSP processor has been found to be suitable hardware for signal processing applications. Hence, there is a requirement to find the easiest way to implement adaptive filter algorithms on a particular DSP processor. The use of a Simulink model [11]-[13] with the Embedded Target and Real-Time Workshop has proved to be helpful for the same.
Therefore, the Simulink-based hardware implementation of the NLMS algorithm for ECG signal analysis can be a good contribution in the field of adaptive filtering.
Chapter-3 ADAPTIVE FILTERS
3.1 Introduction
Filtering is a signal processing operation. Its objective is to process a signal in order to manipulate the information contained in it. In other words, a filter is a device that maps its input signal to another output signal, facilitating the extraction of the desired information contained in the input signal. A digital filter is one that processes discrete-time signals represented in digital format. For time-invariant filters the internal parameters and the structure of the filter are fixed, and if the filter is linear the output signal is a linear function of the input signal. Once the prescribed specifications are given, the design of time-invariant linear filters entails three basic steps, namely: the approximation of the specifications by a rational transfer function, the choice of an appropriate structure defining the algorithm, and the choice of the form of implementation for the algorithm. An adaptive filter [1], [2] is required when either the fixed specifications are unknown or the specifications cannot be satisfied by time-invariant filters. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal and consequently the homogeneity and additivity conditions are not satisfied. However, if we freeze the filter parameters at a given instant of time, most adaptive filters are linear in the sense that their output signals are linear functions of their input signals. Adaptive filters are time-varying, since their parameters are continuously changing in order to meet a performance requirement. In this sense, we can interpret an adaptive filter as a filter that performs the approximation step on-line. Usually, the definition of the performance criterion requires the existence of a reference signal, which is usually hidden in the approximation step of fixed-filter design.
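As a point of contrast with the adaptive case, a time-invariant digital filter applies a fixed set of coefficients to its input. A minimal pure-Python sketch (the three-tap moving-average coefficients are an illustrative choice, not taken from the text):

```python
def fir_filter(x, w):
    """Fixed (time-invariant) FIR filter: y[n] = sum_i w[i] * x[n - i]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, wi in enumerate(w):
            if n - i >= 0:
                acc += wi * x[n - i]
        y.append(acc)
    return y

# A 3-tap moving average smooths a roughly constant-level input.
x = [0.0, 0.0, 1.1, 0.9, 1.0, 1.0]
y = fir_filter(x, [1 / 3, 1 / 3, 1 / 3])
```

Because the coefficients never change, the same input always yields the same output; an adaptive filter instead re-computes w at every sample to meet its performance criterion.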
Adaptive filters are considered nonlinear systems; therefore their behaviour analysis is more complicated than that of fixed filters. On the other hand, since adaptive filters are self-designing from the practitioner's point of view, their design can be considered less involved than that of digital filters with fixed coefficients.
Adaptive filters work on the principle of minimizing the mean squared difference (or error) between the filter output and a target (or desired) signal. They are used for the estimation of non-stationary signals and systems, or in applications where sample-by-sample adaptation of a process and a low processing delay are required. Adaptive filters are used in applications [26]-[29] that involve a combination of three broad signal processing problems: (1) De-noising and channel equalization – filtering a time-varying noisy signal to remove the effect of noise and channel distortions. (2) Trajectory estimation – tracking and prediction of the trajectory of a non-stationary signal or parameter observed in noise. (3) System identification – adaptive estimation of the parameters of a time-varying system from a related observation. Adaptive linear filters work on the principle that the desired signal or parameters can be extracted from the input through a filtering or estimation operation. The adaptation of the filter parameters is based on minimizing the mean squared error between the filter output and the target (or desired) signal. The use of the Least Square Estimation (LSE) criterion is equivalent to the principle of orthogonality, in which at any discrete time m the estimator is expected to use all the available information such that any estimation error at time m is orthogonal to all the information available up to time m.
3.1.1 Adaptive Filter Configuration
The general set-up of an adaptive-filtering environment is illustrated in Fig. 3.1 [43], where n is the iteration number, x(n) denotes the input signal, y(n) is the adaptive-filter output signal, and d(n) defines the desired signal. The error signal e(n) is calculated as d(n) − y(n). The error signal is then used to form a performance function that is required by the adaptation algorithm in order to determine the appropriate updating of the filter coefficients.
The minimization of the objective function implies that the adaptive-filter output signal is matching the desired signal in some sense. At each sampling time, an adaptation algorithm adjusts the filter coefficients w(n) = [w0(n), w1(n), …, wN−1(n)] to minimize the difference between the filter output and the desired or target signal.
[Fig.3.1. General adaptive filter configuration]
The complete specification of an adaptive system, as shown in Fig. 3.1, consists of three things:
(a) Input: The type of application is defined by the choice of the signals acquired from the environment to be the input and desired-output signals. The number of different applications in which adaptive techniques are being successfully used has increased enormously during the last two decades. Some examples are echo cancellation, equalization of dispersive channels, system identification, signal enhancement, adaptive beam-forming, noise cancelling and control.
(b) Adaptive-filter structure: The adaptive filter can be implemented in a number of different structures or realizations. The choice of structure can influence the computational complexity (amount of arithmetic operations per iteration) of the process and also the number of iterations necessary to achieve a desired performance level. Basically, there are two major classes of adaptive digital filter realization, distinguished by the form of the impulse response: the finite-duration impulse response (FIR) filter and the infinite-duration impulse response (IIR) filter. FIR filters are usually implemented with non-recursive structures, whereas IIR filters utilize recursive realizations. Adaptive FIR filter realizations: The most widely used adaptive FIR filter structure is the transversal filter, also called the tapped delay line, which implements an all-zero transfer function with a canonic direct-form realization without feedback. For this realization, the output signal y(n) is a linear combination of the filter coefficients, that
yields a quadratic mean-square error (MSE = E[|e(n)|^2]) function with a unique optimal solution. Other alternative adaptive FIR realizations are also used in order to obtain improvements over the transversal filter structure in terms of computational complexity, speed of convergence and finite word-length properties. Adaptive IIR filter realizations: The most widely used realization of adaptive IIR filters is the canonic direct-form realization [42], due to its simple implementation and analysis. However, there are some inherent problems related to recursive adaptive filters which are structure dependent, such as the pole-stability monitoring requirement and slow speed of convergence. To address these problems, different realizations were proposed, attempting to overcome the limitations of the direct-form structure.
(c) Algorithm: The algorithm is the procedure used to adjust the adaptive filter coefficients in order to minimize a prescribed criterion. The algorithm is determined by defining the search method (or minimization algorithm), the objective function and the nature of the error signal. The choice of algorithm determines several crucial aspects of the overall adaptive process, such as the existence of sub-optimal solutions, biased optimal solutions and computational complexity.
[Fig.3.2. Transversal FIR filter architecture]
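The claim that the transversal structure yields a quadratic MSE surface with a unique optimum can be checked numerically. The sketch below (illustrative signals; a single-tap filter for simplicity) sweeps the weight w and confirms the minimum sits at the gain of the assumed unknown system:

```python
import random

random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]   # filter input
d = [2.0 * xi for xi in x]                      # desired signal: x through an unknown gain of 2

def mse(w):
    """Mean-square error E[e(n)^2] of the single-tap filter y(n) = w * x(n)."""
    return sum((di - w * xi) ** 2 for di, xi in zip(d, x)) / len(x)

# The MSE is quadratic in w, so a sweep over w has a single minimum at w = 2.
sweep = [i * 0.1 for i in range(41)]            # w = 0.0, 0.1, ..., 4.0
w_best = min(sweep, key=mse)
```

The parabola-shaped cost is what lets gradient-based algorithms such as LMS descend to a unique solution without getting trapped in local minima.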
3.1.2 Adaptive Noise Canceller (ANC)
The goal of an adaptive noise cancellation system is to reduce the noise portion and to obtain the uncorrupted desired signal. In order to achieve this task, a reference of the noise signal is needed. That reference is fed to the system, and it is called the reference signal x(n). However, the reference signal is typically not the same signal as the noise portion of the primary signal; it can vary in amplitude, phase or time. Therefore, the reference signal cannot simply be subtracted from the primary signal to obtain the desired portion at the output.
[Fig.3.3. Block diagram for Adaptive Noise Canceller]
Consider the Adaptive Noise Canceller (ANC) shown in Fig. 3.3 [1]. The ANC has two inputs: the primary input d(n), which represents the desired signal corrupted with undesired noise, and the reference input x(n), which is the undesired noise to be filtered out of the system. The primary input therefore comprises two portions: the desired signal, and a noise signal corrupting the desired portion of the primary signal. The basic idea of the adaptive filter is to predict the amount of noise in the primary signal and then subtract that noise from it. The prediction is based on filtering the reference signal x(n), which contains a solid reference of the noise present in the primary signal. The noise in the reference signal is filtered to compensate for the amplitude, phase and time delay, and is then subtracted from the primary signal. The filtered noise, represented by y(n), is the system's prediction of the noise portion of the primary signal and is subtracted from the desired signal d(n), resulting in a signal called the error signal e(n), which represents the output of the system. Ideally, the resulting error signal should be only the desired portion of the primary signal.
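The point that the raw reference cannot simply be subtracted can be seen numerically. In the sketch below (all signals, and the 0.5-gain one-sample-delay noise path, are illustrative assumptions), subtracting x(n) directly leaves more noise than doing nothing, while a filter matched to the noise path recovers the desired signal:

```python
import math
import random

random.seed(1)
N = 2000
s = [math.sin(0.05 * n) for n in range(N)]                 # desired signal s(n)
x = [random.gauss(0, 1) for _ in range(N)]                 # reference noise x(n)
# Noise portion reaching the primary input: attenuated, delayed copy of x(n).
x1 = [0.5 * x[n - 1] if n >= 1 else 0.0 for n in range(N)]
d = [s[n] + x1[n] for n in range(N)]                       # primary input d(n)

def noise_power(out):
    """Average power of the residual noise left in an output signal."""
    return sum((out[n] - s[n]) ** 2 for n in range(N)) / N

naive = [d[n] - x[n] for n in range(N)]                    # subtract raw reference
matched = [d[n] - x1[n] for n in range(N)]                 # subtract path-compensated reference
```

Here noise_power(naive) comes out larger than the power of x1(n) itself (roughly 1.25 versus 0.25), while noise_power(matched) is zero: the reference must be amplitude- and delay-compensated before subtraction, which is exactly the adaptive filter's job.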
In practice, it is difficult to achieve this, but it is possible to significantly reduce the amount of noise in the primary signal. This is the overall goal of the adaptive filter, and it is achieved by constantly changing (or adapting) the filter coefficients (weights). The adaptation rules determine the filter's performance and the requirements of the system used to implement it. A good example illustrating the principles of adaptive noise cancellation is the removal of noise from the pilot's microphone in an airplane. Due to the high environmental noise produced by the airplane engine, the pilot's voice in the microphone gets distorted with a high amount of noise and is very difficult to comprehend. In order to overcome this problem, an adaptive filter can be used. In this particular case, the desired signal is the pilot's voice, which is corrupted with noise from the airplane's engine; together, the pilot's voice and the engine noise constitute the primary signal d(n). The reference signal for this application would be a signal containing only the engine noise, which can easily be obtained from a microphone placed near the engine. This signal would not contain the pilot's voice, and for this application it is the reference signal x(n). The adaptive filter shown in Fig. 3.3 can be used for this application. The filter output y(n) is the system's estimate of the engine noise as received in the pilot's microphone. This estimate is subtracted from the primary signal (pilot's voice plus engine noise), and the output of the system e(n) should contain only the pilot's voice, without any noise from the airplane's engine. It is not possible to subtract the engine noise from the pilot's microphone directly, since the engine noise received in the pilot's microphone and the engine noise received in the reference microphone are not the same signal: there are differences in amplitude and time delay. Also, these differences are not fixed.
They change with the pilot's microphone position with respect to the airplane engine, among many other factors. Therefore, designing a fixed filter to perform the task would not obtain the desired results; the application requires an adaptive solution. There are many forms of adaptive filters, and their performance depends on the objective set forth in the design. Theoretically, the major goal of any noise cancelling system is to reduce the undesired portion of the primary signal as much as possible, while preserving the integrity of the desired portion of the primary signal.
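This goal, removing the noise while leaving the desired portion intact, can be illustrated with a one-tap closed-form solution. Here the optimum weight minimizing E[(d − w·x)^2] is w* = E[d·x]/E[x^2], the single-tap Wiener solution; the signals and the 0.8 noise gain are illustrative assumptions:

```python
import math
import random

random.seed(2)
N = 5000
s = [math.sin(0.1 * n) for n in range(N)]       # desired portion of the primary signal
x = [random.gauss(0, 1) for _ in range(N)]      # reference noise
x1 = [0.8 * xn for xn in x]                     # noise portion of the primary input
d = [s[n] + x1[n] for n in range(N)]            # primary input d(n)

# One-tap Wiener solution minimizing E[(d(n) - w*x(n))^2]: w* = E[d*x] / E[x^2].
w = sum(dn * xn for dn, xn in zip(d, x)) / sum(xn * xn for xn in x)
e = [d[n] - w * x[n] for n in range(N)]         # system output e(n)

def power(sig):
    return sum(v * v for v in sig) / len(sig)
# Minimizing E[e^2] removes only the noise: e(n) is left close to s(n),
# so the output power approaches the power of the desired signal alone.
```

Because the reference is uncorrelated with s(n), the minimizing weight lands near the true noise gain of 0.8 and the residual power settles near the power of s(n), which is precisely the behaviour derived analytically in the next paragraphs.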
As noted above, the filter produces an estimate of the noise in the primary signal, adjusted for magnitude, phase and time delay. This estimate is then subtracted from the noise-corrupted primary signal to obtain the desired signal. For the filter to work well, the adaptive algorithm has to adjust the filter coefficients such that the output of the filter is a good estimate of the noise present in the primary signal. To determine the amount by which the noise in the primary signal is reduced, the mean squared error technique is used. The Minimum Mean Squared Error (MMSE) is defined as [42]:
min E[(d(n) − XW^T)^2] = min E[(d(n) − y(n))^2]   (3.1)
where d is the desired signal, and X and W are the vectors of the input reference signal and the filter coefficients respectively. This represents a measure of how well the newly constructed filter (given as the convolution product y(n) = XW^T) estimates the noise present in the primary signal. The goal is to reduce this error to a minimum. Therefore, the algorithms that perform adaptive noise cancellation are constantly searching for a coefficient vector W which produces the minimum mean squared error. Minimizing the mean square of the error signal minimizes the noise portion of the primary signal, but not the desired portion. To understand this principle, recall that the primary signal is made up of the desired portion and the noise portion. The filtered reference signal y(n) is a reference of the noise portion of the primary signal and therefore is correlated with it. However, the reference signal is not correlated with the desired portion of the primary signal. Therefore, minimizing the mean square of the error signal minimizes only the noise in the primary signal. This principle can be described mathematically as follows: if we denote the desired portion of the primary signal by s(n), and the noise portion of the primary signal by x1(n), it follows that d(n) = s(n) + x1(n).
As shown in Fig. 3.3, the output of the system can be written as [43]:
e(n) = d(n) − y(n)   (3.2)
e(n) = s(n) + x1(n) − y(n)
e(n)^2 = s(n)^2 + (x1(n) − y(n))^2 + 2s(n)(x1(n) − y(n))
E[e(n)^2] = E[s(n)^2] + E[(x1(n) − y(n))^2] + 2E[s(n)(x1(n) − y(n))]   (3.3)
Due to the fact that s(n) is uncorrelated with both x1(n) and y(n), as noted earlier, the last term is equal to zero, so we have
E[e(n)^2] = E[s(n)^2] + E[(x1(n) − y(n))^2]
min E[e(n)^2] = min E[s(n)^2] + min E[(x1(n) − y(n))^2]   (3.4)
and since s(n) is independent of W, we have
min E[e(n)^2] = E[s(n)^2] + min E[(x1(n) − y(n))^2]   (3.5)
Therefore, minimizing the error signal minimizes the mean square of the difference between the noise portion of the primary signal, x1(n), and the filter output y(n).
3.2 Approaches to Adaptive Filtering Algorithms
Basically, two approaches can be defined for deriving the recursive formula for the operation of adaptive filters. They are as follows:
(i) Stochastic Gradient Approach: In this approach, to develop a recursive algorithm for updating the tap weights of the adaptive transversal filter, the process is carried out in two stages. First, we use an iterative procedure to find the optimum Wiener solution [43]. The iterative procedure is based on the method of steepest descent, which requires the use of a gradient vector whose value depends on two parameters: the correlation matrix of the tap inputs in the transversal filter and the cross-correlation vector between the desired response and the same tap inputs. Secondly, instantaneous values of these correlations are used to derive an estimate of the gradient vector. The Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) algorithms lie under this approach and are discussed in subsequent sections.
(ii) Least Square Estimation: This approach is based on the method of least squares. According to this method, a cost function is minimized that is defined as the sum of weighted error squares, where the error is the difference between some desired response and the actual filter output. This method is formulated with block estimation in mind.
In block estimation, the input data stream is arranged in the form of blocks of equal length (duration), and the filtering of input data proceeds on a block-by-block basis, which requires a large memory for computation. The Recursive Least Square (RLS) algorithm
falls under this approach and is discussed in a subsequent section.
3.2.1 Least Mean Square (LMS) Algorithm
The Least Mean Square (LMS) algorithm [1] was first developed by Widrow and Hoff in 1959 through their studies of pattern recognition [42]. Since then it has become one of the most widely used algorithms in adaptive filtering. The LMS algorithm is a type of adaptive filter known as a stochastic gradient-based algorithm, as it utilizes the gradient vector of the filter tap weights to converge on the optimal Wiener solution. It is well known and widely used due to its computational simplicity. With each iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated according to the following formula:
w(n + 1) = w(n) + 2μe(n)x(n)   (3.6)
where x(n) is the input vector of time-delayed input values, given by
x(n) = [x(n), x(n − 1), x(n − 2), …, x(n − N + 1)]^T   (3.7)
and w(n) = [w0(n), w1(n), w2(n), …, wN−1(n)]^T represents the coefficients of the adaptive FIR filter tap weight vector at time n. The parameter μ is known as the step size and is a small positive constant that controls the influence of the updating factor. Selection of a suitable value for μ is imperative to the performance of the LMS algorithm: if the value of μ is too small, the time the adaptive filter takes to converge on the optimal solution will be too long; if μ is too large, the adaptive filter becomes unstable and its output diverges [14], [15], [22].
3.2.1.1 Derivation of the LMS Algorithm
The derivation of the LMS algorithm builds upon the theory of the Wiener solution for the optimal filter tap weights, w0, as outlined above. It also depends on the steepest descent algorithm, which gives a formula that updates the filter coefficients using the current tap weight vector and the current gradient of the cost function ξ(n) with respect to the filter tap weight coefficient vector:
w(n + 1) = w(n) − μ∇ξ(n)   (3.8)
where
ξ(n) = E[e^2(n)]   (3.9)
As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, each recursion shifts the value of the filter coefficients closer towards their optimum value, which corresponds to the minimum achievable value of the cost function ξ(n). The LMS algorithm is a random-process implementation of the steepest descent algorithm of Eq. (3.8): the expectation in Eq. (3.9) is not known, so the instantaneous value is used as an estimate. The gradient of the cost function ξ(n) can alternatively be expressed in the following form:
∇ξ(n) = ∇(e^2(n))
      = ∂e^2(n)/∂w
      = 2e(n) ∂e(n)/∂w
      = 2e(n) ∂[d(n) − y(n)]/∂w
      = −2e(n) ∂[w^T(n)x(n)]/∂w
      = −2e(n)x(n)   (3.10)
Substituting this into the steepest descent algorithm of Eq. (3.8), we arrive at the recursion for the LMS adaptive algorithm:
w(n + 1) = w(n) + 2μe(n)x(n)   (3.11)
3.2.1.2 Implementation of the LMS Algorithm
Each iteration of the LMS algorithm requires three distinct steps, in the following order:
1. The output of the FIR filter, y(n), is calculated using Eq. (3.12):
y(n) = Σ_{i=0}^{N−1} w_i(n) x(n − i) = w^T(n)x(n)   (3.12)
2. The value of the error estimation is calculated using Eq. (3.13).
e(n) = d(n) − y(n)   (3.13)
3. The tap weights of the FIR vector are updated in preparation for the next iteration, by Eq. (3.14):
w(n + 1) = w(n) + 2μe(n)x(n)   (3.14)
The main reason for the popularity of the LMS algorithm in adaptive filtering is its computational simplicity, which makes its implementation easier than that of all other commonly used adaptive algorithms. For each iteration, the LMS algorithm requires 2N additions and 2N + 1 multiplications (N for calculating the output y(n), one for 2μe(n), and an additional N for the scalar-by-vector multiplication).
3.2.2 Normalized Least Mean Square (NLMS) Algorithm
In the standard LMS algorithm, when the convergence factor μ is large the algorithm experiences a gradient noise amplification problem. In order to solve this difficulty we can use the NLMS algorithm [14]-[17]. The correction applied to the weight vector w(n) at iteration n + 1 is "normalized" with respect to the squared Euclidean norm of the input vector x(n) at iteration n. We may view the NLMS algorithm as a time-varying step-size algorithm, calculating the convergence factor μ as in Eq. (3.15) [10]:
μ(n) = α / (c + ||x(n)||^2)   (3.15)
where α is the NLMS adaptation constant, which optimizes the convergence rate of the algorithm and should satisfy the condition 0 < α < 2, and c is a constant term for normalization, always less than 1. The filter weights are updated by Eq. (3.16):
w(n + 1) = w(n) + [α / (c + ||x(n)||^2)] e(n)x(n)   (3.16)
It is important to note that, given input data (at time n) represented by the input vector x(n) and desired response d(n), the NLMS algorithm updates the weight vector in such a way that the value w(n + 1) computed at time n + 1 exhibits the minimum change with respect
to the known value w(n) at time n. Hence, the NLMS is a manifestation of the principle of minimum disturbance [3].
3.2.2.1 Derivation of the NLMS Algorithm
This derivation of the normalized least mean square algorithm is based on Farhang-Boroujeny and Diniz [43]. To derive the NLMS algorithm we consider the standard LMS recursion, in which we select a variable step size parameter μ(n). This parameter is selected so that the error value e+(n) will be minimized using the updated filter tap weights w(n + 1) and the current input vector x(n):
w(n + 1) = w(n) + 2μ(n)e(n)x(n)
e+(n) = d(n) − w^T(n + 1)x(n) = (1 − 2μ(n)x^T(n)x(n)) e(n)   (3.17)
Next we minimize (e+(n))^2 with respect to μ(n). Using this we can then find a value for μ(n) which forces e+(n) to zero:
μ(n) = 1 / (2x^T(n)x(n))   (3.18)
This μ(n) is then substituted into the standard LMS recursion in place of μ, resulting in the following:
w(n + 1) = w(n) + 2μ(n)e(n)x(n) = w(n) + [1 / (x^T(n)x(n))] e(n)x(n)   (3.19)
In practice, a slight modification of the algorithm detailed above is used:
w(n + 1) = w(n) + μ(n)e(n)x(n), where μ(n) = α / (x^T(n)x(n) + c)   (3.20)
Here the value of c is a small positive constant included in order to avoid division by zero when the values of the input vector are zero. This safeguard was not critical in the real-time implementation, as in practice the input signal never reaches exactly zero due to noise from the microphone and from the ADC on the Texas Instruments DSK. The parameter α is a constant step size used to alter the convergence rate of the NLMS algorithm; it lies within the range 0 < α < 2, usually being equal to 1.
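The practical recursion of Eq. (3.20) can be sketched in a few lines of pure Python. Everything below is an illustrative test set-up rather than the thesis implementation: a 4-tap filter, α = 0.1, c = 0.001, and a hypothetical noise path of 0.7x(n) − 0.3x(n−1):

```python
import math
import random

random.seed(3)
N, taps = 4000, 4
alpha, c = 0.1, 1e-3                              # 0 < alpha < 2; c avoids division by zero

s = [math.sin(0.03 * n) for n in range(N)]        # desired signal
x = [random.gauss(0, 1) for _ in range(N)]        # reference noise input
# Primary input: desired signal plus a filtered copy of the reference noise.
d = [s[n] + 0.7 * x[n] - 0.3 * (x[n - 1] if n >= 1 else 0.0) for n in range(N)]

w = [0.0] * taps
err = []
for n in range(taps, N):
    xv = [x[n - i] for i in range(taps)]          # input vector x(n)
    y = sum(wi * xi for wi, xi in zip(w, xv))     # filter output y(n)
    e = d[n] - y                                  # error e(n) = d(n) - y(n)
    mu = alpha / (sum(xi * xi for xi in xv) + c)  # normalized step size, Eq. (3.20)
    w = [wi + mu * e * xi for wi, xi in zip(w, xv)]
    err.append(e)
```

After convergence the weights approach the assumed noise-path coefficients (w[0] near 0.7, w[1] near −0.3) and e(n) tracks the clean signal s(n); with the fixed-μ LMS recursion, the same loop would require μ to be hand-tuned to the input power.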
3.2.2.2 Implementation of the NLMS Algorithm
The NLMS algorithm is implemented in MATLAB as outlined later in Chapter 6. It is essentially an improvement over the LMS algorithm, with the added calculation of the step size parameter for each iteration.
1. The output of the adaptive filter is calculated as:
y(n) = Σ_{i=0}^{N−1} w_i(n) x(n − i) = w^T(n)x(n)   (3.21)
2. The error signal is calculated as the difference between the desired output and the filter output:
e(n) = d(n) − y(n)   (3.22)
3. The step size and filter tap weight vector are updated using the following equations in preparation for the next iteration:
μ(n) = α / (c + ||x(n)||^2)   (3.23)
w(n + 1) = w(n) + μ(n)e(n)x(n)   (3.24)
where α is the NLMS adaptation constant and c is the constant term for normalization. With α = 0.02 and c = 0.001, each iteration of the NLMS algorithm requires 3N + 1 multiplication operations.
3.2.3 Recursive Least Square (RLS) Algorithm
The other class of adaptive filtering technique studied in this thesis is the Recursive Least Squares (RLS) algorithm [42]-[44]. This algorithm attempts to minimize the cost function in Eq. (3.25), where k = 1 is the time at which the RLS algorithm commences and λ is a small positive constant very close to, but smaller than, 1. With values of λ < 1, more importance is given to the most recent error estimates, and thus the more recent input samples; this results in a scheme that emphasizes recent samples of the observed data and tends to forget the past values.
ξ(n) = Σ_{k=1}^{n} λ^{n−k} e_n(k)^2   (3.25)
Unlike the LMS and NLMS algorithms, the RLS algorithm directly considers the values of previous error estimations. The RLS algorithm is known for excellent performance when working in time-varying environments. These advantages come at the cost of increased computational complexity and some stability problems.
3.2.3.1 Derivation of the RLS Algorithm
The RLS cost function of Eq. (3.25) shows that at time n, all previous values of the estimation error since the commencement of the RLS algorithm are required. Clearly, as time progresses, the amount of data required to process this algorithm increases. Limited memory and computation capabilities make the RLS algorithm a practical impossibility in its purest form. However, the derivation still assumes that all data values are processed; in practice only a finite number of previous values are considered, this number corresponding to the order of the RLS FIR filter, N. First we define y_n(k) as the output of the FIR filter at time n, using the current tap weight vector and the input vector of a previous time k. The estimation error value e_n(k) is the difference between the desired output value at time k and the corresponding value of y_n(k). These and other appropriate definitions are expressed below, for k = 1, 2, 3, …, n:
y_n(k) = w^T(n)x(k)
e_n(k) = d(k) − y_n(k)
d(n) = [d(1), d(2), …, d(n)]^T
y(n) = [y_n(1), y_n(2), …, y_n(n)]^T
e(n) = [e_n(1), e_n(2), …, e_n(n)]^T
e(n) = d(n) − y(n)   (3.26)
If we define X(n) as the matrix consisting of the n previous input column vectors up to the present time, then y(n) can also be expressed as Eq. (3.27):
X(n) = [x(1), x(2), …, x(n)]
y(n) = X^T(n)w(n)   (3.27)
The cost function can be expressed in matrix-vector form using a diagonal matrix Λ(n) consisting of the weighting factors:
ξ(n) = Σ_{k=1}^{n} λ^{n−k} e_n(k)^2 = e^T(n)Λ(n)e(n)
where Λ(n) = diag(λ^{n−1}, λ^{n−2}, λ^{n−3}, …, λ, 1)   (3.28)
Substituting values from Eqs. (3.26) and (3.27), the cost function can be expanded and then reduced as in Eq. (3.29) (temporarily dropping the (n) notation for clarity):
ξ(n) = e^T(n)Λ(n)e(n)
     = d^T Λ d − d^T Λ y − y^T Λ d + y^T Λ y
     = d^T Λ d − d^T Λ (X^T w) − (X^T w)^T Λ d + (X^T w)^T Λ (X^T w)
     = d^T Λ d − 2θ_λ^T w + w^T ψ_λ w   (3.29)
where
ψ_λ = X(n)Λ(n)X^T(n)
θ_λ = X(n)Λ(n)d(n)
We derive the gradient of the above expression for the cost function with respect to the filter tap weights. By forcing this to zero we find the coefficients of the filter, w(n), which minimize the cost function:
ψ_λ(n)w(n) = θ_λ(n)
w(n) = ψ_λ^{−1}(n)θ_λ(n)   (3.30)
The matrix ψ_λ(n) in the above equation can be expanded and rearranged in recursive form, and the special form of the matrix inversion lemma can be used to find its inverse, which is required to calculate the tap weight vector update. The vector k(n) is known as the gain vector and is included in order to simplify the calculation.
ψ_λ(n) = λψ_λ(n − 1) + x(n)x^T(n)
Applying the matrix inversion lemma gives:
ψ_λ^{−1}(n) = λ^{−1}ψ_λ^{−1}(n − 1) − [λ^{−2}ψ_λ^{−1}(n − 1)x(n)x^T(n)ψ_λ^{−1}(n − 1)] / [1 + λ^{−1}x^T(n)ψ_λ^{−1}(n − 1)x(n)]
            = λ^{−1}(ψ_λ^{−1}(n − 1) − k(n)x^T(n)ψ_λ^{−1}(n − 1))
where
k(n) = [λ^{−1}ψ_λ^{−1}(n − 1)x(n)] / [1 + λ^{−1}x^T(n)ψ_λ^{−1}(n − 1)x(n)] = ψ_λ^{−1}(n)x(n)   (3.31)
The vector θ_λ(n) of Eq. (3.29) can also be expressed in recursive form. Using this and substituting ψ_λ^{−1}(n) from Eq. (3.31) into Eq. (3.30), we finally arrive at the filter weight update for the RLS algorithm, as in Eq. (3.32):
θ_λ(n) = λθ_λ(n − 1) + x(n)d(n)
w(n) = ψ_λ^{−1}(n)θ_λ(n)
     = ψ_λ^{−1}(n − 1)θ_λ(n − 1) − k(n)x^T(n)ψ_λ^{−1}(n − 1)θ_λ(n − 1) + k(n)d(n)
     = w(n − 1) − k(n)x^T(n)w(n − 1) + k(n)d(n)
     = w(n − 1) + k(n)(d(n) − w^T(n − 1)x(n))
w(n) = w(n − 1) + k(n)e_{n−1}(n)   (3.32)
where e_{n−1}(n) = d(n) − w^T(n − 1)x(n)
3.2.3.2 Implementation of the RLS Algorithm
As stated previously, the memory of the RLS algorithm is confined to a finite number of values, corresponding to the order of the filter tap weight vector. Two factors of the RLS implementation should be noted: first, although matrix inversion is essential to the derivation of the RLS algorithm, no matrix inversion calculations are required in the implementation, which greatly reduces the computational complexity of the algorithm; secondly, unlike the LMS-based algorithms, current variables are updated within the iteration in which they are used, using values from the previous iteration. To implement the RLS algorithm, the following steps are executed in order:
1. The filter output is calculated using the filter tap weights from the previous iteration and the current input vector:
y_{n−1}(n) = w^T(n − 1)x(n)   (3.33)
2. The intermediate gain vector is calculated using Eq. (3.34):

u(n) = ψ̃_λ^{−1}(n−1) x(n)
k(n) = u(n) / [λ + x^T(n) u(n)]          (3.34)

3. The estimation error value is calculated using Eq. (3.35):

e_{n−1}(n) = d(n) − y_{n−1}(n)          (3.35)

4. The filter tap-weight vector is updated using Eq. (3.36) and the gain vector calculated in Eq. (3.34):

w(n) = w(n−1) + k(n) e_{n−1}(n)          (3.36)

5. The inverse matrix is updated using Eq. (3.37):

ψ̃_λ^{−1}(n) = λ^{−1} [ψ̃_λ^{−1}(n−1) − k(n) x^T(n) ψ̃_λ^{−1}(n−1)]          (3.37)

Each iteration of the RLS algorithm requires 4N² multiplication and 3N² addition operations.

3.3 Adaptive Filtering using MATLAB
MATLAB, an acronym for Matrix Laboratory, was originally designed to serve as the interactive link to the numerical computation libraries LINPACK and EISPACK that were used by engineers and scientists dealing with sets of equations. The MATLAB software was originally developed at the University of New Mexico and Stanford University in the late 1970s. By 1984, Jack Little and Cleve Moler had established a company, The MathWorks, with the clear objective of commercializing MATLAB. Over a million engineers and scientists use MATLAB today in well over 3000 universities worldwide, and it is considered a standard tool in education, business, and industry. The basic element in MATLAB is the matrix, which, unlike in other computer languages, does not have to be dimensioned or declared. MATLAB's original objective was to solve mathematical problems in linear algebra, numerical analysis, and optimization, but it quickly evolved into the preferred tool for data analysis, statistics, signal processing, control systems, economics, weather forecasting, and many other applications. Over the years, MATLAB has evolved an extended library of specialized built-in functions that are used to generate, among other things, two-dimensional (2-D) and 3-D graphics and animation, and it offers
numerous supplemental packages called toolboxes that provide additional software power in special areas of interest, such as:
• Curve fitting
• Optimization
• Signal processing
• Image processing
• Filter design
• Neural network design
• Control systems

Fig.3.4. MATLAB versatility diagram

MATLAB is an intuitive language and offers a technical computing environment. It provides core mathematics and advanced graphical tools for data analysis, visualization, and algorithm and application development. MATLAB is becoming a standard in industry, education, and business because the MATLAB environment is user-friendly and the objective of the software is to let the user spend time learning the physical and mathematical principles of a problem, not the software. The term "friendly" is used in the following sense: the MATLAB software executes one instruction at a time. By analyzing the partial results, new instructions can be executed that interact with the information already stored in the computer memory, without the formal compiling required by other competing high-level computer languages.
Major Software Characteristics:
i. Matrix-based numeric computation.
ii. High-level programming language.
iii. Toolboxes provide application-specific functionality.
iv. Multiple platform support.
v. Open and extensible system architecture.
vi. Interfaces to other languages (C, FORTRAN, etc.).

For the simulation of the algorithms discussed in sec. 3.2, MATLAB Version 7.4.0.287 (R2007a) is used. In the experimental setup, first of all, high-level MATLAB programs [5],[20] are written for the LMS, NLMS and RLS algorithms as per the implementation steps described in sec. 3.2.1.2, sec. 3.2.2.2 and sec. 3.2.3.2 respectively [44]. Then the above algorithms are simulated with a noisy tone signal generated through MATLAB commands (refer sec. 6.1). The inputs to the programs are the tone signal as primary input s(n), a random noise signal as reference input x(n), the order of the filter (N), the step-size value (µ) and the number of iterations (refer Fig. 6.1), whereas the outputs are the filtered output and the MSE, which can be seen in the graphical results obtained after the simulation is over (refer Fig. 6.2). The output results for the MATLAB simulation of the LMS, NLMS and RLS algorithms are presented and discussed later in chapter 6.
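The thesis programs themselves are written in MATLAB, but the NLMS noise-cancellation loop of sec. 3.2.2.2 can be sketched in a few lines of pure Python to show its structure. The signal lengths, noise path, filter order and step size below are illustrative choices for this sketch, not the values used in chapter 6:

```python
import math
import random

def nlms(d, x, N=19, mu=0.05, eps=1e-6):
    """Normalized LMS adaptive filter.
    d: primary input d(n) = s(n) + noise, x: reference noise x(n),
    N: filter order, mu: step size, eps: small regularization term.
    Returns the error signal e(n) (the cleaned output) and the
    final tap-weight vector w."""
    w = [0.0] * N
    e = [0.0] * len(d)
    for n in range(len(d)):
        # Tap-input vector x(n) = [x(n), x(n-1), ..., x(n-N+1)]
        xv = [x[n - k] if n - k >= 0 else 0.0 for k in range(N)]
        y = sum(wi * xi for wi, xi in zip(w, xv))   # filter output y(n)
        e[n] = d[n] - y                             # estimation error e(n)
        norm = eps + sum(xi * xi for xi in xv)      # x^T(n) x(n)
        # NLMS update: w(n+1) = w(n) + (mu / ||x(n)||^2) e(n) x(n)
        w = [wi + (mu / norm) * e[n] * xi for wi, xi in zip(w, xv)]
    return e, w

# Noisy tone: s(n) is the desired tone, v(n) a filtered version of the
# reference noise x(n), so the primary input is d(n) = s(n) + v(n),
# exactly the ANC arrangement described above.
random.seed(0)
s = [math.sin(0.03 * math.pi * n) for n in range(2000)]
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
v = [0.7 * x[n] + 0.2 * (x[n - 1] if n > 0 else 0.0) for n in range(2000)]
d = [s[n] + v[n] for n in range(2000)]
e, w = nlms(d, x)
```

After convergence the error signal e(n) approaches the clean tone s(n), and the learned taps approach the coefficients of the hypothetical noise path (0.7, 0.2); the MSE curves of chapter 6 are obtained from runs of this kind in MATLAB.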
  • 44. Chapter-4 SIMULINK MODEL DESIGN FOR HARDWARE IMPLEMENTATION 4.1 Introduction to Simulink Simulink is a software package for modeling, simulating and analyzing dynamic systems [46]. It supports linear and nonlinear systems modeled in continuous time, sampled time, or a hybrid of the two. Systems can also be multi rate, i.e. have different parts that are sampled or updated at different rates. For modeling, simulink provides a graphical user interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. With this interface, we can draw the models just as we would with pencil and paper (or as most textbooks depict them). Simulink includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. We can also customize and create our own blocks. Models are hierarchical, so we can build models using both top-down and bottom-up approaches. We can view the system at a high level and then double-click blocks to go down through the levels and thus visualize the model details. This approach provides insight into how a model is organized and how its parts interact. After we define a model, we can simulate it using a choice of integration methods either from the simulink menu or by entering commands in the MATLAB command window. In simulink, the menu is particularly convenient for interactive work. The command line approach is very useful for running a batch of simulations (for example, if we want to sweep a parameter across a range of values). Using scopes and other display blocks, we can see the simulation results while the simulation is running. In addition, we can change many parameters and see what happens. The simulation results can be put in the MATLAB workspace for post processing and visualization. 
The simulink model can be applied for modeling various time-varying systems that include control systems, signal processing systems, video processing systems, image processing systems, communication and satellite systems, ship systems, automotive systems, monetary systems, aircraft & spacecraft dynamics systems, and biological systems, as illustrated in Fig.4.1.
  • 45. Fig.4.1. Simulink Applications 4.2 Model Design In the experimental setup for noise cancellation, simulink tool box has been used which provides the capability to model a system and to analyze its behavior. Its library is enriched with various functions which mimics the real system. The designed model for Adaptive Noise Cancellation (ANC) using simulink toolbox is shown in Fig.4.2. 4.2.1 Common Blocks used in Building Model 4.2.1.1 C6713 DSK ADC Block This block is used to capture and digitize analog signals from external sources such as signal generators, frequency generators or audio devices. Dragging and dropping C6713 DSK ADC block in simulink block diagram allows audio coder-decoder module (codec) on the C6713 DSK to convert an analog input signal to a digital signal for the digital signal processing. Most of the configuration options in the block affect the codec. However, the output data type, samples per frame and scaling options are related to the model that we are using in simulink. 32   
  • 46. Fig.4.2. Adaptive Noise Cancellation Simulink model 4.2.1.2 C6713 DSK DAC Block Simulink model provides the means to generate output of an analog signal through the analog output jack on the C6713 DSK. When C6713 DSK DAC block is added to the model, the digital signal received by the codec is converted to an analog signal. Codec sends signal to the output jack after converting the digital signal to analog form using digital-to-analog conversion (D/A). 4.2.1.3 C6713 DSK Target Preferences Block This block provides access to the processor hardware settings that need to be configured for generating the code from Real-Time Workshop (RTW) to run on the target. It is mandatory to add this block to the simulink model for the embedded target C6713. This block is located in the Target Preferences in Embedded Target for TI C6000 DSP for TI DSP library. 4.2.1.4 C6713 DSK Reset Block This block is used to reset the C6713 DSK to initial conditions from the simulink model. Double-clicking this block in a simulink model window resets the C6713 DSK that is running the executable code built from the model. When we double-click the Reset block, the 33   
block runs the software reset function provided by CCS that resets the processor on the C6713 DSK. Applications running on the board stop, and the signal processor returns to the initial conditions that we defined.

4.2.1.5 NLMS Filter Block
This block adapts the filter weights based on the NLMS algorithm for filtering the input signal. We select the Adapt port check box to create an Adapt port on the block. When the input to this port is nonzero, the block continuously updates the filter weights; when the input to this port is zero, the filter weights remain constant. If the Reset port is enabled and a reset event occurs, the block resets the filter weights to their initial values.

4.2.1.6 C6713 DSK LED Block
This block triggers the user LEDs located on the C6713 DSK. When we add this block to a model and send a real scalar to the block input, the block sets the LED state based on the input value it receives: when the block receives an input value equal to 0, the LEDs are turned OFF; when the block receives a nonzero input value, the LEDs are turned ON.

4.2.1.7 C6713 DSK DIP Switch Block
This block outputs the state of the user switches located on the C6713 DSK board. In Boolean mode, the output is a vector of 4 Boolean values, with the least-significant bit (LSB) first. In Integer mode, the output is an integer from 0 to 15. For simulation, checkboxes in the block dialog are used in place of the physical switches.

4.2.2 Building the Model
To create the model, first type simulink in the MATLAB command window or directly click on the shortcut icon. On Microsoft Windows, the simulink library browser appears as shown in Fig. 4.3.
  • 48. Fig.4.3. Simulink library browser To create a new model, select Model from the New submenu of the simulink library window's File menu. To create a new model on Windows, select the New Model button on the Library Browser's toolbar. Simulink opens a new model window like Fig. 4.4. 35   
  • 49. Fig.4.4. Blank new model window To create Adaptive Noise Cancellation (ANC) model, we will need to copy blocks into the model from the following simulink block libraries: • Target for TI C6700 library (ADC, DAC, DIP, and LED blocks) • Signal processing library (NLMS filter block) • Commonly used blocks library (Constant block, Switch block and Relational block) • Discrete library (Delay block) To copy the ADC block from the Library Browser, first expand the Library Browser tree to display the blocks in the Target for TI C6700 library. Do this by clicking on the library node to display the library blocks. Then select the C6713 DSK board support sub library and finally, click on the respective block to select it. Now drag the ADC block from the browser and drop it in the model window. Simulink creates a copy of the blocks at the point where you dropped the node icon as illustrated in Fig.4.5. 36   
  • 50. Fig.4.5. Model window with ADC block Copy the rest of the blocks in a similar manner from their respective libraries into the model window. We can move a block from one place to another place by dragging the block in the model window. We can move a block a short distance by selecting the block and then pressing the arrow keys. With all the blocks copied into the model window, the model should look something like Fig.4.6. If we examine the block icons, we see an angle bracket on the right of the ADC block and two on the left of the NLMS filter block. The > symbol pointing out of a block is an output port; if the symbol points to a block, it is an input port. A signal travels out of an output port and into an input port of another block through a connecting line. When the blocks are connected, the port symbols disappear. Now it's time to connect the blocks. Position the pointer over the output port on the right side of the ADC block and connect it to the input port of delay, NLMS filter and switch block. Similarly make all connection as in Fig.4.2. 4.3 Model Reconfiguration Once the model is designed we have to reconfigure the model as per the requirement of the desired application. The simulink blocks parameters are adjusted as per the input output devices used. The input devices may be function generator or microphone and the output devices may be DSO or headphone respectively. This section explains and illustrates the reconfiguration setting of each simulink block like ADC, DAC, Adaptive filter, DIP, 37   
  • 51. LED, relational operator, switch block, and all that are used in the design of adaptive noise canceller. Fig.4.6. Model illustration before connections 4.3.1 The ADC Settings This block can be reconfigured to receive the input either from microphone or function generator. Input is applied through microphone when ADC source is kept at “Mic In” and through function generator when ADC source is kept at “Line In” as shown in Fig.4.7. The other settings are as follows: Double-click on the blue box to the left marked “DSK6713 ADC”. The screen as shown in Fig.4.7 will appear. Change the “ADC source” to “Line In” or “Mic In”. If we have a quiet microphone, select “+20dB Mic gain boost”. Set the “Sampling rate (Hz)” to “48 kHz”. Set the “Samples per frame” to 64. When done, click on “OK”. Important: Make sure the “Stereo” box is empty. 38   
  • 52. 4.3.2 The DAC Settings The DAC setting needs to be matched to those of the ADC. The major parameter is the sampling rate that is kept at the same rate of ADC i.e. 48 kHz as shown in Fig.4.8. Fig.4.7. Setting up the ADC for mono microphone input Fig.4.8. Setting the DAC parameters 39   
  • 53. 4.3.3 NLMS Filter Parameters Settings The most critical variable in an NLMS filter is the initial setup of “Step size (mu)”. If “mu” is too small, the filter has very fine resolution but reacts too slowly to the input signal. If “mu” is too large, the filter reacts very quickly but the error also remains large. The major parameters values that we have to change for the designed model are (shown in Fig.4.9): Step size (mu) = 0.001, Filter length =19 Select the Adapt port check box to create an Adapt port on the block. When the input to this port is nonzero, the block continuously updates the filter weights. When the input to this port is zero, the filter weights remain constant. Fig.4.9. Setting the NLMS filter parameters 40   
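The step-size trade-off described above can also be seen numerically. The following Python sketch is a hypothetical one-tap system-identification experiment (not the C6713 model itself): a very small mu such as 0.001 leaves a large residual error after a fixed number of iterations, because the filter reacts slowly, while a moderate mu converges much faster:

```python
import random

def nlms_residual(mu, steps=500, eps=1e-6):
    """Adapt a single NLMS tap w towards an unknown coefficient
    h = 0.8 and return |h - w| after `steps` iterations."""
    random.seed(1)
    h, w = 0.8, 0.0
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)        # excitation sample
        e = h * x - w * x                    # error of the 1-tap filter
        w += (mu / (eps + x * x)) * e * x    # normalized update
    return abs(h - w)

slow = nlms_residual(0.001)   # fine resolution, but reacts slowly
fast = nlms_residual(0.1)     # coarser steps, much faster convergence
```

Because the update is normalized, each iteration shrinks the coefficient gap roughly by the factor (1 − mu), so after 500 iterations mu = 0.001 has removed only about 40% of the initial error while mu = 0.1 has essentially converged. In the real-time model, mu = 0.001 is chosen to keep the residual error small at the price of slower adaptation.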
  • 54. 4.3.4 Delay Parameters Settings Delay parameter is required to delay the discrete-time input signal by a specified number of samples or frames. Because we are working with frames of 64 samples, it is convenient to configure the delay using frames. The steps for setting are described below and are illustrated in Fig. 4.10. Double-click on the “Delay” block. Change the “Delay units” to Frames. Set the “Delay (frames)” to 1. This makes the delay 64 samples. Fig.4.10. Setting the delay unit 4.3.5 DIP Switches Settings DIP switches are manual electric switches that are packaged in a group in a standard dual in-line package (DIP).These switches can work in two modes; Boolean mode, Integer mode. In Boolean mode, outputs are a vector of 4 boolean values with the least-significant bit (LSB) first. In Integer mode, outputs are an integer from 0 to 15. The DIP switches needs to be configured as shown in Fig. 4.11. 41   
The “Sample time” should be set to “–1”.

Fig.4.11. Setting up the DIP switch values

4.3.6 Constant Value Settings
The switch values lie between 0 and 15; we will use switch values 0 and 1. For the settings, double-click on the “Constant” block, then set the “Constant value” to 1 and the “Sample time” to “inf” as shown in Fig.4.12.

Fig.4.12. Setting the constant parameters
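The relationship between the DIP switch block's two output modes described in sec. 4.3.5 can be sketched as follows (a plain-Python illustration, not generated code):

```python
def dip_to_integer(switches):
    """Convert the four Boolean-mode DIP switch outputs
    (least-significant bit first) to the equivalent Integer-mode
    value, an integer from 0 to 15."""
    return sum(int(bool(s)) << i for i, s in enumerate(switches))

# With only switch 0 ON, the Integer-mode output is 1 -- the value
# the ANC model compares against the Constant block.
```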
  • 56. 4.3.7 Constant Data Type Settings The signal data type for the constant used in ANC model is set to “int16” as shown in Fig. 4.13. The setting of parameter can be done as follows: Click on the “Signal Data Types” tab. Set the “Output data type mode” to “int16”. This is compatible with the DAC on the DSK6713. Fig.4.13. Data type conversion to 16-bit integer 4.3.8 Relational Operator Type Settings Relational operator is used to check the given condition for the input signal. The relational operator setting for the designed model can be done as follows: Double click on the “Relational Operator” block. Change the “Relational operator” to “==”. Click on the “Signal Data Types” tab. 4.3.9 Relational Operator Data Type Settings Set the “Output data type mode” to “Boolean”. Click on “OK”. ( refer Fig.4.14) 43   
Fig.4.14. Changing the output data type

4.3.10 Switch Settings
The switch used in this model has three inputs, viz. input 1, input 2 and input 3, numbered from top to bottom (refer Fig.4.2). Input 1 and input 3 are data inputs, and input 2 is the control input. When input 2 satisfies the selection criterion, input 1 is passed to the output port; otherwise input 3 is passed. The switch is configured as follows:
Double-click on the “Switch” block.
Set the criterion for passing the first input to “u2>=Threshold”.
Click “OK”.
The simulink model for the hardware implementation of the NLMS algorithm is now designed, and the model is reconfigured to meet the requirements of the TMS320C6713 DSP processor environment. The reconfigured model shown in Fig.4.2 is ready to connect with Code Composer Studio [50] and the DSP processor with the help of the RTDX link and Real-Time Workshop [47]. This is presented in chapter 5.
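Before moving to the hardware, the overall selection logic of the reconfigured model can be summarised in a few lines of Python (names are illustrative; the real routing is defined graphically in Fig.4.2): the relational operator compares the DIP value with the constant 1, and the switch passes the NLMS error signal (the noise-cancelled output) when they match, otherwise the unprocessed noisy input.

```python
def anc_output(noisy, filtered, dip_value, constant=1, threshold=1):
    """Mimic the Relational Operator + Switch pair of the ANC model.
    The relational operator tests dip_value == constant; its Boolean
    result feeds the switch control input u2, and the switch passes
    the first data input when u2 >= threshold, else the third."""
    u2 = int(dip_value == constant)   # relational operator output
    return filtered if u2 >= threshold else noisy

# DIP set to 1: the noise-cancelled signal reaches the DAC.
# Any other DIP value: the raw noisy signal is heard instead.
```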
  • 58. Chapter-5 REAL-TIME IMPLEMENTATION ON DSP PROCESSOR Digital signal processors are fast special-purpose microprocessors with a specialized type of architecture and an instruction set appropriate for signal processing [45]. The architecture of the digital signal processor is very well suited for numerically intensive calculations. Digital signal processors are used for a wide range of applications which includes communication, control, speech processing, image processing etc. These processors have become the products of choice for a number of consumer applications, because they are very cost-effective and can be reprogrammed easily for different applications. DSP techniques have been very successful because of the development of low-cost software and hardware support [48]. DSP processors are concerned primarily with real-time signal processing. Real-time processing requires the processing to keep pace with some external event, whereas non-real-time processing has no such timing constraint. The external event is usually the analog input. Analog-based systems with discrete electronic components such as resistors can be more sensitive to temperature changes whereas DSP-based systems are less affected by environmental conditions. In this chapter we will learn how we can realize or implement an adaptive filter on hardware for real-time experiments. The model which was designed in previous chapter will be linked to the DSP processor with help of Real Time Data Exchange (RTDX) utility provided in simulink. 5.1 Introduction to Digital Signal Processor (TMS320C6713) The TMS320C6713 is a low cost board designed to allow the user to evaluate the capabilities of the C6713 DSP and develop C6713-based products [49]. It demonstrates how the DSP can be interfaced with various kinds of memories, peripherals, Joint Text Action Group (JTAG) and parallel peripheral interfaces. 
The board is approximately 5 inches wide and 8 inches long as shown in Fig.5.2 and is designed to sit on the desktop external to a host PC. It connects to the host PC through a USB port. The processor board includes a C6713 floating-point digital signal processor and a 45   
32-bit stereo codec TLV320AIC23 (AIC23) for input and output. The onboard AIC23 codec uses sigma–delta technology to provide the ADC and DAC. It connects to a 12-MHz system clock, and variable sampling rates from 8 to 96 kHz can be set readily [51]. A daughter-card expansion is also provided on the DSK board: two 80-pin connectors provide external peripheral and external memory interfaces. The external memory interface (EMIF) performs the task of interfacing with the other memory subsystems. Light-emitting diodes (LEDs) and liquid-crystal displays (LCDs) are used for spectrum display. The DSK board includes 16 MB (megabytes) of synchronous dynamic random access memory (SDRAM) and 256 kB (kilobytes) of flash memory. Four connectors on the board provide inputs and outputs: MIC IN for microphone input, LINE IN for line input, LINE OUT for line output, and HEADPHONE for a headphone output (multiplexed with line output). The status of the four user DIP switches on the DSK board can be read from a program and provides the user with a feedback control interface (refer Fig.5.1 & Fig.5.2). The DSK operates at 225 MHz. Also onboard are the voltage regulators that provide 1.26 V for the C6713 core and 3.3 V for its memory and peripherals. The major DSK hardware features are:
• A C6713 DSP operating at 225 MHz.
• An AIC23 stereo codec with Line In, Line Out, MIC, and headphone stereo jacks.
• 16 Mbytes of synchronous DRAM (SDRAM).
• 512 Kbytes of non-volatile Flash memory (256 Kbytes usable in the default configuration).
• Four user-accessible LEDs and DIP switches.
• Software board configuration through registers implemented in a complex logic device.
• Configurable boot options.
• Expansion connectors for daughter cards.
• JTAG emulation through the onboard JTAG emulator with USB host interface, or an external emulator.
• A single voltage power supply (+5 V).
  • 60. Fig.5.1. Block diagram of TMS320C6713 processor Fig.5.2. Physical overview of the TMS320C6713 processor 47   
5.1.1 Central Processing Unit Architecture
The CPU has a Very Long Instruction Word (VLIW) architecture [53]. The CPU always fetches eight 32-bit instructions at once, and there is a 256-bit bus to the internal program memory. Each group of eight instructions is called a fetch packet. The CPU has eight functional units that can operate in parallel and are equally split into two halves, A and B. All eight units do not have to be given instruction words if they are not ready. Therefore, instructions are dispatched to the functional units as execute packets with a variable number of 32-bit instruction words. The functional block diagram of the Texas Instruments (TI) processor architecture is shown in Fig.5.3.

Fig.5.3. Functional block diagram of TMS320C6713 CPU

The eight functional units include:
• Four ALUs that can perform fixed- and floating-point operations (.L1, .L2, .S1, .S2).
• Two ALUs that perform only fixed-point operations (.D1, .D2).
• Two multipliers that can perform fixed- or floating-point multiplications (.M1, .M2).

5.1.2 General Purpose Registers Overview
The CPU has thirty-two 32-bit general-purpose registers split equally between the A and B sides. The CPU has a load/store architecture in which all instructions operate on registers. The data-addressing units .D1 and .D2 are in charge of all data transfers between the register files and memory. The four functional units on a side freely share the 16 registers on that side. Each side has a single data bus connected to all the registers on the other side, so that functional units on one side can access data in the registers on the other side. Access to a register on the same side uses one clock cycle, while access to a register on the other side requires two clock cycles, i.e. a read and a write cycle.

5.1.3 Interrupts
The C6000 CPUs contain a vectored priority interrupt controller. The highest-priority interrupt is RESET, which is connected to the hardware reset pin and cannot be masked. The next-priority interrupt is NMI, which is generally used to alert the CPU of a serious hardware problem such as a power failure. Then there are twelve lower-priority maskable interrupts, INT4–INT15, with INT4 having the highest and INT15 the lowest priority.

Fig.5.4. Interrupt priority diagram
Fig. 5.5 depicts how the processor handles an interrupt when it arrives; the interrupt handling mechanism is a vital feature of a microprocessor.

Fig.5.5. Interrupt handling procedure

These maskable interrupts can be selected from up to 32 sources (C6000 family); the sources vary between family members. For the C6713, they include external interrupt pins selected by the GPIO unit, and interrupts from internal peripherals such as timers, McBSP serial ports, McASP serial ports, EDMA channels, and the host port interface. The CPUs have a multiplexer called the interrupt selector that allows the user to select and connect interrupt sources to INT4 through INT15. As soon as the interrupt is serviced, the processor resumes the operation that was in progress prior to the interrupt request.

5.1.4 Audio Interface Codec
The C6713 uses a Texas Instruments AIC23 codec. In the default configuration, the codec is connected to the two serial ports, McBSP0 and McBSP1. McBSP0 is used as a unidirectional channel to control the codec's internal configuration registers. It should be programmed to send a 16-bit control word to the AIC23 in SPI format. The top 7 bits of the control word specify the register to be modified and the lower 9 bits contain the register value. Once the
codec is configured, the control channel is normally idle while audio data is being transmitted. McBSP1 is used as the bidirectional data channel for ADC input and DAC output samples. The codec supports a variety of sample formats. For the experiments in this work, the codec should be configured to use 16-bit samples in two's complement signed format. The codec should be set to operate in master mode so as to supply the frame synchronization and bit clocks at the correct sample rate to McBSP1. The preferred serial format is DSP mode, which is designed specifically to operate with the McBSP ports on TI DSPs. The codec has a 12 MHz system clock, the same frequency as used in many USB systems. The AIC23 can divide down the 12 MHz clock frequency to provide sampling rates of 8000 Hz, 16000 Hz, 24000 Hz, 32000 Hz, 44100 Hz, 48000 Hz, and 96000 Hz.

Fig.5.6. Audio connection illustrating control and data signals

The DSK uses two McBSPs to communicate with the AIC23 codec, one for control and another for data. The C6713 supplies a 12 MHz clock to the AIC23 codec, which is divided down internally in the AIC23 to give the sampling rates. The codec can be set to these sampling rates by using the function DSK6713_AIC23_setFreq(handle, freq ID) from the BSL. This function puts the quantity “Value” into AIC23 control register 8. Some of the AIC23 analog interface properties are:
• The ADC for the line inputs has a full-scale range of 1.0 V RMS.
• The microphone input is a high-impedance, low-capacitance input compatible with a wide range of microphones.
• The DAC for the line outputs has a full-scale output voltage range of 1.0 V RMS.
• The stereo headphone outputs are designed to drive 16- or 32-ohm headphones.
• The AIC23 has an analog bypass mode that directly connects the analog line inputs to the analog line outputs.
• The AIC23 has a sidetone insertion mode where the microphone input is routed to the line and headphone outputs.

Fig.5.7. AIC23 codec interface

5.1.5 DSP/BIOS & RTDX
The DSP/BIOS facilities utilize the Real-Time Data Exchange (RTDX) link to obtain and monitor target data in real time [47]. I utilized the RTDX link to create my own customized interfaces to the DSP target by using the RTDX API library. RTDX transfers data between a host computer and target devices without interfering with the target application. This bidirectional communication path provides data collection by the host as well as host interaction while the target application is running. RTDX also enables host systems to provide data stimulation to the target application and algorithms.
Data transfer to the host occurs in real time while the target application is running. On the host platform, an RTDX host library operates in conjunction with the Code Composer Studio IDE. Data visualization and analysis tools communicate with RTDX through COM APIs to obtain the target data and/or to send data to the DSP application. The host library supports two modes of receiving data from a target application: continuous and non-continuous.

Fig.5.8. DSP/BIOS and RTDX

In continuous mode, the data is simply buffered by the RTDX host library and is not written to a log file. Continuous mode should be used when the developer wants to continuously obtain and display the data from a target application and does not need to store it in a log file. The data can be analyzed and visualized on the host using the COM interface provided by RTDX; clients such as Visual Basic, Visual C++, Excel, LabVIEW, MATLAB, and others are readily capable of utilizing this interface.
5.2 Code Composer Studio as Integrated Development Environment
Code Composer Studio is the DSP industry's first fully integrated development environment (IDE) [50] with DSP-specific functionality. With a familiar environment like MS-based C++™, Code Composer lets you edit, build, debug, profile and manage projects from a single unified environment. Other unique features include graphical signal analysis, injection/extraction of data signals via file I/O, multiprocessor debugging, automated testing, customization via a C-interpretive scripting language, and much more.

Fig.5.9. Code Composer Studio platform

Real-time analysis can be performed using Real-Time Data Exchange (RTDX). RTDX allows for data exchange between the host PC and the target DSK, as well as analysis in real time without stopping the target. Key statistics and performance can be monitored in real time. Through the Joint Test Action Group (JTAG) interface, communication with on-chip emulation support occurs to control and monitor program execution. The C6713 DSK board includes a JTAG interface through the USB port.

Fig.5.10. Embedded software development