International Journal Of Computational Engineering Research (ijceronline.com) Vol. 2 Issue. 7



                   Video Compression Using SPIHT and Neural Network
                                     Sangeeta Mishra, Sudhir Sawarkar


Abstract
         Apart from the existing image compression technology represented by the JPEG, MPEG and H.26x families of standards, new techniques such as neural networks and genetic algorithms are being developed to explore the future of image coding. Applications of neural networks to the basic back propagation algorithm are now well established, as are other aspects of neural network involvement in this technology. In this paper, several training algorithms were implemented: gradient descent back propagation, gradient descent with momentum back propagation, gradient descent with adaptive learning back propagation, gradient descent with momentum and adaptive learning back propagation, and the Levenberg-Marquardt algorithm. The compression ratio obtained is 1.1737089:1. It was observed that the size remains the same after compression; the difference lies in the clarity.

Keywords: Macroblock, Neural Network, SPIHT

1. Introduction
          Reducing the transmission bit-rate while concomitantly retaining image quality is the most daunting challenge to overcome in the area of very low bit-rate video coding, e.g., the H.26X standards [3], [6]. The MPEG-4 [2] video standard introduced the concept of content-based coding, dividing video frames into separate segments comprising a background and one or more moving objects. This idea has been exploited in several low bit-rate macroblock-based video coding algorithms [1][8] using a simplified segmentation process that avoids handling arbitrarily shaped objects and can therefore employ popular macroblock-based motion estimation techniques. Such algorithms focus on moving regions through the use of regular pattern templates, from a pattern codebook (Figure 1), of non-overlapping rectangular blocks of 16×16 pixels, called macroblocks (MB).
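
The macroblock partitioning described above can be sketched as follows. This is an illustrative helper only; the function name and the QCIF frame size are assumptions, not part of the paper.

    import numpy as np

    def split_into_macroblocks(frame, mb_size=16):
        # Split a grayscale frame into non-overlapping mb_size x mb_size macroblocks.
        # Frame dimensions are assumed to be multiples of mb_size.
        h, w = frame.shape
        blocks = []
        for y in range(0, h, mb_size):
            for x in range(0, w, mb_size):
                blocks.append(((y, x), frame[y:y + mb_size, x:x + mb_size]))
        return blocks

    # Example: a 144x176 (QCIF) frame yields 9 * 11 = 99 macroblocks.
    frame = np.zeros((144, 176), dtype=np.uint8)
    print(len(split_into_macroblocks(frame)))   # 99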

2. SPIHT
          Once the decomposed image is obtained, we need a way to code the wavelet coefficients into an efficient result, taking redundancy and storage space into consideration. SPIHT [2] is one of the most advanced schemes available, even outperforming the state-of-the-art JPEG 2000 in some situations. The basic principle is the same: progressive coding is applied, processing the image with respect to a lowering threshold. The difference lies in the concept of zerotrees (spatial orientation trees in SPIHT). This idea takes into account the links between coefficients across subbands at different levels. The underlying assumption is always the same: if a coefficient in the highest level of the transform in a particular subband is insignificant against a particular threshold, it is very probable that its descendants in lower levels will be insignificant too, so quite a large group of coefficients can be coded with one symbol. SPIHT makes use of three lists: the List of Significant Pixels (LSP), the List of Insignificant Pixels (LIP) and the List of Insignificant Sets (LIS). These are coefficient location lists that contain their coordinates. After initialization, the algorithm performs two passes for each threshold level: the sorting pass (in which the lists are organized) and the refinement pass (which does the actual progressive coding transmission). The result is in the form of a bitstream. A detailed scheme of the algorithm is presented in Fig. 3. The algorithm has several advantages. The first is its strong progressive capability: decoding (or coding) can be interrupted at any time, and a result of the maximum possible detail can be reconstructed with one-bit precision. This is very desirable when transmitting files over the Internet, since users with slower connection speeds can download only a small part of the file and still obtain a much more usable result than with other codecs such as progressive JPEG. A second advantage is a very compact output bitstream with large bit variability: no additional entropy coding or scrambling has to be applied. It is also possible to insert a watermarking scheme into the SPIHT coding domain [3], and this watermarking technique is considered very strong with regard to watermark invisibility and attack resiliency. The disadvantages must also be considered. SPIHT is very vulnerable to bit corruption, as a single bit error can introduce significant image distortion depending on its location. A worse property is its need for precise bit synchronization, because a slip in bit transmission can lead to complete misinterpretation on the decoder side. For SPIHT to be employed in real-time applications, error handling and synchronization methods must be introduced in order to make the codec more resilient.
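
Before SPIHT can be applied, the frame must be wavelet-decomposed into the subband pyramid on which the spatial orientation trees are defined. A minimal sketch of that preprocessing step, assuming the PyWavelets package and an arbitrary choice of wavelet and decomposition depth (the paper does not specify either), is:

    import numpy as np
    import pywt   # PyWavelets

    # A residue frame (random data here as a stand-in) is decomposed with a
    # three-level 2-D DWT; SPIHT's spatial orientation trees are built on the
    # resulting coefficient pyramid.
    frame = np.random.rand(128, 128).astype(np.float32)
    coeffs = pywt.wavedec2(frame, wavelet='haar', level=3)

    # coeffs[0] is the coarsest approximation band; each later entry is a
    # (horizontal, vertical, diagonal) detail tuple for one level.
    print(coeffs[0].shape, len(coeffs) - 1)   # (16, 16) 3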


3.   Principle of the SPIHT Coder:
           SPIHT is based on three principles:
     i) exploitation of the hierarchical structure of the wavelet transform, by using a tree-based organization of the coefficients;
     ii) partial ordering of the transformed coefficients by magnitude, with the ordering data not explicitly transmitted but recalculated by the decoder; and
     iii) ordered bit-plane transmission of refinement bits for the coefficient values.
This leads to a compressed bitstream in which the most important coefficients are transmitted first, the values of all coefficients are progressively refined, and the relationship between coefficients representing the same location at different scales is fully exploited for compression efficiency. SPIHT is a very effective, popular and computationally simple technique for image compression; it belongs to the next generation of wavelet encoders. SPIHT exploits the properties of the wavelet-transformed images to increase its efficiency, and it offers better performance than previously reported image compression techniques. In this technique the pixels are divided into sets and subsets. Under the same zerotree assumption used in EZW, the SPIHT coder scans wavelet coefficients along quadtrees rather than subbands; it identifies the significance of wavelet coefficients only by their magnitude and encodes the signs separately into a new bit stream. The initial threshold is calculated using the equation given in the initialization step below. The significance test is applied to these pixels: if a pixel is found to be significant with respect to the threshold, it is taken into consideration for transmission; otherwise it is ignored.
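
A small sketch of the initial threshold computation and the significance test just described, using an illustrative coefficient array (not data from the paper):

    import numpy as np

    def initial_threshold_exponent(coeffs):
        # n = floor(log2(max |c(i,j)|)), as in the initialization step below.
        return int(np.floor(np.log2(np.max(np.abs(coeffs)))))

    def is_significant(c, n):
        # Significance test of a coefficient against the current threshold 2**n.
        return abs(c) >= 2 ** n

    coeffs = np.array([[34.0, -3.2], [7.9, -21.5]])
    n = initial_threshold_exponent(coeffs)       # max |c| = 34, so n = 5, threshold 32
    print(n, is_significant(coeffs[0, 0], n), is_significant(coeffs[1, 0], n))
    # prints: 5 True False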
The coding / decoding algorithm has the following steps (a simplified Python sketch of the pass structure follows the list):
1) Initialization: set n = ⌊log2(max(i,j){|c(i,j)|})⌋ and transmit n.
Set the LSP as an empty list, add the coordinates
 (i, j) ∈ H to the LIP, and add only those with descendants to the LIS, as type A entries.
2) Sorting pass:
2.1 for each entry (i, j) in the LIP do:
    2.1.1 output Sn(i, j);
    2.1.2 if Sn(i, j) = 1, move (i, j) to the LSP and output the sign of c(i, j);
2.2 for each entry (i, j) in the LIS do:
  2.2.1 if the entry is of type A then
  2.2.1.1 output Sn(D(i, j));
  2.2.1.2 if Sn(D(i, j)) = 1 then
  2.2.1.2.1 for each (k, l) ∈ O(i, j) do:
  2.2.1.2.1.1 output Sn(k, l);
  2.2.1.2.1.2 if Sn(k, l) = 1, add (k, l) to the LSP and output the sign of c(k, l);
  2.2.1.2.1.3 if Sn(k, l) = 0, add (k, l) to the LIP;
  2.2.1.2.2 if L(i, j) ≠ 0, move (i, j) to the end of the LIS as an entry of type B and go to Step 2.2.2;
otherwise, remove entry (i, j) from the LIS;
2.2.2 if the entry is of type B then
2.2.2.1 output Sn(L(i, j));
2.2.2.2 if Sn(L(i, j)) = 1 then
2.2.2.2.1 add each (k, l) ∈ O(i, j) to the end of the LIS as an entry of type A;
2.2.2.3 remove (i, j) from the LIS;
3) Refinement pass: for each entry (i, j) in the LSP, except those included in the last sorting pass (i.e. with the same n), output the n-th most significant bit of |c(i, j)|;
4) Quantization step update: decrement n by 1 and go to Step 2 if needed.
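
As a rough illustration of the pass structure above, the following sketch shows how the LIP/LSP bookkeeping, the significance and sign bits of the sorting pass, and the refinement bits interact across bit planes. It is a deliberate simplification, not the full coder: the LIS set partitioning of steps 2.2.1-2.2.2 is omitted and every coefficient starts in the LIP. The function name and the toy coefficient array are illustrative assumptions.

    import numpy as np

    def bitplane_encode(coeffs, n_min=0):
        # Simplified SPIHT-style pass structure: only the LIP/LSP bookkeeping and
        # bit-plane refinement are shown; the LIS set partitioning is omitted.
        bits = []
        n = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
        lip = [(i, j) for i in range(coeffs.shape[0]) for j in range(coeffs.shape[1])]
        lsp = []
        while n >= n_min:
            # Sorting pass: significance test of LIP entries against the threshold 2**n.
            still_insignificant, newly_significant = [], []
            for (i, j) in lip:
                if abs(coeffs[i, j]) >= 2 ** n:
                    bits.append(1)
                    bits.append(0 if coeffs[i, j] >= 0 else 1)   # sign bit
                    newly_significant.append((i, j))
                else:
                    bits.append(0)
                    still_insignificant.append((i, j))
            lip = still_insignificant
            # Refinement pass: n-th bit of coefficients already significant in earlier passes.
            for (i, j) in lsp:
                bits.append((int(abs(coeffs[i, j])) >> n) & 1)
            lsp.extend(newly_significant)
            n -= 1
        return bits

    coeffs = np.array([[34.0, -3.0], [8.0, -21.0]])
    print(len(bitplane_encode(coeffs)))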

4. Neural network
1.   Neural network outline
         A neural network (NN) is a nonlinear, complex network system consisting of a large number of processing units analogous to neural cells. It has the capability of approximating nonlinear functions, very strong fault tolerance, and fast local learning. Among such networks, the feed forward network consists of three layers: the input layer, the hidden layer and the output layer. The hidden layer may itself comprise several layers, and the neurons of each layer accept only the outputs of the previous layer's neurons. The network then realizes the nonlinear mapping between input and output by adjusting the connection weights and the network structure [1]. At present, neural network research has formed many schools; the most fruitful lines of work include the multi-layer network BP algorithm, the Hopfield neural network model, adaptive resonance theory, self-organizing feature map theory and so on. This paper is based on research into BP neural network training of the quantification parameters [2], and presents a network that is better than the BP (Back Propagation) network in its optimal-approximation property.

2. Feed Forward Network
A single-layer network of neurons having R inputs is shown in Fig. 1, in full detail on the left and with a layer diagram on the right.

                                                 Fig. 1. Feed Forward Network

Feed forward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear and linear relationships between input and output vectors. The linear output layer lets the network produce values outside the range -1 to +1. For multiple-layer networks, the number of the layer determines the superscript on the weight matrices. The appropriate notation is used in the two-layer tansig/purelin network shown in Fig. 2.

                                                  Fig. 2. Multiple Layer Input

This network can be used as a general function approximator. It can approximate any function with a finite number of discontinuities arbitrarily well, given sufficient neurons in the hidden layer.
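
A minimal numerical sketch of such a two-layer tansig/purelin network, with assumed layer sizes and random weights (purelin is simply the identity, so the output layer is a plain affine map):

    import numpy as np

    def tansig(x):
        # Hyperbolic tangent sigmoid transfer function used in the hidden layer.
        return np.tanh(x)

    def two_layer_net(x, W1, b1, W2, b2):
        # Hidden tansig layer followed by a purelin (identity) output layer.
        a1 = tansig(W1 @ x + b1)
        return W2 @ a1 + b2

    rng = np.random.default_rng(0)
    R, S1, S2 = 4, 10, 1        # inputs, hidden neurons, outputs (illustrative sizes)
    W1, b1 = rng.normal(size=(S1, R)), np.zeros(S1)
    W2, b2 = rng.normal(size=(S2, S1)), np.zeros(S2)
    print(two_layer_net(rng.normal(size=R), W1, b1, W2, b2))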

3.   Back propagation algorithm:
         Back propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear, differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate, user-defined way. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard back propagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back propagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. There are a number of variations on the basic algorithm that are based on other standard optimization techniques, such as conjugate gradient and Newton methods. There are generally four steps in the training process (an illustrative workflow follows the list):
     1. Assemble the training data
     2. Create the network object
     3. Train the network
     4. Simulate the network response to new inputs
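
The four steps above can be illustrated with the following hedged sketch. The paper's experiments used MATLAB's Neural Network Toolbox; here scikit-learn's MLPRegressor is used purely as a stand-in to show the workflow, and the toy data, layer size and hyperparameters are assumptions, not the paper's settings.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # 1. Assemble the training data (a toy nonlinear target, not the paper's frame data).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 4))
    y = np.sin(X).sum(axis=1)

    # 2. Create the network object: one tanh hidden layer, stochastic gradient
    #    descent with momentum and an adaptive learning rate.
    net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh', solver='sgd',
                       learning_rate='adaptive', momentum=0.9, max_iter=2000)

    # 3. Train the network.
    net.fit(X, y)

    # 4. Simulate the network response to new inputs.
    print(net.predict(rng.uniform(-1, 1, size=(3, 4))))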

The generalized delta rule, also known as the back propagation algorithm, is explained here briefly for a feed forward Neural Network (NN). The NN explained here contains three layers: input, hidden, and output layers. During the training phase, the training data is fed into the input layer. The data is propagated to the hidden layer and then to the output layer. This is called the forward pass of the back propagation algorithm. In the forward pass, each node in the hidden layer receives input from all the nodes of the input layer, which are multiplied by appropriate weights and then summed. The output of the hidden node is the nonlinear transformation of the resulting sum. Similarly, each node in the output layer receives input from all the nodes of the hidden layer, which are multiplied by appropriate weights and then summed; the output of this node is the nonlinear transformation of the resulting sum. The output values of the output layer are compared with the target output values, which are those that we attempt to teach our network. The error between actual output values and target output values is calculated and propagated back towards the hidden layer. This is called the backward pass of the back propagation algorithm. The error is used to update the connection strengths between nodes, i.e. the weight matrices between the input-hidden and hidden-output layers are updated.
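
The forward and backward passes described above can be condensed into the following sketch of one generalized-delta-rule update for a three-layer network. The tanh nonlinearities, the layer sizes and the learning rate are assumptions made for illustration; this is a minimal sketch, not the paper's implementation.

    import numpy as np

    def backprop_step(x, t, W1, b1, W2, b2, lr=0.1):
        # Forward pass: hidden and output nodes apply a tanh nonlinearity to
        # their weighted sums.
        a1 = np.tanh(W1 @ x + b1)
        y = np.tanh(W2 @ a1 + b2)
        # Backward pass: the output error is propagated back towards the hidden layer.
        delta2 = (y - t) * (1 - y ** 2)            # output-layer error term
        delta1 = (W2.T @ delta2) * (1 - a1 ** 2)   # hidden-layer error term
        # Update the input-hidden and hidden-output weight matrices.
        W2 -= lr * np.outer(delta2, a1)
        b2 -= lr * delta2
        W1 -= lr * np.outer(delta1, x)
        b1 -= lr * delta1
        return 0.5 * float((y - t) @ (y - t))      # squared error, for monitoring

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(scale=0.5, size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(2)
    x, t = rng.normal(size=4), np.array([0.3, -0.1])
    for _ in range(5):
        print(round(backprop_step(x, t, W1, b1, W2, b2), 4))   # error decreases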

4.   Results And Discussions
         During the testing phase no learning takes place, i.e., the weight matrices are not changed. Each test vector is fed into the input layer, and the feed forward of the testing data is the same as the feed forward of the training data. Various training algorithms were
tried and tested; the resulting frames are shown in Figs. 3-9, and an illustrative sketch of the underlying update rules follows.
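
As a purely illustrative sketch of what distinguishes the compared variants, the update rules below show plain gradient descent, gradient descent with momentum, and an adaptive learning-rate adjustment. The constants (momentum 0.9, increase/decrease factors) are assumed values, not those used in the experiments, and the Levenberg-Marquardt update is not shown.

    import numpy as np

    def gd_update(w, grad, lr):
        # Plain gradient descent.
        return w - lr * grad

    def gd_momentum_update(w, grad, velocity, lr, mu=0.9):
        # Gradient descent with momentum: past gradients accumulate in the velocity term.
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity

    def adaptive_lr(lr, new_error, old_error, inc=1.05, dec=0.7):
        # Adaptive learning rate: grow the step while the error falls, shrink it otherwise.
        return lr * inc if new_error < old_error else lr * dec

    # One illustrative update on the quadratic error surface E(w) = 0.5 * w**2.
    w, v, lr, old_err = 1.0, 0.0, 0.1, 0.5
    w, v = gd_momentum_update(w, grad=w, velocity=v, lr=lr)
    new_err = 0.5 * w ** 2
    lr = adaptive_lr(lr, new_err, old_err)
    print(round(w, 3), round(lr, 3))    # 0.9 0.105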




                                                Fig. 3 Original Residue Frame

           Fig. 4 Frame after SPIHT with gradient descent with momentum and adaptive learning back propagation

                   Fig. 5 Frame after SPIHT with gradient descent with adaptive learning back propagation

                   Fig. 6 Frame after SPIHT with gradient descent with momentum back propagation

                                       Fig. 7 Training Achieved

                           Fig. 8 Frame after SPIHT with gradient descent back propagation

                            Fig. 9 Frame after SPIHT with Levenberg-Marquardt algorithm

                                      Fig. 10 Performance Achieved


5. References
1.  M. Vetterli and J. Kovacevic, Wavelets and Subband Coding, Prentice Hall PTR, Englewood Cliffs, NJ, 1995.
2.  C. Sidney Burrus, Ramesh A. Gopinath, and Haitao Guo, Introduction to Wavelets and Wavelet Transforms, Prentice Hall Inc., Upper Saddle River, NJ, 1998.
3.  O. Rioul and M. Vetterli, "Wavelets and signal processing," IEEE Signal Processing Magazine, vol. 8, no. 4, pp. 14-38, 1991.
4.  A. Said and W. A. Pearlman, "SPIHT Image Compression: Properties of the Method", HTML.
5.  A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, 1996.
6.  J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications and Programming Techniques, Addison-Wesley, 1991.
7.  H. Demuth and M. Beale, Neural Network Toolbox User's Guide (for use with MATLAB), The MathWorks Inc., 1998.
8.  Kiamal Z. Pekmestzi, "Multiplexer-Based Array Multipliers," IEEE Transactions on Computers, vol. 48, Jan. 1998.
9.  J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, Morgan Kaufmann, 1990.
10. J. Jiang, "Image compression with neural networks," Signal Processing: Image Communication 14 (1999) 737-760.




