2. What is a Design?
• Simply – a collection of code files that define the functionality of a single digital electronics component
• A piece of code in a hardware description language (Verilog or VHDL)
• A Design goes through various phases:
  List of Requirements → Code → List of Gates → Silicon
3. What is Design Verification?
• Process used to ensure a design's functional correctness with respect to its requirements and specification prior to manufacture
• Ultimate goal is to discover as many bugs in a design as possible before shipping the product to the customer
• Bug discovery is maximized by maximizing coverage (code, functional) of the design by means of testing (a minimal coverage sketch follows below)
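As a rough illustration of the coverage metric mentioned above (not from the original slides; the coverage-point names and hit set are hypothetical), coverage can be viewed as the fraction of defined coverage points exercised by the tests:

# Minimal sketch: coverage as the fraction of coverage points hit by testing.
# Coverage-point names and the hit set are illustrative only.
coverage_points = {"fifo_full", "fifo_empty", "overflow_irq", "underflow_irq"}

def coverage(hits):
    """Return the fraction of coverage points exercised so far."""
    return len(hits & coverage_points) / len(coverage_points)

hits_after_tests = {"fifo_full", "fifo_empty"}          # points observed during simulation
print(f"coverage = {coverage(hits_after_tests):.0%}")   # -> coverage = 50%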
4. Simulation-based DV
[Diagram: simulation-based DV flow – the Test Generator (TG) produces tests (directed test generation or biased random); the simulator (SIM) runs them on the Design Under Verification (DUV) and reports coverage; a Machine Learning (ML) component performs test learning / bias learning and feeds the learnt biases and test-generation technique back to the TG]
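A minimal sketch of the feedback loop in the diagram above, assuming the simulator and the learner are opaque functions (simulate_duv, update_biases and the bias representation are hypothetical placeholders, not part of the original flow):

def simulate_duv(test):
    """Stand-in for running one test on the DUV in a simulator and
    returning the set of coverage points it hit (hypothetical)."""
    raise NotImplementedError

def update_biases(biases, test, hits):
    """Stand-in for the ML component that learns which biases lead to new coverage (hypothetical)."""
    return biases

def cdg_loop(generate_test, biases, goal, max_runs=1000):
    """Coverage-directed generation: generate -> simulate -> learn, repeated."""
    covered = set()
    for _ in range(max_runs):
        test = generate_test(biases)                 # TG: biased-random or directed test
        hits = simulate_duv(test)                    # SIM on the DUV
        covered |= hits                              # accumulate coverage
        biases = update_biases(biases, test, hits)   # ML: bias learning
        if goal <= covered:                          # stop once the coverage goal is met
            break
    return covered, biases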
5. Why is DV Hard?
• Increasing digital design complexity
• Greater automation of Design process
than Verification process
• Increasing miniaturization level of silicon
chips
• Competition – increasing feature demands
by customers
• Power management
• Increasing Verification Effort
• Many hundreds of tests, 7-month projects
6. Why is DV Hard?
[Diagram: competing pressures on verification – design complexity, technology process, limited engineers and resources, limited budget; reputation ("Get it Right!") versus time to market ("Product to Market – Make it Fast!")]
7. Previous attempts
[Diagram: summary of previous attempts at CDG via ML]
• BN – Pros: good results; approximates the CDG process well. Cons: domain knowledge required; difficult to interpret the learnt knowledge.
• GA – Pros: decent results; mature platform. Cons: non-universal environment; finds one/few solutions for the entire search space.
• Markov Models – Pros: excellent bug discovery; approximates the CDG process well. Cons: effort to set up the environment; difficult to interpret the learnt knowledge.
• GP – Pros: good results; no domain knowledge. Cons: code diversity; finds one/few solutions for the entire search space.
For details: http://www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2001405
8. Why LCS (XCS) on DV?
• Adaptive Learning Systems
Ø The problem can be formulated in a range of different ways
Ø Designs change over time, and coverage requirements also change during a simulation run
• Develop a complete, accurate and minimal representation of a problem
Ø Achieve coverage in more than one way and balance between the alternatives
• The rules developed are easy to understand, analyse, combine and alter; no domain knowledge is required
10. Why not LCS (XCS) on DV?
• XCS has issues with Boolean problems that require overlapping rules
• The problem itself is too big for XCS, but it can be scaled down
• Need to make any future attempt noise-proof
FUTURE RESEARCH WILL TELL!
11. First XCS attempt on DV
• Single-step problem
• Learn the relationship between the biases for a Test Generator (Condition) and the coverage (Action) they achieve
• Noiseless environment, as a single randomization seed is used by the TG
• Both Conditions and Actions are bit strings, and the ternary alphabet {0,1,#} is used for expressing the learnt relationships (see the matching sketch below)
• Use the standard XCS parameters as in the 2002 XCS algorithmic description
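As a rough illustration (not part of the original slides), a ternary condition over the TG bias bits is matched against a concrete bias vector as follows; the bias vector used here is hypothetical, while the condition/action pair is taken from the learnt-rule table shown later:

def matches(condition, bias_vector):
    """Ternary match: '#' is a don't-care, '0'/'1' must match exactly."""
    return all(c == '#' or c == b for c, b in zip(condition, bias_vector))

# Classifier: condition over 7 bias bits -> action = 4 coverage-signal bits
condition, action = "0###1##", "0000"
bias_vector = "0101100"                 # hypothetical bias setting for the TG

if matches(condition, bias_vector):
    print(f"biases {bias_vector} are predicted to achieve coverage {action}")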
19. Learnt Rules (DV3)
Classifiers (R = predicted reward, E = prediction error, F = fitness, AS = action-set size estimate, EXP = experience, NUM = numerosity):

ID  Cond.   : Action   R     E   F     AS     EXP    NUM
 1  0###1## : 0000     1000  0   1.00  26.98  22805  21
 2  ###10## : 0000     0     0   0.48  31.76  390    6
 3  0#110## : 1110     1000  0   0.63  23.31  803    6
 4  01#10## : 1110     1000  0   0.58  26.06  392    2
 5  1#01100 : 0001     1000  0   0.42  9.34   102    3
 6  0#100## : 0010     1000  0   0.50  13.29  1534   2
 7  01#00## : 0010     1000  0   0.82  17.97  3520   8
 8  1###0#0 : 1100     1000  0   0.62  20.85  403    4
 9  ####### : 0011     0     0   1.00  36.16  6285   36
10  ####### : 0110     0     0   1.00  44.35  6157   30

• ID1 – tells us which 32 biases to avoid
• ID2 – wrong, but tells us that the 32 bias vectors will cover at least one signal
• ID3 & 5 or ID4 & 5 achieve max coverage; longer combinations are ID 5, 6 & 8 or 5, 7 & 8
• ID9 & 10 – tell us what cannot be achieved
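The rule combinations noted above can be checked mechanically: OR-ing the action bit strings of the classifiers involved shows which coverage signals a set of rules achieves together. A small sketch (the helper function is illustrative, not from the slides):

def combined_coverage(actions):
    """Bitwise OR of action bit strings: coverage signals hit by the rules together."""
    width = len(actions[0])
    bits = 0
    for a in actions:
        bits |= int(a, 2)
    return format(bits, f"0{width}b")

# ID3 (action 1110) together with ID5 (action 0001) reach full coverage:
print(combined_coverage(["1110", "0001"]))          # -> 1111
# The longer combination ID5, ID6 & ID8 does the same:
print(combined_coverage(["0001", "0010", "1100"]))  # -> 1111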
20. Why deal with DV?
• DV is a hard real-world problem
• Designs have complex interactions and are becoming more complex
• Maximise coverage while minimising the resources spent on it
• Wicked fitness landscape resembling needle-in-a-haystack or deceptive problems
• The 80/20 rule applies
• Chance to compete with other EA and probabilistic ML techniques
• Formulate the problem as either multi-step or single-step, using a variety of representations (binary, integers, real numbers, graphs etc.)
23. Previous attempts 1
• Genetic Algorithms
• Evolve a test structure or bias for maximising coverage (see the sketch after this list)
• Pros:
• Decent results in both Code and Functional Coverage
(>70%)
• Easy to understand evolved knowledge
• Mature platforms
• Cons:
• Some techniques required domain knowledge (setting
fitness function or tweaking other parameters)
• Non-universal verification environment
• Search for a single solution for the entire search space
– this is not very helpful for DV problems
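A minimal sketch of the GA formulation above, not the implementation used in the cited work: candidate bias vectors are evolved against a coverage-based fitness, with evaluate_coverage as a hypothetical stand-in for running the TG and the simulator.

import random

def evaluate_coverage(bias_vector):
    """Hypothetical stand-in: run the TG with these biases, simulate, return coverage in [0, 1]."""
    raise NotImplementedError

def evolve_biases(n_bits=7, pop_size=20, generations=50, p_mut=0.05):
    """Simple GA over bias bit strings; fitness = coverage achieved."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate_coverage, reverse=True)
        parents = scored[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate_coverage)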
24. Previous attempts 2
• Genetic Programming
• Test structure for maximising coverage by learning DAGs
• Pros:
• Good results in Code Coverage (>90%)
• Only point of user involvement is the Instruction Library
• Mature platform
• Cons:
• Earlier versions had problems with code diversity
• Verification environment mostly for microprocessors
• Search for a single solution for the entire search space
– this is not very helpful for DV problems
25. Previous attempts 3
• Bayesian Networks
• Probabilistic network model used to answer MPE (most probable explanation) queries about the coverage to be achieved
• Pros:
• Good results in Functional Coverage (~90-100%)
• Approximates the CDG process well
• Cons:
• Domain Knowledge in constructing initial Network
(though automation techniques have been tried)
• Verification environment mostly for sub-systems of
microprocessors (i.e. doesn’t scale on larger systems)
• Difficult to understand what has been learnt, difficult to
later manually improve
26. Previous attempts 4
• Markov Models
• Probabilistic network model (an FSM with transition probabilities) used to generate stimuli for achieving maximum coverage (see the sketch below)
• Pros:
• Excellent results in bug coverage (100%)
• Approximates the CDG process very well
• Cons:
• Effort in constructing the template files (TG) and activity
monitors
• Difficult to understand what has been learnt, difficult to
later manually improve
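As a rough illustration of Markov-model stimulus generation as described above (the states and transition probabilities are made up for the example, not taken from the cited work):

import random

# Hypothetical Markov model over stimulus "states" (e.g. bus operations);
# each state maps to (next_state, probability) pairs, tuned from coverage feedback.
transitions = {
    "IDLE":  [("READ", 0.5), ("WRITE", 0.4), ("IDLE", 0.1)],
    "READ":  [("READ", 0.3), ("IDLE", 0.7)],
    "WRITE": [("WRITE", 0.2), ("READ", 0.3), ("IDLE", 0.5)],
}

def generate_stimulus(length=10, start="IDLE"):
    """Walk the Markov chain to produce a stimulus sequence for the DUV."""
    state, sequence = start, []
    for _ in range(length):
        states, probs = zip(*transitions[state])
        state = random.choices(states, weights=probs)[0]
        sequence.append(state)
    return sequence

print(generate_stimulus())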