1. 9/10/2013
1
Quality Assurance in Field and
Lab Work
Quality Assurance
Laboratory experiments require a level of data quality that gives
confidence in the reported values.
A Quality Assurance Program establishes statistical procedures
to identify and minimize errors.
Quality Assurance includes Quality Control and
Quality Assessment.
Quality Control Elements
• Trained personnel
• Equipment maintenance
• Proper analytical methods
• Good Laboratory Practices
• http://en.wikipedia.org/wiki/Good_Laboratory_Practice
• http://www.fda.gov/ICECI/EnforcementActions/BioresearchMonitoring/ucm135197.htm
• Good Measurement Practices
• http://www.nist.gov/pml/wmd/labmetrology/upload/GMP_13_20120229.pdf
QC Elements (cont.)
• Standard Operating Procedures - established or prescribed
methods to be followed routinely for the performance of designated
operations or in designated situations
• Documentation
• Inspection and Validation
• Mechanism to control errors that provides an adequate
quality level at reasonable cost
QA/QC How? Where?
Other Guides:
ASTM Standards
EPA Guidance documents
Manufacturer’s guidelines
TCEQ or state environmental agency
documents
Quality Assessment (QA)
Quality Assessment is the mechanism that verifies the system is
operating within acceptable limits by ensuring that quality
control elements are followed
The quality procedures are described in a QA/QC manual
Quality Assurance Program Elements
Sampling
Sample Custody
Sample preparation
Analytical Methodology
Calibration
Detection Limits ( http://www.chemiasoft.com/node/58)
Statistics in lab analysis
Quality Control Charts
(http://en.wikipedia.org/wiki/Control_chart)
Ex: Quality Control Chart:
http://environmentalqa.com/2012/07/04/quality-control-charting-in-the-laboratory/
Sampling
Quality of data depends primarily on the sample analyzed
Sample must be Representative of the system being
evaluated
Grab or Individual Samples are those collected at specific
sites and times
Composite samples are made up of portions of grab samples, so
they represent averages over several sites or times
Samples must be completely identifiable by Labeling
Field Quality Control Elements
Equipment blanks, prepared before sampling, detect contamination in
the sampling equipment: clean (rinsed, evacuated) sampling containers
are filled with a pure medium (ex. analyte-free water, zero air)
Field blanks are collected in the same way at the end of the sampling
Trip blanks are prepared at the lab and make the trip to the field and
back, to evaluate contamination during transportation
Duplicate samples to check precision of the sampling process
Split samples are prepared by dividing one sample between two
containers to check analytical performance
Sample Custody and Documentation
Documentation of how the samples are collected, preserved,
stored, transported, treated, labeled, tracked
State names of persons at each stage of the process
Other documentation include sample prep, analytical
methods, calibration, reagents and standards, calculations,
detection limits, quality control checks, data validation and
reduction, and reporting
Ex: Sample Custody
Forms
Analytical Method Elements
Adequate for the objective
Sensitivity - the ratio of the change in output to the change in input, i.e., the slope of
the curve relating the input to the output that represents the measurement
Selectivity - Selectivity refers to the extent to which a method can determine particular
analytes in mixtures or matrices without interferences from other components.
Accuracy
Precision
Calibration requirements
Reproducibility - the degree of agreement between measurements or observations
conducted on replicate specimens in different locations by different people
Calibration
A calibration curve is a plot of the response of the instrument to
different concentrations of the standard
Concentrations prepared by dilution of the standard
Frequency of calibration established, initial calibration and
continuing calibrations
Detection Limits
Detection Limit is the smallest concentration that can be
measured with a stated probability of significance
The Method Detection Limit (MDL) is determined at 99% confidence
The Instrument Detection Limit is the concentration of the
analyte standard that produces a signal five times greater than
the noise level (a signal-to-noise ratio of 5)
http://en.wikipedia.org/wiki/Detection_limit
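The EPA-style MDL calculation multiplies the standard deviation of replicate low-level spike measurements by the one-tailed Student t value at 99% confidence. A minimal sketch with hypothetical replicate data:

```python
import statistics

# Hypothetical replicate measurements of a low-level spiked sample (ug/L)
replicates = [1.02, 0.98, 1.10, 0.95, 1.05, 1.01, 0.97]

s = statistics.stdev(replicates)  # sample standard deviation of the replicates
t_99 = 3.143                      # one-tailed t, 99% confidence, df = 6 (from a t-table)
mdl = t_99 * s                    # Method Detection Limit
print(f"s = {s:.4f} ug/L, MDL = {mdl:.3f} ug/L")
```

With seven replicates the degrees of freedom are n - 1 = 6; the t value is read from a standard t-table.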
General Lab Discussion
What kind of information should be kept or documented in
relationship to equipment performance and use?
pH Meter, Gas Chromatograph, Glassware (http://www.astm.org/Standards/E288.htm)
Lab Refrigerator for sample storage
DI water
Chemicals (http://www.sciencecompany.com/Chemical-Grade-Designations-W53C665.aspx)
LIMS – Laboratory Information Management System
On November 6, 2006, the environmental laboratory and monitoring communities reached a
long-term goal: a national laboratory accreditation program. After years of an evolving
program under the auspices of the National Environmental Laboratory Accreditation
Conference (NELAC) and the Institute for National Environmental Laboratory
Accreditation (INELA), both Boards of Directors took action to form The NELAC Institute
(TNI). http://www.nelac-institute.org/standards.php?pab=1_1#pab1_2
Statistics
Accuracy (Bias) is the degree of agreement of a measured value with the true or expected value (the concentration of the
standard). It is measured as Percent Recovery:
%R = (Measured Value x 100) / True Value
The accuracy of monitors is assessed by challenging the analyzer with at least one audit gas of known concentration.
Precision is a measure of the repeatability of the results. A precision check consists of introducing a known concentration of the
pollutant into the monitor in the required concentration range; the resulting measured concentration is then compared to the
known concentration.
Precision is the degree of agreement of measurements (of the standard) performed under the same conditions; it is assessed by the
standard deviation s.
Precision can also be assessed by the Confidence Limit (CL), determined by the probability that n repeated
measurements (of the standard) fall within a certain interval of the mean:
CL = (t · s) / √n
where t is the Student t factor at the desired probability.
Upper Control Limits (UCL) and Lower Control Limits (LCL), defined at ±2σ or ±3σ, serve to monitor that accuracy remains
under statistical control by checking that %R remains within the LCL and UCL. Precision can be monitored in a similar fashion.
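The percent-recovery and control-limit calculations above can be sketched as follows, using hypothetical repeated measurements of a known standard:

```python
import statistics

true_value = 50.0  # known concentration of the standard
measured = [49.2, 50.5, 48.8, 51.0, 49.7, 50.2]  # hypothetical QC check results

# Percent recovery for each check
recoveries = [m * 100.0 / true_value for m in measured]
mean_r = statistics.mean(recoveries)
s_r = statistics.stdev(recoveries)

# Control chart limits: warning at +/- 2 sigma, control at +/- 3 sigma
uwl, lwl = mean_r + 2 * s_r, mean_r - 2 * s_r
ucl, lcl = mean_r + 3 * s_r, mean_r - 3 * s_r
print(f"mean %R = {mean_r:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```

A new %R value falling outside the LCL-UCL band signals that accuracy is no longer under statistical control.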
Statistics Review
Error analysis:
Measured value = true value ± error.
If error is random, we can make replicate measurements and
calculate an average.
The calculated average is an estimate of the true mean.
Basic statistical parameters
Sample average x̄, which is an estimate of the true mean η of the underlying random variable, is calculated as:
x̄ = (1/n) Σ xᵢ
Sample variance s², which is an estimate of the true variance σ² of the underlying random variable, is
calculated as:
s² = Σ (xᵢ − x̄)² / (n − 1)
Sample standard deviation s is equal to the square root of the sample variance s².
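These basic parameters can be computed directly with Python's standard library (hypothetical replicate data):

```python
import statistics

x = [10.1, 9.8, 10.3, 10.0, 9.9]  # hypothetical replicate measurements

xbar = statistics.mean(x)      # sample average, estimate of the true mean
s2 = statistics.variance(x)    # sample variance (n - 1 denominator), estimate of the true variance
s = statistics.stdev(x)        # sample standard deviation, square root of s2
print(f"x̄ = {xbar}, s² = {s2:.4f}, s = {s:.4f}")
```

Note that `statistics.variance` uses the n − 1 (sample) denominator, matching the formula above; `statistics.pvariance` would give the population variance instead.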
Basic statistical parameters
Often we are more interested in the variance of the average value, or the standard error of the
mean (SEM):
SEM = s / √n
i.e., the standard deviation divided by the square root of the number of samples.
The SEM is an estimate of the uncertainty involved in estimating the mean of a
sample.
In contrast, the standard deviation (s) is an estimate of the variability involved in
measuring the data from which mean is calculated.
The SEM (i.e., the uncertainty of the estimate of the mean) decreases as the sample
size increases.
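A short sketch of the SEM calculation, reusing the hypothetical replicate data from above:

```python
import math
import statistics

x = [10.1, 9.8, 10.3, 10.0, 9.9]  # hypothetical replicate measurements

s = statistics.stdev(x)            # variability of the individual measurements
sem = s / math.sqrt(len(x))        # uncertainty of the estimated mean
print(f"s = {s:.4f}, SEM = {sem:.4f}")
```

Because of the √n in the denominator, quadrupling the number of samples halves the SEM, which is why the uncertainty of the mean shrinks as sample size grows.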
The normal (or Gaussian) distribution
Valid to assume that sample averages have normally distributed errors.
Figure on left shows a probability distribution function (PDF) for a normal distribution with a
mean of 0 and a standard deviation of 1.
The area under any PDF equals unity.
For a normal distribution, 68% of the PDF lies within η ± σ, 95% within η ± 2σ, and 99.7% within η ± 3σ.
Figure on right shows cumulative distribution function (CDF).
Confidence Intervals for the Mean Value
Uncertainty is typically characterized by calculating confidence intervals for
the true mean:
t = the t-distribution value. The value of t is both a function of the degrees of
freedom (n-1) and the level of significance.
For example, assume α = 0.05 to calculate a 95% CI.
CI = x̄ ± (t · s) / √n
The α value is the probability that a random variable will fall in
the upper or lower tail of a probability distribution.
For example, α = 0.05 implies that there is a 0.95 probability
that a random variable will not fall in the upper or lower
tail of the probability distribution.
Statistical tables of probability distributions (e.g., normal
and “student t”) list probabilities that a random variable will
fall in the upper tail only.
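Putting the pieces together, a confidence interval for the mean can be sketched as below, with hypothetical data and the two-tailed t value read from a standard t-table:

```python
import math
import statistics

x = [10.1, 9.8, 10.3, 10.0, 9.9]  # hypothetical replicate measurements
n = len(x)

xbar = statistics.mean(x)
s = statistics.stdev(x)

# Two-tailed critical t for alpha = 0.05 and df = n - 1 = 4, from a t-table
t = 2.776
half_width = t * s / math.sqrt(n)  # CI half-width: t * s / sqrt(n)
print(f"95% CI for the mean: {xbar:.3f} ± {half_width:.3f}")
```

The interpretation: we are 95% confident the interval x̄ ± half_width contains the true mean η.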
α values for probability distributions
α values and confidence intervals
We typically want to determine a confidence interval for which we
are 90% confident that a random variable will not fall in either
tail.
In this case, we use an α/2 = 0.05.
Similarly, to determine 95% and 99% confidence intervals, we
would use α/2 = 0.025 and 0.005, respectively.
t(α/2, n-1) can be determined in EXCEL using the function
"=TINV(α, n-1)"; note that TINV takes the total two-tailed probability α.
http://office.microsoft.com/en-us/excel-help/tinv-HP005209317.aspx
T-table
A t-table lists critical values for two-tailed tests, which are
then used to determine confidence intervals. The t-table at the
link below gives critical values by degrees of freedom for
selected percentiles from the 90th to the 99th:
http://www.dummies.com/how-to/content/statistical-tdistribution-the-147ttable148.html
Excel has a function to obtain the t-value.
=TINV(probability,deg_freedom)
T-test
A t-test is any statistical hypothesis test in which the test statistic follows a
Student's t distribution if the null hypothesis is supported. It can be used to
determine if two sets of data are significantly different from each other, and is
most commonly applied when the test statistic would follow a normal
distribution if the value of a scaling term in the test statistic were known.
When the scaling term is unknown and is replaced by an estimate based on the
data, the test statistic (under certain conditions) follows a Student's t
distribution.
The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist
working for the Guinness brewery in Dublin, Ireland ("Student" was his pen
name).
http://en.wikipedia.org/wiki/Student%27s_t-test
=TTEST(array1, array2, tails, type) is available in Excel.
Other uses of the t-test
Compare the data sets from two labs (or lab groups) for the same
compound samples with the same technique.
Compare a parameter such as temperature from two different, but
perhaps fairly close locations.
Compare two different treatment methods.
This test is used only when both:
the two sample sizes (the number n of measurements in
each group) are equal; and
the two distributions can be assumed to have the same
variance.
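Under those two assumptions (equal n, equal variance), the pooled two-sample t-test for comparing results from two labs can be sketched as follows, with hypothetical data:

```python
import math
import statistics

# Hypothetical results for the same samples analyzed by two labs (equal n)
lab_a = [10.1, 9.8, 10.3, 10.0, 9.9]
lab_b = [10.4, 10.2, 10.6, 10.3, 10.5]

n = len(lab_a)
mean_a, mean_b = statistics.mean(lab_a), statistics.mean(lab_b)
var_a, var_b = statistics.variance(lab_a), statistics.variance(lab_b)

# Pooled variance: with equal n this is just the average of the two variances
sp2 = (var_a + var_b) / 2
t_stat = (mean_a - mean_b) / math.sqrt(sp2 * 2 / n)
df = 2 * n - 2

# Two-tailed critical t for alpha = 0.05, df = 8 (from a t-table): 2.306
significant = abs(t_stat) > 2.306
print(f"t = {t_stat:.3f}, df = {df}, differ at 95% confidence? {significant}")
```

If |t| exceeds the critical value, the two labs' means differ significantly; this mirrors what Excel's TTEST returns as a p-value.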