
On hierarchical vs. non-hierarchical comparisons in metrology and testing

Published online by Cambridge University Press:  19 April 2010

F. Pavese*
Affiliation:
Istituto Nazionale di Ricerca Metrologica (INRIM), Strada delle Cacce 73-91, 10139, Torino, Italy
Correspondence: f.pavese@inrim.it

Abstract

The type of data treatment differs depending on whether the comparison, in particular a key comparison of the MRA (mutual recognition arrangement), is of the hierarchical or the non-hierarchical type. This term does not refer to a possible hierarchy among the participant laboratories, nor, conversely, to the absence of such a hierarchy as in the MRA key comparisons, but to an intrinsic characteristic of the comparison measurand or design. A comparison is typically hierarchical when it involves artefact standards: in this case, the summary parameters of the comparison are hierarchically higher than the input dataset. In non-hierarchical comparisons, the summary parameters are generally not of a hierarchically higher level than the input dataset, because the comparison dataset can be considered drawn from a single super-population. This happens when a single standard is circulated for measurement; when the measured samples are all drawn from a single batch of a reference material; or when the standards are all realisations of a single condition, namely a physical or chemical state. This paper discusses these two categories in detail.
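As an illustration of the non-hierarchical case described above, a minimal sketch (not taken from the paper; the data and function name are hypothetical) of treating all laboratory results as draws from a single super-population and summarising them at the same level as the input data, here with an inverse-variance weighted mean:

```python
import math

def weighted_mean(values, uncertainties):
    """Inverse-variance weighted mean and its standard uncertainty.

    In a non-hierarchical comparison, the summary value computed this
    way is of the same kind as the input results, not of a
    hierarchically higher level.
    """
    weights = [1.0 / u**2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, math.sqrt(1.0 / total)

# Hypothetical results from four laboratories measuring
# a single circulated standard (value, standard uncertainty)
values = [10.02, 9.98, 10.05, 10.00]
uncertainties = [0.03, 0.02, 0.05, 0.04]
mean, u = weighted_mean(values, uncertainties)
```

In a hierarchical comparison (e.g. with artefact standards), by contrast, each laboratory's artefact carries its own value, and the summary parameters belong to a higher level than any single input result, so a pooled statistic of this kind would not be the appropriate summary.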

Type
Research Article
Copyright
© EDP Sciences 2010

