Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-31
... Conservation Program: Test Procedure and Energy Conservation Standard for Set-Top Boxes and Network Equipment... comments on the request for information pertaining to the development of test procedures and energy conservation standards for set-top boxes and network equipment. The comment period is extended to March 15...
Setting, Evaluating, and Maintaining Certification Standards with the Rasch Model.
ERIC Educational Resources Information Center
Grosse, Martin E.; Wright, Benjamin D.
1986-01-01
Based on the standard setting procedures of the American Board of Preventive Medicine for their Core Test, this article describes how Rasch measurement can facilitate using test content judgments in setting a standard. Rasch measurement can then be used to evaluate and improve the precision of the standard and to hold it constant across time.…
ERIC Educational Resources Information Center
Cramer, Stephen E.
A standard-setting procedure was developed for the Georgia Teacher Certification Testing Program as tests in 30 teaching fields were revised. A list of important characteristics of a standard-setting procedure was derived, drawing on the work of R. A. Berk (1986). The best method was found to be a highly formalized judgmental, empirical Angoff…
Proficiency Standards and Cut-Scores for Language Proficiency Tests.
ERIC Educational Resources Information Center
Moy, Raymond H.
The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…
Issues and Methods for Standard-Setting.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Issues involved in standard setting along with methods for standard setting are reviewed, with specific reference to their relevance for criterion referenced testing. Definitions are given of continuum and state models, and traditional and normative standard setting procedures. Since continuum models are considered more appropriate for criterion…
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
Diagnostic Profiles: A Standard Setting Method for Use with a Cognitive Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M.
2016-01-01
This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…
Proficiency Standards and Cut-Scores for Language Proficiency Tests.
ERIC Educational Resources Information Center
Moy, Raymond H.
1984-01-01
Discusses the problems associated with "grading on a curve," the approach often used for standard setting on language proficiency tests. Proposes four main steps presented in the setting of a non-arbitrary cut-score. These steps not only establish a proficiency standard checked by external criteria, but also check to see that the test covers the…
ERIC Educational Resources Information Center
Derico, Vontrice L.
2017-01-01
The purpose of the proposed quasi-experimental quantitative study was to determine if students who were taught in the inclusive setting yielded higher standardized test scores compared to students who were taught in the resource setting. The researcher analyzed the standardized test scores, in the areas of Language Arts, Reading, and Mathematics…
ERIC Educational Resources Information Center
Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.
2008-01-01
Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are highly necessary, and producing them usually involves the difficult task of establishing cut scores. Contrasting with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretations.
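For orientation, a common ingredient in IRT-based cut-score setting (stated generically here; it is not necessarily the specific ICC transformation this study proposes) is mapping a cut point on the ability scale to a raw cut score through the test characteristic curve:

```latex
\[
\mathrm{TCC}(\theta) \;=\; \sum_{i=1}^{n} P_i(\theta),
\qquad
x_{\mathrm{cut}} \;=\; \mathrm{TCC}(\theta_c),
\]
```

where P_i(θ) is the item characteristic curve of item i, n is the number of items, and θ_c is the chosen ability cut point.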
Higher Education Faculty Engagement in a Modified Mapmark Standard Setting
ERIC Educational Resources Information Center
Horst, S. Jeanne; DeMars, Christine E.
2016-01-01
The Mapmark standard setting method was adapted to a higher education setting in which faculty leaders were highly involved. Eighteen university faculty members participated in a day-long standard setting for a general education communications test. In Round 1, faculty set initial cut-scores for each of four student learning objectives. In Rounds…
Standard Setting in Specific-Purpose Language Testing: What Can a Qualitative Study Add?
ERIC Educational Resources Information Center
Manias, Elizabeth; McNamara, Tim
2016-01-01
This paper explores the views of nursing and medical domain experts in considering the standards for a specific-purpose English language screening test, the Occupational English Test (OET), for professional registration for immigrant health professionals. Since individuals who score performances in the test setting are often language experts…
Setting Standards for Minimum Competency Tests.
ERIC Educational Resources Information Center
Mehrens, William A.
Some general questions about minimum competency tests are discussed, and various methods of setting standards are reviewed with major attention devoted to those methods used for dichotomizing a continuum. Methods reviewed under the heading of Absolute Judgments of Test Content include Nedelsky's, Angoff's, Ebel's, and Jaeger's. These methods are…
Building an Evaluation Scale using Item Response Theory.
Lalor, John P; Wu, Hao; Yu, Hong
2016-11-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
Building an Evaluation Scale using Item Response Theory
Lalor, John P.; Wu, Hao; Yu, Hong
2016-01-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern. PMID:28004039
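As a concrete illustration of the item characteristics described in the two records above, here is a minimal sketch of the two-parameter logistic (2PL) item response function commonly used in IRT; the discrimination and difficulty values are invented for the example and are not taken from the paper.

```python
# Minimal 2PL item response function sketch (illustrative parameters only).
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """Probability that a respondent with ability theta answers an item
    with discrimination a and difficulty b correctly (2PL model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy, highly discriminating item vs. a hard, weakly discriminating one.
for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(p_correct(theta, a=2.0, b=-0.5), 3),
          round(p_correct(theta, a=0.5, b=1.5), 3))
```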
The Introduction of Standardized External Testing in Ukraine: Challenges and Successes
ERIC Educational Resources Information Center
Kovalchuk, Serhiy; Koroliuk, Svitlana
2012-01-01
Standardized external testing (SET) began to be implemented in Ukraine in 2008 as an instrument for combating corruption in higher education and ensuring fair university admission. This article examines the conditions and processes that led to the introduction of SET, overviews its implementation over three years (2008-10), analyzes SET and…
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
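A toy numerical sketch of the two cut-score computations compared above; the candidate scores and Angoff ratings are invented for illustration and do not come from the study.

```python
# Norm-referenced cut (mean minus 1 SD of candidate scores) versus a
# modified Angoff cut (sum over items of the panel's mean judged
# probability that a borderline candidate answers correctly).
import statistics

scores = [62, 71, 55, 80, 68, 74, 59, 66]            # invented candidate raw scores
norm_cut = statistics.mean(scores) - statistics.stdev(scores)

# angoff_ratings[j][i]: judge j's probability estimate for item i (invented).
angoff_ratings = [
    [0.6, 0.8, 0.5, 0.7],
    [0.7, 0.9, 0.4, 0.6],
]
n_items = len(angoff_ratings[0])
angoff_cut = sum(
    statistics.mean(judge[i] for judge in angoff_ratings) for i in range(n_items)
)

print(round(norm_cut, 1), round(angoff_cut, 2))
```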
The Objective Borderline Method: A Probabilistic Method for Standard Setting
ERIC Educational Resources Information Center
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Kathleen; Tiemann, Gregg
2016-08-03
The U.S. Department of Energy’s Appliance Standards and Equipment Program tests, sets and helps enforce efficiency standards on more than 60 U.S. products. A majority of that testing is performed at the Intertek laboratory in Cortland, NY.
Hogan, Kathleen; Tiemann, Gregg
2018-01-16
The U.S. Department of Energy's Appliance Standards and Equipment Program tests, sets and helps enforce efficiency standards on more than 60 U.S. products. A majority of that testing is performed at the Intertek laboratory in Cortland, NY.
40 CFR 160.81 - Standard operating procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Standard operating procedures. 160.81... GOOD LABORATORY PRACTICE STANDARDS Testing Facilities Operation § 160.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting forth study...
40 CFR 160.81 - Standard operating procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Standard operating procedures. 160.81... GOOD LABORATORY PRACTICE STANDARDS Testing Facilities Operation § 160.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting forth study...
40 CFR 160.81 - Standard operating procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Standard operating procedures. 160.81... GOOD LABORATORY PRACTICE STANDARDS Testing Facilities Operation § 160.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting forth study...
Consistency of Standard Setting in an Augmented State Testing System
ERIC Educational Resources Information Center
Lissitz, Robert W.; Wei, Hua
2008-01-01
In this article we address the issue of consistency in standard setting in the context of an augmented state testing program. Information gained from the external NRT scores is used to help make an informed decision on the determination of cut scores on the state test. The consistency of cut scores on the CRT across grades is maintained by forcing…
40 CFR 1065.5 - Overview of this part 1065 and its relationship to the standard-setting part.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Applicability and General... part specifies procedures that apply generally to testing various categories of engines. See the... engine. Before using this part's procedures, read the standard-setting part to answer at least the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... Board (CARB) its request for a waiver of preemption for emission standards and related test procedures... standards and test procedures for heavy-duty urban bus engines and vehicles. The 2000 rulemaking included... to emission standards and test procedures resulting from these five sets of amendments were codified...
Effect of Content Knowledge on Angoff-Style Standard Setting Judgments
ERIC Educational Resources Information Center
Margolis, Melissa J.; Mee, Janet; Clauser, Brian E.; Winward, Marcia; Clauser, Jerome C.
2016-01-01
Evidence to support the credibility of standard setting procedures is a critical part of the validity argument for decisions made based on tests that are used for classification. One area in which there has been limited empirical study is the impact of standard setting judge selection on the resulting cut score. One important issue related to…
Maintaining Equivalent Cut Scores for Small Sample Test Forms
ERIC Educational Resources Information Center
Dwyer, Andrew C.
2016-01-01
This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-01
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the..., ``Certification of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-10
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
USL/DBMS NASA/PC R and D project system testing standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kavi, Srinu; Moreau, Dennis R.; Yan, Lin
1984-01-01
A set of system testing standards to be used in the development of all C software within the NASA/PC Research and Development Project is established. Testing will be considered in two phases: the program testing phase and the system testing phase. The objective of these standards is to provide guidelines for the planning and conduct of program and software system testing.
Eblen, Denise R; Barlow, Kristina E; Naugle, Alecia Larew
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) pathogen reduction-hazard analysis critical control point systems final rule, published in 1996, established Salmonella performance standards for broiler chicken, cow and bull, market hog, and steer and heifer carcasses and for ground beef, chicken, and turkey meat. In 1998, the FSIS began testing to verify that establishments are meeting performance standards. Samples are collected in sets in which the number of samples is defined but varies according to product class. A sample set fails when the number of positive Salmonella samples exceeds the maximum number of positive samples allowed under the performance standard. Salmonella sample sets collected at 1,584 establishments from 1998 through 2003 were examined to identify factors associated with failure of one or more sets. Overall, 1,282 (80.9%) of establishments never had failed sets. In establishments that did experience set failure(s), generally the failed sets were collected early in the establishment testing history, with the exception of broiler establishments where failure(s) occurred both early and late in the course of testing. Small establishments were more likely to have experienced a set failure than were large or very small establishments, and broiler establishments were more likely to have failed than were ground beef, market hog, or steer-heifer establishments. Agency response to failed Salmonella sample sets in the form of in-depth verification reviews and related establishment-initiated corrective actions has likely contributed to declines in the number of establishments that failed sets. A focus on food safety measures in small establishments and broiler processing establishments should further reduce the number of sample sets that fail to meet the Salmonella performance standard.
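The pass/fail rule described above reduces to a count comparison; the sketch below uses placeholder values for the set size and the maximum allowed positives, not the actual regulatory limits for any product class.

```python
# Toy illustration: a sample set fails when the number of Salmonella-positive
# samples exceeds the maximum allowed under the performance standard.
def set_fails(positives: int, max_positive: int) -> bool:
    """Return True if the sample set fails the performance standard."""
    return positives > max_positive

sample_results = [True, False, False, True, False, False, False, True]  # per-sample positives
print(set_fails(sum(sample_results), max_positive=2))   # 3 positives > 2 allowed -> fails
```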
21 CFR 58.81 - Standard operating procedures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Testing Facilities Operation § 58.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting... following: (1) Animal room preparation. (2) Animal care. (3) Receipt, identification, storage, handling...
21 CFR 58.81 - Standard operating procedures.
Code of Federal Regulations, 2013 CFR
2013-04-01
... LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Testing Facilities Operation § 58.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting... following: (1) Animal room preparation. (2) Animal care. (3) Receipt, identification, storage, handling...
21 CFR 58.81 - Standard operating procedures.
Code of Federal Regulations, 2012 CFR
2012-04-01
... LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Testing Facilities Operation § 58.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting... following: (1) Animal room preparation. (2) Animal care. (3) Receipt, identification, storage, handling...
21 CFR 58.81 - Standard operating procedures.
Code of Federal Regulations, 2014 CFR
2014-04-01
... LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Testing Facilities Operation § 58.81 Standard operating procedures. (a) A testing facility shall have standard operating procedures in writing setting... following: (1) Animal room preparation. (2) Animal care. (3) Receipt, identification, storage, handling...
An Enclosed Laser Calibration Standard
NASA Astrophysics Data System (ADS)
Adams, Thomas E.; Fecteau, M. L.
1985-02-01
We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded worldwide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. Also, we detail our evaluation and calibration of the LTSC, in particular, the LTSC system standards. We conclude with a review of our error analysis from which uncertainties were assigned to the LTSC calibration functions.
16 CFR 1203.13 - Test schedule.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Test schedule. 1203.13 Section 1203.13... STANDARD FOR BICYCLE HELMETS The Standard § 1203.13 Test schedule. (a) Helmet sample 1 of the set of eight... environments, respectively) shall be tested in accordance with the dynamic retention system strength test at...
16 CFR 1633.4 - Prototype testing requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Prototype testing requirements. 1633.4... STANDARD FOR THE FLAMMABILITY (OPEN FLAME) OF MATTRESS SETS The Standard § 1633.4 Prototype testing... three specimens of each prototype to be tested according to § 1633.7 and obtain passing test results...
NASA Technical Reports Server (NTRS)
Waggoner, J. T.; Phinney, D. E. (Principal Investigator)
1981-01-01
Foreign Commodity Production Forecasting testing activities through June 1981 are documented. A log of test reports is presented. Standard documentation sets are included for each test. The documentation elements presented in each set are summarized.
Testing the statistical compatibility of independent data sets
NASA Astrophysics Data System (ADS)
Maltoni, M.; Schwetz, T.
2003-08-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
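Reconstructed from the description above and the usual form of such parameter goodness-of-fit tests (not quoted from the paper), the proposed statistic compares the joint chi-square minimum with the sum of the individual minima:

```latex
\[
\bar{\chi}^2 \;=\; \chi^2_{\min,\mathrm{tot}} \;-\; \sum_{r} \chi^2_{\min,r},
\qquad
\bar{\chi}^2 \,\sim\, \chi^2\!\Big(\sum_{r} P_r - P\Big),
\]
```

where χ²_min,tot is the minimum of the combined chi-square over all data sets, χ²_min,r is the minimum for data set r alone, P_r is the number of fitted parameters on which data set r depends, and P is the total number of independent fitted parameters.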
A Low-Cost Inkjet-Printed Glucose Test Strip System for Resource-Poor Settings.
Gainey Wilson, Kayla; Ovington, Patrick; Dean, Delphine
2015-06-12
The prevalence of diabetes is increasing in low-resource settings; however, accessing glucose monitoring is extremely difficult and expensive in these regions. Work is being done to address the multitude of issues surrounding diabetes care in low-resource settings, but an affordable glucose monitoring solution has yet to be presented. An inkjet-printed test strip solution is being proposed as a solution to this problem. The use of a standard inkjet printer is being proposed as a manufacturing method for low-cost glucose monitoring test strips. The printer cartridges are filled with enzyme and dye solutions that are printed onto filter paper. The result is a colorimetric strip that turns a blue/green color in the presence of blood glucose. Using a light-based spectroscopic reading, the strips show a linear color change with an R(2) = .99 using glucose standards and an R(2) = .93 with bovine blood. Initial testing with bovine blood indicates that the strip accuracy is comparable to the International Organization for Standardization (ISO) standard 15197 for glucose testing in the 0-350 mg/dL range. However, further testing with human blood will be required to confirm this. A visible color gradient was observed with both the glucose standard and bovine blood experiment, which could be used as a visual indicator in cases where an electronic glucose meter was unavailable. These results indicate that an inkjet-printed filter paper test strip is a feasible method for monitoring blood glucose levels. The use of inkjet printers would allow for local manufacturing to increase supply in remote regions. This system has the potential to address the dire need for glucose monitoring in low-resource settings. © 2015 Diabetes Technology Society.
Suh, Young Joo; Kim, Young Jin; Kim, Jin Young; Chang, Suyon; Im, Dong Jin; Hong, Yoo Jin; Choi, Byoung Wook
2017-11-01
We aimed to determine the effect of a whole-heart motion-correction algorithm (new-generation snapshot freeze, NG SSF) on the image quality of cardiac computed tomography (CT) images in patients with mechanical valve prostheses compared to standard images without motion correction and to compare the diagnostic accuracy of NG SSF and standard CT image sets for the detection of prosthetic valve abnormalities. A total of 20 patients with 32 mechanical valves who underwent wide-coverage detector cardiac CT with single-heartbeat acquisition were included. The CT image quality for subvalvular (below the prosthesis) and valvular regions (valve leaflets) of mechanical valves was assessed by two observers on a four-point scale (1 = poor, 2 = fair, 3 = good, and 4 = excellent). Paired t-tests or Wilcoxon signed rank tests were used to compare image quality scores and the number of diagnostic phases (image quality score≥3) between the standard image sets and NG SSF image sets. Diagnostic performance for detection of prosthetic valve abnormalities was compared between two image sets with the final diagnosis set by re-operation or clinical findings as the standard reference. NG SSF image sets had better image quality scores than standard image sets for both valvular and subvalvular regions (P < 0.05 for both). The number of phases that were of diagnostic image quality per patient was significantly greater in the NG SSF image set than standard image set for both valvular and subvalvular regions (P < 0.0001). Diagnostic performance of NG SSF image sets for the detection of prosthetic abnormalities (20 pannus and two paravalvular leaks) was greater than that of standard image sets (P < 0.05). Application of NG SSF can improve CT image quality and diagnostic accuracy in patients with mechanical valves compared to standard images. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
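A minimal sketch of the paired comparison of image-quality scores described above, assuming the scipy.stats interface; the scores below are invented for illustration.

```python
# Paired Wilcoxon signed-rank test on per-patient image-quality scores (1-4)
# for the standard reconstruction versus the motion-corrected (NG SSF) one.
from scipy.stats import wilcoxon

standard_scores = [2, 3, 2, 2, 3, 2, 1, 3, 2, 2]   # invented scores
ng_ssf_scores   = [3, 4, 3, 3, 4, 3, 3, 4, 3, 3]

stat, p_value = wilcoxon(standard_scores, ng_ssf_scores)
print(f"W = {stat}, p = {p_value:.4f}")
```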
Evaluation of the Eberline AMS-3A and AMS-4 Beta continuous air monitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, M.L.; Sisk, D.R.
1996-03-01
Eberline AMS-3A-1 and AMS-4 beta continuous air monitors were tested against the criteria set forth in the ANSI Standards N42.18, Specification and Performance of On-site Instrumentation for Continuously Monitoring Radioactivity in Effluents, and ANSI N42.17B, Performance Specification for Health Physics Instrumentation - Occupational Airborne Radioactivity Monitoring Instrumentation. ANSI N42.18 does not, in general, specify testing procedures for demonstrating compliance with the criteria set forth in the standard; therefore, wherever possible, the testing procedures given in ANSI N42.17B were adopted. In all cases, the more restrictive acceptance criteria and/or the more demanding test conditions of the two standards were used.
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
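For reference, the depth-averaged two-dimensional nonlinear shallow water equations that such inundation models solve are written below in their usual textbook form (continuity plus momentum, with friction and Coriolis terms omitted); this is a generic statement of the governing equations, not a transcription of the TUNA-RP formulation.

```latex
\[
\frac{\partial \eta}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0,
\qquad
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y} + g\frac{\partial \eta}{\partial x} = 0,
\qquad
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y} + g\frac{\partial \eta}{\partial y} = 0,
\]
```

where η is the free-surface elevation, h the total water depth, (u, v) the depth-averaged velocity components, and g the gravitational acceleration.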
A Critical Analysis of the Body of Work Method for Setting Cut-Scores
ERIC Educational Resources Information Center
Radwan, Nizam; Rogers, W. Todd
2006-01-01
The recent increase in the use of constructed-response items in educational assessment and the dissatisfaction with the nature of the decision that the judges must make using traditional standard-setting methods created a need to develop new and effective standard-setting procedures for tests that include both multiple-choice and…
Korosue, Kenji; Murase, Harutaka; Sato, Fumio; Ishimaru, Mutsuki; Kotoyori, Yasumitsu; Tsujimura, Koji; Nambo, Yasuo
2013-01-15
To test the usefulness of measuring pH and refractometry index, compared with measuring calcium carbonate concentration, of preparturient mammary gland secretions for predicting parturition in mares. Evaluation study. 27 pregnant Thoroughbred mares. Preparturient mammary gland secretion samples were obtained once or twice daily 10 days prior to foaling until parturition. The samples were analyzed for calcium carbonate concentration with a water hardness kit (151 samples), pH with pH test paper (222 samples), and refractometry index with a Brix refractometer (214 samples). The sensitivity, specificity, and positive and negative predictive values for each test were calculated for evaluation of predicting parturition. The PPV within 72 hours and the NPV within 24 hours for calcium carbonate concentration determination (standard value set to 400 μg/g) were 93.8% and 98.3%, respectively. The PPV within 72 hours and the NPV within 24 hours for the pH test (standard value set at 6.4) were 97.9% and 99.4%, respectively. The PPV within 72 hours and the NPV within 24 hours for the Brix test (standard value set to 20%) were 73.2% and 96.5%, respectively. Results suggested that the pH test with the standard value set at a pH of 6.4 would be useful in the management of preparturient mares by predicting when mares are not ready to foal. This was accomplished with equal effectiveness of measuring calcium carbonate concentration with a water hardness kit.
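The predictive values reported above follow from standard 2x2-table arithmetic; the counts in the sketch below are invented, not the study's data.

```python
# PPV = TP / (TP + FP), NPV = TN / (TN + FN), from a 2x2 table of test result
# at the chosen cut-off versus whether foaling occurred within the time window.
def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = ppv_npv(tp=47, fp=1, tn=160, fn=1)   # invented counts
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```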
ERIC Educational Resources Information Center
Matter, M. Kevin
The Cherry Creek School district (Englewood, Colorado) is a growing district of 37,000 students in the Denver area. The 1988 Colorado State School Finance Act required district-set proficiencies (standards), and forced agreement on a set of values for student knowledge and skills. State-adopted standards added additional requirements for the…
16 CFR 1633.3 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FLAMMABILITY (OPEN FLAME) OF MATTRESS SETS The Standard § 1633.3 General requirements. (a) Summary of test method. The test method set forth in § 1633.7 measures the flammability (fire test response... allowing it to burn freely under well-ventilated, controlled environmental conditions. The flaming ignition...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, Dennis Patrick; Rackow, Kirk A.
The FAA's Airworthiness Assurance NDI Validation Center, in conjunction with the Commercial Aircraft Composite Repair Committee, developed a set of composite reference standards to be used in NDT equipment calibration for accomplishment of damage assessment and post-repair inspection of all commercial aircraft composites. In this program, a series of NDI tests on a matrix of composite aircraft structures and prototype reference standards were completed in order to minimize the number of standards needed to carry out composite inspections on aircraft. Two tasks, related to composite laminates and non-metallic composite honeycomb configurations, were addressed. A suite of 64 honeycomb panels, representing the bounding conditions of honeycomb construction on aircraft, was inspected using a wide array of NDI techniques. An analysis of the resulting data determined the variables that play a key role in setting up NDT equipment. This has resulted in a set of minimum honeycomb NDI reference standards that include these key variables. A sequence of subsequent tests determined that this minimum honeycomb reference standard set is able to fully support inspections over the full range of honeycomb construction scenarios found on commercial aircraft. In the solid composite laminate arena, G11 Phenolic was identified as a good generic solid laminate reference standard material. Testing determined matches in key velocity and acoustic impedance properties, as well as low attenuation relative to carbon laminates. Furthermore, comparisons of resonance testing response curves from the G11 Phenolic NDI reference standard were very similar to the resonance response curves measured on the existing carbon and fiberglass laminates. NDI data shows that this material should work for both pulse-echo (velocity-based) and resonance (acoustic impedance-based) inspections.
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
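A minimal sketch of the Shewhart-style check-standard charting described above; the historical repeat-run values, the measured quantity, and the 3-sigma limits are invented for illustration.

```python
# Shewhart individuals chart for a check-standard quantity: center line and
# 3-sigma control limits from historical repeat runs, then a new run is
# judged in or out of statistical control.
import statistics

historical = [0.5012, 0.5009, 0.5015, 0.5011, 0.5008, 0.5013, 0.5010, 0.5014]
center = statistics.mean(historical)
sigma = statistics.stdev(historical)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

new_run = 0.5021
in_control = lcl <= new_run <= ucl
print(f"center={center:.5f}, LCL={lcl:.5f}, UCL={ucl:.5f}, in control: {in_control}")
```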
Patient Core Data Set. Standard for a longitudinal health/medical record.
Renner, A L; Swart, J C
1997-01-01
Blue Chip Computers Company, in collaboration with Wright State University-Miami Valley College of Nursing and Health, with support from the Agency for Health Care Policy and Research, Public Health Service, completed Small Business Innovation Research (SBIR) research to design a comprehensive integrated Patient Information System. The Wright State University consultants undertook the development of a Patient Core Data Set (PCDS) in response to the lack of uniform standards for minimum data sets and the lack of standards for data transfer to support continuity of care. The purpose of the Patient Core Data Set is to develop a longitudinal patient health record and medical history using a common set of standard data elements with uniform definitions and coding consistent with Health Level 7 (HL7) protocol and the American Society for Testing and Materials (ASTM) standards. The PCDS, intended for transfer across all patient-care settings, is essential information for clinicians, administrators, researchers, and health policy makers.
Standardization of Analysis Sets for Reporting Results from ADNI MRI Data
Wyman, Bradley T.; Harvey, Danielle J.; Crawford, Karen; Bernstein, Matt A.; Carmichael, Owen; Cole, Patricia E.; Crane, Paul; DeCarli, Charles; Fox, Nick C.; Gunter, Jeffrey L.; Hill, Derek; Killiany, Ronald J.; Pachai, Chahin; Schwarz, Adam J.; Schuff, Norbert; Senjem, Matthew L.; Suhy, Joyce; Thompson, Paul M.; Weiner, Michael; Jack, Clifford R.
2013-01-01
The ADNI 3D T1-weighted MRI acquisitions provide a rich dataset for developing and testing analysis techniques for extracting structural endpoints. To promote greater rigor in analysis and meaningful comparison of different algorithms, the ADNI MRI Core has created standardized analysis sets of data comprising scans that met minimum quality control requirements. We encourage researchers to test and report their techniques against these data. Standard analysis sets of volumetric scans from ADNI-1 have been created, comprising: screening visits, 1 year completers (subjects who all have screening, 6 and 12 month scans), two year annual completers (screening, 1, and 2 year scans), two year completers (screening, 6 months, 1 year, 18 months (MCI only) and 2 years) and complete visits (screening, 6 months, 1 year, 18 months (MCI only), 2, and 3 year (normal and MCI only) scans). As the ADNI-GO/ADNI-2 data becomes available, updated standard analysis sets will be posted regularly. PMID:23110865
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.; Chang, Lei
In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Kiros, Berhane; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
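A hedged sketch of the adaptive rank truncated product idea described above (a generic illustration, not the authors' implementation): for each candidate truncation point k, the k smallest SNP p-values are combined; the best combined evidence over k is selected; and the same permutations are used both to calibrate each per-k statistic and to correct for having chosen the best k.

```python
# Generic ARTP-style set test via permutation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def rtp_stat(pvals, k):
    """Rank truncated product statistic: sum of -log of the k smallest p-values."""
    return -np.sum(np.log(np.sort(pvals)[:k]))

def artp_pvalue(obs_pvals, perm_pvals, ks=(1, 5, 10)):
    """obs_pvals: (m,) observed SNP p-values; perm_pvals: (B, m) p-values
    recomputed under B permutations of the phenotype."""
    B = perm_pvals.shape[0]
    obs_w = np.array([rtp_stat(obs_pvals, k) for k in ks])
    perm_w = np.array([[rtp_stat(row, k) for k in ks] for row in perm_pvals])   # (B, K)
    # Per-k p-values for the observed data and for each permutation.
    obs_pk = (1 + np.sum(perm_w >= obs_w, axis=0)) / (B + 1)
    perm_pk = (1 + np.sum(perm_w[None, :, :] >= perm_w[:, None, :], axis=1)) / (B + 1)
    # Adaptive step: take the minimum over k, then recalibrate by permutation.
    return (1 + np.sum(perm_pk.min(axis=1) <= obs_pk.min())) / (B + 1)

# Toy usage: 20 "null" SNP p-values and 200 permutation replicates.
obs = rng.uniform(size=20)
perm = rng.uniform(size=(200, 20))
print(artp_pvalue(obs, perm, ks=(1, 5, 10)))
```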
40 CFR 92.5 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: (1) ASTM material. The following table sets forth material from the American Society for Testing and...., Philadelphia, PA 19103. The table follows: Document number and name 40 CFR part 92 reference ASTM D 86-95, Standard Test Method for Distillation of Petroleum Products § 92.113 ASTM D 93-94, Standard Test Methods...
16 CFR 1203.14 - Peripheral vision test.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Peripheral vision test. 1203.14 Section 1203... SAFETY STANDARD FOR BICYCLE HELMETS The Standard § 1203.14 Peripheral vision test. Position the helmet on... the helmet to set the comfort or fit padding. (Note: Peripheral vision clearance may be determined...
16 CFR 1203.14 - Peripheral vision test.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Peripheral vision test. 1203.14 Section 1203... SAFETY STANDARD FOR BICYCLE HELMETS The Standard § 1203.14 Peripheral vision test. Position the helmet on... the helmet to set the comfort or fit padding. (Note: Peripheral vision clearance may be determined...
Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...
Krohne, Kariann; Torres, Sandra; Slettebø, Åshild; Bergland, Astrid
2014-02-17
Health professionals are required to collect data from standardized tests when assessing older patients' functional ability. Such data provide quantifiable documentation on health outcomes. Little is known, however, about how physiotherapists and occupational therapists who administer standardized tests use test information in their daily clinical work. This article aims to investigate how test administrators in a geriatric setting justify the everyday use of standardized test information. Qualitative study of physiotherapists and occupational therapists on two geriatric hospital wards in Norway that routinely tested their patients with standardized tests. Data draw on seven months of fieldwork, semi-structured interviews with eight physiotherapists and six occupational therapists (12 female, two male), as well as observations of 26 test situations. Data were analyzed using Systematic Text Condensation. We identified two test information components in everyday use among physiotherapist and occupational therapist test administrators. While the primary component drew on the test administrators' subjective observations during testing, the secondary component encompassed the communication of objective test results and test performance. The results of this study illustrate the overlap between objective and subjective data in everyday practice. In clinical practice, by way of the clinicians' gaze on how the patient functions, the subjective and objective components of test information are merged, allowing individual characteristics to be noticed and made relevant as test performance justifications and as rationales in the overall communication of patient needs.
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
ERIC Educational Resources Information Center
Ellwein, Mary Catherine; Glass, Gene V.
A qualitative case study involving five educational institutions assessed the use of competency testing as a prerequisite for high school graduation, criterion for admission into college, criterion for teacher certification, and statewide assessment tool. Focus was on persons and processes involved in setting educational standards associated with…
Standardized Definitions for Code Verification Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using Exact- Pack, www.github.com/lanl/exactpack.
Test Anxiety and High-Stakes Test Performance between School Settings: Implications for Educators
ERIC Educational Resources Information Center
von der Embse, Nathaniel; Hasson, Ramzi
2012-01-01
With the enactment of standards-based accountability in education, high-stakes tests have become the dominant method for measuring school effectiveness and student achievement. Schools and educators are under increasing pressure to meet achievement standards. However, there are variables which may interfere with the authentic measurement of…
Setting Academic Performance Standards: MCAS vs. PARCC. Policy Brief
ERIC Educational Resources Information Center
Phelps, Richard P.
2015-01-01
The Massachusetts Comprehensive Assessment System (MCAS) high school test is administered to all Bay State students--both those intending to enroll in college and the many with no such intention. The MCAS high school test is a retrospectively focused standards-based achievement test, designed to measure how well students have mastered the material…
ERIC Educational Resources Information Center
Kavakli, Nurdan; Arslan, Sezen
2017-01-01
Within the scope of educational testing and assessment, setting standards and creating guidelines as a code of practice provide more prolific and sustainable outcomes. In this sense, internationally accepted and regionally accredited principles are suggested for standardization in language testing and assessment practices. Herein, ILTA guidelines…
Considerations for setting up an order entry system for nuclear medicine tests.
Hara, Narihiro; Onoguchi, Masahisa; Nishida, Toshihiko; Honda, Minoru; Houjou, Osamu; Yuhi, Masaru; Takayama, Teruhiko; Ueda, Jun
2007-12-01
Integrating the Healthcare Enterprise-Japan (IHE-J) was established in Japan in 2001 and has been working to standardize health information and make it accessible on the basis of the fundamental Integrating the Healthcare Enterprise (IHE) specifications. However, because specialized operations are used in nuclear medicine tests, online sharing of patient information and test order information from the order entry system, as described by the scheduled workflow (SWF), is difficult, making information inconsistent throughout the facility and uniform management of patient information impossible. Therefore, we examined the basic design (subsystem design) for order entry systems, which is an important aspect of information management for nuclear medicine tests and needs to be consistent with the system used throughout the rest of the facility. Many items are required of the subsystem when setting up an order entry system for nuclear medicine tests. Among these, the most important in the order entry system are exclusion settings, which address differences in the conditions for using radiopharmaceuticals and contrast agents, and appointment frame settings, which address differences in imaging methods and test items. To establish uniform management of patient information for nuclear medicine tests throughout the facility, it is necessary to develop an order entry system with exclusion settings and appointment frames as standard features. In this way, integration of health information with the Radiology Information System (RIS) or Picture Archiving and Communication System (PACS) based on Digital Imaging and Communications in Medicine (DICOM) standards, as well as real-time health care assistance, can be attained, achieving the IHE agenda of improving health care service and sharing information efficiently.
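A hypothetical sketch (the study names, exclusion rules, and appointment durations below are invented) of the two subsystem features the article identifies as most important: exclusion settings for radiopharmaceutical and contrast-agent conflicts, and appointment frames that vary by imaging method.

```python
# Toy model of exclusion settings and appointment frames for a nuclear
# medicine order entry subsystem (all rules and names are hypothetical).
from dataclasses import dataclass

APPOINTMENT_FRAMES = {"bone_scan": 30, "myocardial_perfusion": 60}   # minutes per slot
EXCLUSIONS = {("bone_scan", "iodinated_contrast_ct"): 2}             # minimum days apart

@dataclass
class Order:
    patient_id: str
    study: str
    day: int        # scheduling day index

def conflicts(new: Order, existing: list[Order]) -> list[str]:
    """Return the exclusion-setting violations the new order would create."""
    reasons = []
    for prior in existing:
        if prior.patient_id != new.patient_id:
            continue
        for pair, gap in EXCLUSIONS.items():
            if {new.study, prior.study} == set(pair) and abs(new.day - prior.day) < gap:
                reasons.append(f"{new.study} within {gap} days of {prior.study}")
    return reasons

booked = [Order("P001", "iodinated_contrast_ct", day=10)]
print(conflicts(Order("P001", "bone_scan", day=11), booked))
print(APPOINTMENT_FRAMES["bone_scan"], "minute appointment frame for bone_scan")
```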
NASA Technical Reports Server (NTRS)
Warm, J. S.; Riechmann, S. W.; Grasha, A. F.; Seibel, B.
1973-01-01
This study tested the prediction, derived from the goal-setting hypothesis, that the facilitating effects of knowledge of results (KR) in a simple vigilance task should be related directly to the level of the performance standard used to regulate KR. Two groups of Ss received dichotomous KR in terms of whether Ss' response times (RTs) to signal detections exceeded a high or low standard of performance. The aperiodic offset of a visual signal was the critical event for detection. The vigil was divided into a training phase followed by testing, during which KR was withdrawn. Knowledge of results enhanced performance in both phases. However, the two standards used to regulate feedback contributed little to these effects.
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie
2015-01-01
The JEDEC JESD57 test standard, Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation, is undergoing its first revision since 1996. In this talk, we place this test standard into context with other relevant radiation test standards to show its importance for single-event effect radiation testing for space applications. We show the range of industry, government, and end-user party involvement in the revision. Finally, we highlight some of the key changes being made and discuss the trade-space within which standards must be set to be both useful and broadly adopted.
A Criterion-Referenced Viewpoint on Standards/Cutscores in Language Testing.
ERIC Educational Resources Information Center
Davidson, Fred; Lynch, Brian K.
"Standard" is distinguished from "criterion" as it is used in criterion-referenced testing. The former is argued to refer to the real-world cutpoint at which a decision is made based on a test's result (e.g., exemption from a special training program). The latter is a skill or set of skills to which a test is referenced.…
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
Impacts of Teacher Testing: State Educational Governance through Standard-Setting.
ERIC Educational Resources Information Center
Wise, Arthur E.; And Others
Focusing on the experiences of five southern states using teacher testing policies, this survey provides evidence of shared experiences and lessons learned despite the existence of five different sets of certification requirements. The first section of the report concentrates on the national context for the movement toward teacher testing.…
ERIC Educational Resources Information Center
Croghan, Emma; Aveyard, Paul; Johnson, Carol
2005-01-01
Purpose: There is a discrepancy between the ease of purchase of cigarettes reported by young people themselves and the results of ease of purchase obtained by tests done by official sources such as Trading Standards Units. This discrepancy suggests that either data from young people or from trading standards are unreliable. This research set out…
Krohne, Kariann; Torres, Sandra; Slettebø, Ashild; Bergland, Astrid
2013-09-01
In assessing geriatric patients' functional status, health care professionals use a number of standardized tests. These tests have defined administration procedures that restrict communication and interaction with patients. In this article, we explore the experiences of occupational therapists and physiotherapists acting as standardized test administrators. Drawing on fieldwork, interviews with physiotherapists and occupational therapists, and observations of test situations on acute geriatric wards, we suggest that the test situation generates a tension between what standardization demands and what individualization requires. Our findings illustrate how physiotherapists and occupational therapists navigate between adherence to the test standard and meeting what they consider to be the individual patient's needs in the test situation. We problematize this navigation, and argue that the health care professional's use of relational competence is the means to reach and maintain individualization.
The Standards Movement: A Child-Centered Response.
ERIC Educational Resources Information Center
Crain, William
2003-01-01
Discusses how child-centered educational philosophies, including Montessori, share positions differing radically from those of the educational standards movement. Focuses on adult-set goals and standards, social promotion, external motivators, demands for more challenging work, and standardized tests. Reports that children in child-centered…
Will the "Real" Proficiency Standard Please Stand Up?
ERIC Educational Resources Information Center
Baron, Joan Boykoff; And Others
Connecticut's experience with four different standard-setting methods regarding multiple choice proficiency tests is described. The methods include Angoff, Nedelsky, Borderline Group, and Contrasting Groups Methods. All Connecticut ninth graders were administered proficiency tests in reading, language arts, and mathematics. As soon as final test…
Determination of service standard time for liquid waste parameter in certification institution
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Kusumawaty, D.
2018-02-01
Baristand Industry Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industry Medan is the liquid waste testing service. The company set a service standard of 9 working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet the company's specified service standard. The purpose of this research is to determine the standard time for each parameter in the liquid waste testing service. The method used is the stopwatch time study. There are 45 test parameters in the liquid waste laboratory. Time measurements were taken for 4 samples per test parameter using a stopwatch. From the measurement results, the standard minimum service time for liquid waste testing is 13 working days when E. coli testing is included.
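For reference, the usual stopwatch-time-study arithmetic is sketched below; the observed times, rating factor, and allowance are placeholders for the example, not values reported by the study.

    # A minimal stopwatch-time-study calculation, assuming the common
    # normal-time / allowance formulation.
    observed_times = [42.0, 45.5, 44.0, 43.5]   # minutes, 4 samples of one test parameter
    performance_rating = 1.05                   # rating factor (assumed)
    allowance = 0.15                            # personal/fatigue/delay allowance (assumed)

    cycle_time = sum(observed_times) / len(observed_times)
    normal_time = cycle_time * performance_rating
    standard_time = normal_time * (1 + allowance)
    print(f"standard time per parameter: {standard_time:.1f} minutes")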
Lim, Cherry; Wannapinij, Prapass; White, Lisa; Day, Nicholas P J; Cooper, Ben S; Peacock, Sharon J; Limmathurotsakul, Direk
2013-01-01
Estimates of the sensitivity and specificity for new diagnostic tests based on evaluation against a known gold standard are imprecise when the accuracy of the gold standard is imperfect. Bayesian latent class models (LCMs) can be helpful under these circumstances, but the necessary analysis requires expertise in computational programming. Here, we describe open-access web-based applications that allow non-experts to apply Bayesian LCMs to their own data sets via a user-friendly interface. Applications for Bayesian LCMs were constructed on a web server using R and WinBUGS programs. The models provided (http://mice.tropmedres.ac) include two Bayesian LCMs: the two-tests in two-population model (Hui and Walter model) and the three-tests in one-population model (Walter and Irwig model). Both models are available with simplified and advanced interfaces. In the former, all settings for Bayesian statistics are fixed as defaults. Users input their data set into a table provided on the webpage. Disease prevalence and accuracy of diagnostic tests are then estimated using the Bayesian LCM, and provided on the web page within a few minutes. With the advanced interfaces, experienced researchers can modify all settings in the models as needed. These settings include correlation among diagnostic test results and prior distributions for all unknown parameters. The web pages provide worked examples with both models using the original data sets presented by Hui and Walter in 1980, and by Walter and Irwig in 1988. We also illustrate the utility of the advanced interface using the Walter and Irwig model on a data set from a recent melioidosis study. The results obtained from the web-based applications were comparable to those published previously. The newly developed web-based applications are open-access and provide an important new resource for researchers worldwide to evaluate new diagnostic tests.
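To illustrate the structure of the Hui and Walter two-test, two-population model behind the first application, a minimal maximum-likelihood sketch follows. The web tools themselves use Bayesian estimation in WinBUGS; this frequentist version is only illustrative, and the count data are invented.

    import numpy as np
    from scipy.optimize import minimize

    # Cross-classified counts per population: [T1+T2+, T1+T2-, T1-T2+, T1-T2-]
    counts = np.array([[35, 10, 12, 143],    # population 1 (hypothetical)
                       [80, 15, 18,  87]])   # population 2 (hypothetical)

    def negloglik(theta):
        se1, sp1, se2, sp2, p1, p2 = theta
        nll = 0.0
        for pop, prev in enumerate((p1, p2)):
            # Conditional independence of the two tests given true disease status
            probs = np.array([
                prev * se1 * se2             + (1 - prev) * (1 - sp1) * (1 - sp2),
                prev * se1 * (1 - se2)       + (1 - prev) * (1 - sp1) * sp2,
                prev * (1 - se1) * se2       + (1 - prev) * sp1 * (1 - sp2),
                prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2,
            ])
            nll -= np.sum(counts[pop] * np.log(probs))
        return nll

    res = minimize(negloglik, x0=[0.9, 0.9, 0.9, 0.9, 0.3, 0.5],
                   bounds=[(0.01, 0.99)] * 6, method="L-BFGS-B")
    se1, sp1, se2, sp2, p1, p2 = res.x
    print(f"Test 1: Se={se1:.2f}, Sp={sp1:.2f};  Test 2: Se={se2:.2f}, Sp={sp2:.2f}")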
Identifying and Evaluating External Validity Evidence for Passing Scores
ERIC Educational Resources Information Center
Davis-Becker, Susan L.; Buckendahl, Chad W.
2013-01-01
A critical component of the standard setting process is collecting evidence to evaluate the recommended cut scores and their use for making decisions and classifying students based on test performance. Kane (1994, 2001) proposed a framework by which practitioners can identify and evaluate evidence of the results of the standard setting from (1)…
40 CFR 1066.5 - Overview of this part 1066 and its relationship to the standard-setting part.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Overview of this part 1066 and its relationship to the standard-setting part. 1066.5 Section 1066.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Applicability and General...
40 CFR 1065.5 - Overview of this part 1065 and its relationship to the standard-setting part.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Overview of this part 1065 and its relationship to the standard-setting part. 1065.5 Section 1065.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Applicability and General...
40 CFR 1066.5 - Overview of this part 1066 and its relationship to the standard-setting part.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Overview of this part 1066 and its relationship to the standard-setting part. 1066.5 Section 1066.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Applicability and General...
40 CFR 1065.5 - Overview of this part 1065 and its relationship to the standard-setting part.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Overview of this part 1065 and its relationship to the standard-setting part. 1065.5 Section 1065.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Applicability and General...
40 CFR 1065.5 - Overview of this part 1065 and its relationship to the standard-setting part.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Overview of this part 1065 and its relationship to the standard-setting part. 1065.5 Section 1065.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Applicability and General...
40 CFR 1066.5 - Overview of this part 1066 and its relationship to the standard-setting part.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Overview of this part 1066 and its relationship to the standard-setting part. 1066.5 Section 1066.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Applicability and General...
40 CFR 1065.5 - Overview of this part 1065 and its relationship to the standard-setting part.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Overview of this part 1065 and its relationship to the standard-setting part. 1065.5 Section 1065.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Applicability and General...
ERIC Educational Resources Information Center
Johnson, Dale L.
This investigation compares child language obtained with standardized tests and samples of spontaneous speech obtained in natural settings. It was hypothesized that differences would exist between social class and racial groups on the unfamiliar standard tests, but such differences would not be evident on spontaneous speech measures. Also, higher…
ERIC Educational Resources Information Center
Ericson, David P.
1984-01-01
Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…
Standardized development of computer software. Part 2: Standards
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1978-01-01
This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.
A minimal standardization setting for language mapping tests: an Italian example.
Rofes, Adrià; de Aguiar, Vânia; Miceli, Gabriele
2015-07-01
During awake surgery, picture-naming tests are administered to identify brain structures related to language function (language mapping), and to avoid iatrogenic damage. Before and after surgery, naming tests and other neuropsychological procedures aim at charting naming abilities, and at detecting which items the subject can respond to correctly. To achieve this goal, sufficiently large samples of normed and standardized stimuli must be available for preoperative and postoperative testing, and to prepare intraoperative tasks, the latter only including items named flawlessly preoperatively. To discuss design, norming and presentation of stimuli, and to describe the minimal standardization setting used to develop two sets of Italian stimuli, one for object naming and one for verb naming, respectively. The setting includes a naming study (to obtain picture-name agreement ratings), two on-line questionnaires (to acquire age-of-acquisition and imageability ratings for all test items), and the norming of other relevant language variables. The two sets of stimuli have >80 % picture-name agreement, high levels of internal consistency and reliability for imageability and age of acquisition ratings. They are normed for psycholinguistic variables known to affect lexical access and retrieval, and are validated in a clinical population. This framework can be used to increase the probability of reliably detecting language impairments before and after surgery, to prepare intraoperative tests based on sufficient knowledge of pre-surgical language abilities in each patient, and to decrease the probability of false positives during surgery. Examples of data usage are provided. Normative data can be found in the supplementary materials.
Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.
2016-01-01
A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance) for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and patient samples (n = 16) were tested. Quantitative accuracy (compared to nominal values) and variability were determined based on viral standard testing results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets appeared to have a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with some showing poorer correlations when testing samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
40 CFR 792.81 - Standard operating procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Standard operating procedures. 792.81... operating procedures. (a) A testing facility shall have standard operating procedures in writing, setting... data generated in the course of a study. All deviations in a study from standard operating procedures...
Background Variables, Levels of Aggregation, and Standardized Test Scores
ERIC Educational Resources Information Center
Paulson, Sharon E.; Marchant, Gregory J.
2009-01-01
This article examines the role of student demographic characteristics in standardized achievement test scores, both at the individual level and aggregated at the state, district, and school levels. For several data sets, the majority of the variance among states, districts, and schools was related to demographic characteristics. Where these background…
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
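As a rough illustration of the filter-then-test idea (not the published SGSF statistic), a generic independent-filter skeleton is sketched below with invented data: sets are ranked by an outcome-independent filter statistic, only the surviving sets are tested, and the multiple-testing burden shrinks accordingly. The function name and the simple variance-based filter are assumptions made for the example.

    import numpy as np
    from scipy import stats

    def filter_and_test(expr, outcome, gene_sets, keep_fraction=0.5):
        """expr: genes x samples matrix; outcome: 0/1 group labels per sample;
        gene_sets: dict name -> list of row indices."""
        # Filter statistic: variance of the set's mean expression profile across
        # samples, computed without looking at the outcome ("independent" filter).
        filt = {name: np.var(expr[idx].mean(axis=0)) for name, idx in gene_sets.items()}
        cutoff = np.quantile(list(filt.values()), 1 - keep_fraction)
        kept = [name for name, f in filt.items() if f >= cutoff]

        # Set test on the surviving sets only: two-sample t-test on the set mean.
        pvals = {}
        for name in kept:
            score = expr[gene_sets[name]].mean(axis=0)
            pvals[name] = stats.ttest_ind(score[outcome == 1], score[outcome == 0]).pvalue
        return pvals   # fewer hypotheses -> smaller multiple-testing correction

    rng = np.random.default_rng(1)
    expr = rng.normal(size=(200, 40))
    outcome = np.repeat([0, 1], 20)
    sets = {f"set{i}": list(range(10 * i, 10 * i + 10)) for i in range(20)}
    print(filter_and_test(expr, outcome, sets))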
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neymark, J.; Kennedy, M.; Judkoff, R.
This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.
Comparison of two methods of standard setting: the performance of the three-level Angoff method.
Jalili, Mohammad; Hejri, Sara M; Norcini, John J
2011-12-01
Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
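As a concrete illustration of how the two procedures aggregate judges' ratings into a cut-score, a minimal sketch follows. The rating matrices are invented, and the 1/0.5/0 scoring of the three-level judgments is a common convention assumed here, not a detail taken from the study.

    import numpy as np

    # Modified Angoff: each entry is a judge's estimated probability that a
    # borderline examinee passes the station (rows = judges, columns = stations).
    ma_ratings = np.array([[0.55, 0.40, 0.65, 0.50],
                           [0.60, 0.45, 0.70, 0.55]])
    ma_cut = ma_ratings.mean(axis=0).mean() * 100    # cut-score as % of total score

    # Three-level Angoff: judges state whether a borderline examinee would perform
    # the task correctly (1), would not (0), or are unsure (0.5, assumed scoring).
    tla_ratings = np.array([[1.0, 0.5, 1.0, 0.0],
                            [1.0, 0.0, 1.0, 0.5]])
    tla_cut = tla_ratings.mean(axis=0).mean() * 100

    print(f"MA cut-score: {ma_cut:.1f}%   TLA cut-score: {tla_cut:.1f}%")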
Beyond Standardization: State Standards and School Improvement.
ERIC Educational Resources Information Center
Wise, Arthur E.; Darling-Hammond, Linda
This paper focuses on ways in which one state policy for improving education--standard-setting through testing mechanisms--affects the classroom teacher-learner relationship. That uniform policy-making is problematic is clear from observations of 43 Mid-Atlantic school district teachers. Responding to three types of standards, 45 percent found…
A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.
ERIC Educational Resources Information Center
Benson, Jeri; Wilson, Michael
Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…
ERIC Educational Resources Information Center
Eckes, Thomas
2017-01-01
This paper presents an approach to standard setting that combines the prototype group method (PGM; Eckes, 2012) with a receiver operating characteristic (ROC) analysis. The combined PGM-ROC approach is applied to setting cut scores on a placement test of English as a foreign language (EFL). To implement the PGM, experts first named learners whom…
The Effect of Baggase Ash on Fly Ash-Based Geopolimer Binder
NASA Astrophysics Data System (ADS)
Bayuaji, R.; Darmawan, M. S.; Husin, N. A.; Banugraha, R.; Alfi, M.; Abdullah, M. M. A. B.
2018-06-01
Geopolymer concrete is an environmentally friendly concrete. However, the geopolymer binder has a problem with setting time, mainly when the composition comprises high-calcium fly ash. This study utilized bagasse ash to improve the setting time of a fly ash-based geopolymer binder. The characterization of the bagasse ash was carried out using chemical and phase analysis, while the morphology was examined by scanning electron microscope (SEM). The setting time and compressive strength tests followed ASTM C191-04 and ASTM C39/C39M, respectively. The compressive strength of the samples was determined at 3, 28 and 56 days. The results were compared with the requirements of the standards.
Assembling Appliances Standards from a Basket of Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siderious, Hans-Paul; Meier, Alan
2014-08-11
Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.
A comprehensive evaluation of strip performance in multiple blood glucose monitoring systems.
Katz, Laurence B; Macleod, Kirsty; Grady, Mike; Cameron, Hilary; Pfützner, Andreas; Setford, Steven
2015-05-01
Accurate self-monitoring of blood glucose is a key component of effective self-management of glycemic control. Accurate self-monitoring of blood glucose results are required for optimal insulin dosing and detection of hypoglycemia. However, blood glucose monitoring systems may be susceptible to error from test strip, user, environmental and pharmacological factors. This report evaluated 5 blood glucose monitoring systems that each use Verio glucose test strips for precision, effect of hematocrit and interferences in laboratory testing, and lay user and system accuracy in clinical testing according to the guidelines in ISO15197:2013(E). Performance of OneTouch® VerioVue™ met or exceeded standards described in ISO15197:2013 for precision, hematocrit performance and interference testing in a laboratory setting. Performance of OneTouch® Verio IQ™, OneTouch® Verio Pro™, OneTouch® Verio™, OneTouch® VerioVue™ and Omni Pod each met or exceeded accuracy standards for user performance and system accuracy in a clinical setting set forth in ISO15197:2013(E).
Dimech, Wayne; Karakaltsas, Marina; Vincini, Giuseppe A
2018-05-25
A general trend towards conducting infectious disease serology testing in centralized laboratories means that quality control (QC) principles used for clinical chemistry testing are applied to infectious disease testing. However, no systematic assessment of methods used to establish QC limits has been applied to infectious disease serology testing. A total of 103 QC data sets, obtained from six different infectious disease serology analytes, were parsed through standard methods for establishing statistical control limits, including guidelines from Public Health England, USA Clinical and Laboratory Standards Institute (CLSI), German Richtlinien der Bundesärztekammer (RiliBÄK) and Australian QConnect. The percentage of QC results failing each method was compared. The number of data sets having more than 20% of QC results failing Westgard rules when the first 20 results were used to calculate the mean±2 standard deviation (SD) ranged from 3 (2.9%) for R4S to 66 (64.1%) for the 10X rule, whereas the number ranged from 0 (0%) for R4S to 32 (40.5%) for 10X when the first 100 results were used to calculate the mean±2 SD. By contrast, the number of data sets with >20% failing the RiliBÄK control limits was 25 (24.3%). Only two data sets (1.9%) had more than 20% of results outside the QConnect Limits. These failure rates indicate that QConnect Limits were more applicable for monitoring infectious disease serology testing than the UK Public Health, CLSI and RiliBÄK approaches, as the alternatives to QConnect Limits reported an unacceptably high percentage of failures across the 103 data sets.
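A minimal sketch of the kind of control-limit checks compared above is shown below: limits fixed from the first 20 results (mean ± 2 SD), then R-4s and 10-x multirules applied to later results. The function name and data are invented, and the rule definitions follow common Westgard usage rather than any one of the cited guidelines.

    import numpy as np

    def qc_flags(results, n_baseline=20):
        baseline = np.asarray(results[:n_baseline], dtype=float)
        mean, sd = baseline.mean(), baseline.std(ddof=1)
        z = (np.asarray(results, dtype=float) - mean) / sd   # results in SD units

        flags = []
        for i in range(n_baseline, len(z)):
            if abs(z[i]) > 2:
                flags.append((i, "1-2s: outside mean +/- 2 SD"))
            if abs(z[i] - z[i - 1]) > 4:
                flags.append((i, "R-4s: consecutive results more than 4 SD apart"))
            if i >= 9 and (all(z[i - 9:i + 1] > 0) or all(z[i - 9:i + 1] < 0)):
                flags.append((i, "10-x: ten consecutive results on one side of the mean"))
        return flags

    rng = np.random.default_rng(0)
    qc = list(rng.normal(1.0, 0.05, 60))   # e.g. signal-to-cutoff ratios of a run control
    print(qc_flags(qc))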
Ballistic Resistance of Body Armor. NIJ Standard-0101.06
2008-07-01
[Garbled extraction of the standard's front matter and test-setup figure.] Recoverable fragments describe the test setup (test barrel, armor panel, backing material fixture, start sensor set; length to be adjusted to meet velocity accuracy requirements, 49.2 ft ± 3.28 ft), credit contributors including Amanda Forster, Materials Research Engineer, note that the preparation of this standard was sponsored by the National…, and refer to manufacturers seeking NIJ compliance of their armor to this standard where the armor contains unique materials or forms of construction that may not have…
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)^2 and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
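For reference, under classical test theory with observed scores X_j = T_j + E_j, independent errors, and a common error variance SEM^2, the expected mean square difference decomposes as follows (the symbols are introduced here only for illustration):

    X_j = T_j + E_j, \qquad \operatorname{Var}(E_j) = \mathrm{SEM}^2, \qquad j = 1, 2,

    \mathbb{E}[\mathrm{MSD}] \;=\; \mathbb{E}\!\left[(X_1 - X_2)^2\right]
      \;=\; \mathbb{E}\!\left[(T_1 - T_2)^2\right] + 2\,\mathrm{SEM}^2 .

So the MSD exceeds 2(SEM)^2 exactly by the expected squared true-score difference, which vanishes only when the two tests are parallel.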
American Alcohol Photo Stimuli (AAPS): A standardized set of alcohol and matched non-alcohol images.
Stauffer, Christopher S; Dobberteen, Lily; Woolley, Joshua D
2017-11-01
Photographic stimuli are commonly used to assess cue reactivity in the research and treatment of alcohol use disorder. The stimuli used are often non-standardized, not properly validated, and poorly controlled. There are no previously published, validated, American-relevant sets of alcohol images created in a standardized fashion. We aimed to: 1) make available a standardized, matched set of photographic alcohol and non-alcohol beverage stimuli, 2) establish face validity, the extent to which the stimuli are subjectively viewed as what they are purported to be, and 3) establish construct validity, the degree to which a test measures what it claims to be measuring. We produced a standardized set of 36 images consisting of American alcohol and non-alcohol beverages matched for basic color, form, and complexity. A total of 178 participants (95 male, 82 female, 1 genderqueer) rated each image for appetitiveness. An arrow-probe task, in which matched pairs were categorized after being presented for 200 ms, assessed face validity. Criteria for construct validity were met if variation in AUDIT scores was associated with variation in performance on tasks during alcohol image presentation. Overall, images were categorized with >90% accuracy. Participants' AUDIT scores correlated significantly with alcohol "want" and "like" ratings [r(176) = 0.27, p < 0.001; r(176) = 0.36, p < 0.001] and arrow-probe latency [r(176) = -0.22, p = 0.004], but not with non-alcohol outcomes. Furthermore, appetitive ratings and arrow-probe latency for alcohol, but not non-alcohol, differed significantly for heavy versus light drinkers. Our image set provides valid and reliable alcohol stimuli for both explicit and implicit tests of cue reactivity. The use of standardized, validated, reliable image sets may improve consistency across research and treatment paradigms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-22
... amendment to the continued listing requirements in Section 802.01B of the Exchange's Listed Company Manual... provided that any company that qualified to list under the Earnings Test set out in Section 102.01C(I) or... Standard for Companies Transferring from NYSE Arca'' (the ``NYSE Arca Transfer Standard'') set forth in...
Castelein, Birgit; Cagnie, Barbara; Parlevliet, Thierry; Danneels, Lieven; Cools, Ann
2015-10-01
To identify maximum voluntary isometric contraction (MVIC) test positions for the deeper-lying scapulothoracic muscles (ie, levator scapulae, pectoralis minor, rhomboid major), and to provide a standard set of a limited number of test positions that generate an MVIC in all scapulothoracic muscles. Cross-sectional study. Physical and rehabilitation medicine department. Healthy subjects (N=21). Not applicable. Mean peak electromyographic activity from levator scapulae, pectoralis minor, and rhomboid major (investigated with fine-wire electromyography) and from upper trapezius, middle trapezius, lower trapezius, and serratus anterior (investigated with surface electromyography) during the performance of 12 different MVICs. The results indicated that various test positions generated similar high mean electromyographic activity and that no single test generated maximum activity for a specific muscle in all subjects. The results of this study support using a series of test positions for normalization procedures rather than a single exercise to increase the likelihood of recruiting the highest activity in the scapulothoracic muscles. A standard set of 5 test positions was identified as being sufficient for generating an MVIC of all scapulothoracic muscles: seated T, seated U 135°, prone T-thumbs up, prone V-thumbs up, and supine V-thumbs up. A standard set of test positions for normalization of scapulothoracic electromyographic data that also incorporates the levator scapulae, pectoralis minor, and rhomboid major muscles is 1 step toward a more comprehensive understanding of normal and abnormal muscle function of these muscles and will help to standardize the presentation of scapulothoracic electromyographic muscle activity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
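A minimal sketch of the normalization such a standard set supports is shown below; the peak values are invented, and the percentage-of-reference calculation is the common convention rather than the authors' specific processing pipeline.

    # Express task EMG as a percentage of the highest peak activity recorded
    # across the reference MVIC test positions (values are placeholders).
    peak_mvic = {                       # peak EMG (microvolts) per test position, one muscle
        "seated T": 410.0,
        "seated U 135 deg": 455.0,
        "prone T thumbs up": 498.0,
        "prone V thumbs up": 470.0,
        "supine V thumbs up": 350.0,
    }
    task_peak = 180.0                   # peak EMG of the same muscle during the task of interest

    mvic_reference = max(peak_mvic.values())
    normalized = 100.0 * task_peak / mvic_reference
    print(f"task activity: {normalized:.1f}% MVIC (reference: {mvic_reference} uV)")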
Standard Specimen Reference Set: Breast Cancer and Imaging — EDRN Public Portal
The primary objective of this study is to assemble a well-characterized set of blood specimens and images to test biomarkers that, in conjunction with mammography, can detect and discriminate breast cancer. These samples will be divided to provide “sets” of specimens that can be tested in a number of different laboratories. Since tests will be performed on the same sets of samples, the data will be directly comparable and decisions regarding which biomarker or set of biomarkers have value in breast cancer detection can be made. These sets will reside at a National Cancer Institute facility at Frederick, MD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorrell, L.; Roach, D.
1999-03-04
The rapidly increasing use of composites on commercial airplanes coupled with the potential for economic savings associated with their use in aircraft structures means that the demand for composite materials technology will continue to increase. Inspecting these composite structures is a critical element in assuring their continued airworthiness. The FAA's Airworthiness Assurance NDI Validation Center, in conjunction with the Commercial Aircraft Composite Repair Committee (CACRC), is developing a set of composite reference standards to be used in NDT equipment calibration for accomplishment of damage assessment and post-repair inspection of all commercial aircraft composites. In this program, a series of NDI tests on a matrix of composite aircraft structures and prototype reference standards were completed in order to minimize the number of standards needed to carry out composite inspections on aircraft. Two tasks, related to composite laminates and non-metallic composite honeycomb configurations, were addressed. A suite of 64 honeycomb panels, representing the bounding conditions of honeycomb construction on aircraft, were inspected using a wide array of NDI techniques. An analysis of the resulting data determined the variables that play a key role in setting up NDT equipment. This has resulted in a prototype set of minimum honeycomb reference standards that include these key variables. A sequence of subsequent tests determined that this minimum honeycomb reference standard set is able to fully support inspections over the full range of honeycomb construction scenarios. Current tasks are aimed at optimizing the methods used to engineer realistic flaws into the specimens. In the solid composite laminate arena, we have identified what appears to be an excellent candidate, G11 Phenolic, as a generic solid laminate reference standard material. Testing to date has determined matches in key velocity and acoustic impedance properties, as well as low attenuation relative to carbon laminates. Furthermore, the resonance testing response curves from the G11 Phenolic prototype standard were very similar to the resonance response curves measured on the existing carbon and fiberglass laminates. NDI data shows that this material should work for both pulse-echo (velocity-based) and resonance (acoustic impedance-based) inspections. Additional testing and industry review activities are underway to complete the validation of this material.
On the Equivalence of Constructed-Response and Multiple-Choice Tests.
ERIC Educational Resources Information Center
Traub, Ross E.; Fisher, Charles W.
Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute, except for possible differences in error variance…
Why Educational Standards Are Not Truly Objective
ERIC Educational Resources Information Center
Metzgar, Matthew
2015-01-01
Educational standards have become a popular choice for setting clear educational targets for students. The language of standards is that they are "objective" as opposed to typical tests which may suffer from bias. This article seeks to further analyze the claims that standards are objective and fair to all. The author focuses on six…
40 CFR 63.743 - Standards: General.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.743 Standards: General. (a... of the device or equipment, test data verifying the performance of the device or equipment in... chemical milling maskants, as determined in accordance with the applicable procedures set forth in § 63.750...
40 CFR 63.743 - Standards: General.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.743 Standards: General. (a... of the device or equipment, test data verifying the performance of the device or equipment in... chemical milling maskants, as determined in accordance with the applicable procedures set forth in § 63.750...
40 CFR 63.743 - Standards: General.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.743 Standards: General. (a... of the device or equipment, test data verifying the performance of the device or equipment in... chemical milling maskants, as determined in accordance with the applicable procedures set forth in § 63.750...
40 CFR 63.743 - Standards: General.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.743 Standards: General. (a... of the device or equipment, test data verifying the performance of the device or equipment in... chemical milling maskants, as determined in accordance with the applicable procedures set forth in § 63.750...
40 CFR 63.743 - Standards: General.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.743 Standards: General. (a... of the device or equipment, test data verifying the performance of the device or equipment in... chemical milling maskants, as determined in accordance with the applicable procedures set forth in § 63.750...
Laboratory Performance Evaluation Report of SEL 421 Phasor Measurement Unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; faris, Anthony J.; Martin, Kenneth E.
2007-12-01
PNNL and BPA have been in close collaboration on laboratory performance evaluation of phasor measurement units for over ten years. A series of evaluation tests are designed to confirm accuracy and determine measurement performance under a variety of conditions that may be encountered in actual use. Ultimately the testing conducted should provide parameters that can be used to adjust all measurements to a standardized basis. These tests are performed with a standard relay test set using recorded files of precisely generated test signals. The test set provides test signals at a level and in a format suitable for input to a PMU that accurately reproduces the signals in both signal amplitude and timing. Test set outputs are checked to confirm the accuracy of the output signal. The recorded signals include both current and voltage waveforms and a digital timing track used to relate the PMU measured value with the test signal. Test signals include steady-state waveforms to test amplitude, phase, and frequency accuracy, modulated signals to determine measurement and rejection bands, and step tests to determine timing and response accuracy. Additional tests are included as necessary to fully describe the PMU operation. Testing is done with a BPA phasor data concentrator (PDC) which provides communication support and monitors data input for dropouts and data errors.
The visual standards for the selection and retention of astronauts, part 2
NASA Technical Reports Server (NTRS)
Allen, M. J.; Levene, J. R.; Heath, G. G.
1972-01-01
In preparation for the various studies planned for assessing visual capabilities and tasks in order to set vision standards for astronauts, the following pieces of equipment have been assembled and tested: a spectacle obstruction measuring device, a biometric glare susceptibility tester, a variable vergence amplitude testing device, an eye movement recorder, a lunar illumination simulation chamber, a night myopia testing apparatus, and retinal adaption measuring devices.
Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.
Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland
2009-06-09
The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets are tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
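To illustrate the underlying procedure, a minimal threshold-driven pivoted Cholesky sketch follows, applied to a random positive semidefinite matrix as a stand-in for actual electron repulsion integrals; the function name and threshold are illustrative only, and lowering the threshold yields the hierarchy of increasingly accurate factorizations described above.

    import numpy as np

    def pivoted_cholesky(M, threshold=1e-4):
        """Incomplete Cholesky factorization M ~ L @ L.T, stopping when the largest
        remaining diagonal of the residual falls below the decomposition threshold."""
        M = np.array(M, dtype=float)
        d = np.diag(M).copy()            # residual diagonal
        cols, pivots = [], []
        while d.max() > threshold:
            p = int(np.argmax(d))        # next pivot: largest remaining diagonal
            col = (M[:, p] - sum(c * c[p] for c in cols)) / np.sqrt(d[p])
            cols.append(col)
            pivots.append(p)
            d -= col ** 2                # update residual diagonal
            d[d < 0] = 0.0               # guard against round-off
        return np.array(cols).T, pivots

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))
    M = A @ A.T                          # stand-in PSD "integral" matrix
    L, piv = pivoted_cholesky(M, threshold=1e-6)
    print("columns kept:", L.shape[1], " max reconstruction error:", np.abs(M - L @ L.T).max())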
Advancing Ohio's P-16 Agenda: Exit and Entrance Exam?
ERIC Educational Resources Information Center
Rochford, Joseph A.
2004-01-01
Tests like the Ohio Graduation Test are part of what has become known as the "standards-based" reform movement in education. Simply put, they allow states to measure whether or not students are learning according to whatever set of standards, benchmarks and indicators are adopted by that state. They also help meet, in part, the reporting…
A Study of Minimum Competency Programs. Final Comprehensive Report. Vol. 1. Vol. 2.
ERIC Educational Resources Information Center
Gorth, William Phillip; Perkins, Marcy R.
The status of minimum competency testing programs, as of June 30, 1979, is given through descriptions of 31 state programs and 20 local district programs. For each program, the following information is provided: legislative and policy history; implementation phase; goals; competencies to be tested; standards and standard setting; target groups and…
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
Gargis, Amy S; Kalman, Lisa; Lubin, Ira M
2016-12-01
Clinical microbiology and public health laboratories are beginning to utilize next-generation sequencing (NGS) for a range of applications. This technology has the potential to transform the field by providing approaches that will complement, or even replace, many conventional laboratory tests. While the benefits of NGS are significant, the complexities of these assays require an evolving set of standards to ensure testing quality. Regulatory and accreditation requirements, professional guidelines, and best practices that help ensure the quality of NGS-based tests are emerging. This review highlights currently available standards and guidelines for the implementation of NGS in the clinical and public health laboratory setting, and it includes considerations for NGS test validation, quality control procedures, proficiency testing, and reference materials. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocks. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard" has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
Impact resistance and prescription compliance with AS/NZS 1337.6:2010.
Dain, Stephen J; Ngo, Thao P T; Cheng, Brian B
2013-09-01
Australian/New Zealand Standard 1337.6 deals with prescription eye protection and has been in place since 2007. There have been many standards marking licences granted since then. The issue of the worst-case situations for assessment in a certification scheme, in particular -1.50 m(-1) lenses, has been the subject of discussion in Standards Australia/Standards New Zealand Committee SF-006. Given that a body of data from testing exists, this was explored to advise the Committee. Data from testing 40 sets of prescription eye protectors were analysed retrospectively for compliance with the impact and refractive power requirements in 2010-11. The testing had been carried out according to the methods of AS/NZS 1337.6:2007 under the terms and conditions of the accreditation of the Optics & Radiometry Laboratory by the National Association of Testing Authorities. No eye protector failed the low-impact resistance test. Failure rates of 1.6 per cent (two of the 40 sets) to the medium impact test and 1.6 per cent (three of the sets) to the medium impact test in the elevated temperature stability test were seen. These are too small for useful statistical analysis. Only -1.50 m(-1) lenses were in all failing sets and these lenses were over-represented in the failures and borderlines, especially compared with the +1.50 D lenses. Failures in prismatic power were equally distributed over all prescriptions. This over-representation of -1.50 m(-1) lenses was not related to the ocular/lens material or to the company manufacturing the eye protectors. The proposal is made that glazing lenses tightly to ensure they are retained in the frame on impact may result in unwanted refractive power in those lenses most prone to flex. These data support the proposal that -1.50 m(-1) lenses should form part of a worst-case testing regime in a certification scheme. © 2012 The Authors. Clinical and Experimental Optometry © 2012 Optometrists Association Australia.
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: for training, and an internal and external test set. Training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors were selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q^2 = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. Also, the diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.
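A minimal 6:6:1 feedforward sketch in the spirit of the model described above is given below, using scikit-learn; the descriptor matrix and response values are random placeholders rather than the liquid-crystal data, so the printed error is not comparable to the reported values.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(42, 6))                                  # 6 descriptors per compound
    y = 80 + 10 * X[:, 0] - 5 * X[:, 3] + rng.normal(0, 2, 42)    # surrogate T(N) values

    X_train, X_test = X[:30], X[30:]
    y_train, y_test = y[:30], y[30:]

    scaler = StandardScaler().fit(X_train)
    net = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                       max_iter=5000, random_state=0)             # 6:6:1 topology
    net.fit(scaler.transform(X_train), y_train)

    pred = net.predict(scaler.transform(X_test))
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"external test set RMSE: {rmse:.2f}")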
The robustness of the horizontal gaze nystagmus test
DOT National Transportation Integrated Search
2007-09-01
Police officers follow procedures set forth in the NHTSA/IACP curriculum when they administer the Standardized Field Sobriety Tests (SFSTs) to suspected alcohol-impaired drivers. The SFSTs include Horizontal Gaze Nystagmus (HGN) test, Walk-and-Turn (...
Software OT&E Guidelines. Volume 1. Software Test Manager’s Handbook
1981-02-01
[Garbled OCR of the report documentation page and table of contents.] The Software OT&E Guidelines is a set of handbooks prepared by the Computer/Support Systems Division of the Test and Evaluation Directorate, Air Force Test and Evaluation…; this volume is one of that set. Recoverable table-of-contents entries include Software Maintainability, Standard Questionnaires, and Operator-Computer Interface Evaluation.
Automotive Lubricant Specification and Testing
NASA Astrophysics Data System (ADS)
Fox, M. F.
This chapter concerns commercial lubricant specification and testing, drawing together the many themes of previous chapters. Military lubricant standards were a very strong initial influence during World War II and led to the separate historical development of the North American and European specification systems. The wide range of functions that a successful lubricant must satisfy is discussed, together with issues of balancing special or universal applications, single or multiple engine tests, the philosophy of accelerated testing and the question of 'who sets the standards?' The role of engine tests and testing organisations is examined.
Lloyd, C H; Yearn, J A; Cowper, G A; Blavier, J; Vanderdonckt, M
2004-07-01
The setting expansion is an important property for a phosphate-bonded investment material. This research was undertaken to investigate a test that might be suitable for its measurement when used in a Standard. In the 'Casting-Ring Test', the investment sample is contained in a steel ring and expands to displace a precisely positioned pin. Variables with the potential to alter routine reproduction of the value were investigated. The vacuum-mixer model is a production laboratory variable that must not be ignored and for this reason, experiments were repeated using a different vacuum-mixer located at a second test site. Restraint by the rigid ring material increased expansion, while force on the pin reduced it. Expansion was specific to the lining selected. Increased environmental temperature decreased the final value. Expansion was still taking place at a time at which its value might be measured. However, when these factors are set, the reproducibility of values for setting expansion was good at both test sites (coefficient of variation 14%, at most). The results revealed that with the control that is available reliable routine measurement is possible in a Standard test. The inter-laboratory variable, vacuum-mixer model, produced significant differences and it should be the subject of further investigation.
Implementing standard setting into the Conjoint MAFP/FRACGP Part 1 examination - Process and issues.
Chan, S C; Mohd Amin, S; Lee, T W
2016-01-01
The College of General Practitioners of Malaysia and the Royal Australian College of General Practitioners held the first Conjoint Member of the College of General Practitioners (MCGP)/Fellow of Royal Australian College of General Practitioners (FRACGP) examination in 1982, later renamed the Conjoint MAFP/FRACGP examinations. The examination assesses competency for safe independent general practice and as family medicine specialists in Malaysia. Therefore, a defensible standard set pass mark is imperative to separate the competent from the incompetent. This paper discusses the process and issues encountered in implementing standard setting to the Conjoint Part 1 examination. Critical to success in standard setting were judges' understanding of the process of the modified Angoff method, defining the borderline candidate's characteristics and the composition of judges. These were overcome by repeated hands-on training, provision of detailed guidelines and careful selection of judges. In December 2013, 16 judges successfully standard set the Part 1 Conjoint examinations, with high inter-rater reliability: Cronbach's alpha coefficient 0.926 (Applied Knowledge Test), 0.921 (Key Feature Problems).
Evaluating Different Standard-Setting Methods in an ESL Placement Testing Context
ERIC Educational Resources Information Center
Shin, Sun-Young; Lidster, Ryan
2017-01-01
In language programs, it is crucial to place incoming students into appropriate levels to ensure that course curriculum and materials are well targeted to their learning needs. Deciding how and where to set cutscores on placement tests is thus of central importance to programs, but previous studies in educational measurement disagree as to which…
40 CFR 1065.550 - Gas analyzer range validation, drift validation, and drift correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
... given test interval (i.e., do not set them to zero). A third calculation of composite brake-specific...) values from each test interval and sets any negative mass (or mass rate) values to zero before... less than the standard by at least two times the absolute difference between the uncorrected and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... Fuel Economy (CAFE) standards for light vehicles since 1978 under the statutory authority of the Energy... 19, 2007, amended EPCA and mandated that NHTSA, in consultation with EPA, set fuel economy standards... agency to implement test methods, measurement metrics, fuel economy standards, and compliance and...
After Common Core, States Set Rigorous Standards
ERIC Educational Resources Information Center
Peterson, Paul E.; Barrows, Samuel; Gift, Thomas
2016-01-01
In spite of Tea Party criticism, union skepticism, and anti-testing outcries, the campaign to implement Common Core State Standards (otherwise known as Common Core) has achieved phenomenal success in statehouses across the country. Since 2011, 45 states have raised their standards for student proficiency in reading and math, with the greatest…
NASA Technical Reports Server (NTRS)
Dankanich, John W.; Swiatek, Michael W.; Yim, John T.
2012-01-01
The electric propulsion community has been implored to establish and implement a set of universally applicable test standards during the research, development, and qualification of electric propulsion systems. Existing practices are fallible and result in testing variations which lead to suspicious results, large margins in application, or aversion to mission infusion. Performance measurements and life testing under appropriate conditions can be costly and lengthy. Measurement practices must be consistent, accurate, and repeatable. Additionally, the measurements must be universally transportable across facilities throughout the development, qualification, spacecraft integration and on-orbit performance. A preliminary step to progress towards universally applicable testing standards is outlined for facility pressure measurements and effective pumping speed calculations. The standard has been applied to multiple facilities at the NASA Glenn Research Center. Test results and analyses of universality of measurements are presented herein.
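The abstract does not reproduce the calculation itself; a common throughput-based formulation of effective pumping speed is sketched below with invented numbers, and gauge-correction details are omitted, so this should be read as an illustration rather than the procedure defined by the proposed standard.

    # Effective pumping speed from facility pressure measurements, assuming the
    # common formulation S_eff = Q / (P_on - P_base). All values are invented.
    SCCM_TO_TORR_L_S = 1.27e-2      # 1 sccm in Torr*L/s (approximate, 0 C reference)

    flow_sccm = 20.0                # total propellant flow into the thruster
    p_base = 4.0e-7                 # Torr, facility base pressure (no flow)
    p_on = 2.4e-5                   # Torr, indicated pressure with flow (gas-corrected)

    throughput = flow_sccm * SCCM_TO_TORR_L_S       # Torr*L/s
    s_eff = throughput / (p_on - p_base)            # L/s
    print(f"effective pumping speed: {s_eff / 1000:.1f} kL/s")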
Measurements by a Vector Network Analyzer at 325 to 508 GHz
NASA Technical Reports Server (NTRS)
Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony
2008-01-01
Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering- parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than about 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.
ERIC Educational Resources Information Center
Miyamoto, Kenichiro
2008-01-01
Every state in the United States, under the NCLB act, has set state standards and is testing all students in grades 3-8. Students are given printed questions to which they write answers with a pencil on an answer sheet. These written tests are usually given to determine the academic achievements of students. This paper traces the early history of…
ERIC Educational Resources Information Center
Papajohn, Dean
2006-01-01
While many institutions have used TOEFL scores for international admissions for many years, a speaking section was never a required part of TOEFL until the development of the iBT/Next Generation TOEFL. Institutions will therefore need to determine how to set standards for the speaking section of TOEFL, also known as TOEFL Academic…
ERIC Educational Resources Information Center
Opperman, Prudence; And Others
The Promotional Gates Program was initiated in the New York City Public Schools in order to set and maintain citywide curriculum and performance standards, identify students unable to meet the minimum standards, and provide remedial instruction. Under this program, the promotional policy sets "gates" at grades 4 and 7; students unable to…
Holman, N; Lewis-Barned, N; Bell, R; Stephens, H; Modder, J; Gardosi, J; Dornhorst, A; Hillson, R; Young, B; Murphy, H R
2011-07-01
To develop and evaluate a standardized data set for measuring pregnancy outcomes in women with Type 1 and Type 2 diabetes and to compare recent outcomes with those of the 2002-2003 Confidential Enquiry into Maternal and Child Health. Existing regional, national and international data sets were compared for content, consistency and validity to develop a standardized data set for diabetes in pregnancy of 46 key clinical items. The data set was tested retrospectively using data from 2007-2008 pregnancies included in three regional audits (Northern, North West and East Anglia). Obstetric and neonatal outcomes of pregnancies resulting in a stillbirth or live birth were compared with those from the same regions during 2002-2003. Details of 1381 pregnancies, 812 (58.9%) in women with Type 1 diabetes and 556 (40.3%) in women with Type 2 diabetes, were available to test the proposed standardized data set. Of the 46 data items proposed, only 16 (34.8%), predominantly the delivery and neonatal items, achieved ≥ 85% completeness. Ethnic group data were available for 746 (54.0%) pregnancies and BMI for 627 (46.5%) pregnancies. Glycaemic control data were the most complete, available for 1217 pregnancies (88.1%) during the first trimester. Only 239 women (19.9%) had adequate pregnancy preparation, defined as pre-conception folic acid and first trimester HbA(1c) ≤ 7% (≤ 53 mmol/mol). Serious adverse outcome rates (major malformation and perinatal mortality) were 55/1000 and had not improved since 2002-2003. A standardized data set for diabetes in pregnancy may improve consistency of data collection and allow for more meaningful evaluation of pregnancy outcomes in women with pregestational diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-18
... Log Sets b. Vented Hearth Products C. National Energy Savings D. Other Comments 1. Test Procedures 2... address vented gas log sets. DOE clarified its position on vented gas log sets in a document published on... vented gas log sets are included in the definition of ``vented hearth heater''; DOE has reached this...
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby J.; Jia, Helena
2011-01-01
Standard 3.9 of the "Standards for Educational and Psychological Testing" (American Educational Research Association, American Psychological Association, & National Council for Measurement in Education, 1999) demands evidence of model fit when an item response theory (IRT) model is used to make inferences from a data set. We applied two recently…
49 CFR 173.121 - Class 3-Assignment of packing group.
Code of Federal Regulations, 2011 CFR
2011-10-01
... packing group must be determined by applying the following criteria: Flash point (closed-cup) Initial... (73.4 °F) using the ISO standard cup with a 4 mm (0.16 inch) jet as set forth in ISO 2431 (IBR, see... using the ISO standard cup with a 6 mm (0.24 inch) jet. (ii) Solvent Separation Test. This test is...
16 CFR § 1611.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2013 CFR
2013-01-01
... COMMISSION FLAMMABLE FABRICS ACT REGULATIONS STANDARD FOR THE FLAMMABILITY OF VINYL PLASTIC FILM Rules and... applicable procedures set forth in section 4(a) of the act. Note: If the outer layer of plastic film or... shall be tested under part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer...
Shuttleworth-Edwards, A B
2016-10-01
The aim of this paper is to address the issue of IQ testing within the multicultural context, with a focus on the adequacy of nationwide population-based norms vs. demographically stratified within-group norms for valid assessment purposes. Burgeoning cultural diversity worldwide creates a pressing need to cultivate culturally fair psychological assessment practices. Commentary is provided to highlight sources of test-taking bias on tests of intellectual ability that may incur invalid placement and diagnostic decisions in multicultural settings. Methodological aspects of population vs. within-group norming solutions are delineated and the challenges of culturally relevant norm development are discussed. Illustrative South African within-group comparative data are supplied to support the review. A critical evaluation of the South African WAIS-III and the WAIS-IV standardizations further serves to exemplify the issues. A flaw in both South African standardizations is failure to differentiate between African first language individuals with a background of advantaged education vs. those from educationally disadvantaged settings. In addition, the standardizations merge the performance outcomes of distinct racial/ethnic groups that are characterized by differentially advantaged or disadvantaged backgrounds. Consequently, the conversion tables are without relevance for any one of the disparate South African cultural groups. It is proposed that the traditional notion of a countrywide unitary norming (also known as 'population-based norms') of an IQ test is an unsatisfactory model for valid assessment practices in diverse cultural contexts. The challenge is to develop new solutions incorporating data from finely stratified within-group norms that serve to reveal rather than obscure cross-cultural disparity in cognitive test performance.
Virtual occlusal definition for orthognathic surgery.
Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y
2016-03-01
Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual setting point procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency in virtual set positions performed by three individual users was investigated by one way analysis of variance test. Inter- and intra-observer correlation coefficients for manual setting points were all greater than 0.95. Overall, the median error between the test and the gold standard positions was 1.06mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Chen, Henry W; Du, Jingcheng; Song, Hsing-Yi; Liu, Xiangyu; Jiang, Guoqian
2018-01-01
Background Today, there is an increasing need to centralize and standardize electronic health data within clinical research as the volume of data continues to balloon. Domain-specific common data elements (CDEs) are emerging as a standard approach to clinical research data capturing and reporting. Recent efforts to standardize clinical study CDEs have been of great benefit in facilitating data integration and data sharing. The importance of the temporal dimension of clinical research studies has been well recognized; however, very few studies have focused on the formal representation of temporal constraints and temporal relationships within clinical research data in the biomedical research community. In particular, temporal information can be extremely powerful to enable high-quality cancer research. Objective The objective of the study was to develop and evaluate an ontological approach to represent the temporal aspects of cancer study CDEs. Methods We used CDEs recorded in the National Cancer Institute (NCI) Cancer Data Standards Repository (caDSR) and created a CDE parser to extract time-relevant CDEs from the caDSR. Using the Web Ontology Language (OWL)–based Time Event Ontology (TEO), we manually derived representative patterns to semantically model the temporal components of the CDEs using an observing set of randomly selected time-related CDEs (n=600) to create a set of TEO ontological representation patterns. In evaluating TEO’s ability to represent the temporal components of the CDEs, this set of representation patterns was tested against two test sets of randomly selected time-related CDEs (n=425). Results It was found that 94.2% (801/850) of the CDEs in the test sets could be represented by the TEO representation patterns. Conclusions In conclusion, TEO is a good ontological model for representing the temporal components of the CDEs recorded in caDSR. Our representative model can harness the Semantic Web reasoning and inferencing functionalities and present a means for temporal CDEs to be machine-readable, streamlining meaningful searches. PMID:29472179
Saha, Sreemanti; Narang, Rahul; Deshmukh, Pradeep; Pote, Kiran; Anvikar, Anup; Narang, Pratibha
2017-01-01
The diagnostic techniques for malaria are undergoing a change depending on the availability of newer diagnostics and annual parasite index of infection in a particular area. At the country level, guidelines are available for selection of diagnostic tests; however, at the local level, this decision is made based on malaria situation in the area. The tests are evaluated against the gold standard, and if that standard has limitations, it becomes difficult to compare other available tests. Bayesian latent class analysis computes its internal standard rather than using the conventional gold standard and helps comparison of various tests including the conventional gold standard. In a cross-sectional study conducted in a tertiary care hospital setting, we have evaluated smear microscopy, rapid diagnostic test (RDT), and polymerase chain reaction (PCR) for diagnosis of malaria using Bayesian latent class analysis. We found the magnitude of malaria to be 17.7% (95% confidence interval: 12.5%-23.9%) among the study subjects. In the present study, the sensitivity of microscopy was 63%, but it had very high specificity (99.4%). Sensitivity and specificity of RDT and PCR were high with RDT having a marginally higher sensitivity (94% vs. 90%) and specificity (99% vs. 95%). On comparison of likelihood ratios (LRs), RDT had the highest LR for positive test result (175) and the lowest LR for negative test result (0.058) among the three tests. In settings like ours, conventional smear microscopy may be replaced with RDT; as we move toward elimination and facilities become available, PCR may be brought in to detect cases with lower parasitaemia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This report documents the development and testing of a set of recommendations generated to serve as a primary basis for the Congressionally-mandated residential standard. This report treats only the residential building recommendations.
[Mokken scaling of the Cognitive Screening Test].
Diesfeldt, H F A
2009-10-01
The Cognitive Screening Test (CST) is a twenty-item orientation questionnaire in Dutch, that is commonly used to evaluate cognitive impairment. This study applied Mokken Scale Analysis, a non-parametric set of techniques derived from item response theory (IRT), to CST-data of 466 consecutive participants in psychogeriatric day care. The full item set and the standard short version of fourteen items both met the assumptions of the monotone homogeneity model, with scalability coefficient H = 0.39, which is considered weak. In order to select items that would fulfil the assumption of invariant item ordering or the double monotonicity model, the subjects were randomly partitioned into a training set (50% of the sample) and a test set (the remaining half). By means of automated item selection, eleven items were found to measure one latent trait, with H = 0.67 and item H coefficients larger than 0.51. Cross-validation of the item analysis in the remaining half of the subjects gave comparable values (H = 0.66; item H coefficients larger than 0.56). The selected items involve year, place of residence, birth date, the monarch's and prime minister's names, and their predecessors. Applying optimal discriminant analysis (ODA) it was found that the full set of twenty CST items performed best in distinguishing two predefined groups of patients of lower or higher cognitive ability, as established by an independent criterion derived from the Amsterdam Dementia Screening Test. The chance-corrected predictive value or prognostic utility was 47.5% for the full item set, 45.2% for the fourteen items of the standard short version of the CST, and 46.1% for the homogeneous, unidimensional set of selected eleven items. The results of the item analysis support the application of the CST in cognitive assessment, and revealed a more reliable 'short' version of the CST than the standard short version (CST14).
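Editorial note: for readers unfamiliar with the scalability coefficient H reported above, the sketch below computes Loevinger's total-scale H for dichotomous items as one minus the ratio of observed to expected Guttman errors. It is a Python illustration on invented responses, not the Mokken Scale Analysis software used in the study.

import numpy as np

def scalability_H(X):
    """Loevinger's H for a binary item matrix X (rows = persons, columns = items)."""
    X = np.asarray(X, dtype=int)
    n, k = X.shape
    p = X.mean(axis=0)                      # item popularities
    obs, exp = 0.0, 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: passing the harder item while failing the easier one
            obs += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
            exp += n * (1 - p[easy]) * p[hard]
    return 1 - obs / exp

# Invented example: 6 respondents, 3 items
X = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 0],
     [1, 1, 0],
     [0, 0, 0],
     [1, 0, 1]]
print(round(scalability_H(X), 2))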
NASA Technical Reports Server (NTRS)
Feller, A.
1978-01-01
The entire complement of standard cells and components, except for the set-reset flip-flop, was completed. Two levels of checking were performed on each device. Logic cells and topological layout are described. All the related computer programs were coded and one level of debugging was completed. The logic for the test chip was modified and updated. This test chip served as the first test vehicle to exercise the standard cell complementary MOS(C-MOS) automatic artwork generation capability.
ERIC Educational Resources Information Center
George-Ezzelle, Carol E.; Skaggs, Gary
2004-01-01
Current testing standards call for test developers to provide evidence that testing procedures and test scores, and the inferences made based on the test scores, show evidence of validity and are comparable across subpopulations (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on…
Robert F. Powers
1972-01-01
Four sets of standard site index curves based on statewide or regionwide averages were compared with data on natural growth from nine young stands of ponderosa pine in northern California. The curves tested were by Meyer; Dunning; Dunning and Reineke; and Arvanitis, Lindquist, and Palley. The effects of soils on height growth were also studied. Among the curves tested...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... grades are specified in the following table: Table 1 of § 1065.703—Test Fuel Specifications for...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
ERIC Educational Resources Information Center
Seligman, Martin E. P.; Rashid, Tayyab; Parks, Acacia C.
2006-01-01
Positive psychotherapy (PPT) contrasts with standard interventions for depression by increasing positive emotion, engagement, and meaning rather than directly targeting depressive symptoms. The authors have tested the effects of these interventions in a variety of settings. In informal student and clinical settings, people not uncommonly reported…
HIV testing in correctional institutions: evaluating existing strategies, setting new standards.
Basu, Sanjay; Smith-Rohrberg, Duncan; Hanck, Sarah; Altice, Frederick L
2005-01-01
Before introducing an HIV testing protocol into correctional facilities, the unique nature of these environments must be taken into account. We analyze three testing strategies that have been used in correctional settings--mandatory, voluntary, and routine "opt out" testing--and conclude that routine testing is most likely beneficial to inmates, the correctional system, and the outside community. The ethics of pre-release testing, and the issues surrounding segregation, confidentiality, and linking prisoners with community-based care, also play a role in determining how best to establish HIV testing strategies in correctional facilities. Testing must be performed in a manner that is not simply beneficial to public health, but also enhances the safety and health status of individual inmates. Longer-stay prison settings provide ample opportunities not just for testing but also for in-depth counseling, mental health and substance abuse treatment, and antiretroviral therapy. Jails present added complexities because of their shorter stay with respect to prisons, and testing, treatment, and counseling policies must be adapted to these settings.
Wang, Steven Q; Xu, Haoming; Stanfield, Joseph W; Osterwalder, Uli; Herzog, Bernd
2017-07-01
The importance of adequate ultraviolet A light (UVA) protection has become apparent in recent years. The United States and Europe have different standards for assessing UVA protection in sunscreen products. We sought to measure the in vitro critical wavelength (CW) and UVA protection factor (PF) of commercially available US sunscreen products and see if they meet standards set by the United States and the European Union. Twenty sunscreen products with sun protection factors ranging from 15 to 100+ were analyzed. Two in vitro UVA protection tests were conducted in accordance with the 2011 US Food and Drug Administration final rule and the 2012 International Organization for Standardization method for sunscreen effectiveness testing. The CW of the tested sunscreens ranged from 367 to 382 nm, and the UVA PF of the products ranged from 6.1 to 32. Nineteen of 20 sunscreens (95%) met the US requirement of CW >370 nm. Eleven of 20 sunscreens (55%) met the EU desired ratio of UVA PF/SPF > 1:3. The study only evaluated a small number of sunscreen products. The majority of tested sunscreens offered adequate UVA protection according to US Food and Drug Administration guidelines for broad-spectrum status, but almost half of the sunscreens tested did not pass standards set in the European Union. Copyright © 2017. Published by Elsevier Inc.
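Editorial note: the in vitro critical wavelength referenced above is conventionally the wavelength below which 90% of the integrated absorbance between 290 and 400 nm lies. The Python sketch below applies that definition to an invented absorbance spectrum; it is not the FDA or ISO test procedure itself.

import numpy as np

def critical_wavelength(wavelengths_nm, absorbance):
    """Smallest wavelength at which the cumulative absorbance integral from
    290 nm reaches 90% of the total integral over 290-400 nm."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    A = np.asarray(absorbance, dtype=float)
    # cumulative trapezoidal integral of A over wavelength
    seg = 0.5 * (A[1:] + A[:-1]) * np.diff(wl)
    cum = np.concatenate(([0.0], np.cumsum(seg)))
    idx = np.searchsorted(cum, 0.9 * cum[-1])
    return wl[idx]

# Invented spectrum from 290 to 400 nm (1 nm steps): absorbance that decays
# slowly with wavelength, as a broad-spectrum filter might.
wl = np.arange(290.0, 401.0)
A = np.exp(-(wl - 290.0) / 200.0)
print(critical_wavelength(wl, A))   # roughly 385-390 nm for this invented curve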
Shanks, Leslie; Siddiqui, M Ruby; Abebe, Almaz; Piriou, Erwan; Pearce, Neil; Ariti, Cono; Masiga, Johnson; Muluneh, Libsework; Wazome, Joseph; Ritmeijer, Koert; Klarkowski, Derryck
2015-05-14
Current WHO testing guidelines for resource limited settings diagnose HIV on the basis of screening tests without a confirmation test due to cost constraints. This leads to a potential risk of false positive HIV diagnosis. In this paper, we evaluate the dilution test, a novel method for confirmation testing, which is simple, rapid, and low cost. The principle of the dilution test is to alter the sensitivity of a rapid diagnostic test (RDT) by dilution of the sample, in order to screen out the cross reacting antibodies responsible for falsely positive RDT results. Participants were recruited from two testing centres in Ethiopia where a tiebreaker algorithm using 3 different RDTs in series is used to diagnose HIV. All samples positive on the initial screening RDT and every 10th negative sample underwent testing with the gold standard and dilution test. Dilution testing was performed using Determine™ rapid diagnostic test at 6 different dilutions. Results were compared to the gold standard of Western Blot; where Western Blot was indeterminate, PCR testing determined the final result. 2895 samples were recruited to the study. 247 were positive for a prevalence of 8.5 % (247/2895). A total of 495 samples underwent dilution testing. The RDT diagnostic algorithm misclassified 18 samples as positive. Dilution at the level of 1/160 was able to correctly identify all these 18 false positives, but at a cost of a single false negative result (sensitivity 99.6 %, 95 % CI 97.8-100; specificity 100 %, 95 % CI: 98.5-100). Concordance between the gold standard and the 1/160 dilution strength was 99.8 %. This study provides proof of concept for a new, low cost method of confirming HIV diagnosis in resource-limited settings. It has potential for use as a supplementary test in a confirmatory algorithm, whereby double positive RDT results undergo dilution testing, with positive results confirming HIV infection. Negative results require nucleic acid testing to rule out false negative results due to seroconversion or misclassification by the lower sensitivity dilution test. Further research is needed to determine if these results can be replicated in other settings. ClinicalTrials.gov, NCT01716299 .
Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei
2012-12-01
To develop software that standardizes optical density, normalizing the procedures and results of standardization, in order to effectively solve several problems that arise during standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to address the problems that one might encounter during standardization. A Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (Windows XP/Win 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational performance of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including the reliability of the standard curve, the applicable scope of samples and the determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on the I-STOD method. It is easy to operate and can effectively standardize the test results of indirect ELISA.
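Editorial note: the general idea of standardizing raw optical densities via a standard curve can be illustrated in a few lines. The Python sketch below interpolates sample ODs onto a log-concentration standard curve with invented values; the study's I-STOD tool itself was built in Matlab and uses its own method, so this is only a conceptual analogue.

import numpy as np

def od_to_concentration(sample_od, standard_conc, standard_od):
    """Interpolate sample ODs onto a monotonic standard curve (log10 concentration vs OD)."""
    order = np.argsort(standard_od)                   # np.interp needs increasing x
    od_sorted = np.asarray(standard_od, dtype=float)[order]
    logc_sorted = np.log10(np.asarray(standard_conc, dtype=float))[order]
    log_conc = np.interp(sample_od, od_sorted, logc_sorted)
    return 10.0 ** log_conc

# Invented standard curve: known concentrations (ng/mL) and their measured ODs.
standard_conc = [1000, 500, 250, 125, 62.5, 31.25]
standard_od   = [2.10, 1.60, 1.10, 0.70, 0.45, 0.28]
samples_od    = [1.85, 0.90, 0.35]
print(od_to_concentration(samples_od, standard_conc, standard_od))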
NASA Technical Reports Server (NTRS)
Marley, Mike
2008-01-01
The focus of this paper will be on the thermal balance testing for the Operationally Responsive Space Standard Bus Battery. The Standard Bus thermal design required that the battery be isolated from the bus itself. This required the battery to have its own thermal control, including heaters and a radiator surface. Since the battery was not ready for testing during the overall bus thermal balance testing, a separate test was conducted to verify the thermal design for the battery. This paper will discuss in detail the test setup, test procedure, and results from this test. Additionally, this paper will consider the methods used to determine the heat dissipation of the battery during charge and discharge. The heat dissipation of lithium-ion batteries is relatively poorly characterized and hard to quantify. The methods used during the test and the post-test analysis to estimate the heat dissipation of the battery will be discussed.
Keteyian, Steven J; Hibner, Brooks A; Bronsteen, Kyle; Kerrigan, Dennis; Aldred, Heather A; Reasons, Lisa M; Saval, Mathew A; Brawner, Clinton A; Schairer, John R; Thompson, Tracey M S; Hill, Jason; McCulloch, Derek; Ehrman, Jonathon K
2014-01-01
We tested the hypothesis that higher-intensity interval training (HIIT) could be deployed into a standard cardiac rehabilitation (CR) setting and would result in a greater increase in cardiorespiratory fitness (i.e., peak oxygen uptake, V̇O₂) versus moderate-intensity continuous training (MCT). Thirty-nine patients participating in a standard phase 2 CR program were randomized to HIIT or MCT; 15 patients and 13 patients in the HIIT and MCT groups, respectively, completed CR and baseline and follow-up cardiopulmonary exercise testing. No patients in either study group experienced an event that required hospitalization during or within 3 hours after exercise. The changes in resting heart rate and blood pressure at follow-up testing were similar for both HIIT and MCT. V̇O₂ at ventilatory-derived anaerobic threshold increased more (P < .05) with HIIT (3.0 ± 2.8 mL·kg⁻¹·min⁻¹) versus MCT (0.7 ± 2.2 mL·kg⁻¹·min⁻¹). During follow-up testing, submaximal heart rate at the end of stage 2 of the exercise test was significantly lower within both the HIIT and MCT groups, with no difference noted between groups. Peak V̇O₂ improved more after CR in patients in HIIT versus MCT (3.6 ± 3.1 mL·kg⁻¹·min⁻¹ vs 1.7 ± 1.7 mL·kg⁻¹·min⁻¹; P < .05). Among patients with stable coronary heart disease on evidence-based therapy, HIIT was successfully integrated into a standard CR setting and, when compared to MCT, resulted in greater improvement in peak exercise capacity and submaximal endurance.
ERIC Educational Resources Information Center
Klenowski, Val
2013-01-01
Curriculum and standards-referenced assessment reform in accountability contexts are increasingly dominated by the use of testing, evidence, comparative analyses of achievement data and policy as numbers all of which have given rise to a set of related developments. Internationally these developments towards the use of standards for assessment and…
Coelho, Luiz Gonzaga Vaz; Silva, Arilto Eleutério da; Coelho, Maria Clara de Freitas; Penna, Francisco Guilherme Cancela e; Ferreira, Rafael Otto Antunes; Santa-Cecilia, Elisa Viana
2011-01-01
The standard dose of (13)C-urea in the (13)C-urea breath test is 75 mg. The aim was to assess the diagnostic accuracy of a (13)C-urea breath test containing 25 mg of (13)C-urea compared with the standard dose of 75 mg in the diagnosis of Helicobacter pylori infection. Two hundred seventy adult patients (96 males, 174 females, median age 41 years) performed the standard (13)C-urea breath test (75 mg (13)C-urea) and repeated the (13)C-urea breath test using only 25 mg of (13)C-urea within a 2-week interval. The test was performed using an infrared isotope analyzer. Patients were considered positive if delta over baseline was >4.0‰ on the gold standard test. One hundred sixty-one (59.6%) patients were H. pylori negative and 109 (40.4%) were positive by the gold standard test. Using receiver operating characteristic analysis, we established a cut-off value of 3.4‰ as the best value of the 25 mg (13)C-urea breath test to discriminate positive from negative patients, considering the H. pylori prevalence (95% CI: 23.9-37.3) in our setting. For the 25 mg (13)C-urea breath test we therefore obtained a diagnostic accuracy of 92.9% (95% CI: 88.1-97.9), sensitivity 83.5% (95% CI: 75.4-89.3), specificity 99.4% (95% CI: 96.6-99.9), positive predictive value 98.3% (95% CI: 92.4-99.4), and negative predictive value 93.0% (95% CI: 88.6-96.1). The low-dose (13)C-urea breath test (25 mg (13)C-urea) does not reach sufficient accuracy to be recommended in a clinical setting where a 30% prevalence of H. pylori infection is observed. Further studies should be done to determine the diagnostic accuracy of low doses of (13)C-urea in the urea breath test.
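Editorial note: as a generic illustration of how a cut-off and the accompanying accuracy statistics can be derived, the Python sketch below selects the threshold maximizing Youden's J over invented delta-over-baseline values and then reports sensitivity, specificity and predictive values. It is not the authors' analysis or data.

import numpy as np

def best_cutoff_and_metrics(values, truth):
    """Choose the cut-off maximizing Youden's J (sens + spec - 1), then report metrics."""
    values = np.asarray(values, dtype=float)
    truth = np.asarray(truth, dtype=bool)
    best = None
    for c in np.unique(values):
        pred = values >= c
        tp = np.sum(pred & truth); fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
        sens = tp / (tp + fn); spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[0]:
            ppv = tp / (tp + fp) if (tp + fp) else float('nan')
            npv = tn / (tn + fn) if (tn + fn) else float('nan')
            acc = (tp + tn) / len(truth)
            best = (j, c, sens, spec, ppv, npv, acc)
    return best

# Invented delta-over-baseline values and gold-standard status (True = infected).
values = [0.5, 1.2, 2.8, 3.5, 4.1, 5.9, 7.3, 9.0, 1.0, 2.2, 3.0, 8.5]
truth  = [False, False, False, True, True, True, True, True, False, False, False, True]
j, cutoff, sens, spec, ppv, npv, acc = best_cutoff_and_metrics(values, truth)
print(cutoff, round(sens, 2), round(spec, 2), round(acc, 2))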
Towards a rational antimicrobial testing policy in the laboratory.
Banaji, N; Oommen, S
2011-01-01
Antimicrobial policy for the prophylactic and therapeutic use of antimicrobials in a tertiary care setting has gained importance. A hospital's antimicrobial policy, as laid down by its hospital infection control team, needs to include input from the microbiology laboratory as well as the pharmacy and therapeutics committee. Therefore, it is of utmost importance that clinical microbiologists across India follow international guidelines and also take into account local settings, especially the detection and presence of resistance enzymes. This article draws a framework for rational antimicrobial testing in laboratories in tertiary care centers from the Clinical and Laboratory Standards Institute guidelines. It does not address testing methodologies but suggests ways in which antimicrobial susceptibility reporting can be rendered meaningful not only to the treating physician but also to the resistance-monitoring epidemiologist. It aims to initiate some standardization in the rational choice of antimicrobial tests for nonfastidious bacteria in laboratories across the country.
A novel Python program for implementation of quality control in the ELISA.
Wetzel, Hanna N; Cohen, Cinder; Norman, Andrew B; Webster, Rose P
2017-09-01
The use of semi-quantitative assays such as the enzyme-linked immunosorbent assay (ELISA) requires stringent quality control of the data. However, such quality control is often lacking in academic settings due to unavailability of software and knowledge. Therefore, our aim was to develop methods to easily implement Levey-Jennings quality control methods. For this purpose, we created a program written in Python (a programming language with an open-source license) and tested it using a training set of ELISA standard curves quantifying the Fab fragment of an anti-cocaine monoclonal antibody in mouse blood. A colorimetric ELISA was developed using a goat anti-human anti-Fab capture method. Mouse blood samples spiked with the Fab fragment were tested against a standard curve of known concentrations of Fab fragment in buffer over a period of 133 days stored at 4°C to assess stability of the Fab fragment and to generate a test dataset to assess the program. All standard curves were analyzed using our program to batch process the data and to generate Levey-Jennings control charts and statistics regarding the datasets. The program was able to identify values outside of two standard deviations, and this identification of outliers was consistent with the results of a two-way ANOVA. This program is freely available, which will help laboratories implement quality control methods, thus improving reproducibility within and between labs. We report here successful testing of the program with our training set and development of a method for quantification of the Fab fragment in mouse blood. Copyright © 2017 Elsevier B.V. All rights reserved.
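Editorial note: the Levey-Jennings logic described above is straightforward to reproduce. The following Python sketch (not the authors' program, which is separately available) derives control limits from a training set of control values and flags new runs outside two or three standard deviations.

import numpy as np

def levey_jennings_flags(training_values, new_values):
    """Flag control values outside mean +/- 2 SD ('2s warning') or +/- 3 SD ('1-3s reject')."""
    training = np.asarray(training_values, dtype=float)
    mean, sd = training.mean(), training.std(ddof=1)
    flags = []
    for v in new_values:
        z = (v - mean) / sd
        if abs(z) > 3:
            flags.append((v, round(z, 2), "1-3s reject"))
        elif abs(z) > 2:
            flags.append((v, round(z, 2), "2s warning"))
        else:
            flags.append((v, round(z, 2), "in control"))
    return flags

# Invented control-standard values (e.g., back-calculated from daily standard curves).
training = [98, 102, 101, 99, 100, 97, 103, 100, 99, 101]
new_runs = [100, 104, 92, 99]
for row in levey_jennings_flags(training, new_runs):
    print(row)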
Teaching science through literature
NASA Astrophysics Data System (ADS)
Barth, Daniel
2007-12-01
The hypothesis of this study was that a multidisciplinary, activity-rich science curriculum based around science fiction literature, rather than a conventional textbook, would increase student engagement with the curriculum and improve student performance on standards-based test instruments. Science fiction literature was chosen on the basis of previous educational research indicating that science fiction literature is able to stimulate and maintain interest in science. The study was conducted on a middle school campus during the regular summer school session. Students were self-selected from the school's 6th, 7th, and 8th grade populations. The students used the science fiction novel Maurice on the Moon as their only text. Lessons and activities closely followed the adventures of the characters in the book. The students' initial level of knowledge in Earth and space science was assessed by a pre-test. After the four-week program concluded, the students took a post-test made up of an identical set of questions. The test included 40 standards-based questions based upon concepts covered in the text of the novel and in the classroom lessons and activities. The test also included 10 general knowledge questions based upon Earth and space science standards that were not covered in the novel or the classroom lessons or activities. Student performance on the standards-based question set increased an average of 35% for all students in the study group. Every subgroup disaggregated by gender and ethnicity improved from 28-47%. There was no statistically significant change in performance on the general knowledge question set for any subgroup. Student engagement with the material was assessed by three independent methods: student self-reports, percentage of classroom work completed, and academic evaluation of student work by the instructor. These assessments of student engagement were correlated with changes in student performance on the standards-based assessment tests. A moderate correlation was found between the level of student engagement with the material and improvement in performance from pre- to post-test.
NASA Astrophysics Data System (ADS)
Salyer, Terry
2017-06-01
For the bulk of detonation performance experiments, a fairly basic set of diagnostic techniques has evolved as the standard for acquiring the necessary measurements. Gold standard techniques such as pin switches and streak cameras still produce the high-quality data required, yet much room remains for improvement with regard to ease of use, cost of fielding, breadth of data, and diagnostic versatility. Over the past several years, an alternate set of diagnostics has been under development to replace many of these traditional techniques. Pulse Correlation Reflectometry (PCR) is a capable substitute for pin switches with the advantage of obtaining orders of magnitude more data at a small fraction of the cost and fielding time. Spectrally Encoded Imaging (SEI) can replace most applications of streak camera with the advantage of imaging surfaces through a single optical fiber that are otherwise optically inaccessible. Such diagnostics advance the measurement state of the art, but even further improvements may come through revamping the standardized tests themselves such as the copper cylinder expansion test. At the core of this modernization, the aforementioned diagnostics play a significant role in revamping and improving the standard test suite for the present era. This research was performed under the auspices of the United States Department of Energy.
The Recognizability and Localizability of Auditory Alarms: Setting Global Medical Device Standards.
Edworthy, Judy; Reid, Scott; McDougall, Siné; Edworthy, Jonathan; Hall, Stephanie; Bennett, Danielle; Khan, James; Pye, Ellen
2017-11-01
Objective Four sets of eight audible alarms matching the functions specified in IEC 60601-1-8 were designed using known principles from auditory cognition with the intention that they would be more recognizable and localizable than those currently specified in the standard. Background The audible alarms associated with IEC 60601-1-8, a global medical device standard, are known to be difficult to learn and retain, and there have been many calls to update them. There are known principles of design and cognition that might form the basis of more readily recognizable alarms. There is also scope for improvement in the localizability of the existing alarms. Method Four alternative sets of alarms matched to the functions specified in IEC 60601-1-8 were tested for recognizability and localizability and compared with the alarms currently specified in the standard. Results With a single exception, all prototype sets of alarms outperformed the current IEC set on both recognizability and localizability. Within the prototype sets, auditory icons were the most easily recognized, but the other sets, using word rhythms and simple acoustic metaphors, were also more easily recognized than the current alarms. With the exception of one set, all prototype sets were also easier to localize. Conclusion Known auditory cognition and perception principles were successfully applied to an existing audible alarm problem. Application This work constitutes the first (benchmarking) phase of replacing the alarms currently specified in the standard. The design principles used for each set demonstrate the relative ease with which different alarm types can be recognized and localized.
Creating Realistic Data Sets with Specified Properties via Simulation
ERIC Educational Resources Information Center
Goldman, Robert N.; McKenzie, John D. Jr.
2009-01-01
We explain how to simulate both univariate and bivariate raw data sets having specified values for common summary statistics. The first example illustrates how to "construct" a data set having prescribed values for the mean and the standard deviation--for a one-sample t test with a specified outcome. The second shows how to create a bivariate data…
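Editorial note: the first example described above can be reproduced by standardizing a random sample and rescaling it, so the simulated data set has exactly the prescribed mean and standard deviation and hence a predetermined one-sample t statistic. The Python sketch below uses invented target values.

import numpy as np
from scipy import stats

def data_with_exact_moments(n, target_mean, target_sd, seed=None):
    """Simulate n values, then rescale so the sample mean and SD match the targets exactly."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    z = (x - x.mean()) / x.std(ddof=1)      # sample mean 0, sample SD 1
    return target_mean + target_sd * z

# Invented targets: mean 52, SD 8, n = 25, tested against a null mean of 50.
data = data_with_exact_moments(25, 52.0, 8.0, seed=1)
print(round(data.mean(), 6), round(data.std(ddof=1), 6))   # 52.0 8.0
result = stats.ttest_1samp(data, 50.0)
print(round(result.statistic, 3))   # equals (52 - 50) / (8 / sqrt(25)) = 1.25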
Standards Get Boost on the Hill: Bills before Congress Aim to Raise the Bar in States
ERIC Educational Resources Information Center
Olson, Lynn
2007-01-01
This article focuses on the standards debate in the context of renewing the 5-year-old No Child Left Behind Act. The politically sensitive idea of increasing the rigor of state standards and tests by linking them to standards set at the national level is getting a push from prominent lawmakers as Congress moves to reauthorize the No Child Left…
40 CFR 86.1810-17 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... waive only SC03 testing, substitute the SC03 emission result using the standard test fuel for gasoline... the emission measurements with the gasoline test fuel specified in 40 CFR 1065.710. (i) Where we... refer to test procedures set forth in subpart C of this part and 40 CFR part 1066, subpart H. All other...
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts often come at the expense of the data set selection and analysis used in validating said algorithms. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation and performance on test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Evaluation to Redesign a Prototype Officer Data Base for Interdisciplinary Research
1992-01-01
accommodate cohort longitudinal research and econometric model testing. Recommendations regarding the adoption of the LOADB were presented. Utilization... commission data sets (Younkman, 1987), and the AIMS data set (Ramsey & Younkman, 1989). An analysis of selected standardized tests for ROTC screening was... ARI Research Note 92-16, Evaluation to Redesign a Prototype Officer Data Base for Interdisciplinary Research, Dianne D. Younkman and Lori G. Ramsey
76 FR 28131 - Federal Motor Vehicle Safety Standards; Motorcycle Helmets
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
..., this final rule sets a quasi-static load application rate for the helmet retention system; revises the... Analysis and Conclusion e. Quasi-Static Retention Test f. Helmet Conditioning Tolerances g. Other... it as a quasi-static test, instead of a static test. Specifying the application rate will aid...
Unified System Of Data On Materials And Processes
NASA Technical Reports Server (NTRS)
Key, Carlo F.
1989-01-01
Wide-ranging sets of data for the aerospace industry are described. Document describes the Materials and Processes Technical Information System (MAPTIS), a computerized set of integrated data bases for use by NASA and the aerospace industry. Stores information in a standard format for fast retrieval in searches and surveys of data. Helps engineers select materials and verify their properties. Promotes standardized nomenclature as well as standardized tests and presentation of data. The document is formatted as photographic projection slides used in lectures. Presents examples of reports from various data bases.
Setting Standards for Medically-Based Running Analysis
Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.
2015-01-01
Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394
Optical tests for using smartphones inside medical devices
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Acobas, Jennifer K.; Phang, Ye Shang; Hassan, David; Bolton, Frank J.; Levitz, David
2018-02-01
Smartphones are currently used in many medical applications and are more frequently being integrated into medical imaging devices. The regulatory requirements in existence today, however, particularly the standardization of smartphone imaging through validation and verification testing, only partially cover imaging characteristics of a smartphone. Specifically, it has been shown that smartphone camera specifications are of sufficient quality for medical imaging, and there are devices that comply with the FDA's regulatory requirements for a medical device, such as field of view, direction of viewing, optical resolution and optical distortion. However, these regulatory requirements do not call specifically for color testing. Images of the same object captured with automatic settings or under different light sources can show different color composition. Experimental results showing such differences are presented. Under some circumstances, such differences in color composition could potentially lead to incorrect diagnoses. It is therefore critical to control the smartphone camera and illumination parameters properly. This paper examines different smartphone camera settings that affect image quality and color composition. To test and select the correct settings, a test methodology is proposed. It aims at evaluating and testing image color correctness and white balance settings for mobile phones and LED light sources. Emphasis is placed on color consistency and deviation from gray values, specifically by evaluating ΔC values based on the CIE L*a*b* color space. Results show that such standardization minimizes differences in color composition and thus could reduce the risk of a wrong diagnosis.
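Editorial note: in CIE L*a*b*, a perfectly neutral gray has a* = b* = 0, so the chroma C* = sqrt(a*^2 + b*^2) of a captured gray patch directly measures its color cast, and differences in C* between capture settings give a ΔC value. The Python sketch below applies these formulas to invented measurements and is not the authors' test methodology.

import math

def chroma(a_star, b_star):
    """CIELAB chroma C* = sqrt(a*^2 + b*^2); for a gray patch this is its deviation from neutral."""
    return math.hypot(a_star, b_star)

def delta_c(patch_1, patch_2):
    """Chroma difference between two measurements of the same patch: C*_2 - C*_1."""
    return chroma(*patch_2) - chroma(*patch_1)

# Invented a*, b* readings of a gray card captured with two smartphone/LED settings.
auto_wb  = (3.2, -4.1)    # automatic white balance
fixed_wb = (0.8, -0.6)    # manually fixed white balance and LED source
print(round(chroma(*auto_wb), 2))   # ~5.2, visible color cast
print(round(chroma(*fixed_wb), 2))  # ~1.0, close to neutral
print(round(delta_c(auto_wb, fixed_wb), 2))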
NASA safety standard for lifting devices and equipment
NASA Astrophysics Data System (ADS)
1990-09-01
NASA's minimum safety requirements are established for the design, testing, inspection, maintenance, certification, and use of overhead and gantry cranes (including top running monorail, underhung, and jib cranes), mobile cranes, derrick hoists, and special hoist supported personnel lifting devices (these do not include elevators, ground supported personnel lifts, or powered platforms). Minimum requirements are also addressed for the testing, inspection, and use of Hydra-sets, hooks, and slings. Safety standards are thoroughly detailed.
Federal COBOL Compiler Testing Service Compiler Validation Request Information.
1977-05-09
background of the Federal COBOL Compiler Testing Service which was set up by a memorandum of agreement between the National Bureau of Standards and the...Federal Standard, and the requirement of COBOL compiler validation in the procurement process. It also contains a list of all software products...produced by the software Development Division in support of the FCCTS as well as the Validation Summary Reports produced as a result of discharging the
NASA safety standard for lifting devices and equipment
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's minimum safety requirements are established for the design, testing, inspection, maintenance, certification, and use of overhead and gantry cranes (including top running monorail, underhung, and jib cranes), mobile cranes, derrick hoists, and special hoist supported personnel lifting devices (these do not include elevators, ground supported personnel lifts, or powered platforms). Minimum requirements are also addressed for the testing, inspection, and use of Hydra-sets, hooks, and slings. Safety standards are thoroughly detailed.
A framework for the design and development of physical employment tests and standards.
Payne, W; Harvey, J
2010-07-01
Because operational tasks in the uniformed services (military, police, fire and emergency services) are physically demanding and incur the risk of injury, employment policy in these services is usually competency based and predicated on objective physical employment standards (PESs) based on physical employment tests (PETs). In this paper, a comprehensive framework for the design of PETs and PESs is presented. Three broad approaches to physical employment testing are described and compared: generic predictive testing; task-related predictive testing; task simulation testing. Techniques for the selection of a set of tests with good coverage of job requirements, including job task analysis, physical demands analysis and correlation analysis, are discussed. Regarding individual PETs, theoretical considerations including measurability, discriminating power, reliability and validity, and practical considerations, including development of protocols, resource requirements, administrative issues and safety, are considered. With regard to the setting of PESs, criterion referencing and norm referencing are discussed. STATEMENT OF RELEVANCE: This paper presents an integrated and coherent framework for the development of PESs and hence provides a much needed theoretically based but practically oriented guide for organisations seeking to establish valid and defensible PESs.
Automatic sleep stage classification using two facial electrodes.
Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel
2008-01-01
Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a low weight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages total agreement (and Cohen's Kappa) in the training data set was 74% (0.59), in the testing data set 73% (0.59) and in the validation data set 74% (0.59). Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
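Editorial note: the agreement statistics quoted above (total agreement and Cohen's kappa) follow from a confusion matrix of automatic versus visually scored epochs. The Python sketch below computes both for an invented four-stage confusion matrix, purely to illustrate the formulas.

import numpy as np

def agreement_and_kappa(confusion):
    """Total agreement and Cohen's kappa from a square confusion matrix
    (rows = visual scoring, columns = automatic scoring, cells = epoch counts)."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                                  # observed agreement
    p_e = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2   # chance agreement from marginals
    return p_o, (p_o - p_e) / (1 - p_e)

# Invented epoch counts for wake, REM, light sleep, slow wave sleep.
confusion = [[120,  10,  15,   2],
             [  8,  60,  20,   1],
             [ 12,  18, 200,  25],
             [  1,   2,  30,  90]]
p_o, kappa = agreement_and_kappa(confusion)
print(round(p_o, 2), round(kappa, 2))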
Experts Question California's Algebra Edict
ERIC Educational Resources Information Center
Cavanagh, Sean
2008-01-01
Business leaders from important sectors of the American economy have been urging schools to set higher standards in math and science--and California officials, in mandating that 8th graders be tested in introductory algebra, have responded with one of the highest such standards in the land. Still, many California educators and school…
School Reform, Standards Testing and English Language Learners
ERIC Educational Resources Information Center
Laguardia, Armando; Goldman, Paul
2007-01-01
This paper reports findings from interviews conducted in two states in the American Northwest, Oregon and Washington, to explore how standards-based educational reform affects English language learners (ELLs) and the educational professionals who serve them. This paper focuses on two sets of multifaceted tensions: (1) organizational tensions that…
Module Measurements | Photovoltaic Research | NREL
... prototype concentrator evaluation test bed, and the Daystar DS-10/125 portable I-V curve tracer. Standard Evaluation Test Bed: we developed this test bed to evaluate I-V characteristics throughout the day as a function of time, temperature, and light level. This test bed data set is also used to evaluate...
A Conformance Test Suite for Arden Syntax Compilers and Interpreters.
Wolf, Klaus-Hendrik; Klimek, Mike
2016-01-01
The Arden Syntax for Medical Logic Modules is a standardized and well-established programming language for representing medical knowledge. No public test suite exists to test the compliance level of existing compilers and interpreters. This paper presents research transforming the specification into a set of unit tests, represented in JUnit. It further reports on the use of the test suite to test four different Arden Syntax processors. The presented and compared results reveal the conformance status of the tested processors. Two examples describe how test-driven development of Arden Syntax processors can help increase compliance with the standard. Finally, some considerations of how an open-source test suite can improve the development and distribution of the Arden Syntax are presented.
Cryogenic Insulation Standard Data and Methodologies Project
NASA Technical Reports Server (NTRS)
Summerfield, Burton; Thompson, Karen; Zeitlin, Nancy; Mullenix, Pamela; Fesmire, James; Swanger, Adam
2015-01-01
Extending some recent developments in the area of technical consensus standards for cryogenic thermal insulation systems, a preliminary Inter-Laboratory Study of foam insulation materials was performed by NASA Kennedy Space Center and LeTourneau University. The initial focus was ambient pressure cryogenic boiloff testing using the Cryostat-400 flat-plate instrument. Completion of a test facility at LETU has enabled direct, comparative testing, using identical cryostat instruments and methods, and the production of standard thermal data sets for a number of materials under sub-ambient conditions. The two sets of measurements were analyzed and indicate there is reasonable agreement between the two laboratories. Based on cryogenic boiloff calorimetry, new equipment and methods for testing thermal insulation systems have been successfully developed. These boiloff instruments (or cryostats) include both flat-plate and cylindrical models and are applicable to a wide range of different materials under a wide range of test conditions. Test measurements are generally made at a large temperature difference (boundary temperatures of 293 K and 78 K are typical) and include the full vacuum pressure range. Results are generally reported as effective thermal conductivity (ke) and mean heat flux (q) through the insulation system. The new cryostat instruments provide an effective and reliable way to characterize the thermal performance of materials under sub-ambient conditions. Proven through thousands of tests of hundreds of material systems, they have supported a wide range of aerospace, industry, and research projects. Boiloff testing technology is not just for cryogenic testing but is a cost-effective, field-representative methodology to test any material or system for applications at sub-ambient temperatures. This technology, when adequately coupled with a technical standards basis, can provide a cost-effective, field-representative methodology to test any material or system for applications at sub-ambient to cryogenic temperatures. A growing need for energy efficiency and cryogenic applications is creating a worldwide demand for improved thermal insulation systems for low temperatures. The need for thermal characterization of these systems and materials raises a corresponding need for insulation test standards and thermal data targeted at cryogenic-vacuum applications. Such standards have a strong correlation to energy, transportation, and environment and the advancement of new materials technologies in these areas. In conjunction with this project, two new standards on cryogenic insulation were recently published by ASTM International: C1774 and C740. Following the requirements of NPR 7120.10, Technical Standards for NASA Programs and Projects, the appropriate information in this report can be provided to the NASA Chief Engineer as input for NASA's annual report to NIST, as required by OMB Circular No. A-119, describing NASA's use of voluntary consensus standards and participation in the development of voluntary consensus standards and bodies.
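Editorial note: the two quantities reported from boiloff calorimetry reduce to simple expressions: mean heat flux follows from the boiloff mass rate and the latent heat of the cryogen, and effective thermal conductivity from Fourier's law across the insulation thickness. The Python sketch below applies these with invented test values and an approximate latent heat for liquid nitrogen; it is not the ASTM C1774 procedure.

# Minimal sketch: heat flux and effective thermal conductivity from boiloff data.
H_FG_LN2 = 199e3        # approximate latent heat of vaporization of LN2, J/kg

def heat_flux(boiloff_kg_per_hr, area_m2, h_fg=H_FG_LN2):
    """Mean heat flux q (W/m^2) = mass boiloff rate * latent heat / test area."""
    return (boiloff_kg_per_hr / 3600.0) * h_fg / area_m2

def effective_conductivity(q_w_per_m2, thickness_m, t_warm_k, t_cold_k):
    """Effective thermal conductivity ke (W/m-K) = q * thickness / (T_warm - T_cold)."""
    return q_w_per_m2 * thickness_m / (t_warm_k - t_cold_k)

# Invented flat-plate test: 0.12 kg/hr boiloff over a 0.03 m^2 cold mass,
# 25 mm thick foam, boundary temperatures 293 K and 78 K.
q = heat_flux(0.12, 0.03)
ke = effective_conductivity(q, 0.025, 293.0, 78.0)
print(round(q, 1), "W/m^2", round(ke * 1000, 2), "mW/m-K")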
Conception of a test bench to generate known and controlled conditions of refrigerant mass flow.
Martins, Erick F; Flesch, Carlos A; Flesch, Rodolfo C C; Borges, Maikon R
2011-07-01
Refrigerant compressor performance tests play an important role in the evaluation of the energy characteristics of the compressor, enabling an increase in the quality, reliability, and efficiency of these products. Due to the nonexistence of a refrigerating capacity standard, it is common to use previously conditioned compressors for the intercomparison and evaluation of the temporal drift of compressor performance test panels. However, there are some limitations regarding the use of these specific compressors as standards. This study proposes the development of a refrigerating capacity standard which consists of a mass flow meter and a variable-capacity compressor, whose speed is set based on the mass flow rate measured by the meter. From the results obtained in the tests carried out on a bench specifically developed for this purpose, it was possible to validate the concept of a capacity standard. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
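Editorial note: to illustrate the control concept described above (compressor speed driven by the measured refrigerant mass flow so that a known, controlled flow is maintained), here is a discrete proportional-integral loop in Python with invented gains and a toy first-order plant model. It is a conceptual sketch, not the bench's actual controller.

# Conceptual sketch: hold a refrigerant mass flow setpoint by adjusting compressor speed.
def run_pi_loop(setpoint_kg_h, steps=200, dt=1.0, kp=40.0, ki=10.0):
    speed_rpm = 2000.0          # initial compressor speed (invented)
    flow = 0.0                  # measured mass flow, kg/h
    integral = 0.0
    for _ in range(steps):
        # Toy plant model (invented): flow approaches 0.004 * speed with a first-order lag.
        flow += 0.5 * (0.004 * speed_rpm - flow) * dt
        error = setpoint_kg_h - flow
        integral += error * dt
        speed_rpm = 2000.0 + kp * error + ki * integral
        speed_rpm = min(max(speed_rpm, 1000.0), 4500.0)   # actuator limits
    return flow, speed_rpm

flow, speed = run_pi_loop(12.0)
print(round(flow, 3), "kg/h at", round(speed), "rpm")   # about 12.0 kg/h at roughly 3000 rpm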
Progressing From Initially Ambiguous Functional Analyses: Three Case Examples
Tiger, Jeffrey H.; Fisher, Wayne W.; Toussaint, Karen A.; Kodak, Tiffany
2009-01-01
Most often functional analyses are initiated using a standard set of test conditions, similar to those described by Iwata, Dorsey, Slifer, Bauman, and Richman (1982/1994). These test conditions involve the careful manipulation of motivating operations, discriminative stimuli, and reinforcement contingencies to determine the events related to the occurrence and maintenance of problem behavior. Some individuals display problem behavior that is occasioned and reinforced by idiosyncratic or otherwise unique combinations of environmental antecedents and consequences of behavior, which are unlikely to be detected using these standard assessment conditions. For these individuals, modifications to the standard test conditions or the inclusion of novel test conditions may result in clearer assessment outcomes. The current study provides three case examples of individuals whose functional analyses were initially undifferentiated; however, modifications to the standard conditions resulted in the identification of behavioral functions and the implementation of effective function-based treatments. PMID:19233611
Calmet, D; Ameon, R; Bombard, A; Brun, S; Byrde, F; Chen, J; Duda, J-M; Forte, M; Fournier, M; Fronka, A; Haug, T; Herranz, M; Husain, A; Jerome, S; Jiranek, M; Judge, S; Kim, S B; Kwakman, P; Loyen, J; LLaurado, M; Michel, R; Porterfield, D; Ratsirahonana, A; Richards, A; Rovenska, K; Sanada, T; Schuler, C; Thomas, L; Tokonami, S; Tsapalov, A; Yamada, T
2017-04-01
Radiological protection is a matter of concern for members of the public, and thus national authorities are more likely to trust the quality of radioactivity data provided by accredited laboratories using common standards. A normative approach based on international standards aims to ensure the accuracy and validity of test results through calibrations and measurements traceable to the International System of Units. This approach guarantees that radioactivity test results on the same types of samples are comparable over time and space as well as between different testing laboratories. Today, testing laboratories involved in radioactivity measurement have a set of more than 150 international standards to help them perform their work. Most of them are published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This paper reviews the most essential ISO standards that give guidance to testing laboratories at different stages, from sampling planning to the transmission of the test report to their customers, summarizes recent activities and achievements, and presents perspectives on new standards under development by the ISO Working Groups dealing with radioactivity measurement in connection with radiological protection. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Screening and surveillance. OSHA's medical surveillance provisions.
Papp, E M; Miller, A S
2000-02-01
The OSH Act requires OSHA to include provisions for medical examinations of employees in its standards. However, the specific test and examination criteria are not outlined in the OSH Act. Instead, each standard has specific medical surveillance requirements, which are specific to the adverse health effects triggered by exposure to the hazardous substance. OSHA uses the term medical surveillance to refer to its employee examination and testing provisions. Most occupational health professionals call this activity employee screening and reserve the term surveillance for aggregate analysis of population data. It is important to remember this distinction when referring to OSHA standards. Many standards are challenged in court, resulting in changes to the medical surveillance provisions of the standards. Some court decisions support OSHA's language. In either case, the court often sets precedents for future standards.
Khosropour, Christine M; Broad, Jennifer M; Scholes, Delia; Saint-Johnson, Jacquelyn; Manhart, Lisa E; Golden, Matthew R
2014-11-01
Population-based surveys (self-report) and health insurance administrative data (Healthcare Effectiveness Data and Information Set [HEDIS]) are used to estimate chlamydia screening coverage in the United States. Estimates from these methods differ, but few studies have compared these 2 indices in the same population. In 2010, we surveyed a random sample of women aged 18 to 25 years enrolled in a Washington State-managed care organization. Respondents were asked whether they had been sexually active in the last year and whether they had been tested for chlamydia in that time. We linked survey responses to administrative records of chlamydia testing and reproductive/testing services used, which comprise the HEDIS definition of the screened population and the sexually active population, respectively. We compared self-report and HEDIS using 3 outcomes: (1) sexual activity (gold standard = self-report), (2) any chlamydia screening (no gold standard), and (3) within-plan chlamydia screening (gold standard = HEDIS). Of 954 eligible respondents, 377 (40%) completed the survey and consented to administrative record linkage. Chlamydia screening estimates for HEDIS and self-report were 47% and 53%, respectively. The sensitivity and specificity of HEDIS to define sexually active women were 84.8% (95% confidence interval [CI], 79.6%-89.1%) and 63.5% (95% CI, 52.4%-73.7%), respectively. Forty percent of women had a chlamydia test in their administrative record, but 53% self-reported being tested for chlamydia (κ = 0.35); 19% reported out-of-plan chlamydia testing. The sensitivity of self-reported within-plan chlamydia testing was 71.3% (95% CI, 61.0%-80.1%); the specificity was 80.6% (95% CI, 72.6%-87.2%). The Healthcare Effectiveness Data and Information Set does not accurately identify sexually active women and may underestimate chlamydia testing coverage. Self-reported testing may not be an accurate measure of true chlamydial testing coverage.
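The comparison of self-report against HEDIS-style administrative records reduces to familiar 2x2 classification metrics; the sketch below shows how sensitivity and specificity are computed against a chosen gold standard, using made-up counts rather than the study's linked data.

```python
# Illustrative 2x2 comparison of self-report vs. administrative (HEDIS-style)
# records. Counts below are made up; the paper's own percentages come from a
# linked survey/claims data set.

def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Example: administrative records as the test, self-reported sexual activity
# as the gold standard (as in outcome 1 of the study design).
tp, fn, fp, tn = 240, 43, 27, 47
sens, spec = sensitivity_specificity(tp, fn, fp, tn)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```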
ERIC Educational Resources Information Center
Gorlewski, Julie A., Ed.; Porfilio, Brad J., Ed.; Gorlewski, David A., Ed.
2012-01-01
This book overturns the typical conception of standards, empowering educators by providing concrete examples of how top-down models of assessment can be embraced and used in ways that are consistent with critical pedagogies. Although standards, as broad frameworks for setting learning targets, are not necessarily problematic, when they are…
NASA Astrophysics Data System (ADS)
Dell'Acqua, Fabio; Iannelli, Gianni Cristian; Kerekes, John; Lisini, Gianni; Moser, Gabriele; Ricardi, Niccolo; Pierce, Leland
2016-08-01
The issue of homogeneity in performance assessment of proposed algorithms for information extraction is generally perceived also in the Earth Observation (EO) domain. Different authors propose different datasets to test their algorithms, and it is frequently difficult for the reader to assess which is better for his/her specific application, given the wide variability in test sets, which makes direct comparison of, for example, accuracy values less meaningful than one would desire. With our work, we make a modest contribution to easing the problem by making it possible to automatically distribute a limited set of possible "standard" open datasets, together with some ground truth information, and to automatically assess processing results provided by the users.
Lee, Lawrence; How, Jacques; Tabah, Roger J; Mitmaker, Elliot J
2014-08-01
Novel molecular diagnostics, such as the gene expression classifier (GEC) and gene mutation panel (GMP) testing, may improve the management for thyroid nodules with atypia of undetermined significance (AUS) cytology. The cost-effectiveness of an approach combining both tests in different practice settings in North America is unknown. The aim of the study was to determine the cost-effectiveness of two diagnostic molecular tests, singly or in combination, for AUS thyroid nodules. We constructed a microsimulation model to investigate cost-effectiveness from US (Medicare) and Canadian healthcare system perspectives. Low-risk patients with AUS thyroid nodules were simulated. We examined five management strategies: 1) routine GEC; 2) routine GEC + selective GMP; 3) routine GMP; 4) routine GMP + selective GEC; and 5) standard management. Lifetime costs and quality-adjusted life-years were measured. From the US perspective, the routine GEC + selective GMP strategy was the dominant strategy. From the Canadian perspective, routine GEC + selective GMP cost an additional CAN$24 030 per quality-adjusted life-year gained over standard management, and was dominant over the other strategies. Sensitivity analyses showed that the decisions from both perspectives were sensitive to variations in the probability of malignancy in the nodule and the costs of the GEC and GMP. The probability of cost-effectiveness for routine GEC + selective GMP was low. In the US setting, the most cost-effective strategy was routine GEC + selective GMP. In the Canadian setting, standard management was most likely to be cost effective. The cost of these molecular diagnostics will need to be reduced to increase their cost-effectiveness for practice settings outside the United States.
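Cost-effectiveness comparisons such as these rest on incremental cost-effectiveness ratios (incremental cost per QALY gained) and dominance checks; the sketch below illustrates that calculation with hypothetical strategy costs and QALYs, not outputs of the authors' microsimulation model.

```python
# Hypothetical strategies: (name, lifetime cost, quality-adjusted life-years).
strategies = [
    ("standard management", 12000.0, 17.10),
    ("routine GEC + selective GMP", 13500.0, 17.16),
]

def icer(reference, comparator):
    """Incremental cost per QALY gained of `comparator` over `reference`.
    Returns None if the comparator is dominated (costlier and no more effective)."""
    (_, c0, q0), (_, c1, q1) = reference, comparator
    if q1 <= q0 and c1 >= c0:
        return None                      # dominated strategy
    return (c1 - c0) / (q1 - q0)

print(icer(strategies[0], strategies[1]))   # incremental cost per QALY gained
```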
Computing tools for implementing standards for single-case designs.
Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E
2015-11-01
In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were found to be accurate. When missing data were present, most tools either gave an error message or conducted the analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.
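As one concrete example of the kind of single-case effect-size calculation such tools automate, the sketch below computes the nonoverlap of all pairs (NAP) statistic; NAP is a commonly used SCD effect size, though not necessarily one of the specific calculations the reviewed tools or the WWC standards require, and the session data are invented.

```python
from itertools import product

def nonoverlap_of_all_pairs(baseline, treatment):
    """NAP effect size: share of all (baseline, treatment) session pairs in
    which the treatment value exceeds the baseline value (ties count half)."""
    pairs = list(product(baseline, treatment))
    wins = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return wins / len(pairs)

# Example single-case data: correct responses per session (higher is better).
baseline = [2, 3, 2, 4, 3]
treatment = [6, 7, 5, 8, 7]
print(nonoverlap_of_all_pairs(baseline, treatment))   # 1.0 = complete nonoverlap
```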
Standardization of motion sickness induced by left-right and up-down reversing prisms
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Vanderploeg, J. M.; Brumley, E. A.; Kolafa, J. J.; Wood, S. J.
1990-01-01
Reversing prisms are known to produce symptoms of motion sickness and have been used to provide a chronic stimulus for training subjects in symptom recognition and regulation. However, testing procedures with reversing prisms have not been standardized. A set of procedures that could be standardized was evaluated, using prisms for provocation, and results were compared between Right/Left Reversing Prisms (R/L-RP) and Up/Down Reversing Prisms (U/D-RP). Fifteen subjects were tested with both types of prisms using a self-paced walking course throughout the laboratory, with work stations established at specified intervals. The work stations provided tasks requiring eye-hand-foot coordination and various head movements. Comparisons were also made between these prism tests and two other standardized susceptibility tests, the KC-135 parabolic static chair test and the Staircase Velocity Motion Test (SVMT). Two different types of subjective symptom reports were compared. The R/L-RP were significantly more provocative than the U/D-RP. The incidence of motion sickness symptoms for the R/L-RP was similar to that for the KC-135 parabolic static chair test. Poor correlations were found between the prism tests and the other standardized susceptibility tests, which might indicate that different mechanisms are involved in provoking motion sickness in these different tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Nan; Khanna, Nina Zheng; Fridley, David
Over the last twenty years, with growing policy emphasis on improving energy efficiency and reducing environmental pollution and carbon emissions, China has implemented a series of new minimum energy performance standards (MEPS) and mandatory and voluntary energy labels to improve appliance energy efficiency. As China begins planning for the next phase of standards and labeling (S&L) program development under the 12th Five Year Plan, an evaluation of recent program developments and future directions is needed to identify gaps that still exist when compared with international best practices. The review of China's S&L program development and implementation in comparison with major findings from international experience reveals that there are still areas for improvement, particularly when compared with success factors observed across leading international S&L programs. China currently lacks a formalized regulatory process for standard-setting and does not have any legal or regulatory guidance on elements of S&L development such as stakeholder participation or the issue of legal precedence between conflicting national, industrial and local standards. Consequently, China's laws regarding standard-setting and management of the mandatory energy label program could be updated, as they have not been amended or revised recently and no longer reflect the current situation. While China uses principles similar to those of the U.S., Australia, the EU and Japan for choosing target products, including high energy consumption, a mature industry and testing procedure, and stakeholder support, recent MEPS revisions have generally aimed at eliminating only the least efficient 20% of the market. Setting a firm principle based on maximizing energy savings that are technically feasible and economically justified may help improve the stringency of China's MEPS program and reduce the need for frequent revisions. China also lacks robust survey data and relies primarily on market research data in relatively simple techno-economic analyses used to determine its efficiency standard levels, rather than the specific sets of analyses and tools used internationally. Based on international experience, inclusion of more detailed energy consumption surveys in the Chinese national census surveys and statistical reporting systems could help provide the necessary data for more comprehensive standard-setting analyses. Stakeholder participation in the standards development process in China is limited to membership on technical committees responsible for developing or revising standards and generally does not include environmental groups, consumer associations, utilities and other NGOs. Extending stakeholder involvement to broader interest groups could help garner more support and feedback in the S&L implementation process. China has emerged as a leader in a national verification testing scheme with complementary pilot check-testing projects, but it still faces challenges with insufficient funding, low awareness amongst some local regulatory agencies, resistance to check-testing by some manufacturers, limited product sampling scope, and testing inconsistency and incomparability of results. Thus, further financial and staff resources and capacity building will be needed to overcome these remaining challenges and to expand impact evaluations to assess the actual effectiveness of implementation and enforcement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics when estimating from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
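The estimation idea described above turns a goodness-of-fit statistic into an estimator by maximizing it over an unknown parameter; the sketch below applies that idea to the three-parameter lognormal by choosing the threshold that maximizes the Shapiro-Wilk W of the log-shifted data. SciPy is assumed available, and the grid search is an illustrative stand-in for the paper's actual procedure.

```python
import numpy as np
from scipy import stats

def fit_lognormal3_by_shapiro(x, n_grid=400):
    """Estimate (threshold, mu, sigma) of a three-parameter lognormal by
    choosing the threshold that maximizes the Shapiro-Wilk W statistic of
    log(x - threshold), then taking mu, sigma from the transformed sample."""
    x = np.sort(np.asarray(x, dtype=float))
    # Candidate thresholds must lie strictly below the sample minimum.
    lo = x[0] - (x[-1] - x[0])
    hi = x[0] - 1e-6 * (x[-1] - x[0] + 1.0)
    best_w, best_gamma = -np.inf, lo
    for gamma in np.linspace(lo, hi, n_grid):
        w, _ = stats.shapiro(np.log(x - gamma))
        if w > best_w:
            best_w, best_gamma = w, gamma
    z = np.log(x - best_gamma)
    return best_gamma, z.mean(), z.std(ddof=1)

rng = np.random.default_rng(0)
sample = 5.0 + rng.lognormal(mean=1.0, sigma=0.5, size=100)   # true threshold 5
print(fit_lognormal3_by_shapiro(sample))
```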
Rapid Syphilis Testing Is Cost-Effective Even in Low-Prevalence Settings: The CISNE-PERU Experience.
Mallma, Patricia; Garcia, Patricia; Carcamo, Cesar; Torres-Rueda, Sergio; Peeling, Rosanna; Mabey, David; Terris-Prestholt, Fern
2016-01-01
Studies have addressed cost-effectiveness of syphilis testing of pregnant women in high-prevalence settings. This study compares costs of rapid syphilis testing (RST) with laboratory-based rapid plasma reagin (RPR) tests in low-prevalence settings in Peru. The RST was introduced in a tertiary-level maternity hospital and in the Ventanilla Network of primary health centers, where syphilis prevalence is approximately 1%. The costs per woman tested and treated with RST at the hospital were $2.70 and $369 respectively compared with $3.60 and $740 for RPR. For the Ventanilla Network the costs per woman tested and treated with RST were $3.19 and $295 respectively compared with $5.55 and $1454 for RPR. The cost per DALY averted using RST was $46 vs. $109 for RPR. RST showed lower costs compared to the WHO standard costs per DALY ($64). Findings suggest syphilis screening with RST is cost-effective in low-prevalence settings.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-13
... program whereby the National Coordinator would authorize organizations to test and certify Complete EHRs... Certification Bodies (ONC-ATCBs)) to test and certify Complete EHRs and/or EHR Modules to the certification... Coordinator to test and certify Complete EHRs and/or EHR Modules, it will be subject, depending on the scope...
Performance testing of radiobioassay laboratories: In vivo measurements, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacLellan, J.A.; Traub, R.J.; Olsen, P.C.
1990-04-01
A study of two rounds of in vivo laboratory performance testing was undertaken by Pacific Northwest Laboratory (PNL) to determine the appropriateness of the in vivo performance criteria of draft American National Standards Institute (ANSI) standard ANSI N13.3, "Performance Criteria for Bioassay." The draft standard provides guidance to in vivo counting facilities regarding the sensitivity, precision, and accuracy of measurements for certain categories of commonly assayed radionuclides and critical regions of the body. This report concludes the testing program by presenting the results of the Round Two testing. Testing involved two types of measurements: chest counting for radionuclide detection in the lung, and whole body counting for detection of uniformly distributed material. Each type of measurement was further divided into radionuclide categories as defined in the draft standard. The appropriateness of the draft standard criteria was judged by measuring laboratories' ability to attain them in both Round One and Round Two testing. The testing determined that the performance criteria are set at attainable levels, and the majority of in vivo monitoring facilities passed the criteria when complete results were submitted. 18 refs., 18 figs., 15 tabs.
Jørstad, Melissa Davidsen; Marijani, Msafiri; Dyrhol-Riise, Anne Ma; Sviland, Lisbet; Mustafa, Tehmina
2018-01-01
Extrapulmonary tuberculosis (EPTB) is a diagnostic challenge. An immunochemistry-based MPT64 antigen detection test (MPT64 test) has reported higher sensitivity in the diagnosis of EPTB compared with conventional methods. The objective of this study was to implement and evaluate the MPT64 test in routine diagnostics in a low-resource setting. Patients with presumptive EPTB were prospectively enrolled at Mnazi Mmoja Hospital, Zanzibar, and followed to the end of treatment. Specimens collected were subjected to routine diagnostics, GeneXpert® MTB/RIF assay and the MPT64 test. The performance of the MPT64 test was assessed using a composite reference standard, defining the patients as tuberculosis (TB) cases or non-TB cases. Patients (n = 132) were classified as confirmed TB (n = 12), probable TB (n = 34), possible TB (n = 18), non-TB (n = 62) and uncategorized (n = 6) cases. Overall, in comparison to the composite reference standard for diagnosis, the sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the MPT64 test was 69%, 95%, 94%, 75% and 82%, respectively. The MPT64 test performance was best in TB lymphadenitis cases (n = 67, sensitivity 79%, specificity 97%) and in paediatric TB (n = 41, sensitivity 100%, specificity 96%). We show that the MPT64 test can be implemented in routine diagnostics in a low-resource setting and improves the diagnosis of EPTB, especially in TB lymphadenitis and in children.
Cornwell, Andrew S.; Liao, James Y.; Bryden, Anne M.; Kirsch, Robert F.
2013-01-01
We have developed a set of upper extremity functional tasks to guide the design and test the performance of rehabilitation technologies that restore arm motion in people with high tetraplegia. Our goal was to develop a short set of tasks that would be representative of a much larger set of activities of daily living while also being feasible for a unilateral user of an implanted Functional Electrical Stimulation (FES) system. To compile this list of tasks, we reviewed existing clinical outcome measures related to arm and hand function, and were further informed by surveys of patient desires. We ultimately selected a set of five tasks that captured the most common components of movement seen in these tasks, making them highly relevant for assessing FES-restored unilateral arm function in individuals with high cervical spinal cord injury (SCI). The tasks are intended to be used when setting design specifications and for evaluation and standardization of rehabilitation technologies under development. While not unique, this set of tasks will provide a common basis for comparing different interventions (e.g., FES, powered orthoses, robotic assistants) and testing different user command interfaces (e.g., sip-and-puff, head joysticks, brain-computer interfaces). PMID:22773199
Prototype ultrasonic instrument for quantitative testing
NASA Technical Reports Server (NTRS)
Lynnworth, L. C.; Dubois, J. L.; Kranz, P. R.
1972-01-01
A prototype ultrasonic instrument has been designed and developed for quantitative testing. The complete delivered instrument consists of a pulser/receiver which plugs into a standard oscilloscope, an rf power amplifier, a standard decade oscillator, and a set of broadband transducers for typical use at 1, 2, 5 and 10 MHz. The system provides for its own calibration, and on the oscilloscope, presents a quantitative (digital) indication of time base and sensitivity scale factors and some measurement data.
NASA Technical Reports Server (NTRS)
Theologus, G. C.; Wheaton, G. R.; Mirabella, A.; Brahlek, R. E.
1973-01-01
A set of 36 relatively independent categories of human performance was identified. These categories encompass human performance in the cognitive, perceptual, and psychomotor areas, and include diagnostic measures and sensitive performance metrics. A prototype standardized test battery was then constructed, and research was conducted to obtain information on the sensitivity of the tests to stress, the sensitivity of selected categories of performance degradation, the time course of stress effects on each of the selected tests, and the learning curves associated with each test. A research project utilizing a three-factor partially repeated analysis of covariance design was conducted in which 60 male subjects were exposed to variations in noise level and quality during performance testing. Effects of randomly intermittent noise on performance of the reaction time tests were observed, but most of the other performance tests showed consistent stability. The results of 14 analyses of covariance of the data taken from the performance of the 60 subjects on the prototype standardized test battery provided information that will enable the final development and testing of a standardized test battery and the associated development of differential sensitivity metrics and a diagnostic classificatory system.
Frequency Spectrum Neutrality Tests: One for All and All for One
Achaz, Guillaume
2009-01-01
Neutrality tests based on the frequency spectrum (e.g., Tajima's D or Fu and Li's F) are commonly used by population geneticists as routine tests to assess the goodness-of-fit of the standard neutral model on their data sets. Here, I show that these neutrality tests are specific instances of a general model that encompasses them all. I illustrate how this general framework can be taken advantage of to devise new more powerful tests that better detect deviations from the standard model. Finally, I exemplify the usefulness of the framework on SNP data by showing how it supports the selection hypothesis in the lactase human gene by overcoming the ascertainment bias. The framework presented here paves the way for constructing novel tests optimized for specific violations of the standard model that ultimately will help to unravel scenarios of evolution. PMID:19546320
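For readers unfamiliar with the individual statistics the general framework unifies, the sketch below computes the classic Tajima's D from a small set of binary haplotypes using the standard 1989 constants; it is one specific instance of the frequency-spectrum tests discussed, not the general framework itself.

```python
import itertools
import math

def tajimas_d(haplotypes):
    """Tajima's D from a list of equal-length 0/1 haplotype strings."""
    n = len(haplotypes)
    length = len(haplotypes[0])
    # Segregating sites and mean pairwise differences.
    seg = [j for j in range(length) if len({h[j] for h in haplotypes}) > 1]
    s = len(seg)
    if s == 0:
        return 0.0
    pi = sum(sum(a[j] != b[j] for j in seg)
             for a, b in itertools.combinations(haplotypes, 2))
    pi /= n * (n - 1) / 2
    # Standard constants (Tajima 1989).
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - s / a1) / math.sqrt(e1 * s + e2 * s * (s - 1))

print(tajimas_d(["00110", "01100", "00100", "10100"]))
```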
Abraham, John; Reed, Tim
2002-06-01
This paper examines international standard-setting in the toxicology of pharmaceuticals during the 1990s, which has involved both the pharmaceutical industry and regulatory agencies in an organization known as the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). The analysis shows that the relationships between innovation, regulatory science and 'progress' may be more complex and controversial than is often assumed. An assessment of the ICH's claims about the implications of 'technical' harmonization of drug-testing standards for the maintenance of drug safety, via toxicological testing, and the delivery of therapeutic progress, via innovation, is presented. By demonstrating that there is not a technoscientific validity for these claims, it is argued that, within the ICH, a discourse of technological innovation and scientific progress has been used by regulatory agencies and prominent parts of the transnational pharmaceutical industry to legitimize the lowering and loosening of toxicological standards for drug testing. The mobilization and acceptance of this discourse are shown to be pivotal to the ICH's transformation of reductions in safety standards, which are apparently against the interests of patients and public health, into supposed therapeutic benefits derived from promises of greater access to more innovative drug products. The evidence suggests that it is highly implausible that these reductions in the standards of regulatory toxicology are consistent with therapeutic progress for patients, and highlights a worrying aspect embedded in the 'technical trajectories' of regulatory science.
Inter-rater reliability of three standardized functional tests in patients with low back pain
Tidstrand, Johan; Horneij, Eva
2009-01-01
Background Of all patients with low back pain, 85% are diagnosed as having "non-specific lumbar pain". Lumbar instability has been described as one specific diagnosis, which several authors have associated with delayed muscular responses, impaired postural control and impaired muscular coordination among these patients. This has mostly been measured and evaluated in a laboratory setting. There are few standardized and evaluated functional tests examining functional muscular coordination that are also applicable in the non-laboratory setting. In ordinary clinical work, tests of functional muscular coordination should be easy to apply. The aim of the present study was therefore to standardize and examine the inter-rater reliability of three functional tests of muscular coordination of the lumbar spine in patients with low back pain. Methods Nineteen consecutive individuals, ten men and nine women, were included (mean age 42 years, SD ± 12 years). Two independent examiners assessed three tests on the same occasion: "single limb stance", "sitting on a Bobath ball with one leg lifted" and "unilateral pelvic lift". The standardization procedure took altered positions of the spine or pelvis and compensatory movements of the free extremities into account. Inter-rater reliability was analyzed by Cohen's kappa coefficient (κ) and by percentage agreement. Results The inter-rater reliability for the right and the left leg, respectively, was: for the single limb stance, very good (κ: 0.88–1.0); for sitting on a Bobath ball, good (κ: 0.79) and very good (κ: 0.88); and for the unilateral pelvic lift, good (κ: 0.61) and moderate (κ: 0.47). Conclusion The present study showed good to very good inter-rater reliability for two standardized tests, that is, the single-limb stance and sitting on a Bobath ball with one leg lifted. Inter-rater reliability for the unilateral pelvic lift test was moderate to good. Validation of the tests' ability to evaluate lumbar stability is required. PMID:19490644
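Agreement in the study above is summarized by percentage agreement and Cohen's kappa; the sketch below computes both from two raters' categorical scores, with made-up pass/fail ratings standing in for the study data.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of cases on which the two raters give the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (po - pe) / (1 - pe)

# Example: two examiners scoring 19 subjects as pass (P) or fail (F).
rater_a = list("PPFPPPFPPFPPPPFPPPF")
rater_b = list("PPFPPPPPPFPPPPFPFPF")
print(percent_agreement(rater_a, rater_b), cohens_kappa(rater_a, rater_b))
```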
Approximate Dynamic Programming and Aerial Refueling
2007-06-01
... by two Army Air Corps de Havilland DH-4Bs (9). While crude by modern standards, the passing of hoses between planes is effectively the same approach used today. ... [List-of-figures residue: "Total Cost, Stochastically Trained Simulations versus Deterministically Trained Simulations, incorporating stochastic data sets."] To create meaningful results when testing stochastic data, the data sets are averaged so that conclusions are not ...
Polonchuk, Liudmila
2012-01-01
The Patchliner® temperature-controlled automated patch clamp system was evaluated for testing drug effects on potassium currents through human ether-à-go-go related gene (hERG) channels expressed in Chinese hamster ovary cells at 35–37°C. IC50 values for a set of reference drugs were compared with those obtained using the conventional voltage clamp technique. The results showed good correlation between the data obtained using automated and conventional electrophysiology. Based on these results, the Patchliner® represents an innovative automated electrophysiology platform for conducting the hERG assay that substantially increases throughput and has the advantage of operating at physiological temperature. It allows fast, accurate, and direct assessment of channel function to identify potential proarrhythmic side effects and sets a new standard in ion channel research for drug safety testing. PMID:22303293
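IC50 values of the kind compared above are conventionally obtained by fitting a concentration-response (Hill) curve to normalized current; the sketch below does this with SciPy on synthetic data, and the model form is the common convention rather than anything specified by the Patchliner documentation.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n_hill):
    """Fraction of hERG current remaining at drug concentration `conc`."""
    return 1.0 / (1.0 + (conc / ic50) ** n_hill)

# Synthetic concentration-response data (uM); noise stands in for assay scatter.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
rng = np.random.default_rng(1)
resp = hill(conc, ic50=0.5, n_hill=1.0) + rng.normal(0, 0.02, conc.size)

(ic50_fit, n_fit), _ = curve_fit(hill, conc, resp, p0=[1.0, 1.0])
print(f"IC50 ~ {ic50_fit:.2f} uM, Hill coefficient ~ {n_fit:.2f}")
```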
Aronson, Jeffrey K
2016-01-01
Objective To examine how misspellings of drug names could impede searches for published literature. Design Database review. Data source PubMed. Review methods The study included 30 drug names that are commonly misspelt on prescription charts in hospitals in Birmingham, UK (test set), and 30 control names randomly chosen from a hospital formulary (control set). The following definitions were used: standard names—the international non-proprietary names, variant names—deviations in spelling from standard names that are not themselves standard names in English language nomenclature, and hidden reference variants—variant spellings that identified publications in textword (tw) searches of PubMed or other databases, and which were not identified by textword searches for the standard names. Variant names were generated from standard names by applying letter substitutions, omissions, additions, transpositions, duplications, deduplications, and combinations of these. Searches were carried out in PubMed (30 June 2016) for “standard name[tw]” and “variant name[tw] NOT standard name[tw].” Results The 30 standard names of drugs in the test set gave 325 979 hits in total, and 160 hidden reference variants gave 3872 hits (1.17%). The standard names of the control set gave 470 064 hits, and 79 hidden reference variants gave 766 hits (0.16%). Letter substitutions (particularly i to y and vice versa) and omissions together accounted for 2924 (74%) of the variants. Amitriptyline (8530 hits) yielded 18 hidden reference variants (179 (2.1%) hits). Names ending in “in,” “ine,” or “micin” were commonly misspelt. Failing to search for hidden reference variants of “gentamicin,” “amitriptyline,” “mirtazapine,” and “trazodone” would miss at least 19 systematic reviews. A hidden reference variant related to Christmas, “No-el”, was rare; variants of “X-miss” were rarer. Conclusion When performing searches, researchers should include misspellings of drug names among their search terms. PMID:27974346
Ferner, Robin E; Aronson, Jeffrey K
2016-12-14
To examine how misspellings of drug names could impede searches for published literature. Database review. PubMed. The study included 30 drug names that are commonly misspelt on prescription charts in hospitals in Birmingham, UK (test set), and 30 control names randomly chosen from a hospital formulary (control set). The following definitions were used: standard names-the international non-proprietary names, variant names-deviations in spelling from standard names that are not themselves standard names in English language nomenclature, and hidden reference variants-variant spellings that identified publications in textword (tw) searches of PubMed or other databases, and which were not identified by textword searches for the standard names. Variant names were generated from standard names by applying letter substitutions, omissions, additions, transpositions, duplications, deduplications, and combinations of these. Searches were carried out in PubMed (30 June 2016) for "standard name[tw]" and "variant name[tw] NOT standard name[tw]." The 30 standard names of drugs in the test set gave 325 979 hits in total, and 160 hidden reference variants gave 3872 hits (1.17%). The standard names of the control set gave 470 064 hits, and 79 hidden reference variants gave 766 hits (0.16%). Letter substitutions (particularly i to y and vice versa) and omissions together accounted for 2924 (74%) of the variants. Amitriptyline (8530 hits) yielded 18 hidden reference variants (179 (2.1%) hits). Names ending in "in," "ine," or "micin" were commonly misspelt. Failing to search for hidden reference variants of "gentamicin," "amitriptyline," "mirtazapine," and "trazodone" would miss at least 19 systematic reviews. A hidden reference variant related to Christmas, "No-el", was rare; variants of "X-miss" were rarer. When performing searches, researchers should include misspellings of drug names among their search terms. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
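The variant spellings searched in the two records above were generated by letter substitutions, omissions, additions, transpositions, duplications, and deduplications; the sketch below implements two of those operations (the i/y substitution and single-letter omission) and assembles the corresponding "variant[tw] NOT standard[tw]" queries. It is an illustration of the idea, not the authors' generation script.

```python
def iy_substitutions(name):
    """Swap i<->y one position at a time (the commonest substitution reported)."""
    swaps = {"i": "y", "y": "i"}
    return {name[:k] + swaps[c] + name[k + 1:]
            for k, c in enumerate(name) if c in swaps}

def single_omissions(name):
    """Drop one letter at a time."""
    return {name[:k] + name[k + 1:] for k in range(len(name))}

def variant_queries(standard_name):
    variants = iy_substitutions(standard_name) | single_omissions(standard_name)
    variants.discard(standard_name)
    return [f'"{v}"[tw] NOT "{standard_name}"[tw]' for v in sorted(variants)]

for query in variant_queries("amitriptyline")[:5]:
    print(query)
```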
[The requirements of standard and conditions of interchangeability of medical articles].
Men'shikov, V V; Lukicheva, T I
2013-11-01
The article deals with the possibility of applying specific approaches to evaluating the interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating the requirements of standards addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under particular conditions of transportation and storage. The validity of requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data from the evaluation of testing systems for measurement of the level of thyrotrophic hormone, thyroid hormones and glycated hemoglobin Hb A1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent sets to apply certified reference materials in developing the manufacture of sets for a large list of analytes.
34 CFR 462.11 - What must an application contain?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the methodology and procedures used to measure the reliability of the test. (h) Construct validity... previous test, and results from validity, reliability, and equating or standard-setting studies undertaken... NRS educational functioning levels (content validity). Documentation of the extent to which the items...
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
New Mexico Standards Based Assessment (NMSBA) Technical Report: 2006 Spring Administration
ERIC Educational Resources Information Center
Griph, Gerald W.
2006-01-01
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview of and technical characteristics of the 2006 NMSBA. The 2006 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Calibration, scaling, and equating procedures; (4) Standard setting;…
40 CFR 89.6 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
... set forth the material that has been incorporated by reference in this part. (1) ASTM material. The... 19428-2959. Document number and name 40 CFR part 89 reference ASTM D86-97: “Standard Test Method for Distillation of Petroleum Products at Atmospheric Pressure” Appendix A to Subpart D. ASTM D93-97: “Standard...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
Summative and Formative Assessments in Mathematics Supporting the Goals of the Common Core Standards
ERIC Educational Resources Information Center
Schoenfeld, Alan H.
2015-01-01
Being proficient in mathematics involves having rich and connected mathematical knowledge, being a strategic and reflective thinker and problem solver, and having productive mathematical beliefs and dispositions. This broad set of mathematics goals is central to the Common Core State Standards for Mathematics. High-stakes testing often drives…
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
How Principals and Teachers Respond to States' Accountability Systems
ERIC Educational Resources Information Center
Lee, Hyemi
2013-01-01
Since the 1990s, many states have started implementing standards-based reforms and developed their own accountability systems. Each state established academic content and performance standards, implemented tests for all students in grades 3 through 8 annually, and set annual measurable objectives in reading and mathematics for districts,…
Test of a Power Transfer Model for Standardized Electrofishing
Miranda, L.E.; Dolan, C.R.
2003-01-01
Standardization of electrofishing in waters with differing conductivities is critical when monitoring temporal and spatial differences in fish assemblages. We tested a model that can help improve the consistency of electrofishing by allowing control over the amount of power that is transferred to the fish. The primary objective was to verify, under controlled laboratory conditions, whether the model adequately described fish immobilization responses elicited with various electrical settings over a range of water conductivities. We found that the model accurately described empirical observations over conductivities ranging from 12 to 1,030 µS/cm for DC and various pulsed-DC settings. Because the model requires knowledge of a fish's effective conductivity, an attribute that is likely to vary according to species, size, temperature, and other variables, a second objective was to gather available estimates of the effective conductivity of fish to examine the magnitude of variation and to assess whether in practical applications a standard effective conductivity value for fish may be assumed. We found that applying a standard fish effective conductivity of 115 µS/cm introduced relatively little error into the estimation of the peak power density required to immobilize fish with electrofishing. However, this standard was derived from few estimates of fish effective conductivity and a limited number of species; more estimates are needed to validate our working standard.
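The power transfer model standardizes electrofishing by adjusting the power applied to the water so that the power density reaching the fish stays roughly constant as water conductivity changes; the sketch below assumes the impedance-matching style mismatch factor commonly associated with such models and the working fish effective conductivity of 115 µS/cm mentioned above. It is illustrative, not the authors' exact formulation.

```python
def mismatch_factor(cond_water_us, cond_fish_us=115.0):
    """Fraction of the water power density transferred to the fish under a
    matching-style power transfer model; it peaks at 1 when the fish and
    water conductivities are equal (an assumed functional form)."""
    r = cond_fish_us / cond_water_us
    return 4.0 * r / (1.0 + r) ** 2

def required_water_power_density(target_fish_power, cond_water_us, cond_fish_us=115.0):
    """Peak power density to apply in the water (same units as the target) so
    that the fish receives `target_fish_power`, at conductivity in uS/cm."""
    return target_fish_power / mismatch_factor(cond_water_us, cond_fish_us)

# Standardized settings across the conductivity range studied (12-1030 uS/cm),
# for an arbitrary target of 60 units of in-fish power density.
for cw in (12, 115, 500, 1030):
    print(cw, round(required_water_power_density(60.0, cw), 1))
```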
40 CFR 86.1728-99 - Compliance with emission standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... part to test for irregular data from a durability-data set. If any data point is identified as a... apply both the outlier procedure and averaging to the same data set, the outlier procedure shall be... shall be determined from the exhaust emission results of the durability-data vehicle(s) for each engine...
40 CFR 86.1728-99 - Compliance with emission standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... part to test for irregular data from a durability-data set. If any data point is identified as a... apply both the outlier procedure and averaging to the same data set, the outlier procedure shall be... shall be determined from the exhaust emission results of the durability-data vehicle(s) for each engine...
40 CFR 86.1728-99 - Compliance with emission standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... part to test for irregular data from a durability-data set. If any data point is identified as a... apply both the outlier procedure and averaging to the same data set, the outlier procedure shall be... shall be determined from the exhaust emission results of the durability-data vehicle(s) for each engine...
Systematic Observation of Early Adolescents in Educational Settings: The Good, the Bad, and the Ugly
ERIC Educational Resources Information Center
Gregory, Anne; Mikami, Amori Yee
2015-01-01
The growing use of systematic, empirically tested observational frameworks in school-based research is crucial for increasing the replicability and generalizability of findings across settings. That said, observations are often mistakenly assumed to be the "gold standard" assessment, without more nuanced discussions about the best uses…
Informing Instruction of Students with Autism in Public School Settings
ERIC Educational Resources Information Center
Kuo, Nai-Cheng
2016-01-01
The number of applied behavior analysis (ABA) classrooms for students with autism is increasing in K-12 public schools. To inform instruction of students with autism in public school settings, this study examined the relation between performance on mastery learning assessments and standardized achievement tests for students with autism spectrum…
Understanding pyrotechnic shock dynamics and response attenuation over distance
NASA Astrophysics Data System (ADS)
Ott, Richard J.
Pyrotechnic shock events used during stage separation on rocket vehicles produce high-amplitude, short-duration structural response that can lead to malfunction or degradation of electronic components, cracks and fractures in brittle materials, and local plastic deformation, and can cause materials to experience accelerated fatigue life. These transient loads propagate as waves through the structural media, losing energy as they travel outward from the source. This work assessed available test data in an effort to better understand attenuation characteristics associated with wave propagation and attempted to update a historical standard defined by the Martin Marietta Corporation in the late 1960s using data acquisition systems that are out of date by today's standards. Two data sets were available for consideration. The first data set came from a test that used a flight-like cylinder from NASA's Ares I-X program, and the second from a test conducted with a flat plate. Both data sets suggested that the historical standard was not a conservative estimate of shock attenuation with distance; however, the variation in the test data did not support recommending an update to the standard. Beyond considering attenuation with distance, an effort was made to model the flat-plate configuration using finite element analysis. The available flat-plate data consisted of three groups of tests, each with a unique charge density of linear shaped charge (LSC) used to cut an aluminum plate. The model was tuned to a representative test using the lowest charge density LSC as input. The correlated model was then used to predict the other two cases by linearly scaling the input load based on the relative difference in charge density. The resulting model predictions were then compared with available empirical data. Aside from differences in amplitude due to nonlinearities associated with scaling the charge density of the LSC, the model predictions matched the available test data reasonably well. Finally, modeling best practices were recommended for using industry-standard software to predict shock response on structures. As part of the best practices documented, a frequency-dependent damping schedule is provided that can be used in model development when no data are available.
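Comparing measured response against a distance-attenuation standard is essentially a curve fit of peak response versus distance from the source; the sketch below fits a simple exponential attenuation model to invented data points, which stand in for neither the Martin Marietta curves nor the cylinder and flat-plate data sets described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(d_in, a0, k):
    """Peak shock response at distance d (inches) under an exponential decay model."""
    return a0 * np.exp(-k * d_in)

# Invented peak SRS accelerations (g) versus distance from the separation source.
distance = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
peak_g = np.array([9500.0, 6200.0, 3000.0, 900.0, 350.0])

(a0_fit, k_fit), _ = curve_fit(attenuation, distance, peak_g, p0=[10000.0, 0.05])
print(f"fitted source level ~ {a0_fit:.0f} g, decay rate ~ {k_fit:.3f} per inch")
```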
Standard Specimen Reference Set: Pancreatic — EDRN Public Portal
The primary objective of the EDRN Pancreatic Cancer Working Group Proposal is to create a reference set consisting of well-characterized serum/plasma specimens to use as a resource for the development of biomarkers for the early detection of pancreatic adenocarcinoma. The testing of biomarkers on the same sample set permits direct comparison among them, thereby allowing the development of a biomarker panel that can be evaluated in a future validation study. Additionally, the establishment of an infrastructure with core data elements and standardized operating procedures for specimen collection, processing and storage will provide the necessary preparatory platform for larger validation studies when the appropriate marker/panel for pancreatic adenocarcinoma has been identified.
Generation new MP3 data set after compression
NASA Astrophysics Data System (ADS)
Atoum, Mohammed Salem; Almahameed, Mohammad
2016-02-01
The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any intentional or unintentional degradation of the secret message (robustness). Crucial to this is the use of digital audio files such as MP3, which come at different compression rates; research studies have shown that performing steganography on the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers could not implement and test their algorithms because no standard data set of MP3 files after compression had been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.
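Generating the data set described above amounts to re-encoding reference audio at several compression rates; the sketch below shells out to the LAME encoder (assumed to be installed and on the PATH) to build MP3 files at a few constant bitrates from a folder of WAV sources. The folder names and bitrate list are arbitrary choices, not the paper's specification.

```python
import subprocess
from pathlib import Path

BITRATES_KBPS = [64, 128, 192, 320]          # different compression ratios

def build_mp3_dataset(wav_dir="wav_sources", out_dir="mp3_dataset"):
    """Encode every .wav file at each constant bitrate with LAME."""
    out = Path(out_dir)
    for wav in sorted(Path(wav_dir).glob("*.wav")):
        for kbps in BITRATES_KBPS:
            target = out / f"{kbps}kbps"
            target.mkdir(parents=True, exist_ok=True)
            subprocess.run(
                ["lame", "-b", str(kbps), str(wav), str(target / (wav.stem + ".mp3"))],
                check=True,
            )

if __name__ == "__main__":
    build_mp3_dataset()
```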
Feasibility of an appliance energy testing and labeling program for Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biermayer, Peter; Busch, John; Hakim, Sajid
2000-04-01
A feasibility study evaluated the costs and benefits of establishing a program for testing, labeling and setting minimum efficiency standards for appliances and lighting in Sri Lanka. The feasibility study included: refrigerators, air-conditioners, fluorescent lighting (ballasts & CFLs), ceiling fans, motors, and televisions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... EXHAUST EMISSION REQUIREMENTS FOR TURBINE ENGINE POWERED AIRPLANES Test Procedures for Engine Smoke Emissions (Aircraft Gas Turbine Engines) § 34.80 Introduction. Except as provided under § 34.5, the... of new and in-use gas turbine engines with the applicable standards set forth in this part. The test...
40 CFR 1065.512 - Duty cycle generation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Duty cycle generation. 1065.512... CONTROLS ENGINE-TESTING PROCEDURES Performing an Emission Test Over Specified Duty Cycles § 1065.512 Duty cycle generation. (a) Generate duty cycles according to this section if the standard-setting part...
Clinical applications of breath testing
Paschke, Kelly M; Mashir, Alquam
2010-01-01
Breath testing has the potential to benefit the medical field as a cost-effective, non-invasive diagnostic tool for diseases of the lung and beyond. With growing evidence of clinical worth, standardization of methods, and new sensor and detection technologies, the stage is set for breath testing to gain considerable attention and wider application in upcoming years. PMID:21173863
Explaining the Gap in Black-White Scores on IQ and College Admission Tests.
ERIC Educational Resources Information Center
Cross, Theodore, Ed.
1998-01-01
Argues that differences in black performance and white performance on standardized tests likely come from deeply rooted environmental forces such as expectations of one's life being restricted to a small and poorly rewarded set of social roles. Issues of test bias, the influence of caste-like minorities, the conflict between African American…
40 CFR 721.3435 - Butoxy-substituted ether alkane.
Code of Federal Regulations, 2011 CFR
2011-07-01
... set at 1.0 percent), and (c). In addition, the employer must be able to demonstrate that the gloves... substance (or an EPA-approved analogue) greater than 0.16 µg/cm2/min after 8 h of testing in accordance with the most recent versions of the American Society for Testing and Materials (ASTM) F739 “Standard Test...
40 CFR 721.3435 - Butoxy-substituted ether alkane.
Code of Federal Regulations, 2014 CFR
2014-07-01
... set at 1.0 percent), and (c). In addition, the employer must be able to demonstrate that the gloves... substance (or an EPA-approved analogue) greater than 0.16 µg/cm2/min after 8 h of testing in accordance with the most recent versions of the American Society for Testing and Materials (ASTM) F739 “Standard Test...
40 CFR 721.3435 - Butoxy-substituted ether alkane.
Code of Federal Regulations, 2013 CFR
2013-07-01
... set at 1.0 percent), and (c). In addition, the employer must be able to demonstrate that the gloves... substance (or an EPA-approved analogue) greater than 0.16 µg/cm2/min after 8 h of testing in accordance with the most recent versions of the American Society for Testing and Materials (ASTM) F739 “Standard Test...
Hydrogen Field Test Standard: Laboratory and Field Performance
Pope, Jodie G.; Wright, John D.
2015-01-01
The National Institute of Standards and Technology (NIST) developed a prototype field test standard (FTS) that incorporates three test methods that could be used by state weights and measures inspectors to periodically verify the accuracy of retail hydrogen dispensers, much as gasoline dispensers are tested today. The three field test methods are: 1) gravimetric, 2) Pressure, Volume, Temperature (PVT), and 3) master meter. The FTS was tested in NIST's Transient Flow Facility with helium gas and in the field at a hydrogen dispenser location. All three methods agree within 0.57 % and 1.53 % for all test drafts of helium gas in the laboratory setting and of hydrogen gas in the field, respectively. The time required to perform six test drafts is similar for all three methods, ranging from 6 h for the gravimetric and master meter methods to 8 h for the PVT method. The laboratory tests show that 1) it is critical to wait for thermal equilibrium to achieve density measurements in the FTS that meet the desired uncertainty requirements for the PVT and master meter methods; in general, we found a wait time of 20 minutes introduces errors < 0.1 % and < 0.04 % in the PVT and master meter methods, respectively, and 2) buoyancy corrections are important for the lowest uncertainty gravimetric measurements. The field tests show that sensor drift can become the largest component of uncertainty, one that is not present in the laboratory setting. The scale was calibrated after it was set up at the field location. Checks of the calibration throughout testing showed drift of 0.031 %. Calibration of the master meter and the pressure sensors prior to travel to the field location and upon return showed significant drifts in their calibrations: 0.14 % and up to 1.7 %, respectively. This highlights the need for better sensor selection and/or more robust sensor testing prior to putting sensors into field service. All three test methods are capable of being successfully performed in the field and give equivalent answers if proper sensors without drift are used. PMID:26722192
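The PVT method rests on the ideal-gas relation corrected by a compressibility factor, m = PVM/(ZRT). A minimal sketch with illustrative numbers (the vessel states and Z values are assumptions, not NIST's data) is shown below.

```python
# Sketch of the PVT mass determination: m = P*V*M / (Z*R*T).
# All numbers are illustrative; a real implementation would take Z from an
# equation of state for hydrogen at the measured pressure and temperature.
R = 8.314462618          # J/(mol*K)
M_H2 = 2.01588e-3        # kg/mol

def pvt_mass(pressure_pa, volume_m3, temperature_k, z_factor):
    """Gas mass in the collection vessel from pressure, volume and temperature."""
    return pressure_pa * volume_m3 * M_H2 / (z_factor * R * temperature_k)

m_before = pvt_mass(1.0e6, 0.05, 293.15, 1.006)   # vessel state before the draft (assumed)
m_after = pvt_mass(35.0e6, 0.05, 300.15, 1.22)    # vessel state after the draft (assumed)
print(f"dispensed mass ~ {m_after - m_before:.3f} kg")
```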
The development of STS payload environmental engineering standards
NASA Technical Reports Server (NTRS)
Bangs, W. F.
1982-01-01
The presently reported effort to provide a single set of standards for the design, analysis and testing of Space Transportation System (STS) payloads throughout the NASA organization must be viewed as essentially experimental, since the concept of incorporating the diverse opinions and experiences of several separate field research centers may in retrospect be judged too ambitious or perhaps even naive. While each STS payload may have unique characteristics, and the project should formulate its own criteria for environmental design, testing and evaluation, a reference source document providing coordinated standards is expected to minimize the duplication of effort and limit random divergence of practices among the various NASA payload programs. These standards would provide useful information to all potential STS users, and offer a degree of standardization to STS users outside the NASA organization.
Designing testing service at baristand industri Medan’s liquid waste laboratory
NASA Astrophysics Data System (ADS)
Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.
2018-03-01
Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of its most frequently used services is liquid waste testing. The company set a service standard of nine working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet the company's specified service standard because many samples accumulated. The purpose of this research is to design an online service for scheduling the arrival of liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design, and technology design. The resulting online liquid waste testing information system consists of three pages: one for the customer, one for the sample recipient, and one for the laboratory. Simulation results with scheduled samples show that the service standard of a minimum of nine working days can be met.
NMP22 BladderChek Test: point-of-care technology with life- and money-saving potential.
Tomera, Kevin M
2004-11-01
A new, relatively obscure tumor marker assay, the NMP22 BladderChek Test (Matritech, Inc.), represents a paradigm shift in the diagnosis and management of urinary bladder cancer (transitional cell carcinoma). Specifically, BladderChek should be employed every time a cystoscopy is performed, with corresponding changes in the diagnostic protocol and the guidelines of the American Urological Association for the diagnosis and management of bladder cancer. Currently, cystoscopy is the reference standard, and the NMP22 BladderChek Test in combination with cystoscopy improves the performance of cystoscopy. At every stage of disease, BladderChek provides a higher sensitivity for the detection of bladder cancer than cytology, which now represents the adjunctive standard of care. Moreover, BladderChek is four times more sensitive than cytology and is available at half the cost. Early detection of bladder cancer improves prognosis, quality of life and survival. BladderChek may be analogous to the prostate-specific antigen test and eventually expand beyond the urologic setting into the primary care setting for the testing of high-risk patients characterized by smoking history, occupational exposures or age.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
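For reference, the two variance-stabilizing transformations named above are short formulas; the sketch below applies them to a hypothetical sensitivity estimate and is not the authors' bivariate model.

```python
import numpy as np

def arcsine_sqrt(x, n):
    """Arcsine square-root transformation of a proportion x/n."""
    return np.arcsin(np.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transformation of x events out of n."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))

# Hypothetical study: 45 true positives among 50 diseased subjects (sensitivity 0.90)
tp, n_diseased = 45, 50
print(arcsine_sqrt(tp, n_diseased), freeman_tukey(tp, n_diseased))
```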
ERIC Educational Resources Information Center
Gidey, Mu'uz
2015-01-01
This action research is carried out in a practical classroom setting to devise an innovative way of administering tutorial classes to improve students' learning competence, with particular reference to gendered test scores. A before-after analysis of test score means and standard deviations, along with statistical t-tests of hypotheses, of second…
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect immunofluorescence (IIF) is the gold standard for the antinuclear autoantibody (ANA) test, which uses Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous work; however, there is still no validated standard for classifying fluorescence intensity. This paper presents the use of fuzzy logic to classify fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The fuzzy algorithm involves image pre-processing by filtering noise and smoothing the image, converting the red, green and blue (RGB) color space of the images to the lightness (L), chromaticity "a" and chromaticity "b" (LAB) color space, extracting the mean values of the lightness and chromaticity "a" layers, and classifying them with a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test performance, the fuzzy logic approach obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
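A minimal sketch of the described pipeline is given below: RGB-to-LAB conversion, extraction of the mean lightness and chromaticity "a" values, and a triangular membership function standing in for the paper's fuzzy rules. The synthetic image and the membership ranges are assumptions, and scikit-image is assumed to be available.

```python
import numpy as np
from skimage import color   # assumes scikit-image is installed

def intensity_features(rgb_image):
    """Mean lightness (L) and chromaticity 'a' after RGB -> LAB conversion."""
    lab = color.rgb2lab(rgb_image)
    return lab[..., 0].mean(), lab[..., 1].mean()

def triangular(x, lo, peak, hi):
    """Triangular membership function, a simple stand-in for the paper's fuzzy rules."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# Synthetic greenish image standing in for a Hep-2 cell fluorescence sample
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[..., 1] = rng.uniform(0.3, 0.9, size=(64, 64))   # green channel carries the fluorescence

L_mean, a_mean = intensity_features(img)
# Hypothetical membership ranges on the lightness channel for the two classes
memberships = {"intermediate": triangular(L_mean, 20, 45, 70),
               "positive": triangular(L_mean, 55, 80, 100)}
print(L_mean, a_mean, max(memberships, key=memberships.get))
```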
Hanchard, Nigel C A; Lenza, Mário; Handoll, Helen H G; Takwoingi, Yemisi
2013-04-30
Impingement is a common cause of shoulder pain. Impingement mechanisms may occur subacromially (under the coraco-acromial arch) or internally (within the shoulder joint), and a number of secondary pathologies may be associated. These include subacromial-subdeltoid bursitis (inflammation of the subacromial portion of the bursa, the subdeltoid portion, or both), tendinopathy or tears affecting the rotator cuff or the long head of biceps tendon, and glenoid labral damage. Accurate diagnosis based on physical tests would facilitate early optimisation of the clinical management approach. Most people with shoulder pain are diagnosed and managed in the primary care setting. To evaluate the diagnostic accuracy of physical tests for shoulder impingements (subacromial or internal) or local lesions of bursa, rotator cuff or labrum that may accompany impingement, in people whose symptoms and/or history suggest any of these disorders. We searched electronic databases for primary studies in two stages. In the first stage, we searched MEDLINE, EMBASE, CINAHL, AMED and DARE (all from inception to November 2005). In the second stage, we searched MEDLINE, EMBASE and AMED (2005 to 15 February 2010). Searches were delimited to articles written in English. We considered for inclusion diagnostic test accuracy studies that directly compared the accuracy of one or more physical index tests for shoulder impingement against a reference test in any clinical setting. We considered diagnostic test accuracy studies with cross-sectional or cohort designs (retrospective or prospective), case-control studies and randomised controlled trials. Two pairs of review authors independently performed study selection, assessed the study quality using QUADAS, and extracted data onto a purpose-designed form, noting patient characteristics (including care setting), study design, index tests and reference standard, and the diagnostic 2 x 2 table. We presented information on sensitivities and specificities with 95% confidence intervals (95% CI) for the index tests. Meta-analysis was not performed. We included 33 studies involving 4002 shoulders in 3852 patients. Although 28 studies were prospective, study quality was still generally poor. Mainly reflecting the use of surgery as a reference test in most studies, all but two studies were judged as not meeting the criteria for having a representative spectrum of patients. However, even these two studies only partly recruited from primary care. The target conditions assessed in the 33 studies were grouped under five main categories: subacromial or internal impingement, rotator cuff tendinopathy or tears, long head of biceps tendinopathy or tears, glenoid labral lesions and multiple undifferentiated target conditions. The majority of studies used arthroscopic surgery as the reference standard. Eight studies utilised reference standards which were potentially applicable to primary care (local anaesthesia, one study; ultrasound, three studies) or the hospital outpatient setting (magnetic resonance imaging, four studies). One study used a variety of reference standards, some applicable to primary care or the hospital outpatient setting. In two of these studies the reference standard used was acceptable for identifying the target condition, but in six it was only partially so. The studies evaluated numerous standard, modified, or combination index tests and 14 novel index tests.
There were 170 target condition/index test combinations, but only six instances of any index test being performed and interpreted similarly in two studies. Only two studies of a modified empty can test for full thickness tear of the rotator cuff, and two studies of a modified anterior slide test for type II superior labrum anterior to posterior (SLAP) lesions, were clinically homogenous. Due to the limited number of studies, meta-analyses were considered inappropriate. Sensitivity and specificity estimates from each study are presented on forest plots for the 170 target condition/index test combinations grouped according to target condition. There is insufficient evidence upon which to base selection of physical tests for shoulder impingements, and local lesions of bursa, tendon or labrum that may accompany impingement, in primary care. The large body of literature revealed extreme diversity in the performance and interpretation of tests, which hinders synthesis of the evidence and/or clinical applicability.
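For readers unfamiliar with the diagnostic 2 x 2 table mentioned in the review, the sketch below computes sensitivity and specificity with Wilson 95% confidence intervals from hypothetical counts; it is not data from any included study.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical 2 x 2 table for one index test against a reference standard
tp, fn, fp, tn = 30, 10, 8, 52
sens, spec = tp / (tp + fn), tn / (tn + fp)
print("sensitivity", round(sens, 2), wilson_ci(tp, tp + fn))
print("specificity", round(spec, 2), wilson_ci(tn, tn + fp))
```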
49 CFR 210.31 - Operation standards (stationary locomotives at 30 meters).
Code of Federal Regulations, 2010 CFR
2010-10-01
... stationary locomotives at load cells: (1) Each noise emission test shall begin after the engine of the locomotive has attained the normal cooling water operating temperature as prescribed by the locomotive manufacturer. (2) Noise emission testing in idle or maximum throttle setting shall start after a 40 second...
Standards and Criteria. Paper #10 in Occasional Paper Series.
ERIC Educational Resources Information Center
Glass, Gene V.
The logical and psychological bases for setting cutting scores for criterion-referenced tests are examined; they are found to be intrinsically arbitrary and are often examples of misdirected precision and axiomatization. The term, criterion referenced, originally referred to a technique for making test scores meaningful by controlling the test…
47 CFR 76.601 - Performance tests.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Performance tests. 76.601 Section 76.601 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... standards set forth in § 76.605(a) (3), (4), and (5) shall be made on each of the NTSC or similar video...
ERIC Educational Resources Information Center
Campaign for Fiscal Equity, Inc., 2004
2004-01-01
In recent years, New York, like most other states, has adopted a set of challenging educational standards that are geared to preparing all students to be capable citizens and to compete in the global marketplace. The state has also implemented extensive Regents testing programs to measure student progress toward meeting the standards. These…
Critical Multimodal Literacy and the Common Core: Subversive Curriculum in the Age of Accountability
ERIC Educational Resources Information Center
Perttula, Jill
2017-01-01
The purpose of this case study research was to understand the ways in which an innovative, urban secondary English teacher (Ms. B) approached English Language Arts, when a set, standardized curriculum and testing were in place. The Common Core standards were prescribed within a required module-based presentation format. New literacies pedagogy…
NAEP Scores Put Spotlight on Standards: Flat Math Results Also Spur Calls for Teaching Reforms
ERIC Educational Resources Information Center
Cavanagh, Sean
2009-01-01
Fourth grade math scores stagnated for the first time in two decades on a prominent nationwide test, prompting calls for new efforts to improve teacher content knowledge and stirring discussion of the potential benefits of setting more-uniform academic standards across states. The results on the National Assessment of Educational Progress,…
ERIC Educational Resources Information Center
Kuhl, Julius
1978-01-01
A formal elaboration of the original theory of achievement motivation (Atkinson, 1957; Atkinson & Feather, 1966) is proposed that includes personal standards as determinants of motivational tendencies. The results of an experiment are reported that examines the validity of some of the implications of the elaborated model proposed here. (Author/RK)
ERIC Educational Resources Information Center
Lee, Jaekyung
2010-01-01
This study examines potential consequences of the discrepancies between national and state performance standards for school funding in Kentucky and Maine. Applying the successful schools observation method and cost function analysis method to integrated data-sets that match schools' eight-grade mathematics test performance measures to district…
Al-Ahmad, Ali; Zou, Peng; Solarte, Diana Lorena Guevara; Hellwig, Elmar; Steinberg, Thorsten; Lienkamp, Karen
2014-01-01
Bacterial infection of biomaterials is a major concern in medicine, and different kinds of antimicrobial biomaterial have been developed to deal with this problem. To test the antimicrobial performance of these biomaterials, the airborne bacterial assay is used, which involves the formation of biohazardous bacterial aerosols. We here describe a new experimental set-up which allows safe handling of such pathogenic aerosols, and standardizes critical parameters of this otherwise intractable and strongly user-dependent assay. With this new method, reproducible, thorough antimicrobial data (number of colony forming units and live-dead-stain) was obtained. Poly(oxonorbornene)-based Synthetic Mimics of Antimicrobial Peptides (SMAMPs) were used as antimicrobial test samples. The assay was able to differentiate even between subtle sample differences, such as different sample thicknesses. With this new set-up, the airborne bacterial assay was thus established as a useful, reliable, and realistic experimental method to simulate the contamination of biomaterials with bacteria, for example in an intraoperative setting.
Kurth, Ann E.; Severynen, Anneleen; Spielberg, Freya
2014-01-01
HIV testing in emergency departments (EDs) remains underutilized. We evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Randomly assigned non-acute adult ED patients to computer tool (‘CARE’) and rapid HIV testing before standard visit (n=258) or to standard visit (n=259) with chart access. Assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-white, 58% male; median age 37 years. In the CARE arm nearly all (251/258) completed the session and received HIV results; 4 declined test consent. HIV risks were reported by 54% of users and there was one confirmed HIV-positive and 2 false-positives (seroprevalence 0.4%, 95% CI 0.01–2.2%). Half (55%) preferred computerized, over face-to-face, counseling for future HIV testing. In standard arm, one HIV test and 2 referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches. PMID:23837807
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Test Procedures for Engine Smoke Emissions (Aircraft Gas Turbine Engines) § 87.80 Introduction. Except... determine the conformity of new and in-use gas turbine engines with the applicable standards set forth in...
Spectral gene set enrichment (SGSE).
Frost, H Robert; Li, Zhigang; Moore, Jason H
2015-03-03
Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables, with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and sample PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
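The combination step can be illustrated with Stouffer's weighted Z-method. The sketch below uses hypothetical PC-level p-values and weights (PC variance scaled by a Tracy-Widom-style factor) and assumes SciPy is available; it is not the PCGSE/SGSE code.

```python
import numpy as np
from scipy.stats import norm   # assumes SciPy is available

def weighted_z(p_values, weights):
    """Stouffer's weighted Z-method: combine one-sided p-values with given weights."""
    p = np.asarray(p_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = norm.isf(p)                        # per-PC z-scores
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w**2))
    return norm.sf(z_comb)                 # combined one-sided p-value

# Hypothetical gene set: association p-values for the first four PCs, weighted by
# PC variance scaled by (1 - Tracy-Widom p-value) as a stand-in for the SGSE weighting.
pc_pvals = [0.001, 0.20, 0.45, 0.80]
pc_weights = [12.5 * 0.999, 6.1 * 0.95, 2.2 * 0.40, 1.0 * 0.05]
print(weighted_z(pc_pvals, pc_weights))
```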
Setting Academic Performance Standards: MCAS vs. PARCC. Technical Report. Policy Brief
ERIC Educational Resources Information Center
Phelps, Richard P.
2015-01-01
Political realities dictate that, as with any tests, passing scores on those developed by the Partnership for Assessment of Readiness for College and Careers (PARCC) will be set at a level that avoids having an unacceptable number of students fail. Since Massachusetts is by far the highest performing of the states that remain in the PARCC…
ERIC Educational Resources Information Center
Flores-Mendoza, Carmen; Widaman, Keith F.; Rindermann, Heiner; Primi, Ricardo; Mansur-Alves, Marcela; Pena, Carla Couto
2013-01-01
Sex differences on the Attention Test (AC), the Raven's Standard Progressive Matrices (SPM), and the Brazilian Cognitive Battery (BPR5), were investigated using four large samples (total N=6780), residing in the states of Minas Gerais and Sao Paulo. The majority of samples used, which were obtained from educational settings, could be considered a…
40 CFR 1065.550 - Gas analyzer range validation, drift validation, and drift correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... interval (i.e., do not set them to zero). A third calculation of composite brake-specific emission values... from each test interval and sets any negative mass (or mass rate) values to zero before calculating the... value is less than the standard by at least two times the absolute difference between the uncorrected...
SU-F-BRD-10: Lung IMRT Planning Using Standardized Beam Bouquet Templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, L; Wu, Q J.; Yin, F
2014-06-15
Purpose: We investigate the feasibility of choosing from a small set of standardized templates of beam bouquets (i.e., entire beam configuration settings) for lung IMRT planning to improve planning efficiency and quality consistency, and also to facilitate automated planning. Methods: A set of beam bouquet templates is determined by learning from the beam angle settings in 60 clinical lung IMRT plans. A k-medoids cluster analysis method is used to classify the beam angle configurations into clusters. The value of the average silhouette width is used to determine the ideal number of clusters. The beam arrangements in each medoid of the resulting clusters are taken as the standardized beam bouquet for the cluster, with the corresponding case taken as the reference case. The resulting set of beam bouquet templates was used to re-plan 20 cases randomly selected from the database and the dosimetric quality of the plans was evaluated against the corresponding clinical plans by a paired t-test. The template for each test case was manually selected by a planner based on the match between the test and reference cases. Results: The dosimetric parameters (mean±S.D. in percentage of prescription dose) of the plans using 6 beam bouquet templates and those of the clinical plans, respectively, and the p-values (in parentheses) are: lung Dmean: 18.8±7.0, 19.2±7.0 (0.28), esophagus Dmean: 32.0±16.3, 34.4±17.9 (0.01), heart Dmean: 19.2±16.5, 19.4±16.6 (0.74), spinal cord D2%: 47.7±18.8, 52.0±20.3 (0.01), PTV dose homogeneity (D2%-D99%): 17.1±15.4, 20.7±12.2 (0.03). The esophagus Dmean, spinal cord D2% and PTV dose homogeneity are statistically better in the plans using the standardized templates, but the improvements (<5%) may not be clinically significant. The other dosimetric parameters are not statistically different. Conclusion: It's feasible to use a small number of standardized beam bouquet templates (e.g. 6) to generate plans with quality comparable to that of clinical plans. Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant by Varian Medical System.
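A rough sketch of the clustering step is shown below: a small alternating k-medoids on a precomputed distance matrix, with the average silhouette width used to pick the number of templates. The beam-angle data are randomly generated stand-ins, and scikit-learn is assumed for the silhouette computation; this is not the study's implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import silhouette_score   # assumes scikit-learn is available

rng = np.random.default_rng(0)
# Hypothetical stand-in for 60 clinical plans: each row is a sorted set of 5 gantry angles (deg)
plans = np.sort(rng.uniform(0.0, 360.0, size=(60, 5)), axis=1)
D = cdist(plans, plans)          # pairwise distances between beam-angle configurations

def k_medoids(D, k, n_iter=50, seed=1):
    """Small alternating k-medoids on a precomputed distance matrix (illustrative only)."""
    gen = np.random.default_rng(seed)
    medoids = gen.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:     # keep the old medoid if a cluster happens to be empty
                new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Choose the number of bouquet templates by the average silhouette width
for k in range(2, 9):
    labels, medoids = k_medoids(D, k)
    print(k, round(silhouette_score(D, labels, metric="precomputed"), 3))
```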
Assessing neglect dyslexia with compound words.
Reinhart, Stefan; Schunck, Alexander; Schaadt, Anna Katharina; Adams, Michaela; Simon, Alexandra; Kerkhoff, Georg
2016-10-01
The neglect syndrome is frequently associated with neglect dyslexia (ND), which is characterized by omissions or misread initial letters of single words. ND is usually assessed with standardized reading texts in clinical settings. However, particularly in the chronic phase of ND, patients often report reading deficits in everyday situations but show (nearly) normal performances in test situations that are commonly well-structured. To date, sensitive and standardized tests to assess the severity and characteristics of ND are lacking, although reading is of high relevance for daily life and vocational settings. Several studies found modulating effects of different word features on ND. We combined those features in a novel test to enhance test sensitivity in the assessment of ND. Low-frequency words of different length that contain residual pronounceable words when the initial letter strings are neglected were selected. We compared these words in a group of 12 ND-patients suffering from right-hemispheric first-ever stroke with word stimuli containing no existing residual words. Finally, we tested whether the serially presented words are more sensitive for the diagnosis of ND than text reading. The severity of ND was modulated strongly by the ND-test words and error frequencies in single word reading of ND words were on average more than 10 times higher than in a standardized text reading test (19.8% vs. 1.8%). The novel ND-test maximizes the frequency of specific ND-errors and is therefore more sensitive for the assessment of ND than conventional text reading tasks. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Phinney, Karen W; Sempos, Christopher T; Tai, Susan S-C; Camara, Johanna E; Wise, Stephen A; Eckfeldt, John H; Hoofnagle, Andrew N; Carter, Graham D; Jones, Julia; Myers, Gary L; Durazo-Arvizu, Ramon; Miller, W Greg; Bachmann, Lorin M; Young, Ian S; Pettit, Juanita; Caldwell, Grahame; Liu, Andrew; Brooks, Stephen P J; Sarafin, Kurtis; Thamm, Michael; Mensink, Gert B M; Busch, Markus; Rabenberg, Martina; Cashman, Kevin D; Kiely, Mairead; Galvin, Karen; Zhang, Joy Y; Kinsella, Michael; Oh, Kyungwon; Lee, Sun-Wha; Jung, Chae L; Cox, Lorna; Goldberg, Gail; Guberg, Kate; Meadows, Sarah; Prentice, Ann; Tian, Lu; Brannon, Patsy M; Lucas, Robyn M; Crump, Peter M; Cavalier, Etienne; Merkel, Joyce; Betz, Joseph M
2017-09-01
The Vitamin D Standardization Program (VDSP) coordinated a study in 2012 to assess the commutability of reference materials and proficiency testing/external quality assurance materials for total 25-hydroxyvitamin D [25(OH)D] in human serum, the primary indicator of vitamin D status. A set of 50 single-donor serum samples as well as 17 reference and proficiency testing/external quality assessment materials were analyzed by participating laboratories that used either immunoassay or LC-MS methods for total 25(OH)D. The commutability test materials included National Institute of Standards and Technology Standard Reference Material 972a Vitamin D Metabolites in Human Serum as well as materials from the College of American Pathologists and the Vitamin D External Quality Assessment Scheme. Study protocols and data analysis procedures were in accordance with Clinical and Laboratory Standards Institute guidelines. The majority of the test materials were found to be commutable with the methods used in this commutability study. These results provide guidance for laboratories needing to choose appropriate reference materials and select proficiency or external quality assessment programs and will serve as a foundation for additional VDSP studies.
Inzaule, Seth C; Hamers, Ralph L; Paredes, Roger; Yang, Chunfu; Schuurman, Rob; Rinke de Wit, Tobias F
2017-01-01
Global scale-up of antiretroviral treatment has dramatically changed the prospects of HIV/AIDS disease, rendering life-long chronic care and treatment a reality for millions of HIV-infected patients. Affordable technologies to monitor antiretroviral treatment are needed to ensure the long-term durability of the limited available drug regimens. HIV drug resistance tests can complement existing strategies in optimizing clinical decision-making for patients with treatment failure, in addition to facilitating population-based surveillance of HIV drug resistance. This review assesses the current landscape of HIV drug resistance technologies and discusses the strengths and limitations of existing assays available for expanding testing in resource-limited settings. These include sequencing-based assays (Sanger sequencing assays and next-generation sequencing), point mutation assays, and genotype-free data-based prediction systems. Sanger assays are currently considered the gold-standard genotyping technology, though they are available at only a limited number of reference and regional laboratories in resource-limited settings, and high capital and test costs have limited their wider expansion. Point mutation assays present opportunities for simplified laboratory assays, but HIV genetic variability, extensive codon redundancy at or near the mutation target sites, and limited multiplexing capability have restricted their utility. Next-generation sequencing, despite high costs, may have the potential to reduce testing costs significantly through multiplexing in high-throughput facilities, although the level of bioinformatics expertise required for data analysis is still complex and expensive and lacks standardization. Web-based genotype-free prediction systems may provide enhanced antiretroviral treatment decision-making without the need for laboratory testing, but require further clinical field evaluation and implementation science research in resource-limited settings.
Present and future molecular testing of lung carcinoma.
Dacic, Sanja; Nikiforova, Marina N
2014-03-01
The rapid development of targeted therapies has tremendously changed the clinical management of lung carcinoma patients and set the stage for similar developments in other tumor types. Many studies have been published in the past decade in search of the most acceptable method of assessment for predictors of response to targeted therapies in lung cancer. As a result, several guidelines for molecular testing have been published in the past couple of years. Because of accumulated evidence that targetable drugs show the best efficacy and improved progression-free survival rates in lung cancer patients whose tumors have a specific genotype, molecular testing for predictors of therapy response has become the standard of care. Presently, testing for EGFR mutations and ALK rearrangements in lung adenocarcinoma has been standardized. The landscape of targetable genomic alterations in lung carcinoma is expanding, but none of the other potentially targetable biomarkers has been standardized outside of clinical trials. This review will summarize current practice in molecular testing. Future methods in molecular testing of lung carcinoma will also be briefly reviewed.
de Roos, Paul; Bloem, Bastiaan R.; Kelley, Thomas A.; Antonini, Angelo; Dodel, Richard; Hagell, Peter; Marras, Connie; Martinez-Martin, Pablo; Mehta, Shyamal H.; Odin, Per; Chaudhuri, Kallol Ray; Weintraub, Daniel; Wilson, Bil; Uitti, Ryan J.
2017-01-01
Background: Parkinson’s disease (PD) is a progressive neurodegenerative condition that is expected to double in prevalence due to demographic shifts. Value-based healthcare is a proposed strategy to improve outcomes and decrease costs. To move towards an actual value-based health care system, condition-specific outcomes that are meaningful to patients are essential. Objective: Propose a global consensus standard set of outcome measures for PD. Methods: Established methods for outcome measure development were applied, as outlined and used previously by the International Consortium for Health Outcomes Measurement (ICHOM). An international group, representing both patients and experts from the fields of neurology, psychiatry, nursing, and existing outcome measurement efforts, was convened. The group participated in six teleconferences over a six-month period, reviewed existing data and practices, and ultimately proposed a standard set of measures by which patients should be tracked, and how often data should be collected. Results: The standard set applies to all cases of idiopathic PD, and includes assessments of motor and non-motor symptoms, ability to work, PD-related health status, and hospital admissions. Baseline demographic and clinical variables are included to enable case mix adjustment. Conclusions: The Standard Set is now ready for use and pilot testing in the clinical setting. Ultimately, we believe that using the set of outcomes proposed here will allow clinicians and scientists across the world to document, report, and compare PD-related outcomes in a standardized fashion. Such international benchmarks will improve our understanding of the disease course and allow for identification of ‘best practices’, ultimately leading to better informed treatment decisions. PMID:28671140
Effects of handcuffs on neuropsychological testing: Implications for criminal forensic evaluations.
Biddle, Christine M; Fazio, Rachel L; Dyshniku, Fiona; Denney, Robert L
2018-01-01
Neuropsychological evaluations are increasingly performed in forensic contexts, including in criminal settings where security sometimes cannot be compromised to facilitate evaluation according to standardized procedures. Interpretation of nonstandardized assessment results poses significant challenges for the neuropsychologist. Research is limited in regard to the validation of neuropsychological test accommodation and modification practices that deviate from standard test administration; there is no published research regarding the effects of hand restraints upon neuropsychological evaluation results. This study provides preliminary results regarding the impact of restraints on motor functioning and common neuropsychological tests with a motor component. When restrained, performance on nearly all tests utilized was significantly impacted, including Trail Making Test A/B, a coding test, and several tests of motor functioning. Significant performance decline was observed in both raw scores and normative scores. Regression models are also provided in order to help forensic neuropsychologists adjust for the effect of hand restraints on raw scores of these tests, as the hand restraints also resulted in significant differences in normative scores; in the most striking case there was nearly a full standard deviation of discrepancy.
Caudle, Kelly E; Dunnenberger, Henry M; Freimuth, Robert R; Peterson, Josh F; Burlison, Jonathan D; Whirl-Carrillo, Michelle; Scott, Stuart A; Rehm, Heidi L; Williams, Marc S; Klein, Teri E; Relling, Mary V; Hoffman, James M
2017-02-01
Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached, with 90% of experts agreeing to the final sets of pharmacogenetic terms. The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2): 215-223.
The Air Force Officer Qualifying Test: Validity, Fairness, and Bias
2010-01-01
scores. The Standards for Educational and Psychological Testing (AERA, APA, and NCME, 1999) provides a set of guidelines published and endorsed by the... determining the validity and bias of selection tests falls upon professionals in the discipline of industrial/organizational psychology and closely related fields (e.g., educational psychology and
Choosing HIV Counseling and Testing Strategies for Outreach Settings: A Randomized Trial.
Spielberg, Freya; Branson, Bernard M; Goldbaum, Gary M; Lockhart, David; Kurth, Ann; Rossini, Anthony; Wood, Robert W
2005-03-01
In surveys, clients have expressed preferences for alternatives to traditional HIV counseling and testing. Few data exist to document how offering such alternatives affects acceptance of HIV testing and receipt of test results. This randomized controlled trial compared types of HIV tests and counseling at a needle exchange and 2 bathhouses to determine which types most effectively ensured that clients received test results. Four alternatives were offered on randomly determined days: (1) traditional test with standard counseling, (2) rapid test with standard counseling, (3) oral fluid test with standard counseling, and (4) traditional test with choice of written pretest materials or standard counseling. Of 17,010 clients offered testing, 7014 (41%) were eligible; of those eligible, 761 (11%) were tested: 324 at the needle exchange and 437 at the bathhouses. At the needle exchange, more clients accepted testing (odds ratio [OR] = 2.3; P < 0.001) and received results (OR = 2.6; P < 0.001) on days when the oral fluid test was offered compared with the traditional test. At the bathhouses, more clients accepted oral fluid testing (OR = 1.6; P < 0.001), but more clients overall received results on days when the rapid test was offered (OR = 1.9; P = 0.01). Oral fluid testing and rapid blood testing at both outreach venues resulted in significantly more people receiving test results compared with traditional HIV testing. Making counseling optional increased testing at the needle exchange but not at the bathhouses.
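The odds ratios reported above come from 2 x 2 comparisons of acceptance on intervention versus comparison days. The sketch below shows the standard log-method calculation with hypothetical counts, not the trial's data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (log method) for a 2 x 2 table:
    a = accepted on intervention days, b = declined on intervention days,
    c = accepted on comparison days,   d = declined on comparison days."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts, not the study's data: acceptance on oral-fluid test days
# versus traditional-test days at one venue.
print(odds_ratio_ci(120, 880, 60, 940))
```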
Aerospace Nickel-cadmium Cell Verification
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Strawn, D. Michael; Hall, Stephen W.
2001-01-01
During the early years of satellites, NASA successfully flew "NASA-Standard" nickel-cadmium (Ni-Cd) cells manufactured by GE/Gates/SAFT on a variety of spacecraft. In 1992 a NASA Battery Review Board determined that the strategy of a NASA Standard Cell and Battery Specification and the accompanying NASA control of a standard manufacturing control document (MCD) for Ni-Cd cells and batteries was unwarranted. As a result of that determination, standards were abandoned and the use of cells other than the NASA Standard was required. In order to gain insight into the performance and characteristics of the various aerospace Ni-Cd products available, tasks were initiated within the NASA Aerospace Flight Battery Systems Program that involved the procurement and testing of representative aerospace Ni-Cd cell designs. A standard set of test conditions was established in order to provide similar information about the products from various vendors. The objective of this testing was to provide independent verification of representative commercial flight cells available in the marketplace today. This paper will provide a summary of the verification tests run on cells from various manufacturers: Sanyo 35 Ampere-hour (Ah) standard and 35 Ah advanced Ni-Cd cells, SAFT 50 Ah Ni-Cd cells, and 21 Ah Magnum and 21 Ah Super Ni-Cd™ cells from Eagle-Picher were put through a full evaluation. A limited number of 18 and 55 Ah cells from Acme Electric were also tested to provide an initial evaluation of the Acme aerospace cell designs. Additionally, 35 Ah aerospace-design Ni-MH cells from Sanyo were evaluated under the standard conditions established for this program. The test program is essentially complete. The cell design parameters, the verification test plan and the details of the test results will be discussed.
Infrastructure | Transportation Research | NREL
establishing a new test fuel standard crucial to set the stage for the commercial introduction of high-octane... Results are provided for all stations, including data from pre-commercial or demonstration stations that...
Fulga, Netta
2013-06-01
Quality management and accreditation in the analytical laboratory setting are developing rapidly and becoming the standard worldwide. Quality management refers to all the activities used by organizations to ensure product or service consistency. Accreditation is a formal recognition by an authoritative regulatory body that a laboratory is competent to perform examinations and report results. The Motherisk Drug Testing Laboratory is licensed to operate at the Hospital for Sick Children in Toronto, Ontario. The laboratory performs toxicology tests of hair and meconium samples for research and clinical purposes. Most of the samples are involved in chain-of-custody cases. Establishing a quality management system and achieving accreditation have been mandatory by legislation for all Ontario clinical laboratories since 2003. The Ontario Laboratory Accreditation program is based on International Organization for Standardization (ISO) 15189, Medical laboratories - Particular requirements for quality and competence, an international standard that has been adopted as a national standard in Canada. The implementation of a quality management system involves management commitment, planning and staff education, documentation of the system, validation of processes, and assessment against the requirements. The maintenance of a quality management system requires control and monitoring of the entire laboratory path of workflow. The process of transforming a research/clinical laboratory into an accredited laboratory, and the benefits of maintaining an effective quality management system, are presented in this article.
O’Donnell, Karen; Murphy, Robert; Ostermann, Jan; Masnick, Max; Whetten, Rachel A.; Madden, Elisabeth; Thielman, Nathan M.; Whetten, Kathryn
2013-01-01
Assessment of children’s learning and performance in low and middle income countries has been critiqued as lacking a gold standard, an appropriate norm reference group, and demonstrated applicability of assessment tasks to the context. This study was designed to examine the performance of three nonverbal and one adapted verbal measure of children’s problem solving, memory, motivation, and attention across five culturally diverse sites. The goal was to evaluate the tests as indicators of individual differences affected by life events and care circumstances for vulnerable children. We conclude that the measures can be successfully employed with fidelity in non-standard settings in LMICs, and are associated with child age and educational experience across the settings. The tests can be useful in evaluating variability in vulnerable child outcomes. PMID:21538088
Jessen, Wilko; Wilbert, Stefan; Gueymard, Christian A.; ...
2018-04-10
Reference solar irradiance spectra are needed to specify key parameters of solar technologies such as photovoltaic cell efficiency, in a comparable way. The IEC 60904-3 and ASTM G173 standards present such spectra for Direct Normal Irradiance (DNI) and Global Tilted Irradiance (GTI) on a 37 degrees tilted sun-facing surface for one set of clear-sky conditions with an air mass of 1.5 and low aerosol content. The IEC/G173 standard spectra are the widely accepted references for these purposes. Hence, the authors support the future replacement of the outdated ISO 9845 spectra with the IEC spectra within the ongoing update of this ISO standard. The use of a single reference spectrum per component of irradiance is important for clarity when comparing and rating solar devices such as PV cells. However, at some locations the average spectra can differ strongly from those defined in the IEC/G173 standards due to widely different atmospheric conditions and collector tilt angles. Therefore, additional subordinate standard spectra for other atmospheric conditions and tilt angles are of interest for a rough comparison of product performance under representative field conditions, in addition to using the main standard spectrum for product certification under standard test conditions. This simplifies the product selection for solar power systems when a fully detailed performance analysis is not feasible (e.g. small installations). Also, the effort for a detailed yield analysis can be reduced by decreasing the number of initial product options. After appropriate testing, this contribution suggests a number of additional spectra related to eight sets of atmospheric conditions and tilt angles that are currently considered within ASTM and ISO working groups. The additional spectra, called subordinate standard spectra, are motivated by significant spectral mismatches compared to the IEC/G173 spectra (up to 6.5% for PV at 37 degrees tilt and 10-15% for CPV). These mismatches correspond to potential accuracy improvements for a quick estimation of the average efficiency by applying the appropriate subordinate standard spectrum instead of the IEC/G173 spectra. The applicability of these spectra for PV performance analyses is confirmed at five test sites, for which subordinate spectra could be intuitively selected based on the average atmospheric aerosol optical depth (AOD) and precipitable water vapor at those locations. The development of subordinate standard spectra for DNI and concentrating solar power (CSP) and concentrating PV (CPV) is also considered. However, it is found that many more sets of atmospheric conditions would be required to allow the intuitive selection of DNI spectra for the five test sites, due in particular to the stronger effect of AOD on DNI compared to GTI. The matrix of subordinate GTI spectra described in this paper is recommended to appear as an option in the annex of future standards, in addition to the obligatory use of the main spectrum from the ASTM G173 and IEC 60904 standards.
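A simplified spectral mismatch calculation of the kind motivating the subordinate spectra is sketched below; the toy spectra and spectral response are invented, and the function is not the full IEC 60904-7 procedure (which also involves a reference device).

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept local to avoid version-specific NumPy helpers)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def useful_fraction(wl_nm, irradiance, spectral_response):
    """Device-weighted share of the broadband irradiance for a given spectrum."""
    return trapz(irradiance * spectral_response, wl_nm) / trapz(irradiance, wl_nm)

def spectral_mismatch(wl_nm, e_site, e_ref, sr):
    """Simplified mismatch ratio of a site spectrum vs. the reference spectrum for one device.
    A value of 1.065 would correspond to a 6.5 % mismatch."""
    return useful_fraction(wl_nm, e_site, sr) / useful_fraction(wl_nm, e_ref, sr)

# Toy spectra and spectral response on a coarse wavelength grid (illustrative only)
wl = np.linspace(300.0, 1200.0, 10)
e_ref = np.array([0.2, 0.7, 1.1, 1.3, 1.2, 1.0, 0.8, 0.6, 0.4, 0.3])
e_site = np.array([0.1, 0.5, 1.0, 1.3, 1.3, 1.1, 0.9, 0.7, 0.5, 0.4])
sr = np.array([0.0, 0.3, 0.6, 0.8, 0.9, 1.0, 0.9, 0.5, 0.1, 0.0])
print(f"mismatch factor: {spectral_mismatch(wl, e_site, e_ref, sr):.3f}")
```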
Hendriks, A Jan; Awkerman, Jill A; de Zwart, Dick; Huijbregts, Mark A J
2013-11-01
While the variable sensitivity of model species to common toxicants has been addressed in previous studies, a systematic analysis of inter-species variability across different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences in the contaminant levels affecting cold-blooded and warm-blooded species administered via different routes. To that end, data on lethal water concentrations (LC50), tissue residues (LR50) and oral doses (LD50) were collected from databases, each representing the largest of its kind. LC50 data were multiplied by a bioconcentration factor (BCF) to convert them to internal concentrations that allow for comparison among species. For each endpoint data set, we calculated the mean and standard deviation of species' lethal levels per compound. Next, the means and standard deviations were averaged by mode of action. Both the means and standard deviations calculated depended on the number of species tested, which is at odds with quality standard setting procedures. Means calculated from (BCF-converted) LC50, LR50 and LD50 values were largely similar, suggesting that different administration routes yield roughly similar internal levels. Levels of compounds interfering biochemically with elementary life processes were about one order of magnitude below those of narcotics disturbing membranes, and neurotoxic pesticides and dioxins induced death at even lower amounts. Standard deviations for LD50 data were similar across modes of action, while the variability of LC50 values was lower for narcotics than for substances with a specific mode of action. The study indicates several directions for efficient use of available data in risk assessment and reduction of species testing. Copyright © 2013 Elsevier Inc. All rights reserved.
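The conversion from water-based LC50 to internal levels is a simple multiplication by the BCF, followed by per-compound summary statistics. The sketch below uses invented records, not the databases analysed in the study.

```python
import numpy as np

# Sketch: convert water-based LC50 values to internal (tissue) concentrations via a
# bioconcentration factor, then summarise per compound. All values are hypothetical.
records = [
    # (compound, species, LC50 in mg/L, BCF in L/kg)
    ("chlorpyrifos", "fathead minnow", 0.12, 1400.0),
    ("chlorpyrifos", "rainbow trout", 0.009, 1400.0),
    ("phenol", "fathead minnow", 24.0, 17.0),
    ("phenol", "rainbow trout", 9.4, 17.0),
]

by_compound = {}
for compound, _species, lc50, bcf in records:
    by_compound.setdefault(compound, []).append(np.log10(lc50 * bcf))  # internal level, log10 mg/kg

for compound, levels in by_compound.items():
    print(compound, round(np.mean(levels), 2), round(np.std(levels, ddof=1), 2))
```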
A Look at the Impact of Raising Standards in Developmental Mathematics
ERIC Educational Resources Information Center
Guy, G. Michael; Puri, Karan; Cornick, Jonathan
2016-01-01
In this paper, we assess the effect of higher entry and exit standards at a community college in New York City. A complex set of university and college-wide policy modifications led to an increase in placement test cut-scores as well as increased requirements to complete remediation. The implementation of this policy change allows us to utilize…
16 CFR 1611.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2012 CFR
2012-01-01
... FLAMMABLE FABRICS ACT REGULATIONS STANDARD FOR THE FLAMMABILITY OF VINYL PLASTIC FILM Rules and Regulations... procedures set forth in section 4(a) of the act. Note: If the outer layer of plastic film or plastic-coated... under part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all...
16 CFR 1611.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2014 CFR
2014-01-01
... FLAMMABLE FABRICS ACT REGULATIONS STANDARD FOR THE FLAMMABILITY OF VINYL PLASTIC FILM Rules and Regulations... procedures set forth in section 4(a) of the act. Note: If the outer layer of plastic film or plastic-coated... under part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all...
16 CFR 1611.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FLAMMABLE FABRICS ACT REGULATIONS STANDARD FOR THE FLAMMABILITY OF VINYL PLASTIC FILM Rules and Regulations... procedures set forth in section 4(a) of the act. Note: If the outer layer of plastic film or plastic-coated... under part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all...
ERIC Educational Resources Information Center
Nariman, Nahid; Chrispeels, Janet
2016-01-01
We explore teachers' efforts to implement problem-based learning (PBL) in an elementary school serving predominantly English learners. Teachers had an opportunity to implement the Next Generation Science Standards (NGSS) using PBL in a summer school setting with no test-pressures. To understand the challenges and benefits of PBL implementation, a…
16 CFR 1611.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FLAMMABLE FABRICS ACT REGULATIONS STANDARD FOR THE FLAMMABILITY OF VINYL PLASTIC FILM Rules and Regulations... procedures set forth in section 4(a) of the act. Note: If the outer layer of plastic film or plastic-coated... under part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all...
ERIC Educational Resources Information Center
Woolfe, Jennifer; Stockley, Lynn
2005-01-01
Objective: To test the feasibility and effectiveness of dietary change interventions in UK school-based settings. This overview draws out the main lessons that were learnt from these studies, for both practitioners and researchers. Design: A review and analysis of the final reports from five studies commissioned by the Food Standards Agency.…
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation by the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The different guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Moreover, the required performance characteristics, tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry. The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
...). The new version of this IEC standard includes a number of methodological changes designed to increase... codified) sets forth a variety of provisions designed to improve energy efficiency and established the... prescribed or amended under this section shall be reasonably designed to produce test results which measure...
Effects of Vigorous Intensity Physical Activity on Mathematics Test Performance
ERIC Educational Resources Information Center
Phillips, David S.; Hannon, James C.; Castelli, Darla M.
2015-01-01
The effect of an acute bout of physical activity on academic performance in school-based settings is under-researched. The purpose of this study was to examine associations between a single, vigorous (70-85%) bout of physical activity completed during physical education on standardized mathematics test performance among 72 eighth-grade students…
Resitting or Compensating a Failed Examination: Does It Affect Subsequent Results?
ERIC Educational Resources Information Center
Arnold, Ivo
2017-01-01
Institutions of higher education commonly employ a conjunctive standard setting strategy, which requires students to resit failed examinations until they pass all tests. An alternative strategy allows students to compensate a failing grade with other test results. This paper uses regression discontinuity design to compare the effect of first-year…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... contains notices to the public of the proposed issuance of rules and regulations. The purpose of these... architectural products set forth in our regulations, with those testing procedures contained in ANSI Z97.1, "American National Standard for Safety Glazing Materials Used in Building--Safety Performance Specifications...
NASA Astrophysics Data System (ADS)
Vandenbroucke, D.; Van Orshoven, J.; Vancauwenberghe, G.
2012-12-01
Over the last decades, the use of Geographic Information (GI) has gained importance, in the public as well as the private sector. But even though many spatial data and related information exist, data sets are scattered over many organizations and departments. In practice it remains difficult to find the spatial data sets needed, and to access, obtain and prepare them for use in applications. Therefore Spatial Data Infrastructures (SDI) have been developed to enhance the access, use and sharing of GI. SDIs consist of a set of technological and non-technological components to reach this goal. Since the nineties, many SDI initiatives have emerged. Ultimately, all these initiatives aim to enhance the flow of spatial data between organizations (users as well as producers) involved in intra- and inter-organizational and even cross-country business processes. However, the flow of information and its re-use in different business processes requires technical and semantic interoperability: the first should guarantee that system components can interoperate and use the data, while the second should guarantee that data content is understood by all users in the same way. GI-standards within the SDI are necessary to make this happen. However, it is not known whether this is realized in practice. Therefore, the objective of the research is to develop a quantitative framework to assess the impact of GI-standards on the performance of business processes. For that purpose, indicators are defined and tested in several cases throughout Europe. The proposed research will build upon previous work carried out in the SPATIALIST project, which analyzed the impact of different technological and non-technological factors on the SDI-performance of business processes (Dessers et al., 2011). The current research aims to apply quantitative performance measurement techniques, which are frequently used to measure the performance of production processes (Anupindi et al., 2005). Key to reaching the research objectives is a correct design of the test cases. The major challenges are to set up the analytical framework for analyzing the impact of GI-standards on process performance, to define the appropriate indicators, and to choose the right test cases. In order to do so, it is proposed to define the test cases as 8 pairs of organizations (see figure). The paper will present the state of the art of performance measurement in the context of work processes, propose a series of SMART indicators for describing the set-up and measuring the performance, define the test case set-up and suggest criteria for the selection of the test cases, i.e. the organizational pairs. References: Anupindi, R., Chopra, S., Deshmukh, S.D., Van Mieghem, J.A., & Zemel, E. (2006). Managing Business Process Flows: Principles of Operations Management. New Jersey, USA: Prentice Hall. Dessers, D., Crompvoets, J., Janssen, K., Vancauwenberghe, G., Vandenbroucke, D. & Vanhaverbeke, L. (2011). SDI at work: The Spatial Zoning Plans Case. Leuven, Belgium: Katholieke Universiteit Leuven.
A standard bacterial isolate set for research on contemporary dairy spoilage.
Trmčić, A; Martin, N H; Boor, K J; Wiedmann, M
2015-08-01
Food spoilage is an ongoing issue that could be dealt with more efficiently if some standardization and unification were introduced in this field of research. For example, research and development efforts to understand and reduce food spoilage can be greatly enhanced through the availability and use of standardized isolate sets. To address this critical issue, we have assembled a standard isolate set of dairy spoilers and other selected nonpathogenic organisms frequently associated with dairy products. This publicly available bacterial set consists of (1) 35 gram-positive isolates, including 9 Bacillus and 15 Paenibacillus isolates, and (2) 16 gram-negative isolates, including 4 Pseudomonas and 8 coliform isolates. The set includes isolates obtained from samples of pasteurized milk (n=43), pasteurized chocolate milk (n=1), raw milk (n=1), and cheese (n=2), as well as isolates obtained from dairy-powder production samples (n=4). Analysis of growth characteristics in skim milk broth identified 16 gram-positive and 13 gram-negative isolates as psychrotolerant. Additional phenotypic characterization of isolates included testing for activity of β-galactosidase and lipolytic and proteolytic enzymes. All groups of isolates included in the isolate set exhibited diversity in growth and enzyme activity. Source data for all isolates in this isolate set are publicly available in the FoodMicrobeTracker database (http://www.foodmicrobetracker.com), which allows for continuous updating of information and advancement of knowledge on the dairy-spoilage representatives included in this isolate set. This isolate set, along with publicly available isolate data, provides a unique resource that will help advance knowledge of dairy-spoilage organisms as well as aid industry in the development and validation of new control strategies. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Vosganoff, Diane; Paatsch, Louise E.; Toe, Dianne M.
2011-01-01
This study examined the science and mathematics achievements of 16 Year 9 students with hearing loss in an inclusive high-school setting in Western Australia. Results from the Monitoring Standards in Education (MSE) compulsory state tests were compared with state and class averages for students with normal hearing. Data were collected from three…
ERIC Educational Resources Information Center
New York State Education Dept., Albany.
The New York State Regents Competency Testing Program is described. Competency tests have been developed in the basic skills of reading, writing, and mathematics, for two purposes: (1) to identify those students who need remedial help; and (2) to assure that students receiving high school diplomas have acquired adequate competence in these areas.…
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed at improving and standardizing communication of standardized neuropsychological test scores. Research is needed to further evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
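A sketch of how a fixed descriptor mapping in the spirit of Q-Simple might be applied when generating reports; the cut points below are illustrative assumptions, not the ranges proposed in the paper:

```python
def q_simple_descriptor(standard_score: float) -> str:
    """Map a standardized score (mean 100, SD 15) to a Q-Simple-style qualitative label.
    The cut points are assumed for illustration only."""
    if standard_score >= 130: return "very superior"
    if standard_score >= 120: return "superior"
    if standard_score >= 110: return "high average"
    if standard_score >= 90:  return "average"
    if standard_score >= 80:  return "low average"
    if standard_score >= 70:  return "borderline"
    return "abnormal/impaired"

for score in (145, 112, 95, 72, 60):
    print(score, "->", q_simple_descriptor(score))
```

The point of such a table-driven mapping is that every report produced by the same laboratory uses exactly the same term for the same score range.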
40 CFR 1065.405 - Test engine preparation and maintenance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... modulates an “operator demand” signal such as commanded fuel rate, torque, or power), choose the governor... in the standard-setting part, you may consider emission levels stable without measurement after 50 h...
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Guidelines on Good Clinical Laboratory Practice
Ezzelle, J.; Rodriguez-Chavez, I. R.; Darden, J. M.; Stirewalt, M.; Kunwar, N.; Hitchcock, R.; Walter, T.; D’Souza, M. P.
2008-01-01
A set of Good Clinical Laboratory Practice (GCLP) standards that embraces both the research and clinical aspects of GLP were developed utilizing a variety of collected regulatory and guidance material. We describe eleven core elements that constitute the GCLP standards with the objective of filling a gap for laboratory guidance, based on IND sponsor requirements, for conducting laboratory testing using specimens from human clinical trials. These GCLP standards provide guidance on implementing GLP requirements that are critical for laboratory operations, such as performance of protocol-mandated safety assays, peripheral blood mononuclear cell processing and immunological or endpoint assays from biological interventions on IND-registered clinical trials. The expectation is that compliance with the GCLP standards, monitored annually by external audits, will allow research and development laboratories to maintain data integrity and to provide immunogenicity, safety, and product efficacy data that is repeatable, reliable, auditable and that can be easily reconstructed in a research setting. PMID:18037599
NASA Astrophysics Data System (ADS)
Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina
2006-08-01
This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed to measure the quality standards. We therefore designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about objects to be rejected in order to find the optimal discrimination capability of the system. Based on this set of features, the pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.
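One widely used correlation-plane feature is the peak-to-correlation energy; the sketch below shows how such a feature could be computed for a candidate defect signature. The image, template and any decision threshold here are hypothetical, not the system described in the paper.

```python
import numpy as np

def correlation_plane(image, template):
    """Cross-correlate a zero-mean template with the image via FFT (matched-filter style)."""
    t = template - template.mean()
    t_padded = np.zeros_like(image)
    t_padded[:t.shape[0], :t.shape[1]] = t
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(t_padded))))

def peak_to_correlation_energy(plane):
    """A sharp, energetic peak suggests the reference pattern (e.g., a defect) is present."""
    peak = plane.max()
    return float(peak ** 2 / np.sum(plane ** 2))

rng = np.random.default_rng(0)
image = rng.random((128, 128))               # stand-in for an inspection image
template = image[40:56, 60:76].copy()        # stand-in for a known defect signature
pce = peak_to_correlation_energy(correlation_plane(image, template))
print(f"PCE feature: {pce:.4f}")             # a classifier compares such features to tuned thresholds
```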
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
The Machine Tool Advanced Skills Technology (MAST) consortium was formed to address the shortage of skilled workers for the machine tools and metals-related industries. Featuring six of the nation's leading advanced technology centers, the MAST consortium developed, tested, and disseminated industry-specific skill standards and model curricula for…
Validation of Proposed Metrics for Two-Body Abrasion Scratch Test Analysis Standards
NASA Technical Reports Server (NTRS)
Kobrick, Ryan L.; Klaus, David M.; Street, Kenneth W., Jr.
2011-01-01
The objective of this work was to evaluate a set of standardized metrics proposed for characterizing a surface that has been scratched from a two-body abrasion test. This is achieved by defining a new abrasion region termed Zone of Interaction (ZOI). The ZOI describes the full surface profile of all peaks and valleys, rather than just measuring a scratch width as currently defined by the ASTM G 171 Standard. The ZOI has been found to be at least twice the size of a standard width measurement, in some cases considerably greater, indicating that at least half of the disturbed surface area would be neglected without this insight. The ZOI is used to calculate a more robust data set of volume measurements that can be used to computationally reconstruct a resultant profile for detailed analysis. Documenting additional changes to various surface roughness parameters also allows key material attributes of importance to ultimate design applications to be quantified, such as depth of penetration and final abraded surface roughness. Data are presented to show that different combinations of scratch tips and abraded materials can actually yield the same scratch width, but result in different volume displacement or removal measurements and therefore, the ZOI method is more discriminating than the ASTM method scratch width. Furthermore, by investigating the use of custom scratch tips for our specific needs, the usefulness of having an abrasion metric that can measure the displaced volume in this standardized manner, and not just by scratch width alone, is reinforced. This benefit is made apparent when a tip creates an intricate contour having multiple peaks and valleys within a single scratch. This work lays the foundation for updating scratch measurement standards to improve modeling and characterization of three-body abrasion test results.
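A sketch of how ZOI-style quantities might be reduced from a single cross-sectional scratch profile: the lateral extent of the disturbed zone plus the removed and displaced cross-sections (volume per unit scratch length). The profile shape, noise threshold and units below are assumptions, not the paper's procedure.

```python
import numpy as np

def zoi_metrics(x_mm, profile_um, noise_um=0.05):
    """Extent of the disturbed zone and removed/displaced cross-sections from one profile,
    with heights given relative to the undisturbed surface."""
    z = np.asarray(profile_um, float)
    disturbed = np.abs(z) > noise_um                   # peaks and valleys beyond surface noise
    if not disturbed.any():
        return 0.0, 0.0, 0.0
    width = x_mm[disturbed][-1] - x_mm[disturbed][0]   # ZOI extent, mm
    dx = np.diff(x_mm)
    mid = 0.5 * (z[1:] + z[:-1])
    removed = float(np.sum(np.clip(-mid, 0, None) * dx))    # valley area, um*mm
    displaced = float(np.sum(np.clip(mid, 0, None) * dx))   # pile-up area, um*mm
    return width, removed, displaced

# Hypothetical profile: a groove flanked by pile-up shoulders
x = np.linspace(-1.0, 1.0, 401)                                         # mm across the scratch
z = -5.0 * np.exp(-(x / 0.2) ** 2) + 1.5 * np.exp(-((np.abs(x) - 0.35) / 0.1) ** 2)
width, removed, displaced = zoi_metrics(x, z)
print(f"ZOI width {width:.2f} mm, removed {removed:.2f}, displaced {displaced:.2f} (um*mm)")
```

Summing such cross-sections along the scratch gives the volume measurements that the ZOI approach uses in place of a single width.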
Intraobserver reliability of contact pachymetry in children.
Weise, Katherine K; Kaminski, Brett; Melia, Michele; Repka, Michael X; Bradfield, Yasmin S; Davitt, Bradley V; Johnson, David A; Kraker, Raymond T; Manny, Ruth E; Matta, Noelle S; Schloff, Susan
2013-04-01
Central corneal thickness (CCT) is an important measurement in the treatment and management of pediatric glaucoma and potentially of refractive error, but data regarding reliability of CCT measurement in children are limited. The purpose of this study was to evaluate the reliability of CCT measurement with the use of handheld contact pachymetry in children. We conducted a multicenter intraobserver test-retest reliability study of more than 3,400 healthy eyes in children aged from newborn to 17 years by using a handheld contact pachymeter (Pachmate DGH55; DGH Technology Inc, Exton, PA) in 2 clinical settings--with the use of topical anesthesia in the office and with the patient under general anesthesia in a surgical facility. The overall standard error of measurement, including only measurements with standard deviation ≤5 μm, was 8 μm; the corresponding coefficient of repeatability, or limits within which 95% of test-retest differences fell, was ±22.3 μm. However, standard error of measurement increased as CCT increased, from 6.8 μm for CCT less than 525 μm, to 12.9 μm for CCT 625 μm and greater. The standard error of measurement including measurements with standard deviation >5 μm was 10.5 μm. Age, sex, race/ethnicity group, and examination setting did not influence the magnitude of test-retest differences. CCT measurement reliability in children via the Pachmate DGH55 handheld contact pachymeter is similar to that reported for adults. Because thicker CCT measurements are less reliable than thinner measurements, a second measure may be helpful when the first exceeds 575 μm. Reliability is also improved by disregarding measurements with instrument-reported standard deviations >5 μm. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.
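The reported figures follow the usual test-retest relationships (the coefficient of repeatability is about 2.77 times the standard error of measurement, so an SEM of 8 μm gives roughly ±22 μm). A minimal sketch on simulated pachymetry pairs, not the study data:

```python
import numpy as np

def test_retest_reliability(first_um, second_um):
    """Within-subject SD (standard error of measurement) and 95% coefficient of repeatability
    from paired test-retest measurements (Bland-Altman-style formulas, assuming no bias)."""
    d = np.asarray(first_um, float) - np.asarray(second_um, float)
    sem = np.sqrt(np.mean(d ** 2) / 2.0)     # within-subject SD from paired differences
    cor = 1.96 * np.sqrt(2.0) * sem          # 95% limits for a test-retest difference
    return sem, cor

rng = np.random.default_rng(1)
true_cct = rng.normal(550, 35, 200)          # hypothetical central corneal thickness, um
first = true_cct + rng.normal(0, 8, 200)     # measurement error SD of ~8 um, as in the study
second = true_cct + rng.normal(0, 8, 200)
sem, cor = test_retest_reliability(first, second)
print(f"SEM ~ {sem:.1f} um, coefficient of repeatability ~ +/-{cor:.1f} um")
```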
2013-06-01
ABBREVIATIONS: ANSI (American National Standards Institute), ASIS (American Society of Industrial Security), CCTV (Closed Circuit Television), CONOPS...is globally recognized for the development and maintenance of standards. ASTM defines a specification as an explicit set of requirements...www.rkb.us/saver/. One of the SAVER reports, titled CCTV Technology Handbook, has a chapter on system design. The report uses terms like functional
Kurth, Ann E; Severynen, Anneleen; Spielberg, Freya
2013-08-01
HIV testing in emergency departments (EDs) remains underutilized. The authors evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Nonacute adult ED patients were randomly assigned to a computer tool (CARE) plus rapid HIV testing before a standard visit (n = 258) or to a standard visit (n = 259) with chart access. The authors assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-White and 58% male; median age was 37 years. In the CARE arm, nearly all (251/258) of the patients completed the session and received HIV results; four declined to consent to the test. HIV risks were reported by 54% of users; one participant was confirmed HIV-positive, and two were confirmed false-positive (seroprevalence 0.4%, 95% CI [0.01, 2.2]). Half (55%) of the patients preferred computerized rather than face-to-face counseling for future HIV testing. In the standard arm, one HIV test and two referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches.
Code of Federal Regulations, 2012 CFR
2012-04-01
... shall also include the manufacturer's name, plant location, and shelf life. (c) Periodic tests and quality assurance. Under the procedures set forth in § 200.935(d)(8) concerning periodic tests and quality... administrator. (2) The administrator shall also review the quality assurance procedures twice a year to assure...
Code of Federal Regulations, 2011 CFR
2011-04-01
... shall also include the manufacturer's name, plant location, and shelf life. (c) Periodic tests and quality assurance. Under the procedures set forth in § 200.935(d)(8) concerning periodic tests and quality... administrator. (2) The administrator shall also review the quality assurance procedures twice a year to assure...
ERIC Educational Resources Information Center
Cirignano, Sherri M.; Hughes, Luanne J.; Wu-Jung, Corey J.; Morgan, Kathleen; Grenci, Alexandra; Savoca, LeeAnne
2013-01-01
The Healthy, Hunger-Free Kids Act (HHFKA) of 2010 sets new nutrition standards for schools, requiring them to serve a greater variety and quantity of fruits and vegetables. Extension educators in New Jersey partnered with school nutrition professionals to implement a school wellness initiative that included taste-testing activities to support…
Development and Standardization of the Air Force Officer Qualifying Test Form M.
ERIC Educational Resources Information Center
Miller, Robert E.
Air Force Officer Qualifying Test (AFOQT) Form M was constructed as a replacement for AFOQT Form L in Fiscal Year 1974. The new form serves the same purposes as its predecessor and possesses basically the same characteristics. It yields Pilot, Navigator-Technical, Officer Quality, Verbal, and Quantitative composite scores. Three sets of conversion…
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
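A small numerical illustration of the "total deviation" idea: the narrowest interval around the homeostatic set point containing 95% of results, and how little it widens when the analytical SD is held to 0.15 times the intra-individual SD. Note that the paper's roughly 12% ceiling also budgets for systematic error that barely escapes QC detection, which this toy calculation leaves out.

```python
import math

def deviation_95(bias, sd):
    """Smallest d such that 95% of results fall within d of the homeostatic set point,
    for results distributed N(bias, sd) around that set point (solved by bisection)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    coverage = lambda d: cdf((d - bias) / sd) - cdf((-d - bias) / sd)
    lo, hi = 0.0, 10.0 * (sd + abs(bias))
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if coverage(mid) < 0.95 else (lo, mid)
    return hi

sigma_I = 1.0                                                  # intra-individual (biological) SD
biology_only = deviation_95(0.0, sigma_I)
with_analysis = deviation_95(0.0, math.sqrt(sigma_I**2 + (0.15 * sigma_I)**2))
print(f"inflation from analytical random error alone: {with_analysis / biology_only:.3f}")
```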
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan
This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.
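Empirical validation of this kind ultimately compares simulated and measured data series; one common summary (in the spirit of ASHRAE Guideline 14 calibration metrics, not something prescribed by the ORNL plan itself) is the normalized mean bias error and CV(RMSE), sketched here on synthetic hourly data:

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """NMBE and CV(RMSE), both in percent; signs follow the (simulated - measured) convention."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    nmbe = 100.0 * np.sum(s - m) / (len(m) * m.mean())
    cvrmse = 100.0 * np.sqrt(np.mean((s - m) ** 2)) / m.mean()
    return nmbe, cvrmse

# Hypothetical hourly RTU electricity use (kWh) and a slightly biased simulation of it
rng = np.random.default_rng(42)
measured = 5.0 + 2.0 * np.sin(np.linspace(0, 20 * np.pi, 2000)) ** 2 + rng.normal(0, 0.3, 2000)
simulated = measured * 1.03 + rng.normal(0, 0.4, 2000)
nmbe, cvrmse = calibration_metrics(measured, simulated)
print(f"NMBE = {nmbe:.1f}%, CV(RMSE) = {cvrmse:.1f}%")   # e.g. +/-10% and 30% are typical hourly criteria
```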
Workshop on standards in biomass for energy and chemicals: proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milne, T.A.
1984-11-01
In the course of reviewing standards literature, visiting prominent laboratories and research groups, attending biomass meetings and corresponding widely, a whole set of standards needs was identified, the most prominent of which were: biomass standard reference materials, research materials and sample banks; special collections of microorganisms, clonal material, algae, etc.; standard methods of characterization of substrates and biomass fuels; standard tests and methods for the conversion and end-use of biomass; standard protocols for the description, harvesting, preparation, storage, and measurement of productivity of biomass materials in the energy context; glossaries of terms; and development of special tests for assay of enzymatic activity and related processes. There was also a recognition of the need for government, professional and industry support of consensus standards development and the dissemination of information on standards. Some 45 biomass researchers and managers met with key NBS staff to identify and prioritize standards needs. This was done through three working panels: the Panel on Standard Reference Materials (SRM's), Research Materials (RM's), and Sample Banks; the Panel on Production and Characterization; and the Panel on Tests and Methods for Conversion and End Use. This report gives a summary of the action items in standards development recommended unanimously by the workshop attendees. The proceedings of the workshop, and an appendix, contain an extensive written record of the findings of the workshop panelists and others regarding presently existing standards and standards issues and needs. Separate abstracts have been prepared for selected papers for inclusion in the Energy Database.
Study of materials for space processing
NASA Technical Reports Server (NTRS)
Lal, R. B.
1975-01-01
Materials were selected for device applications and their commercial use. Experimental arrangements were also made for electrical characterization of single crystals using electrical resistivity and Hall effect measurements. The experimental set-up was tested with some standard samples.
In most cases, if visible mold growth is present, sampling is unnecessary. Since no EPA or other federal limits have been set for mold or mold spores, sampling cannot be used to check a building's compliance with federal mold standards.
ERIC Educational Resources Information Center
School Science Review, 1972
1972-01-01
Short articles describing the construction of a self-testing device for learning ionic formulae, problems with "standard" experiments in crystallizing sulfur, preparative details for a cold-setting adhesive and vermillion dye, and data related to the industrial manufacture of sulphuric acid. (AL)
Progressing from initially ambiguous functional analyses: three case examples.
Tiger, Jeffrey H; Fisher, Wayne W; Toussaint, Karen A; Kodak, Tiffany
2009-01-01
Most often functional analyses are initiated using a standard set of test conditions, similar to those described by Iwata, Dorsey, Slifer, Bauman, and Richman [Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209 (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3-20, 1982)]. These test conditions involve the careful manipulation of motivating operations, discriminative stimuli, and reinforcement contingencies to determine the events related to the occurrence and maintenance of problem behavior. Some individuals display problem behavior that is occasioned and reinforced by idiosyncratic or otherwise unique combinations of environmental antecedents and consequences of behavior, which are unlikely to be detected using these standard assessment conditions. For these individuals, modifications to the standard test conditions or the inclusion of novel test conditions may result in clearer assessment outcomes. The current study provides three case examples of individuals whose functional analyses were initially undifferentiated; however, modifications to the standard conditions resulted in the identification of behavioral functions and the implementation of effective function-based treatments.
Hervella-Garcés, M; García-Gavín, J; Silvestre-Salvador, J F
2016-09-01
The Spanish standard patch test series, as recommended by the Spanish Contact Dermatitis and Skin Allergy Research Group (GEIDAC), has been updated for 2016. The new series replaces the 2012 version and contains the minimum set of allergens recommended for routine investigation of contact allergy in Spain from 2016 onwards. Four haptens -clioquinol, thimerosal, mercury, and primin- have been eliminated owing to a low frequency of relevant allergic reactions, while 3 new allergens -methylisothiazolinone, diazolidinyl urea, and imidazolidinyl urea- have been added. GEIDAC has also modified the recommended aqueous solution concentrations for the 2 classic, major haptens: methylchloroisothiazolinone and methylisothiazolinone, which are now to be tested at 200 ppm in aqueous solution, and formaldehyde, which is now to be tested in a 2% aqueous solution. Updating the Spanish standard series is one of the functions of GEIDAC, which is responsible for ensuring that the standard series is suited to the country's epidemiological profile and pattern of contact sensitization. Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
ERIC Educational Resources Information Center
Yin, Robert K.; Schmidt, R. James; Besag, Frank
2006-01-01
The study of federal education initiatives that takes place over multiple years in multiple settings often calls for aggregating and comparing data-in particular, student achievement data-across a broad set of schools, districts, and states. The need to track the trends over time is complicated by the fact that the data from the different schools,…
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
The research described in this report is a randomized controlled trial in which seventh- and eighth-grade students were randomly assigned to complete a set of 25 math questions delivered with either standard language or language that had undergone "linguistic modification" by the research team. The purpose of the study was to assess the…
Goh, Yong-Shian; Selvarajan, Sunil; Chng, Mui-Lee; Tan, Chee-Shiong; Yobas, Piyanee
2016-10-01
Conducting a mental status examination and suicide risk assessment is an important skill required of nurses in the clinical setting. Because nursing students often express anxiety and a lack of confidence in doing so, the use of standardized patients provides an excellent opportunity to practice and become proficient in this skill in a simulated environment. To explore the learning experience of undergraduate nursing students using standardized patients while practising their mental status examination and suicide risk assessment skills in a mental health nursing module. A pre- and post-test, single group quasi-experimental design was used in this study. A standard didactic tutorial session and a standardized patient session were conducted to evaluate the learning experience of undergraduate nursing students learning mental status examination and suicide risk assessment. Outcome measures for this study included the Student Satisfaction and Self-Confidence in Learning scale. Qualitative comments were also collected through open-ended questions. A university offering nursing programs from undergraduate to postgraduate level. A convenience sample of Year 2 undergraduate nursing students undertaking the mental health nursing module was included in this study. The standardized patient session significantly increased students' satisfaction and confidence levels before they were posted to a mental health setting for their clinical attachment. There was a significant difference in the self-confidence level of students who had taken care of a patient with mental illness, after adjusting for pre-test scores. Qualitative feedback obtained from students showed a positive outlook towards the use of standardized patients as an effective tool for translating didactic learning into practical skills. Using standardized patients in mental health nursing education enhanced the integration of didactic content into the clinical setting, allowing students to practice the assessment skills learned in the classroom and transfer them to the clinical area. The benefits of using standardized patients include allowing students to practice their communication skills and improving their confidence in conducting mental status examination and suicide risk assessment by reducing anxiety, as compared with traditional classroom and textbook-based pedagogy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Isotropy of low redshift type Ia supernovae: A Bayesian analysis
NASA Astrophysics Data System (ADS)
Andrade, U.; Bengaly, C. A. P.; Alcaniz, J. S.; Santos, B.
2018-04-01
The standard cosmology strongly relies upon the cosmological principle, which consists of the hypotheses of large-scale isotropy and homogeneity of the Universe. Testing these assumptions is, therefore, crucial to determining whether there are deviations from the standard cosmological paradigm. In this paper, we use the latest type Ia supernova compilations, namely JLA and Union2.1, to test the cosmological isotropy at low redshift ranges (z < 0.1). This is performed through a Bayesian selection analysis, in which we compare the standard, isotropic model with another one including a dipole correction due to peculiar velocities. The full covariance matrix of SN distance uncertainties is taken into account. We find that the JLA sample favors the standard model, whilst the Union2.1 results are inconclusive, yet the constraints from both compilations are in agreement with previous analyses. We conclude that there is no evidence for a dipole anisotropy from nearby supernova compilations, although this test should be greatly improved with the data sets from upcoming cosmological surveys.
Sassen, J
2000-08-01
The livestock health care service is also closely involved and interested in the surveillance of drinking water. However, in order to examine the water immediately "on the fly", test kits have to be provided which offer results comparable to those obtained in laboratories according to official prescriptions. The German Army was confronted with a similar situation during the recently performed missions in crisis regions. In the early stage of a mission, laboratory equipment is usually not yet established. Therefore, a set of test kits suitable for mobile microbiological examination of drinking water was compiled. This set was extensively examined in comparison with reference methods. In conclusion, it is shown that the mobile set yields equal or even better results compared with those obtained according to legally prescribed standard procedures.
Garvin, Jennifer H; DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K
2012-01-01
Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veteran Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). An EF value of <40% can be accurately identified in VA echocardiogram reports. An automated information extraction system can be used to accurately extract EF for quality measurement.
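A sketch of the kind of regular-expression extraction the abstract describes; the pattern, the <40% rule and the examples are illustrative assumptions, not the VA system's actual rules or expressions.

```python
import re

# Hypothetical pattern: a keyword, then a value or range, ending in a percent sign
EF_PATTERN = re.compile(
    r"\b(?:ejection\s+fraction|LVEF|EF)\b[^0-9%]{0,20}"
    r"(?P<low>\d{1,2})(?:\s*[-–to]+\s*(?P<high>\d{1,2}))?\s*%",
    re.IGNORECASE,
)

def extract_ef(report_text: str):
    """Return the (low, high) EF percentages found in free text, or None."""
    m = EF_PATTERN.search(report_text)
    if not m:
        return None
    low = int(m.group("low"))
    high = int(m.group("high")) if m.group("high") else low
    return low, high

def qualifies_low_ef(report_text: str, threshold: int = 40) -> bool:
    """Document-level classification analogous to the paper's 'EF < 40%' measure."""
    ef = extract_ef(report_text)
    return ef is not None and ef[1] < threshold

print(extract_ef("The left ventricular ejection fraction is estimated at 30-35%."))  # (30, 35)
print(qualifies_low_ef("LVEF: 55 %"))                                                # False
```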
Evaluation of RSA set-up from a clinical biplane fluoroscopy system for 3D joint kinematic analysis.
Bonanzinga, Tommaso; Signorelli, Cecilia; Bontempi, Marco; Russo, Alessandro; Zaffagnini, Stefano; Marcacci, Maurilio; Bragonzoni, Laura
2016-01-01
Dynamic roentgen stereophotogrammetric analysis (RSA), a technique currently based only on customized radiographic equipment, has been shown to be a very accurate method for detecting three-dimensional (3D) joint motion. The aim of the present work was to evaluate the applicability of an innovative RSA set-up for in vivo knee kinematic analysis, using a biplane fluoroscopic image system. To this end, the Authors describe the set-up as well as a possible protocol for clinical knee joint evaluation, and the accuracy of the kinematic measurements is assessed. The Authors evaluated the accuracy of 3D kinematic analysis of the knee in a new RSA set-up, based on a commercial biplane fluoroscopy system integrated into the clinical environment. The study was organized in three main phases: an in vitro test under static conditions, an in vitro test under dynamic conditions reproducing a flexion-extension range of motion (ROM), and an in vivo analysis of the flexion-extension ROM. For each test, the following were calculated as an indication of the tracking accuracy: mean, minimum and maximum values and standard deviation of the error of rigid body fitting. In terms of rigid body fitting, in vivo test errors were found to be 0.10±0.05 mm. Phantom tests in static and kinematic conditions showed precision levels, for translations and rotations, below 0.1 mm/0.2° and below 0.5 mm/0.3°, respectively, for all directions. The results of this study suggest that kinematic RSA can be successfully performed using a standard clinical biplane fluoroscopy system for the acquisition of slow movements of the lower limb. A kinematic RSA set-up using a clinical biplane fluoroscopy system is potentially applicable and provides a useful method for obtaining better characterization of joint biomechanics.
ERIC Educational Resources Information Center
Tannenbaum, Richard J.; Wylie, E. Caroline
2008-01-01
The Common European Framework of Reference (CEFR) describes language proficiency in reading, writing, speaking, and listening on a 6-level scale. In this study, English-language experts from across Europe linked CEFR levels to scores on three tests: the TOEFL® iBT test, the TOEIC® assessment, and the TOEIC "Bridge"™ test.…
NASA Technical Reports Server (NTRS)
Rumbaugh, Duane M.; Washburn, David A.; Savage-Rumbaugh, E. S.; Hopkins, William D.; Richardson, W. K.
1991-01-01
Automation of a computerized test system for comparative primate research is shown to improve the results of learning in standard paradigms. A mediational paradigm is used to determine the degree to which criterion in the learning-set testing reflects stimulus-response associative or mediational learning. Rhesus monkeys are shown to exhibit positive transfer as the criterion levels are shifted upwards, and the effectiveness of the computerized testing system is confirmed.
Cryogenic insulation standard data and methodologies
NASA Astrophysics Data System (ADS)
Demko, J. A.; Fesmire, J. E.; Johnson, W. L.; Swanger, A. M.
2014-01-01
Although some standards exist for thermal insulation, few address the sub-ambient temperature range and cold-side temperatures below 100 K. Standards for cryogenic insulation systems require cryostat testing and data analysis that will allow the development of the tools needed by design engineers and thermal analysts for the design of practical cryogenic systems. Thus, this critically important information can provide reliable data and methodologies for industrial efficiency and energy conservation. Two Task Groups have been established in the area of cryogenic insulation systems under ASTM International's Committee C16 on Thermal Insulation: WK29609 - New Standard for Thermal Performance Testing of Cryogenic Insulation Systems and WK29608 - Standard Practice for Multilayer Insulation in Cryogenic Service. The Cryogenics Test Laboratory of NASA Kennedy Space Center and the Thermal Energy Laboratory of LeTourneau University are conducting an Inter-Laboratory Study (ILS) of selected insulation materials. Each lab carries out measurements of the thermal properties of these materials using identical flat-plate boil-off calorimeter instruments. Parallel testing will provide the comparisons necessary to validate the measurements and methodologies. Here we discuss test methods, some initial data in relation to the experimental approach, and the manner of reporting the thermal performance data. This initial study of insulation materials for sub-ambient temperature applications is aimed at paving the way for further ILS comparative efforts that will produce standard data sets for several commercial materials. Discrepancies found between measurements will be used to improve the testing and data reduction techniques being developed as part of the future ASTM International standards.
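Flat-plate boil-off calorimetry of the kind used in the ILS reduces a steady-state boil-off rate to a heat leak and an effective thermal conductivity. The sketch below shows that reduction with assumed property values and geometry; it is not the ASTM/NASA test procedure itself.

```python
def effective_thermal_conductivity(boiloff_g_per_s, h_fg_J_per_g, area_m2,
                                   thickness_m, t_hot_K, t_cold_K):
    """k_e = Q * L / (A * dT), with Q inferred from the steady-state boil-off rate."""
    q_watts = boiloff_g_per_s * h_fg_J_per_g          # heat leak through the specimen
    return q_watts * thickness_m / (area_m2 * (t_hot_K - t_cold_K))

# Example with assumed values: liquid-nitrogen calorimeter (h_fg ~ 199 J/g), 25 mm specimen,
# 293 K warm boundary and 78 K cold boundary
k_e = effective_thermal_conductivity(boiloff_g_per_s=0.010, h_fg_J_per_g=199.0,
                                     area_m2=0.10, thickness_m=0.025,
                                     t_hot_K=293.0, t_cold_K=78.0)
print(f"effective thermal conductivity ~ {k_e * 1000:.2f} mW/m-K")
```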
Refractory Metal Heat Pipe Life Test - Test Plan and Standard Operating Procedures
NASA Technical Reports Server (NTRS)
Martin, J. J.; Reid, R. S.
2010-01-01
Refractory metal heat pipes developed during this project shall be subjected to various operating conditions to evaluate life-limiting corrosion factors. To accomplish this objective, various parameters shall be investigated, including the effect of temperature and mass fluence on long-term corrosion rate. The test series will begin with a performance test of one module to evaluate its performance and to establish the temperature and power settings for the remaining modules. The performance test will be followed by round-the-clock testing of 16 heat pipes. All heat pipes shall be nondestructively inspected at 6-month intervals. At longer intervals, specific modules will be destructively evaluated. Both the nondestructive and destructive evaluations shall be coordinated with Los Alamos National Laboratory. During the processing, setup, and testing of the heat pipes, standard operating procedures shall be developed. Initial procedures are listed here and, as hardware is developed, will be updated, incorporating findings and lessons learned.
Effect on moisture permeability of typewriting on unit dose package surfaces.
Rackson, J T; Zellhofer, M J; Birmingham, P H
1984-10-01
The effects of typewriting on labels of two unit dose packages with respect to moisture permeability were examined. Using an electric typewriter, a standard label format was imprinted on two different types of class A unit dose packages: (1) a heat-sealed paper-backed foil and cellofilm strip pouch, and (2) a copolyester and polyethylene multiple-cup blister with a heat-sealed paper-backed foil and cellofilm cover. The labels were typed at various typing-element impact settings. The official USP test for water permeation was then performed on typed packages and untyped control packages. The original untyped packages were confirmed to be USP class A quality. The packages for which successively harder impact settings were used showed a corresponding increase in moisture permeability. This resulted in a lowering of USP package ratings from class A to class B and D, some of which would be unsuitable for use in any unit dose system under current FDA repackaging standards. Typing directly onto the label of a unit dose package before it is sealed will most likely damage the package and possibly make it unfit for use. Pharmacists who must type labels for the unit dose packages studied should use the lowest possible typewriter impact setting and test for damage using the USP moisture-permeation test.
Geostatistics as a validation tool for setting ozone standards for durum wheat.
De Marco, Alessandra; Screpanti, Augusto; Paoletti, Elena
2010-02-01
Which is the best standard for protecting plants from ozone? To answer this question, we must validate the standards by testing biological responses vs. ambient data in the field. A validation is missing for the European and USA standards, because the networks for ozone, meteorology and plant responses are spatially independent. We proposed geostatistics as a validation tool, and used durum wheat in central Italy as a test. The standards summarized ozone impact on yield better than hourly averages. Although the USA criteria explained ozone-induced yield losses better than the European criteria, the USA legal level (75 ppb) protected only 39% of sites. European exposure-based standards protected ≥90%. Reducing the USA level to the Canadian 65 ppb or using W126 protected 91% and 97% of sites, respectively. For a no-threshold accumulated stomatal flux, 22 mmol m-2 was suggested to protect 97% of sites. In a multiple regression, precipitation explained 22% and ozone explained <0.9% of yield variability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
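W126 is the sigmoidally weighted cumulative exposure index referred to above; a minimal sketch of its calculation on synthetic hourly ozone data (the weighting function is the standard one, the data are not from the study):

```python
import numpy as np

def w126_index(hourly_ppm):
    """Sigmoidally weighted cumulative ozone exposure over the supplied daylight hours;
    concentrations in ppm, result in ppm-hours."""
    c = np.asarray(hourly_ppm, float)
    weights = 1.0 / (1.0 + 4403.0 * np.exp(-126.0 * c))
    return float(np.sum(weights * c))

# Hypothetical daylight-hour ozone series for one growing-season month (ppm)
rng = np.random.default_rng(7)
ozone = np.clip(rng.normal(0.055, 0.02, 30 * 12), 0, None)   # 12 daylight hours x 30 days
print(f"monthly W126 ~ {w126_index(ozone):.1f} ppm-h")
```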
Standards for testing and clinical validation of seizure detection devices.
Beniczky, Sándor; Ryvlin, Philippe
2018-06-01
To increase the quality of studies on seizure detection devices, we propose standards for testing and clinical validation of such devices. We identified 4 key features that are important for studies on seizure detection devices: subjects, recordings, data analysis and alarms, and reference standard. For each of these features, we list the specific aspects that need to be addressed in the studies, and depending on these, studies are classified into 5 phases (0-4). We propose a set of outcome measures that need to be reported, and we propose standards for reporting the results. These standards will help in designing and reporting studies on seizure detection devices, they will give readers clear information on the level of evidence provided by the studies, and they will help regulatory bodies in assessing the quality of the validation studies. These standards are flexible, allowing classification of the studies into one of the 5 phases. We propose actions that can facilitate development of novel methods and devices. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
Caudle, Kelly E.; Dunnenberger, Henry M.; Freimuth, Robert R.; Peterson, Josh F.; Burlison, Jonathan D.; Whirl-Carrillo, Michelle; Scott, Stuart A.; Rehm, Heidi L.; Williams, Marc S.; Klein, Teri E.; Relling, Mary V.; Hoffman, James M.
2017-01-01
Introduction: Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Materials and methods: Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Results: Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached, with 90% of experts agreeing to the final sets of pharmacogenetic terms. Discussion: The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2), 215–223. PMID:27441996
Low typing endurance in keyboard workers with work-related upper limb disorder
Povlsen, Bo
2011-01-01
Objective To compare results of typing endurance and pain before and after a standardized functional test. Design A standardized, previously published typing test on a standard QWERTY keyboard. Setting An outpatient hospital environment. Participants Sixty-one keyboard- and mouse-operating patients with WRULD and six normal controls. Main outcome measure Pain severity before and after the test, typing endurance and speed were recorded. Results Thirty-two patients could not complete the test before pain reached VAS 5, and this group typed for a mean of only 11 minutes. The control group and the remaining group of 29 patients completed the test. A two-tailed Student's t test was used for evaluation. The endurance was significantly shorter in the patient group that could not complete the test (P < 0.00001) and the pain levels were also higher in this group both before (P = 0.01) and after the test (P = 0.0003). Both patient groups had more pain in the right than the left hand, both before and after typing. Conclusions Low typing endurance correlates statistically with more resting pain in keyboard and mouse operators with work-related upper limb disorder and with statistically more pain after a standardized typing test. As the right hands had higher pain levels, typing alone may not be the cause of the pain, as the left hand on a QWERTY keyboard makes relatively more keystrokes than the right hand. PMID:21637395
Development of responder criteria for multicomponent non-pharmacological treatment in fibromyalgia.
Vervoort, Vera M; Vriezekolk, Johanna E; van den Ende, Cornelia H
2017-01-01
There is a need to identify individual treatment success in patients with fibromyalgia (FM) who received non-pharmacological treatment. The present study described responder criteria for multicomponent non-pharmacological treatment in FM, and estimated and compared their sensitivity and specificity. Candidate responder sets were 1) identified in literature; and 2) formulated by expert group consensus. All candidate responder sets were tested in a cohort of 129 patients with FM receiving multicomponent non-pharmacological treatment. We used two gold standards (both therapist's and patient's perspective), assessed at six months after the start of treatment. Seven responder sets were defined (three identified in literature and four formulated by expert group consensus), and comprised combinations of domains of 1) pain; 2) fatigue; 3) patient global assessment (PGA); 4) illness perceptions; 5) limitations in activities of daily living (ADL); and 6) sleep. The sensitivity and specificity of literature-based responder sets (n=3) ranged between 17%-99% and 15%-95% respectively, whereas the expert-based responder sets (n=4) performed slightly better with regard to sensitivity (range 41%-81%) and specificity (range 50%-96%). Of the literature-based responder sets the OMERACT-OARSI responder set with patient's gold standard performed best (sensitivity 63%, specificity 75% and ROC area = 0.69). Overall, the expert-based responder set comprising the domains illness perceptions and limitations in ADL with patient's gold standard performed best (sensitivity 47%, specificity 96% and ROC area = 0.71). We defined sets of responder criteria for multicomponent non-pharmacological treatment in fibromyalgia. Further research should focus on the validation of those sets with acceptable performance.
Reducing animal experimentation in foot-and-mouth disease vaccine potency tests.
Reeve, Richard; Cox, Sarah; Smitsaart, Eliana; Beascoechea, Claudia Perez; Haas, Bernd; Maradei, Eduardo; Haydon, Daniel T; Barnett, Paul
2011-07-26
The World Organisation for Animal Health (OIE) Terrestrial Manual and the European Pharmacopoeia (EP) still prescribe live challenge experiments for foot-and-mouth disease virus (FMDV) immunogenicity and vaccine potency tests. However, the EP allows for other validated tests for the latter, and specifically in vitro tests if a "satisfactory pass level" has been determined; serological replacements are also currently in use in South America. Much research has therefore focused on validating both ex vivo and in vitro tests to replace live challenge. However, insufficient attention has been given to the sensitivity and specificity of the "gold standard" in vivo test being replaced, despite this information being critical to determining what should be required of its replacement. This paper aims to redress this imbalance by examining the current live challenge tests and their associated statistics and determining the confidence that we can have in them, thereby setting a standard for candidate replacements. It determines that the statistics associated with the current EP PD(50) test are inappropriate given our domain knowledge, but that the OIE test statistics are satisfactory. However, it has also identified a new set of live animal challenge test regimes that provide similar sensitivity and specificity to all of the currently used OIE tests using fewer animals (16 including controls), and can also provide further savings in live animal experiments in exchange for small reductions in sensitivity and specificity. Copyright © 2011 Elsevier Ltd. All rights reserved.
The Mediating Relation between Symbolic and Nonsymbolic Foundations of Math Competence
Price, Gavin R.; Fuchs, Lynn S.
2016-01-01
This study investigated the relation between symbolic and nonsymbolic magnitude processing abilities and 2 standardized measures of math competence (WRAT Arithmetic and KeyMath Numeration) in 150 3rd-grade children (mean age 9.01 years). Participants compared sets of dots and pairs of Arabic digits with numerosities 1–9 for relative numerical magnitude. In line with previous studies, performance on both symbolic and nonsymbolic magnitude processing was related to math ability. Performance metrics combining reaction time and accuracy, as well as Weber fractions, were entered into mediation models with standardized math test scores. Results showed that symbolic magnitude processing ability fully mediates the relation between nonsymbolic magnitude processing and math ability, regardless of the performance metric or standardized test. PMID:26859564
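A minimal sketch of the mediation logic described above (nonsymbolic ability to symbolic ability to math score), using ordinary least squares on synthetic data; it illustrates the product-of-coefficients idea only and is not the authors' exact analysis, which may have used bootstrapped indirect effects.

```python
# Sketch of a simple mediation model (nonsymbolic -> symbolic -> math score)
# using ordinary least squares on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 150  # sample size from the abstract

nonsymbolic = rng.normal(0, 1, n)                      # e.g., dot-comparison efficiency
symbolic = 0.6 * nonsymbolic + rng.normal(0, 1, n)     # digit-comparison efficiency
math_score = 0.5 * symbolic + rng.normal(0, 1, n)      # standardized math test

X_a = sm.add_constant(nonsymbolic)
a = sm.OLS(symbolic, X_a).fit().params[1]              # path a: X -> M

X_bc = sm.add_constant(np.column_stack([symbolic, nonsymbolic]))
fit = sm.OLS(math_score, X_bc).fit()
b, c_prime = fit.params[1], fit.params[2]              # path b and direct effect c'

print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
# Full mediation is suggested when a*b is reliably non-zero and c' is near zero.
```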
[Procedure of seed quality testing and seed grading standard of Prunus humilis].
Wen, Hao; Ren, Guang-Xi; Gao, Ya; Luo, Jun; Liu, Chun-Sheng; Li, Wei-Dong
2014-11-01
So far, no corresponding quality test procedures or grading standards exist for the seed of Prunus humilis, which is one of the important botanical sources of Semen Pruni. We therefore set up test procedures adapted to the characteristics of P. humilis seed through studies of sampling, seed purity, thousand-grain weight, seed moisture, seed viability, and germination percentage. Fifty seed specimens of P. humilis were tested, and the related data were analyzed by cluster analysis. Through this research, the seed quality test procedure was developed and the seed quality grading standard was formulated. The seed quality of each grade should meet the following requirements: for first-grade seeds, germination percentage ≥ 68%, thousand-grain weight ≥ 383 g, purity ≥ 93%, seed moisture ≤ 5%; for second-grade seeds, germination percentage ≥ 26%, thousand-grain weight ≥ 266 g, purity ≥ 73%, seed moisture ≤ 9%; for third-grade seeds, germination percentage ≥ 10%, purity ≥ 50%, thousand-grain weight ≥ 08 g, seed moisture ≤ 13%.
NASA Astrophysics Data System (ADS)
Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco
2018-04-01
This paper describes a Learning-By-Examples (LBE) technique for performing quasi-real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the framework of LBE, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) is applied to generate a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set are evaluated and compared with the standard GRID/SVR approach.
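The offline/online LBE workflow can be sketched as follows; the toy forward model, parameter ranges, and hyperparameters stand in for a real eddy-current simulator and are assumptions, not the paper's actual set-up.

```python
# Sketch of the offline/online LBE workflow: full-factorial (GRID) sampling of
# flaw parameters, PLS compression of the simulated ECT signals, and SVR
# inversion of a noisy test signal. The "forward model" is a toy stand-in.
import numpy as np
from itertools import product
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

def forward_model(depth, length, rng):
    """Toy ECT signal (64 samples) for a flaw of given depth and length."""
    t = np.linspace(0, 1, 64)
    return depth * np.exp(-((t - length) ** 2) / 0.01) + 0.01 * rng.normal(size=64)

rng = np.random.default_rng(0)

# Offline phase: GRID sampling of the flaw parameter space.
depths = np.linspace(0.1, 1.0, 8)
lengths = np.linspace(0.2, 0.8, 8)
params = np.array(list(product(depths, lengths)))
signals = np.array([forward_model(d, l, rng) for d, l in params])

# PLS feature extraction, then SVR trained on the reduced features.
pls = PLSRegression(n_components=5).fit(signals, params)
features = pls.transform(signals)
svr_depth = SVR(C=10.0).fit(features, params[:, 0])

# Online phase: invert a noisy test signal in quasi real time.
test_signal = forward_model(0.55, 0.5, rng) + 0.05 * rng.normal(size=64)
test_feat = pls.transform(test_signal.reshape(1, -1))
print("estimated flaw depth:", svr_depth.predict(test_feat)[0])
```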
A new computer program for mass screening of visual defects in preschool children.
Briscoe, D; Lifshitz, T; Grotman, M; Kushelevsky, A; Vardi, H; Weizman, S; Biedner, B
1998-04-01
To test the effectiveness of a PC-based computer program for detecting vision disorders that could be used by non-trained personnel, and to determine the prevalence of visual impairment in a sample population of preschool children in the city of Beer-Sheba, Israel. In total, 292 preschool children, aged 4-6 years, were examined in the kindergarten setting, using the computer system and "gold standard" tests. Visual acuity and stereopsis were tested and compared using Snellen-type symbol charts and random dot stereograms, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and kappa statistic were evaluated. A computer pseudo Worth four dot test was also performed but could not be compared with the standard Worth four dot test owing to the inability of many children to count. Agreement between computer and gold standard tests was 83% and 97.3% for visual acuity and stereopsis, respectively. The sensitivity of the computer stereogram was only 50%, but it had a specificity of 98.9%, whereas the sensitivity and specificity of the visual acuity test were 81.5% and 83%, respectively. The positive predictive value of both tests was about 63%. 27.7% of children tested had a visual acuity of 6/12 or less and stereopsis was absent in 28% using standard tests. Impairment of fusion was found in 5% of children using the computer pseudo Worth four dot test. The computer program was found to be stimulating, rapid, and easy to perform. The wide availability of computers in schools and at home allows it to be used as an additional screening tool by non-trained personnel, such as teachers and parents, but it is not a replacement for standard testing.
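The agreement statistics quoted above (percent agreement alongside sensitivity, specificity, and a kappa statistic) can be computed from a 2x2 table as in the following sketch; the counts are illustrative, not the study's data.

```python
# Sketch of screening-test agreement statistics versus a gold standard.
# The 2x2 counts below are illustrative only.
import numpy as np

# Rows: computer test (fail/pass), columns: gold standard (fail/pass).
table = np.array([[22, 10],
                  [12, 248]])

n = table.sum()
observed = np.trace(table) / n                       # percent agreement
row_marg = table.sum(axis=1) / n
col_marg = table.sum(axis=0) / n
expected = np.sum(row_marg * col_marg)               # chance agreement
kappa = (observed - expected) / (1 - expected)       # Cohen's kappa

sensitivity = table[0, 0] / table[:, 0].sum()        # detecting "fail" cases
specificity = table[1, 1] / table[:, 1].sum()
print(f"agreement={observed:.2f} kappa={kappa:.2f} "
      f"sens={sensitivity:.2f} spec={specificity:.2f}")
```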
EU-US standards harmonization task group report: testing for ITS communications.
DOT National Transportation Integrated Search
2003-01-01
The Cape Cod Regional Transit Authority's (CCRTA) Advanced Public Transportation System (APTS) project is an application of Intelligent Transportation Systems (ITS) to fixed-route and paratransit operations in a rural transit setting. The purpose of ...
Establishment of Religion in Primary and Secondary Schools.
ERIC Educational Resources Information Center
Underwood, Julie K.
1989-01-01
A modified analysis of the "Lemon" test as set forth in Supreme Court opinions is explained, and relevant lower court cases are reviewed. Determines that the modified standard is heightened and consistently applied within K-12 education activities. (MLF)
Solvent-accessible surface area: How well can it be applied to hot-spot detection?
Martins, João M; Ramos, Rui M; Pimenta, António C; Moreira, Irina S
2014-03-01
A detailed comprehension of protein-based interfaces is essential for rational drug development. One of the key features of these interfaces is their solvent-accessible surface area (SASA) profile. With that in mind, we tested a group of 12 SASA-based features for their ability to correlate with and differentiate between hot-spots and null-spots. These were tested in three different data sets: explicit-water MD, implicit-water MD, and static PDB structures. We found no discernible improvement with the use of the more comprehensive data sets obtained from molecular dynamics. The features tested were shown to be capable of discerning between hot-spots and null-spots, while presenting low correlations. Residue standardization, such as relSASAi or rel/resSASAi, improved the features as a tool to predict ΔΔGbinding values. A new method using support vector machine learning algorithms was developed: SBHD (SASA-Based Hot-spot Detection). This method presents a precision, recall, and F1 score of 0.72, 0.81, and 0.76 for the training set and 0.91, 0.73, and 0.81 for an independent test set. Copyright © 2013 Wiley Periodicals, Inc.
Noise tests of a mixer nozzle-externally blown flap system
NASA Technical Reports Server (NTRS)
Goodykoontz, J. H.; Dorsch, R. G.; Groesbeck, D. E.
1973-01-01
Noise tests were conducted on a large scale model of an externally blown flap lift augmentation system, employing a mixer nozzle. The mixer nozzle consisted of seven flow passages with a total equivalent diameter of 40 centimeters. With the flaps in the 30 - 60 deg setting, the noise level below the wing was less with the mixer nozzle than when a standard circular nozzle was used. At the 10 - 20 deg flap setting, the noise levels were about the same when either nozzle was used. With retracted flaps, the noise level was higher when the mixer nozzle was used.
Cortesi, Marilisa; Bandiera, Lucia; Pasini, Alice; Bevilacqua, Alessandro; Gherardi, Alessandro; Furini, Simone; Giordano, Emanuele
2017-01-01
Quantifying gene expression at the single-cell level is fundamental for the complete characterization of synthetic gene circuits, due to the significant impact of noise and inter-cellular variability on the system's functionality. Commercial set-ups that allow the acquisition of a fluorescent signal at the single-cell level (flow cytometers or quantitative microscopes) are expensive apparatuses that are hardly affordable by small laboratories. A protocol that makes a standard optical microscope able to acquire quantitative, single-cell, fluorescent data from a bacterial population transformed with synthetic gene circuitry is presented. Single-cell fluorescence values, acquired with a microscope set-up and processed with custom-made software, are compared with results that were obtained with a flow cytometer in a bacterial population transformed with the same gene circuitry. The high correlation between data from the two experimental set-ups, with a correlation coefficient computed over the tested dynamic range > 0.99, proves that a standard optical microscope, when coupled with appropriate software for image processing, might be used for quantitative single-cell fluorescence measurements. The calibration of the set-up, together with its validation, is described. The experimental protocol described in this paper makes quantitative measurement of single-cell fluorescence accessible to laboratories equipped with standard optical microscope set-ups. Our method allows for an affordable measurement and quantification of inter-cellular variability; a better understanding of this phenomenon will improve our comprehension of cellular behaviors and the design of synthetic gene circuits. All the required software is freely available to the synthetic biology community (MUSIQ: Microscope flUorescence SIngle cell Quantification).
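A generic sketch of the per-cell quantification step (segment the frame, label cells, record a background-subtracted mean intensity per cell) is shown below; it uses scikit-image on a synthetic image and is not the MUSIQ implementation itself.

```python
# Sketch of per-cell fluorescence quantification from one microscope frame:
# threshold, label connected components, and take each cell's mean intensity.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)

# Synthetic fluorescence image: dark background with a few bright "cells".
img = rng.normal(10, 2, (256, 256))
for cy, cx, amp in [(60, 80, 120), (150, 200, 80), (200, 60, 150)]:
    yy, xx = np.ogrid[:256, :256]
    img += amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))

# Segmentation by global thresholding, then connected-component labelling.
mask = img > filters.threshold_otsu(img)
labels = measure.label(mask)

# Per-cell mean fluorescence (background-subtracted).
background = np.median(img[~mask])
cells = measure.regionprops(labels, intensity_image=img)
per_cell = [c.mean_intensity - background for c in cells if c.area > 20]
print("single-cell fluorescence values:", np.round(per_cell, 1))
```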
Measurement of Energy Performances for General-Structured Servers
NASA Astrophysics Data System (ADS)
Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong
2017-11-01
Energy consumption of servers in data centers is increasing rapidly along with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs, including voluntary labeling programs and mandatory energy performance standards, have been adopted or are being prepared in the US, the EU, and China. However, the energy performance of servers and the corresponding testing methods are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various server components on energy performance are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the proposed testing method, and the findings of the tests are discussed in the paper.
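One plausible way to fold measurements at several normalized load levels into a single efficiency figure is sketched below (performance per watt at each level, combined by geometric mean); this is an illustrative assumption, not the metric defined by the paper or by any particular standard.

```python
# Sketch of a normalized-workload efficiency summary for a server:
# performance-per-watt at each load point, combined with a geometric mean.
# All measurement values are hypothetical.
import numpy as np

# Hypothetical measurements at 25/50/75/100 % target load.
load_levels = np.array([0.25, 0.50, 0.75, 1.00])
throughput = np.array([120.0, 235.0, 340.0, 430.0])   # transactions/s
power_watts = np.array([95.0, 130.0, 170.0, 215.0])   # average active power

perf_per_watt = throughput / power_watts
efficiency_score = np.exp(np.mean(np.log(perf_per_watt)))  # geometric mean

idle_power = 70.0  # idle power is often reported separately
print(f"per-level perf/W: {np.round(perf_per_watt, 2)}")
print(f"overall efficiency score: {efficiency_score:.2f}, idle power: {idle_power} W")
```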
Route Learning Impairment in Temporal Lobe Epilepsy
Bell, Brian D.
2012-01-01
Memory impairment on neuropsychological tests is relatively common in temporal lobe epilepsy (TLE) patients. But memory rarely has been evaluated in more naturalistic settings. This study assessed TLE (n = 19) and control (n = 32) groups on a real-world route learning (RL) test. Compared to the controls, the TLE group committed significantly more total errors across the three RL test trials. RL errors correlated significantly with standardized auditory and visual memory and visual-perceptual test scores in the TLE group. In the TLE subset for whom hippocampal data were available (n = 14), RL errors also correlated significantly with left hippocampal volume. This is one of the first studies to demonstrate real-world memory impairment in TLE patients and its association with both mesial temporal lobe integrity and standardized memory test performance. The results support the ecological validity of clinical neuropsychological assessment. PMID:23041173
Somerville, Lyndsay; Bryant, Dianne; Willits, Kevin; Johnson, Andrew
2013-02-08
Shoulder complaints are the third most common musculoskeletal problem in the general population. There is an abundance of physical examination maneuvers for diagnosing shoulder pathology. The validity of these maneuvers has not been adequately addressed. We propose a large Phase III study to investigate the accuracy of these tests in an orthopaedic setting. We will recruit consecutive new shoulder patients who are referred to two tertiary orthopaedic clinics. We will select which physical examination tests to include using a modified Delphi process. The physician will take a thorough history from the patient and indicate their certainty about each possible diagnosis (certain the diagnosis is absent, present, or requires further testing). The clinician will only perform the physical examination maneuvers for diagnoses where uncertainty remains. We will consider arthroscopy the reference standard for patients who undergo surgery within 8 months of physical examination and magnetic resonance imaging with arthrogram for patients who do not. We will calculate the sensitivity, specificity, and positive and negative likelihood ratios and investigate whether combinations of the top tests provide stronger predictions of the presence or absence of disease. There are several considerations when performing a diagnostic study to ensure that the results are applicable in a clinical setting. These include: 1) including a representative sample; 2) selecting an appropriate reference standard; 3) avoiding verification bias; 4) blinding the interpreters of the physical examination tests to the interpretation of the gold standard; and 5) blinding the interpreters of the gold standard to the interpretation of the physical examination tests. The results of this study will inform clinicians of which tests, or combination of tests, successfully reduce diagnostic uncertainty, which tests are misleading, and how physical examination may affect the magnitude of the confidence the clinician feels about their diagnosis. The results of this study may reduce the number of costly and invasive imaging studies (MRI, CT or arthrography) that are requisitioned when uncertainty about diagnosis remains following history and physical exam. We also hope to reduce the variability between specialists in which maneuvers are used during physical examination and how they are used, all of which will assist in improving consistency of care between centres.
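The planned accuracy statistics can be illustrated with a small sketch: sensitivity, specificity, and likelihood ratios from a hypothetical 2x2 table, plus the post-test probability they imply for an assumed pre-test probability.

```python
# Sketch of diagnostic accuracy statistics for one examination maneuver
# against a reference standard. Counts and pre-test probability are
# illustrative only.
import numpy as np

#                 disease present   disease absent   (reference standard)
table = np.array([[45,              15],              # test positive
                  [10,              80]])             # test negative

sens = table[0, 0] / table[:, 0].sum()
spec = table[1, 1] / table[:, 1].sum()
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

def post_test_probability(pre_test_p, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    odds = pre_test_p / (1 - pre_test_p) * lr
    return odds / (1 + odds)

print(f"sens={sens:.2f} spec={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
print(f"post-test probability after a positive test: "
      f"{post_test_probability(0.30, lr_pos):.2f}")
```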
An Examination of a Teacher's Use of Authentic Assessment in an Urban Middle School Setting
ERIC Educational Resources Information Center
Stevens, Patricia
2013-01-01
Today in urban education, schools are forced to keep up and compete with students nationally through high-stakes testing. Standardized tests are often biased in nature and often do not measure the true ability of a student. Casas (2003) believes that all children can learn, but that they may learn differently. Therefore, using authentic assessments is an…
Code of Federal Regulations, 2014 CFR
2014-07-01
... are present, the ICI must test and verify the system's ability to find the faults (such as... vehicle. When no fault is present, the ICI must verify that after sufficient prep driving (typically one... i.e., no codes set and no light illuminated). (v) The ICI may not modify more than 300 vehicles in any...
40 CFR 86.094-28 - Compliance with emission standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the outlier procedure and averaging (as allowed under § 86.094-26(a)(6)(i)) to the same data set, the...) through (3) of this section. (1) All valid exhaust emission data from the tests required under § 86.094-26... § 86.094-29 for all tests conducted on all durability data vehicles of the combination selected under...
40 CFR 86.094-28 - Compliance with emission standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the outlier procedure and averaging (as allowed under § 86.094-26(a)(6)(i)) to the same data set, the...) through (3) of this section. (1) All valid exhaust emission data from the tests required under § 86.094-26... § 86.094-29 for all tests conducted on all durability data vehicles of the combination selected under...
40 CFR 86.094-28 - Compliance with emission standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the outlier procedure and averaging (as allowed under § 86.094-26(a)(6)(i)) to the same data set, the...) through (3) of this section. (1) All valid exhaust emission data from the tests required under § 86.094-26... § 86.094-29 for all tests conducted on all durability data vehicles of the combination selected under...
40 CFR 86.094-28 - Compliance with emission standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the outlier procedure and averaging (as allowed under § 86.094-26(a)(6)(i)) to the same data set, the...) through (3) of this section. (1) All valid exhaust emission data from the tests required under § 86.094-26... § 86.094-29 for all tests conducted on all durability data vehicles of the combination selected under...
The Impact of Cultural Capital on Secondary Student's Performance in Brazil
ERIC Educational Resources Information Center
Caprara, Bernardo
2016-01-01
The main goal of this study is to verify the effects of cultural capital on students' performance in an official test applied by the Brazilian government, as part of the National Assessment of Basic Education (Saeb). The data set used is from 2003 and involves 52,434 students. The standard test is applied every two years in the fields of…
Allometric Scaling of Wingate Anaerobic Power Test Scores in Women
ERIC Educational Resources Information Center
Hetzler, Ronald K.; Stickley, Christopher D.; Kimura, Iris F.
2011-01-01
In this study, we developed allometric exponents for scaling Wingate anaerobic test (WAnT) power data that are effective in controlling for body mass (BM) and lean body mass (LBM) and established a normative WAnT data set for college-age women. One hundred women completed a standard WAnT. Allometric exponents and percentile ranks for peak (PP)…
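The allometric approach can be sketched as a log-log regression that estimates the exponent b in P = a·BM^b and then reports body-mass-independent scaled scores; the data below are synthetic, not the study's normative sample.

```python
# Sketch of deriving an allometric scaling exponent b in P = a * BM**b by
# log-log regression, then expressing peak power as scaled (BM-independent)
# scores. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 100  # sample size from the abstract

body_mass = rng.normal(62, 8, n)                        # kg
true_b = 0.67                                           # illustrative exponent
peak_power = 12.0 * body_mass**true_b * rng.lognormal(0, 0.08, n)  # watts

# Fit log(P) = log(a) + b*log(BM); the slope is the allometric exponent.
b, log_a = np.polyfit(np.log(body_mass), np.log(peak_power), 1)
scaled = peak_power / body_mass**b                      # allometrically scaled PP

print(f"estimated exponent b = {b:.2f}")
print(f"scaled peak power: mean {scaled.mean():.1f}, SD {scaled.std(ddof=1):.1f}")
# Percentile ranks for normative tables could then be taken from `scaled`.
```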
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
NASA Astrophysics Data System (ADS)
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open-access data. We compare the individual and aggregated flood extent outputs from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012 and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open-access validation flood events, with associated observational data and descriptions, that provides a standard set of tests across different climates and hydraulic conditions.
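A sketch of one common way to score binary flood-extent maps against observations (hit rate, false alarm ratio, critical success index) and to test a simple aggregation rule is given below; the grids, the fake "models", and the majority-vote rule are illustrative assumptions, not the study's method.

```python
# Sketch: scoring binary flood-extent maps against an observed extent and
# comparing individual models with a majority-vote aggregate. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.random((100, 100)) < 0.30          # observed flooded cells

def noisy_model(obs, hit_p=0.75, false_p=0.10, rng=rng):
    """A fake model: reproduces part of the observation plus false alarms."""
    return (obs & (rng.random(obs.shape) < hit_p)) | (rng.random(obs.shape) < false_p)

models = [noisy_model(observed) for _ in range(6)]  # six global models

def scores(pred, obs):
    hits = np.sum(pred & obs)
    false_alarms = np.sum(pred & ~obs)
    misses = np.sum(~pred & obs)
    csi = hits / (hits + false_alarms + misses)     # critical success index
    return hits / (hits + misses), false_alarms / (hits + false_alarms), csi

for i, m in enumerate(models, 1):
    hr, far, csi = scores(m, observed)
    print(f"model {i}: hit rate {hr:.2f}, FAR {far:.2f}, CSI {csi:.2f}")

aggregate = np.sum(models, axis=0) >= 3             # majority-vote aggregation
hr, far, csi = scores(aggregate, observed)
print(f"aggregate: hit rate {hr:.2f}, FAR {far:.2f}, CSI {csi:.2f}")
```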
Standard test evaluation of graphite fiber/resin matrix composite materials for improved toughness
NASA Technical Reports Server (NTRS)
Chapman, Andrew J.
1984-01-01
Programs sponsored by NASA with the commercial transport manufacturers are developing the technology data base required to design and build composite wing and fuselage structures. To realize the full potential of composite structures in these strength-critical designs, material systems having improved ductility and interlaminar toughness are being sought. To promote systematic evaluation of new materials, NASA and the commercial transport manufacturers have selected and standardized a set of five common tests. These tests evaluate open-hole tension and compression performance, compression performance after impact at an energy level of 20 ft-lb, and resistance to delamination. Ten toughened resin matrix/graphite fiber composites were evaluated using this series of tests, and their performance is compared with that of a widely used composite system.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
A model-driven approach to information security compliance
NASA Astrophysics Data System (ADS)
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Evaluation of thermal cameras in quality systems according to ISO 9000 or EN 45000 standards
NASA Astrophysics Data System (ADS)
Chrzanowski, Krzysztof
2001-03-01
According to the international standards ISO 9001-9004 and EN 45001-45003, industrial plants and accreditation laboratories that have implemented quality systems according to these standards are required to evaluate the uncertainty of their measurements. Manufacturers of thermal cameras do not offer any data that could enable estimation of the measurement uncertainty of these imagers. Difficulty in determining the measurement uncertainty is an important limitation of thermal cameras for applications in industrial plants and the cooperating accreditation laboratories that have implemented these quality systems. A set of parameters for characterization of commercial thermal cameras, a measuring set, some results of testing of these cameras, a mathematical model of uncertainty, and software that enables quick calculation of the uncertainty of temperature measurements with thermal cameras are presented in this paper.
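A minimal GUM-style sketch of such an uncertainty calculation is shown below: component standard uncertainties combined in quadrature and expanded with a coverage factor of 2. The component names and values are placeholders, not the parameters of the paper's model.

```python
# Sketch of an uncertainty budget for a temperature measurement with a
# thermal camera: components combined in quadrature, expanded with k = 2.
import math

components_kelvin = {
    "calibration": 0.8,          # from the calibration certificate
    "emissivity setting": 1.2,   # sensitivity of the reading to emissivity error
    "ambient reflection": 0.6,
    "camera noise (NETD)": 0.1,
    "drift / repeatability": 0.4,
}

combined = math.sqrt(sum(u**2 for u in components_kelvin.values()))
expanded = 2.0 * combined        # approx. 95 % coverage for a normal distribution

for name, u in components_kelvin.items():
    print(f"{name:24s} u = {u:.2f} K")
print(f"combined standard uncertainty: {combined:.2f} K")
print(f"expanded uncertainty (k=2):    {expanded:.2f} K")
```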
Software Manages Documentation in a Large Test Facility
NASA Technical Reports Server (NTRS)
Gurneck, Joseph M.
2001-01-01
The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.
Ada Programming Support Environment (APSE) Evaluation and Validation (E&V) Team
1991-12-31
standards. The purpose of the team was to assist the project in several ways. Raymond Szymanski of Wright Research and Development Center (WRDC, now...debuggers, program library systems, and compiler diagnostics. The test suite does not include explicit tests for the existence of language features. The...support software is a set of tools and procedures which assist in preparing and executing the test suite, in extracting data from the results of
Wiltz, Jennifer L; Blanck, Heidi M; Lee, Brian; Kocot, S Lawrence; Seeff, Laura; McGuire, Lisa C; Collins, Janet
2017-10-26
Electronic information technology standards facilitate high-quality, uniform collection of data for improved delivery and measurement of health care services. Electronic information standards also aid information exchange between secure systems that link health care and public health for better coordination of patient care and better-informed population health improvement activities. We developed international data standards for healthy weight that provide common definitions for electronic information technology. The standards capture healthy weight data on the "ABCDs" of a visit to a health care provider that addresses initial obesity prevention and care: assessment, behaviors, continuity, identify resources, and set goals. The process of creating healthy weight standards consisted of identifying needs and priorities, developing and harmonizing standards, testing the exchange of data messages, and demonstrating use-cases. Healthy weight products include 2 message standards, 5 use-cases, 31 LOINC (Logical Observation Identifiers Names and Codes) question codes, 7 healthy weight value sets, 15 public-private engagements with health information technology implementers, and 2 technical guides. A logic model and action steps outline activities toward better data capture, interoperable systems, and information use. Sharing experiences and leveraging this work in the context of broader priorities can inform the development of electronic information standards for similar core conditions and guide strategic activities in electronic systems.
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs for predicting the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT test data sets, the highest accuracies of 96% and 97%, respectively, were achieved with SVMs. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and the liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value with CPT data nor the calculation of the standardized SPT value with SPT data is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
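The classification set-up can be sketched as follows with scikit-learn on synthetic SPT-style records; the input variables and the rule used to generate labels are illustrative assumptions, not the field data sets used in the paper.

```python
# Sketch of an SVM classifier for liquefaction occurrence on synthetic
# SPT-style inputs (blow count, cyclic stress ratio, fines content).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400

spt_n = rng.uniform(2, 40, n)                  # SPT blow count
csr = rng.uniform(0.05, 0.45, n)               # cyclic stress ratio
fines = rng.uniform(0, 40, n)                  # fines content, %

# Synthetic labels: liquefaction more likely for low blow count and high CSR.
liquefied = (csr - 0.008 * spt_n - 0.002 * fines + rng.normal(0, 0.04, n)) > 0.07

X = np.column_stack([spt_n, csr, fines])
X_train, X_test, y_train, y_test = train_test_split(
    X, liquefied, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVC(C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print(f"test-set accuracy: {model.score(X_test, y_test):.2f}")
```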
The development of a test methodology for the evaluation of EVA gloves
NASA Technical Reports Server (NTRS)
O'Hara, John M.; Cleland, John; Winfield, Dan
1988-01-01
This paper describes the development of a standardized set of tests designed to assess EVA-gloved hand capabilities in six measurement domains: range of motion, strength, tactile perception, dexterity, fatigue, and comfort. Based upon an assessment of general human-hand functioning and EVA task requirements, several tests within each measurement domain were developed to provide a comprehensive evaluation. All tests were designed to be conducted in a glove box with the bare hand as a baseline and the EVA glove at operating pressure.
Detection of gonococcal infection : pros and cons of a rapid test.
Vickerman, Peter; Peeling, Rosanna W; Watts, Charlotte; Mabey, David
2005-01-01
WHO estimates that 62 million cases of gonorrhea occur annually worldwide. Untreated infection can cause serious long-term complications, especially in women. In addition, Neisseria gonorrhoeae infection can facilitate HIV transmission, and babies born to infected mothers are at risk of ocular infection, which can lead to blindness. Where diagnostic facilities are lacking, gonorrhea can be treated syndromically. However, this inevitably leads to over-treatment, especially in women in whom the syndrome of vaginal discharge may be due not to N. gonorrhoeae infection but to several other more prevalent conditions. Over-treatment is a major concern because of widespread N. gonorrhoeae antibiotic resistance. Moreover, a high proportion of gonorrhea cases are asymptomatic and so do not present for syndromic management. Such cases will only be detected by screening tests. The gold standard test for the detection of N. gonorrhoeae is culture, which has high sensitivity and specificity. However, it requires well-trained staff and its performance is affected by specimen transport conditions. Other options include microscopy and tests that detect gonococcal antigen or nucleic acid. Nucleic acid amplification tests (NAATs) have higher sensitivity and can be used on non-invasive samples (urine). However, they can cross-react with other Neisseria species and are expensive, requiring highly trained staff and sophisticated equipment. In settings where patients are asked to return for laboratory results, some infected patients never receive treatment as they fail to return for their test results. This reduction in treatment, and the possible onward transmission of N. gonorrhoeae during any delay in treatment, means that a rapid test of lower sensitivity may be more effective if it results in patients being treated at the initial visit. Indeed, even with the low sensitivity of currently available rapid tests (50-70%), modeling shows that they can outperform gold standard tests in populations with high sexual activity and/or low return rates. Unfortunately, however, most of the rapid tests currently available are immunoassays that are quite expensive and involve many steps, which limit their current usefulness. In summary, the pros and cons of using a rapid test are dependent on the setting. Culture or NAATs remain the best choice in an ideal setting. However, in settings where laboratory facilities are not available, or in high-risk populations where return rates are low, rapid tests may be the most effective way of diagnosing gonorrhea. Their optimal use in these settings requires the development of simpler and cheaper rapid tests.
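The modeling argument above can be reduced to a one-line calculation: the fraction of infected attendees who are actually treated is roughly the test's sensitivity multiplied by the fraction of positives who go on to receive treatment. The sensitivities and return rates below are illustrative only, not figures from a specific model.

```python
# Sketch: a less sensitive same-day rapid test can treat more infected
# patients than a more sensitive send-out test when many patients never
# return for results. All numbers are illustrative.
def infected_patients_treated(sensitivity, treatment_rate_if_positive):
    """Fraction of infected attendees who end up treated."""
    return sensitivity * treatment_rate_if_positive

rapid_test = infected_patients_treated(
    sensitivity=0.60, treatment_rate_if_positive=1.00)   # treated at the initial visit
gold_standard = infected_patients_treated(
    sensitivity=0.95, treatment_rate_if_positive=0.55)   # must return for results

print(f"rapid test:    {rapid_test:.0%} of infected patients treated")
print(f"gold standard: {gold_standard:.0%} of infected patients treated")
```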
Naugle, Alecia Larew; Barlow, Kristina E; Eblen, Denise R; Teter, Vanessa; Umholtz, Robert
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) tests sets of samples of selected raw meat and poultry products for Salmonella to ensure that federally inspected establishments meet performance standards defined in the pathogen reduction-hazard analysis and critical control point system (PR-HACCP) final rule. In the present report, sample set results are described and associations between set failure and set and establishment characteristics are identified for 4,607 sample sets collected from 1998 through 2003. Sample sets were obtained from seven product classes: broiler chicken carcasses (n = 1,010), cow and bull carcasses (n = 240), market hog carcasses (n = 560), steer and heifer carcasses (n = 123), ground beef (n = 2,527), ground chicken (n = 31), and ground turkey (n = 116). Of these 4,607 sample sets, 92% (4,255) were collected as part of random testing efforts (A sets), and 93% (4,166) passed. However, the percentage of positive samples relative to the maximum number of positive results allowable in a set increased over time for broilers but decreased or stayed the same for the other product classes. Three factors associated with set failure were identified: establishment size, product class, and year. Set failures were more likely early in the testing program (relative to 2003). Small and very small establishments were more likely to fail than large ones. Set failure was less likely in ground beef than in other product classes. Despite an overall decline in set failures through 2003, these results highlight the need for continued vigilance to reduce Salmonella contamination in broiler chicken and continued implementation of programs designed to assist small and very small establishments with PR-HACCP compliance issues.
21 CFR 874.1080 - Audiometer calibration set.
Code of Federal Regulations, 2010 CFR
2010-04-01
... calibration traceable to the National Bureau of Standards, oscillators, frequency counters, microphone amplifiers, and a recorder. The device can measure selected audiometer test frequencies at a given intensity... audiometer. It measures the sound frequency and intensity characteristics that emanate from an audiometer...
21 CFR 874.1080 - Audiometer calibration set.
Code of Federal Regulations, 2011 CFR
2011-04-01
... calibration traceable to the National Bureau of Standards, oscillators, frequency counters, microphone amplifiers, and a recorder. The device can measure selected audiometer test frequencies at a given intensity... audiometer. It measures the sound frequency and intensity characteristics that emanate from an audiometer...
[E-learning with journal articles].
Adriaanse, Marcel T; van Eijsden, Pieter; de Leeuw, Peter W
2014-01-01
E-learning is a popular method of continuous medical education (CME) which is becoming increasingly available to doctors. A specific form of E-learning is an online knowledge test accompanying a journal article. CME accreditation points can be obtained by reading an article and then answering test questions on it. This is a user-friendly form of CME which an increasing number of journals are offering as a service to their readers. The Dutch Journal of Medicine (NTvG) has been offering accredited tests to its readers since 2011. On comparison with international journals, a high standard has been set by the development of a test concept in which interpretation and reflection play integral roles. In the Dutch setting, the concept of the test was developed by professional bodies working closely together and it is a concept that is used as an example to other journals.
Borzekowski, Dina L G; Robinson, Thomas N
2005-07-01
Media can influence aspects of a child's physical, social, and cognitive development; however, the associations between a child's household media environment, media use, and academic achievement have yet to be determined. To examine relationships among a child's household media environment, media use, and academic achievement. During a single academic year, data were collected through classroom surveys and telephone interviews from an ethnically diverse sample of third grade students and their parents from 6 northern California public elementary schools. The majority of our analyses derive from spring 2000 data, including academic achievement assessed through the mathematics, reading, and language arts sections of the Stanford Achievement Test. We fit linear regression models to determine the associations between variations in household media and performance on the standardized tests, adjusting for demographic and media use variables. The household media environment is significantly associated with students' performance on the standardized tests. It was found that having a bedroom television set was significantly and negatively associated with students' test scores, while home computer access and use were positively associated with the scores. Regression models significantly predicted up to 24% of the variation in the scores. Absence of a bedroom television combined with access to a home computer was consistently associated with the highest standardized test scores. This study adds to the growing literature reporting that having a bedroom television set may be detrimental to young elementary school children. It also suggests that having and using a home computer may be associated with better academic achievement.
Standard setting: the crucial issues. A case study of accounting & auditing.
Nowakowski, J R
1982-01-01
A study of standard-setting efforts in accounting and auditing is reported. The study reveals four major areas of concern in a professional standard-setting effort: (1) issues related to the rationale for setting standards, (2) issues related to the standard-setting board and its support structure, (3) issues related to the content of standards and rules for generating them, and (4) issues that deal with how standards are put to use. Principles derived from the study of accounting and auditing are provided to illuminate and assess standard-setting efforts in evaluation.
Yang, Jian-Feng; Fox, Mark; Chu, Hua; Zheng, Xia; Long, Yan-Qin; Pohl, Daniel; Fried, Michael; Dai, Ning
2015-01-01
AIM: To validate 4-sample lactose hydrogen breath testing (4SLHBT) compared to standard 13-sample LHBT in the clinical setting. METHODS: Irritable bowel syndrome patients with diarrhea (IBS-D) and healthy volunteers (HVs) were enrolled and received a 10 g, 20 g, or 40 g dose lactose hydrogen breath test (LHBT) in a randomized, double-blinded, controlled trial. The lactase gene promoter region was sequenced. Breath samples and symptoms were acquired at baseline and every 15 min for 3 h (13 measurements). The detection rates of lactose malabsorption (LM) and lactose intolerance (LI) for a 4SLHBT that acquired four measurements at 0, 90, 120, and 180 min from the same data set were compared with the results of standard LHBT. RESULTS: Sixty IBS-D patients and 60 HVs were studied. The genotype in all participants was C/C-13910. LM and LI detection rates increased with lactose dose from 10 g, 20 g to 40 g in both groups (P < 0.001). 4SLHBT showed excellent diagnostic concordance with standard LHBT (97%-100%, Kappa 0.815-0.942) with high sensitivity (90%-100%) and specificity (100%) at all three lactose doses in both groups. CONCLUSION: Reducing the number of measurements from 13 to 4 samples did not significantly impact on the accuracy of LHBT in health and IBS-D. 4SLHBT is a valid test for assessment of LM and LI in clinical practice. PMID:26140004
NASA Astrophysics Data System (ADS)
Garrett-Rainey, Syrena
The purpose of this study was to compare the achievement of general education students within regular education classes to the achievement of general education students in inclusion/co-teach classes to determine whether there was a significant difference in achievement between the two groups. The school district's inclusion/co-teach model included ongoing professional development support for teachers and administrators. General education teachers, special education teachers, and teacher assistants collaborated to develop instructional strategies to provide additional remediation to help students acquire the skills needed to master course content. This quantitative study reviewed the end-of-course test (EoCT) scores of Grade 10 physical science and math students within an urban school district. It is not known whether general education students in an inclusive/co-teach science or math course will demonstrate higher achievement on the EoCT in math or science than students not in an inclusive/co-teach classroom setting. In addition, this study sought to determine whether students classified as low socioeconomic status benefited from participating in co-teaching classrooms, as evidenced by standardized tests. Inferential statistics were used to determine whether there was a significant difference between the achievement of the treatment group (inclusion/co-teach) and the control group (non-inclusion/co-teach). The findings can be used to provide school districts with optional instructional strategies to implement in diverse, modern classroom settings to increase academic performance on state standardized tests.
Lohmann, Amanda R; Carlson, Matthew L; Sladen, Douglas P
2018-03-01
Intraoperative cochlear implant device testing provides valuable information regarding device integrity, electrode position, and may assist with determining initial stimulation settings. Manual intraoperative device testing during cochlear implantation requires the time and expertise of a trained audiologist. The purpose of the current study is to investigate the feasibility of using automated remote intraoperative cochlear implant reverse telemetry testing as an alternative to standard testing. Prospective pilot study evaluating intraoperative remote automated impedance and Automatic Neural Response Telemetry (AutoNRT) testing in 34 consecutive cochlear implant surgeries using the Intraoperative Remote Assistant (Cochlear Nucleus CR120). In all cases, remote intraoperative device testing was performed by trained operating room staff. A comparison was made to the "gold standard" of manual testing by an experienced cochlear implant audiologist. Electrode position and absence of tip fold-over was confirmed using plain film x-ray. Automated remote reverse telemetry testing was successfully completed in all patients. Intraoperative x-ray demonstrated normal electrode position without tip fold-over. Average impedance values were significantly higher using standard testing versus CR120 remote testing (standard mean 10.7 kΩ, SD 1.2 vs. CR120 mean 7.5 kΩ, SD 0.7, p < 0.001). There was strong agreement between standard manual testing and remote automated testing with regard to the presence of open or short circuits along the array. There were, however, two cases in which standard testing identified an open circuit, when CR120 testing showed the circuit to be closed. Neural responses were successfully obtained in all patients using both systems. There was no difference in basal electrode responses (standard mean 195.0 μV, SD 14.10 vs. CR120 194.5 μV, SD 14.23; p = 0.7814); however, more favorable (lower μV amplitude) results were obtained with the remote automated system in the apical 10 electrodes (standard 185.4 μV, SD 11.69 vs. CR120 177.0 μV, SD 11.57; p value < 0.001). These preliminary data demonstrate that intraoperative cochlear implant device testing using a remote automated system is feasible. This system may be useful for cochlear implant programs with limited audiology support or for programs looking to streamline intraoperative device testing protocols. Future studies with larger patient enrollment are required to validate these promising, but preliminary, findings.
The production of calibration specimens for impact testing of subsize Charpy specimens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, D.J.; Corwin, W.R.; Owings, T.D.
1994-09-01
Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at −40°C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
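The calibration-limit logic described above can be sketched as follows; the absorbed-energy values are synthetic, not the reported reference data.

```python
# Sketch: reference specimens define acceptance limits of mean +/- 2 SD, and
# a machine passes if the average of five verification tests falls inside.
import numpy as np

rng = np.random.default_rng(5)

# Absorbed energies (J) from a batch of reference specimens at -40 C.
reference_energies = rng.normal(9.0, 0.6, 60)
mean, sd = reference_energies.mean(), reference_energies.std(ddof=1)
lower, upper = mean - 2 * sd, mean + 2 * sd
print(f"acceptance limits: {lower:.2f} J to {upper:.2f} J")

# Verification: five specimens broken on the machine under test.
verification = rng.normal(9.2, 0.5, 5)
avg = verification.mean()
print(f"verification average {avg:.2f} J ->",
      "PASS" if lower <= avg <= upper else "FAIL")
```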
Aziz, Nazneen; Zhao, Qin; Bry, Lynn; Driscoll, Denise K; Funke, Birgit; Gibson, Jane S; Grody, Wayne W; Hegde, Madhuri R; Hoeltge, Gerald A; Leonard, Debra G B; Merker, Jason D; Nagarajan, Rakesh; Palicki, Linda A; Robetorye, Ryan S; Schrijver, Iris; Weck, Karen E; Voelkerding, Karl V
2015-04-01
The higher throughput and lower per-base cost of next-generation sequencing (NGS) as compared to Sanger sequencing has led to its rapid adoption in clinical testing. The number of laboratories offering NGS-based tests has also grown considerably in the past few years, despite the fact that specific Clinical Laboratory Improvement Amendments of 1988/College of American Pathologists (CAP) laboratory standards had not yet been developed to regulate this technology. To develop a checklist for clinical testing using NGS technology that sets standards for the analytic wet bench process and for bioinformatics or "dry bench" analyses. As NGS-based clinical tests are new to diagnostic testing and are of much greater complexity than traditional Sanger sequencing-based tests, there is an urgent need to develop new regulatory standards for laboratories offering these tests. To develop the necessary regulatory framework for NGS and to facilitate appropriate adoption of this technology for clinical testing, CAP formed a committee in 2011, the NGS Work Group, to deliberate upon the contents to be included in the checklist. A total of 18 laboratory accreditation checklist requirements for the analytic wet bench process and bioinformatics analysis processes have been included within CAP's molecular pathology checklist (MOL). This report describes the important issues considered by the CAP committee during the development of the new checklist requirements, which address documentation, validation, quality assurance, confirmatory testing, exception logs, monitoring of upgrades, variant interpretation and reporting, incidental findings, data storage, version traceability, and data transfer confidentiality.
Boosting standard order sets utilization through clinical decision support.
Li, Haomin; Zhang, Yinsheng; Cheng, Haixia; Lu, Xudong; Duan, Huilong
2013-01-01
Well-designed standard order sets have the potential to integrate and coordinate care by communicating best practices across multiple disciplines, levels of care, and services. However, several challenges have limited the benefits expected from standard order sets. To boost standard order set utilization, a problem-oriented knowledge delivery solution was proposed in this study to facilitate access to standard order sets and evaluation of their treatment effect. In this solution, standard order sets were created along with diagnostic rule sets that can trigger a CDS-based reminder to help clinicians quickly discover hidden clinical problems and the corresponding standard order sets during ordering. Those rule sets also provide indicators for targeted evaluation of standard order sets during treatment. A prototype system was developed based on this solution and will be presented at Medinfo 2013.
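The problem-oriented idea can be sketched as a rule set evaluated against patient data at ordering time which, when it fires, surfaces the linked order set and an evaluation indicator; the data model, rule, and order-set contents below are hypothetical, not the prototype's actual implementation.

```python
# Sketch: a diagnostic rule set that triggers a reminder linking to the
# matching standard order set during ordering. Everything here is hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class OrderSetRule:
    name: str
    condition: Callable[[Dict], bool]   # diagnostic rule set
    order_set: List[str]                # linked standard order set
    indicator: str                      # what to track for effect evaluation

rules = [
    OrderSetRule(
        name="Community-acquired pneumonia (suspected)",
        condition=lambda p: p["temp_c"] >= 38.0 and p["cough"] and p["crp"] > 40,
        order_set=["chest X-ray", "blood culture x2", "empiric antibiotic per protocol"],
        indicator="time to first antibiotic dose",
    ),
]

patient = {"temp_c": 38.6, "cough": True, "crp": 55}

for rule in rules:
    if rule.condition(patient):
        print(f"Reminder: consider order set for '{rule.name}'")
        for item in rule.order_set:
            print("  -", item)
        print(f"  evaluation indicator: {rule.indicator}")
```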
[Expert investigation on food safety standard system framework construction in China].
He, Xiang; Yan, Weixing; Fan, Yongxiang; Zeng, Biao; Peng, Zhen; Sun, Zhenqiu
2013-09-01
To investigate the food safety standard framework among food safety experts, summarize the basic elements and principles of the food safety standard system, and provide policy advice for the food safety standards framework. A survey was carried out among 415 experts from government, professional institutions, and the food industry/enterprises using the National Food Safety Standard System Construction Consultation Questionnaire, designed in the name of the Secretariat of the National Food Safety Standard Committee. The expert groups gave differing advice about the principles for food product standards, food additive product standards, food-related product standards, hygienic practices, and test methods. The results not only reflect the experts' awareness of the current state of food safety standards work, but also provide advice for the setting and revision of food safety standards in the next stage. Through this expert investigation, the framework and guiding principles of the food safety standard system were established.
Innovative approach to teaching communication skills to nursing students.
Zavertnik, Jean Ellen; Huff, Tanya A; Munro, Cindy L
2010-02-01
This study assessed the effectiveness of a learner-centered simulation intervention designed to improve the communication skills of preprofessional sophomore nursing students. An innovative teaching strategy in which communication skills are taught to nursing students by using trained actors who served as standardized family members in a clinical learning laboratory setting was evaluated using a two-group posttest design. In addition to current standard education, the intervention group received a formal training session presenting a framework for communication and a 60-minute practice session with the standardized family members. Four domains of communication-introduction, gathering of information, imparting information, and clarifying goals and expectations-were evaluated in the control and intervention groups in individual testing sessions with a standardized family member. The intervention group performed better than the control group in all four tested domains related to communication skills, and the difference was statistically significant in the domain of gathering information (p = 0.0257). Copyright 2010, SLACK Incorporated.
Recommending a minimum English proficiency standard for entry-level nursing.
O'Neill, Thomas R; Marks, Casey; Wendt, Anne
2005-01-01
The purpose of this research was to provide sufficient information to the National Council of State Boards of Nursing (NCSBN) to make a defensible recommended passing standard for English proficiency. This standard was based upon the Test of English as a Foreign Language (TOEFL). A large panel of nurses and nurse regulators (N = 25) was convened to determine how much English proficiency is required to be minimally competent as an entry-level nurse. Two standard setting procedures were combined to produce recommendations for each panelist. In conjunction with collateral information, these recommendations were reviewed by the NCSBN Examination Committee, which decided upon an NCSBN recommended standard, a TOEFL score of 220.
Xiao, Xiang; Wang, Tianping; Ye, Hongzhuan; Qiang, Guangxiang; Wei, Haiming; Tian, Zhigang
2005-01-01
OBJECTIVE: To determine the validity of a recently developed rapid test--a colloidal dye immunofiltration assay (CDIFA)--used by health workers in field settings to identify villagers infected with Schistosoma japonicum. METHODS: Health workers in the field used CDIFA to test samples from 1553 villagers in two areas of low endemicity and an area where S. japonicum was not endemic in Anhui, China. All the samples were then tested in the laboratory by laboratory staff using a standard parasitological method (Kato-Katz), an indirect haemagglutination assay (IHA), and CDIFA. The results of CDIFA performed by health workers were compared with those obtained by Kato-Katz and IHA. FINDINGS: Concordance between the results of CDIFA performed in field settings and in the laboratory was high (kappa index, 0.95; 95% confidence interval, 0.93-0.97). When Kato-Katz was used as the reference test, the overall sensitivity and specificity of CDIFA were 98.5% and 83.6%, respectively in the two villages in areas of low endemicity, while the specificity was 99.8% in the nonendemic village. Compared with IHA, the overall specificity and sensitivity of CDIFA were greater than 99% and 96%, respectively. With the combination of Kato-Katz and IHA as the reference standard, CDIFA had a sensitivity of 95.8% and a specificity of 99.5%, and an accuracy of 98.6% in the two areas of low endemicity. CONCLUSION: CDIFA is a specific, sensitive, and reliable test that can be used for rapid screening for schistosomiasis by health workers in field settings. PMID:16175827
Olatunya, Oladele; Ogundare, Olatunde; Olaleye, Abiola; Agaja, Oyinkansola; Omoniyi, Evelyn; Adeyefa, Babajide; Oluwadiya, Kehinde; Oyelami, Oyeku
2016-05-01
Prompt and accurate diagnosis is needed to prevent the untoward effects of anaemia on children. Although haematology analyzers are the gold standard for accurate measurement of haemoglobin or haematocrit for anaemia diagnosis, they are often out of the reach of most health facilities in resource-poor settings, thus creating a care gap. We conducted this study to examine the agreement between a point-of-care device and haematology analyzer in determining the haematocrit levels in children and to determine its usefulness in diagnosing anaemia in resource-poor settings. EDTA blood samples collected from participants were processed to estimate their haematocrits using the two devices (Mindray BC-3600 haematology analyzer and Portable Mission Hb/Haematocrit testing system). A pairwise t-test was used to compare the haematocrit (PCV) results from the automated haematology analyzer and the portable haematocrit meter. The agreement between the two sets of measurements was assessed using the Bland and Altman method where the mean, standard deviation and limit of agreement of paired results were calculated. The intraclass and concordance correlation coefficients were 0.966 and 0.936. Sensitivity and specificity were 97.85% and 94.51%, respectively, while the positive predictive and negative predictive values were 94.79% and 97.73%. The Bland and Altman limits of agreement were -5.5 to 5.1, with the mean difference being -0.20 and a non-significant variability between the two measurements (p = 0.506). Haematocrit determined by the portable testing system is comparable to that determined by the haematology analyzer. We therefore recommend its use as a point-of-care device for determining haematocrit in resource-poor settings where haematology analyzers are not available.
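The Bland and Altman analysis reported above reduces to the mean and standard deviation of the paired differences, with the 95% limits of agreement taken as the mean difference ± 1.96 SD. A minimal sketch follows, using hypothetical paired haematocrit readings rather than the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired haematocrit (PCV, %) readings from the two devices.
analyzer = np.array([33.0, 28.5, 36.2, 40.1, 25.4, 31.8])
portable = np.array([32.5, 29.0, 35.8, 39.6, 26.0, 31.0])

diff = portable - analyzer
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD.
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Paired t-test for a systematic difference between the two devices.
t_stat, p_value = stats.ttest_rel(portable, analyzer)

print(f"mean difference = {mean_diff:.2f}, limits of agreement = ({loa_low:.2f}, {loa_high:.2f})")
print(f"paired t-test p = {p_value:.3f}")
```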
Livingstone, I A T; Tarbert, C M; Giardini, M E; Bastawrous, A; Middleton, D; Hamilton, R
2016-01-01
Mobile technology is increasingly used to measure visual acuity. Standards for chart-based acuity tests specify photometric requirements for luminance, optotype contrast and luminance uniformity. Manufacturers provide some photometric data but little is known about tablet performance for visual acuity testing. This study photometrically characterised seven tablet computers (iPad, Apple Inc.) and three ETDRS (Early Treatment Diabetic Retinopathy Study) visual acuity charts with room lights on and off, and compared findings with visual acuity measurement standards. Tablet screen luminance and contrast were measured using nine points across a black and white checkerboard test screen at five arbitrary brightness levels. ETDRS optotypes and adjacent white background luminance and contrast were measured. All seven tablets (room lights off) exceeded the most stringent requirement for mean luminance (≥ 120 cd/m2) providing the nominal brightness setting was above 50%. All exceeded the contrast requirement (Weber ≥ 90%) regardless of brightness setting, and five were marginally below the required luminance uniformity threshold (Lmin/Lmax ≥ 80%). Re-assessing three tablets with room lights on made little difference to mean luminance or contrast, and improved luminance uniformity to exceed the threshold. The three ETDRS charts (room lights off) had adequate mean luminance (≥ 120 cd/m2) and Weber contrast (≥ 90%), but all three charts failed to meet the luminance uniformity standard (Lmin/Lmax ≥ 80%). Two charts were operating beyond the manufacturer's recommended lamp replacement schedule. With room lights on, chart mean luminance and Weber contrast increased, but two charts still had inadequate luminance uniformity. Tablet computers showed less inter-device variability, higher contrast, and better luminance uniformity than charts in both lights-on and lights-off environments, providing brightness setting was >50%. Overall, iPad tablets matched or marginally out-performed ETDRS charts in terms of photometric compliance with high contrast acuity standards.
Livingstone, I. A. T.; Tarbert, C. M.; Giardini, M. E.; Bastawrous, A.; Middleton, D.; Hamilton, R.
2016-01-01
Mobile technology is increasingly used to measure visual acuity. Standards for chart-based acuity tests specify photometric requirements for luminance, optotype contrast and luminance uniformity. Manufacturers provide some photometric data but little is known about tablet performance for visual acuity testing. This study photometrically characterised seven tablet computers (iPad, Apple Inc.) and three ETDRS (Early Treatment Diabetic Retinopathy Study) visual acuity charts with room lights on and off, and compared findings with visual acuity measurement standards. Tablet screen luminance and contrast were measured using nine points across a black and white checkerboard test screen at five arbitrary brightness levels. ETDRS optotypes and adjacent white background luminance and contrast were measured. All seven tablets (room lights off) exceeded the most stringent requirement for mean luminance (≥ 120 cd/m2) providing the nominal brightness setting was above 50%. All exceeded the contrast requirement (Weber ≥ 90%) regardless of brightness setting, and five were marginally below the required luminance uniformity threshold (Lmin/Lmax ≥ 80%). Re-assessing three tablets with room lights on made little difference to mean luminance or contrast, and improved luminance uniformity to exceed the threshold. The three ETDRS charts (room lights off) had adequate mean luminance (≥ 120 cd/m2) and Weber contrast (≥ 90%), but all three charts failed to meet the luminance uniformity standard (Lmin/Lmax ≥ 80%). Two charts were operating beyond the manufacturer’s recommended lamp replacement schedule. With room lights on, chart mean luminance and Weber contrast increased, but two charts still had inadequate luminance uniformity. Tablet computers showed less inter-device variability, higher contrast, and better luminance uniformity than charts in both lights-on and lights-off environments, providing brightness setting was >50%. Overall, iPad tablets matched or marginally out-performed ETDRS charts in terms of photometric compliance with high contrast acuity standards. PMID:27002333
Reporting standards for studies of diagnostic test accuracy in dementia
Noel-Storr, Anna H.; McCleery, Jenny M.; Richard, Edo; Ritchie, Craig W.; Flicker, Leon; Cullum, Sarah J.; Davis, Daniel; Quinn, Terence J.; Hyde, Chris; Rutjes, Anne W.S.; Smailagic, Nadja; Marcus, Sue; Black, Sandra; Blennow, Kaj; Brayne, Carol; Fiorivanti, Mario; Johnson, Julene K.; Köpke, Sascha; Schneider, Lon S.; Simmons, Andrew; Mattsson, Niklas; Zetterberg, Henrik; Bossuyt, Patrick M.M.; Wilcock, Gordon
2014-01-01
Objective: To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. Methods: An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. Results: More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. Conclusion: STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers. PMID:24944261
End-of-fabrication CMOS process monitor
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Allen, R. A.; Blaes, B. R.; Hannaman, D. J.; Lieneweg, U.; Lin, Y.-S.; Sayah, H. R.
1990-01-01
A set of test 'modules' for verifying the quality of a complementary metal oxide semiconductor (CMOS) process at the end of the wafer fabrication is documented. By electrical testing of specific structures, over thirty parameters are collected characterizing interconnects, dielectrics, contacts, transistors, and inverters. Each test module contains a specification of its purpose, the layout of the test structure, the test procedures, the data reduction algorithms, and exemplary results obtained from 3-, 2-, or 1.6-micrometer CMOS/bulk processes. The document is intended to establish standard process qualification procedures for Application Specific Integrated Circuits (ASIC's).
A Goniometry Paradigm Shift to Measure Burn Scar Contracture in Burn Patients
2017-10-01
test more extensively a recently designed Revised Goniometry (RG) method and compare it to Standard Goniometry (SG) used to measure burn scar...joint angle measurements will be found between SG techniques compared to RG techniques which incorporate CKM and CFU principles. Specific Aim 1: To... compare the average reduction in joint range of motion measured with the standard GM measurements to a newly conceived set of revised GM measurements in
Janet, Jon Paul; Kulik, Heather J
2017-11-22
Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
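As an illustration of the feature-selection comparison described above (random-forest importance ranking versus LASSO), the sketch below runs both selectors on synthetic data with scikit-learn and scores each reduced feature set by its mean unsigned error on a set-aside test split. It is not the authors' RAC code; the feature matrix and target are randomly generated stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # stand-in for a descriptor (e.g., RAC-style) feature matrix
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random-forest-based selection: keep the most important descriptors.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
keep_rf = np.argsort(rf.feature_importances_)[-8:]

# LASSO-based selection: keep descriptors with nonzero coefficients.
lasso = LassoCV(cv=5).fit(X_train, y_train)
keep_lasso = np.flatnonzero(lasso.coef_)

# Refit a simple model on each reduced set and compare set-aside test error (MUE).
for name, keep in [("random forest", keep_rf), ("LASSO", keep_lasso)]:
    sub = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train[:, keep], y_train)
    mue = mean_absolute_error(y_test, sub.predict(X_test[:, keep]))
    print(f"{name}: {len(keep)} features kept, test MUE = {mue:.3f}")
```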
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
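The goodness-of-fit estimation idea can be illustrated for the three-parameter lognormal threshold: choose the threshold that maximizes the Shapiro-Wilk W statistic of the log-shifted data, then estimate the remaining two parameters from the transformed sample. The sketch below is a generic illustration of that approach on synthetic data, not the report's weighted-order-statistic implementation.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
gamma_true = 5.0
x = gamma_true + rng.lognormal(mean=1.0, sigma=0.5, size=100)  # synthetic 3-parameter lognormal sample

def neg_w(gamma):
    """Negative Shapiro-Wilk W of log(x - gamma); invalid thresholds are penalised."""
    if gamma >= x.min():
        return np.inf
    w, _ = stats.shapiro(np.log(x - gamma))
    return -w

# Search for the threshold that makes log(x - gamma) most nearly normal.
res = minimize_scalar(neg_w, bounds=(x.min() - 10.0, x.min() - 1e-6), method="bounded")
gamma_hat = res.x
log_data = np.log(x - gamma_hat)
mu_hat, sigma_hat = log_data.mean(), log_data.std(ddof=1)
print(gamma_hat, mu_hat, sigma_hat)
```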
NASA Technical Reports Server (NTRS)
1985-01-01
A standard specification for a selected class of graphite fiber/toughened thermoset resin matrix material was developed through joint NASA/Aircraft Industry effort. This specification was compiled to provide uniform requirements and tests for qualifying prepreg systems and for acceptance of prepreg batches. The specification applies specifically to a class of composite prepreg consisting of unidirectional graphite fibers impregnated with a toughened thermoset resin that produce laminates with service temperatures from -65 F to 200 F when cured at temperatures below or equal to 350 F. The specified prepreg has a fiber areal weight of 145 g sq m. The specified tests are limited to those required to set minimum standards for the uncured prepreg and cured laminates, and are not intended to provide design allowable properties.
Dalbeth, Nicola; Schumacher, H Ralph; Fransen, Jaap; Neogi, Tuhina; Jansen, Tim L; Brown, Melanie; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Scire, Carlo A; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; da Rocha Castelar-Pinheiro, Geraldo; Janssen, Matthijs; Chen, Jiunn-Horng; Cimmino, Marco A; Uhlig, Till; Taylor, William J
2016-12-01
To identify the best-performing survey definition of gout from items commonly available in epidemiologic studies. Survey definitions of gout were identified from 34 epidemiologic studies contributing to the Global Urate Genetics Consortium (GUGC) genome-wide association study. Data from the Study for Updated Gout Classification Criteria (SUGAR) were randomly divided into development and test data sets. A data-driven case definition was formed using logistic regression in the development data set. This definition, along with definitions used in GUGC studies and the 2015 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) gout classification criteria were applied to the test data set, using monosodium urate crystal identification as the gold standard. For all tested GUGC definitions, the simple definition of "self-report of gout or urate-lowering therapy use" had the best test performance characteristics (sensitivity 82%, specificity 72%). The simple definition had similar performance to a SUGAR data-driven case definition with 5 weighted items: self-report, self-report of doctor diagnosis, colchicine use, urate-lowering therapy use, and hyperuricemia (sensitivity 87%, specificity 70%). Both of these definitions performed better than the 1977 American Rheumatism Association survey criteria (sensitivity 82%, specificity 67%). Of all tested definitions, the 2015 ACR/EULAR criteria had the best performance (sensitivity 92%, specificity 89%). A simple definition of "self-report of gout or urate-lowering therapy use" has the best test performance characteristics of existing definitions that use routinely available data. A more complex combination of features is more sensitive, but still lacks good specificity. If a more accurate case definition is required for a particular study, the 2015 ACR/EULAR gout classification criteria should be considered. © 2016, American College of Rheumatology.
DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K
2012-01-01
Objectives Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veteran Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. Materials and methods We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. Results System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). Discussion An EF value of <40% can be accurately identified in VA echocardiogram reports. Conclusions An automated information extraction system can be used to accurately extract EF for quality measurement. PMID:22437073
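Rule-based extraction of an ejection fraction from free text is easy to prototype with regular expressions. The pattern and threshold below are illustrative assumptions only, not the VA system's actual rule set, but they show the general shape of the document-level classification described above.

```python
import re

# Illustrative pattern; the production system used a much richer rule set.
EF_PATTERN = re.compile(
    r"(?:ejection\s+fraction|LVEF|EF)\s*(?:is|of|=|:)?\s*(\d{1,2}(?:\.\d)?)\s*%",
    re.IGNORECASE,
)

def extract_ef(report_text):
    """Return all EF percentages mentioned in an echocardiogram report."""
    return [float(m.group(1)) for m in EF_PATTERN.finditer(report_text)]

def flag_low_ef(report_text, threshold=40.0):
    """Document-level classification: any EF mention below the threshold."""
    values = extract_ef(report_text)
    return any(v < threshold for v in values), values

report = "Conclusion: left ventricular ejection fraction is 35 %. RV function normal."
print(flag_low_ef(report))   # (True, [35.0])
```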
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-02
... meets the standards and specifications set forth by the American Society for Testing and Materials (ASTM... produced by forming stainless steel flat-rolled products into a tubular configuration and welding along the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... standards and specifications set forth by the American Society for Testing and Materials (ASTM) for the... stainless steel flat-rolled products into a tubular configuration and welding along the seam. Welded ASTM A...
Code of Federal Regulations, 2012 CFR
2012-01-01
... EXHAUST EMISSION REQUIREMENTS FOR TURBINE ENGINE POWERED AIRPLANES Test Procedures for Engine Exhaust Gaseous Emissions (Aircraft and Aircraft Gas Turbine Engines) § 34.60 Introduction. (a) Except as provided... determine the conformity of new aircraft gas turbine engines with the applicable standards set forth in this...
The Interview as a Technique for Assessing Oral Ability: Some Guidelines for Its Use.
ERIC Educational Resources Information Center
Nambiar, Mohana
1990-01-01
Some guidelines are offered that detail the complexities involved in interviewing for language testing purposes. They cover strategies for structuring interviews (informal conversational, interview guide, standardized open-ended), questions, interviewing skills, and physical setting. (five references) (LB)
Model Test of Proposed Loading Rates for Onsite Wastewater Treatment Systems
State regulatory agencies set standards for onsite wastewater treatment system (OWTS), commonly known as septic systems, based on expected hydraulic performance and nitrogen (N) treatment in soils of differing texture. In a previous study, hydraulic loading rates were proposed fo...
Miron, Anca M; Warner, Ruth H; Branscombe, Nyla R
2011-06-01
We tested whether differential appraisals of inequality are a function of the injustice standards used by different groups. A confirmatory standard of injustice is defined as the amount of evidence needed to arrive at the conclusion that injustice has occurred. Consistent with a motivational shifting of standards view, we found that advantaged and disadvantaged group members set different standards of injustice when judging the magnitude of gender (Study 1) and racial (Study 2) wage inequality. In addition, because advantaged and disadvantaged group members formed - based on their differential standards - divergent appraisals of wage inequality, they experienced differential desire to restore inter-group justice. We discuss the implications of promoting low confirmatory standards for changing perceptions of social reality and for motivating justice-restorative behaviour. ©2011 The British Psychological Society.
International Experience in Standards and Labeling Programs for Rice Cookers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Nan; Zheng, Nina
China has had an active program on energy efficiency standards for household appliances since the mid-1990s. The rice cooker was among the first products to be subject to such mandatory regulation, since it is one of the most prevalent electric appliances in Chinese households. Since it was first introduced in 1989, the minimum energy efficiency standard for rice cookers has not been revised; therefore, the potential for energy saving is considerable. Initial analysis from CNIS indicates that potential carbon savings are likely to reach 7.6 million tons of CO2 by the 10th year of the standard's implementation. Since September 2007, CNIS has been working with various groups to develop the new standard for rice cookers. With The Energy Foundation's support, LBNL has assisted CNIS in the revision of the minimum energy efficiency standard for rice cookers that is expected to be effective in 2009. Specifically, work has been in the following areas: assistance in developing a consumer survey on usage patterns of rice cookers, review of international standards, review of international test procedures, comparison of the international standards and test procedures, and assessment of technical options for reducing energy use. This report particularly summarizes the findings of the review of international standards and technical options for reducing energy consumption. The report consists of an overview of rice cooker standards and labeling programs and testing procedures in Hong Kong, South Korea, Japan and Thailand, and a case study of Japan's development of energy-efficient rice cooker technologies and rice cooker efficiency programs. The results from the analysis can be summarized as follows: Hong Kong has a Voluntary Energy Efficiency Labeling scheme for electric rice cookers initiated in 2001, with a revision implemented in 2007; South Korea has both MEPS and a Mandatory Energy Efficiency Label targeting the same category of rice cookers as Hong Kong; Thailand's voluntary endorsement labeling program is similar to Hong Kong's in program design but has 5 efficiency grades; Japan's program is distinct in its adoption of the 'Top Runner' approach, in which the future efficiency standard is set based on the efficiency levels of the most efficient product in the current domestic market. Although the standards are voluntary, penalties can still be invoked if the average efficiency target is not met. Both Hong Kong's and South Korea's tests involve pouring water into the inner pot equal to 80% of its rated volume; however, white rice is used as a load in Hong Kong's tests whereas no rice is used in South Korea's. In Japan's case, the water level specified by the manufacturers is used and milled rice is used as a load only partially in the tests. Moreover, Japan does not conduct a heat efficiency test, but its energy consumption measurement tests are much more complex, with 4 different tests conducted to determine the annual average energy consumption. Hong Kong and Thailand both set a Minimum Allowable Heat Efficiency for different rated wattages. The energy efficiency requirements are identical except that the minimum heat efficiency in Thailand is 1 percentage point higher for all rated power categories. In South Korea, MEPS and label energy efficiency grades are determined by the rice cooker's Rated Energy Efficiency for induction, non-induction, pressure, and non-pressure rice cookers.
Japan's target standard values are set for electromagnetic induction heating products and non-electromagnetic induction heating products by rice cooker size. Specific formulas are used by type and size depending on the mass of water evaporation of the rice cookers. Japan has been the leading country in technology development for various types of rice cookers, and has developed concrete energy efficiency standards for rice cookers. However, as consumers in Japan emphasize the deliciousness of cooked rice over other factors, many types of models were developed to improve the taste of cooked rice. Nonetheless, the efficiency of electromagnetic induction heating (IH) rice cookers in warm mode improved by approximately 12 percent from 1993 to 2004 due to the 'low temperature warming method' developed by manufacturers. The Energy Conservation Center of Japan (IEEJ) regularly releases an energy-saving products database on the web, on which the energy saving performance of each product is listed and ranked. Energy saving in rice cookers mostly rests with insulation of the pot. Technology developed to improve the energy efficiency of rice cookers includes providing vacuum layers on both sides of the pot, using copper-plated materials, and a double stainless-layer lid that can be heated so that steam runs between the two layers to speed the heating process.
Standardization of shape memory alloy test methods toward certification of aerospace applications
NASA Astrophysics Data System (ADS)
Hartl, D. J.; Mabe, J. H.; Benafan, O.; Coda, A.; Conduit, B.; Padan, R.; Van Doren, B.
2015-08-01
The response of shape memory alloy (SMA) components employed as actuators has enabled a number of adaptable aero-structural solutions. However, there are currently no industry or government-accepted standardized test methods for SMA materials when used as actuators and their transition to commercialization and production has been hindered. This brief fast track communication introduces to the community a recently initiated collaborative and pre-competitive SMA specification and standardization effort that is expected to deliver the first ever regulatory agency-accepted material specification and test standards for SMA as employed as actuators for commercial and military aviation applications. In the first phase of this effort, described herein, the team is working to review past efforts and deliver a set of agreed-upon properties to be included in future material certification specifications as well as the associated experiments needed to obtain them in a consistent manner. Essential for the success of this project is the participation and input from a number of organizations and individuals, including engineers and designers working in materials and processing development, application design, SMA component fabrication, and testing at the material, component, and system level. Going forward, strong consensus among this diverse body of participants and the SMA research community at large is needed to advance standardization concepts for universal adoption by the greater aerospace community and especially regulatory bodies. It is expected that the development and release of public standards will be done in collaboration with an established standards development organization.
ERIC Educational Resources Information Center
Xi, Xiaoming
2008-01-01
Although the primary use of the speaking section of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT Speaking test) is to inform admissions decisions at English medium universities, it may also be useful as an initial screening measure for international teaching assistants (ITAs). This study provides criterion-related…
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
IRT Analysis of General Outcome Measures in Grades 1-8. Technical Report # 0916
ERIC Educational Resources Information Center
Alonzo, Julie; Anderson, Daniel; Tindal, Gerald
2009-01-01
We present scaling outcomes for mathematics assessments used in the fall to screen students at risk of failing to learn the knowledge and skills described in the National Council of Teachers of Mathematics (NCTM) Focal Point Standards. At each grade level, the assessment consisted of a 48-item test with three 16-item sub-test sets aligned to the…
This report sets standards by which the emissions reduction provided by fuel and lubricant technologies can be tested in a comparable way. It is a generic protocol under the Environmental Technology Verification program.
16 CFR 1610.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2014 CFR
2014-01-01
... procedures set forth in § 1610.6. (b) If the outer layer of plastic film or plastic-coated fabric of a...—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all or a portion of one... characteristics of the film or coating, the uncovered or exposed layer shall be tested in accordance with part...
16 CFR 1610.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2012 CFR
2012-01-01
... procedures set forth in § 1610.6. (b) If the outer layer of plastic film or plastic-coated fabric of a...—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all or a portion of one... characteristics of the film or coating, the uncovered or exposed layer shall be tested in accordance with part...
16 CFR § 1610.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2013 CFR
2013-01-01
... applicable procedures set forth in § 1610.6. (b) If the outer layer of plastic film or plastic-coated fabric... part 1611—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all or a... characteristics of the film or coating, the uncovered or exposed layer shall be tested in accordance with part...
16 CFR 1610.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... procedures set forth in § 1610.6. (b) If the outer layer of plastic film or plastic-coated fabric of a...—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all or a portion of one... characteristics of the film or coating, the uncovered or exposed layer shall be tested in accordance with part...
16 CFR 1610.34 - Only uncovered or exposed parts of wearing apparel to be tested.
Code of Federal Regulations, 2010 CFR
2010-01-01
... procedures set forth in § 1610.6. (b) If the outer layer of plastic film or plastic-coated fabric of a...—Standard for the Flammability of Vinyl Plastic Film. If the outer layer adheres to all or a portion of one... characteristics of the film or coating, the uncovered or exposed layer shall be tested in accordance with part...
ERIC Educational Resources Information Center
Pfeiffer, Nils; Hagemann, Dirk; Backenstrass, Matthias
2011-01-01
In response to the low standards in short form development, Smith, McCarthy, and Anderson (2000) introduced a set of guidelines for the construction and evaluation of short forms of psychological tests. One of their recommendations requires researchers to show that the variance overlap between the short form and its long form is adequate. This…
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.
Glover, Jack L; Hudson, Lawrence T
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
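The appeal of the Radon transform here is that a straight wire in the image maps to a compact peak in sinogram space, reducing detection to a one-dimensional peak-versus-threshold decision. The sketch below illustrates that general idea with scikit-image on a synthetic image; it is not the algorithm under ballot, and the threshold value is a hypothetical sensitivity setting.

```python
import numpy as np
from skimage.transform import radon

def wire_score(image, n_angles=180):
    """Peak contrast of the Radon transform, used as a simple wire-detection score."""
    image = image - image.mean()                      # remove the flat background level
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta, circle=False)
    return (sinogram.max() - np.median(sinogram)) / sinogram.std()

# Synthetic test image: uniform background with one faint horizontal wire.
img = np.full((128, 128), 100.0)
img[64, :] += 5.0                                     # the "wire"
img += np.random.default_rng(2).normal(scale=1.0, size=img.shape)

THRESHOLD = 8.0   # hypothetical sensitivity setting, tuned to match human judgements
score = wire_score(img)
print(score, score > THRESHOLD)
```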
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
Glover, Jack L.; Hudson, Lawrence T.
2016-01-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
NASA Astrophysics Data System (ADS)
Glover, Jack L.; Hudson, Lawrence T.
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.
Schneid, Stephen D; Armour, Chris; Park, Yoon Soo; Yudkowsky, Rachel; Bordage, Georges
2014-10-01
Despite significant evidence supporting the use of three-option multiple-choice questions (MCQs), these are rarely used in written examinations for health professions students. The purpose of this study was to examine the effects of reducing four- and five-option MCQs to three-option MCQs on response times, psychometric characteristics, and absolute standard setting judgements in a pharmacology examination administered to health professions students. We administered two versions of a computerised examination containing 98 MCQs to 38 Year 2 medical students and 39 Year 3 pharmacy students. Four- and five-option MCQs were converted into three-option MCQs to create two versions of the examination. Differences in response time, item difficulty and discrimination, and reliability were evaluated. Medical and pharmacy faculty judges provided three-level Angoff (TLA) ratings for all MCQs for both versions of the examination to allow the assessment of differences in cut scores. Students answered three-option MCQs an average of 5 seconds faster than they answered four- and five-option MCQs (36 seconds versus 41 seconds; p = 0.008). There were no significant differences in item difficulty and discrimination, or test reliability. Overall, the cut scores generated for three-option MCQs using the TLA ratings were 8 percentage points higher (p = 0.04). The use of three-option MCQs in a health professions examination resulted in a time saving equivalent to the completion of 16% more MCQs per 1-hour testing period, which may increase content validity and test score reliability, and minimise construct under-representation. The higher cut scores may result in higher failure rates if an absolute standard setting method, such as the TLA method, is used. The results from this study provide a cautious indication to health professions educators that using three-option MCQs does not threaten validity and may strengthen it by allowing additional MCQs to be tested in a fixed amount of testing time with no deleterious effect on the reliability of the test scores. © 2014 John Wiley & Sons Ltd.
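For context, an Angoff-style cut score is obtained by mapping each judge's item rating to an expected probability of a correct response from a minimally competent examinee, summing over items, and averaging over judges. The sketch below assumes a hypothetical three-level mapping (0.25/0.50/0.75) purely for illustration; it is not the TLA procedure used in the study.

```python
import numpy as np

# Hypothetical three-level Angoff ratings: each judge rates each MCQ with the
# probability that a minimally competent student answers it correctly.
# The three levels are assumed here to map to 0.25, 0.50 and 0.75.
LEVEL_TO_P = {1: 0.25, 2: 0.50, 3: 0.75}

ratings = np.array([            # rows = judges, columns = items (toy 4-item example)
    [1, 3, 2, 3],
    [2, 3, 2, 2],
    [1, 2, 3, 3],
])
p = np.vectorize(LEVEL_TO_P.get)(ratings)

# Angoff-style cut score: sum expected item scores per judge, then average judges.
cut_score_items = p.sum(axis=1).mean()          # in raw-score points
cut_score_pct = 100 * cut_score_items / ratings.shape[1]
print(f"cut score = {cut_score_items:.2f} items ({cut_score_pct:.1f}%)")
```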
nu-Anomica: A Fast Support Vector Based Novelty Detection Technique
NASA Technical Reports Server (NTRS)
Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.
2009-01-01
In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5-20 times.
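The benchmark against which nu-Anomica is compared, a one-class SVM trained only on nominal data, can be reproduced in a few lines with scikit-learn. The sketch below is illustrative only (synthetic data, arbitrary nu), not the nu-Anomica implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 3))     # nominal training data only
X_test = np.vstack([rng.normal(size=(50, 3)),                # nominal test points
                    rng.normal(loc=6.0, size=(10, 3))])      # injected anomalies

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = clf.predict(X_test)        # +1 = nominal, -1 = anomaly
print(f"flagged {np.sum(pred == -1)} of {len(X_test)} test points as anomalies")
```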
Bartoli, Francesco; Crocamo, Cristina; Biagi, Enrico; Di Carlo, Francesco; Parma, Francesca; Madeddu, Fabio; Capuzzi, Enrico; Colmegna, Fabrizia; Clerici, Massimo; Carrà, Giuseppe
2016-08-01
There is a lack of studies testing the accuracy of fast screening methods for alcohol use disorder in mental health settings. We aimed to estimate the clinical utility of a standard single-item test for case finding and screening of DSM-5 alcohol use disorder among individuals suffering from anxiety and mood disorders. We recruited adults consecutively referred, in a 12-month period, to an outpatient clinic for anxiety and depressive disorders. We assessed the National Institute on Alcohol Abuse and Alcoholism (NIAAA) single-item test, using the Mini-International Neuropsychiatric Interview (MINI), plus an additional item of the Composite International Diagnostic Interview (CIDI) for craving, as the reference standard to diagnose a current DSM-5 alcohol use disorder. We estimated sensitivity and specificity of the single-item test, as well as positive and negative Clinical Utility Indexes (CUIs). 242 subjects with anxiety and mood disorders were included. The NIAAA single-item test showed high sensitivity (91.9%) and specificity (91.2%) for DSM-5 alcohol use disorder. The positive CUI was 0.601, whereas the negative one was 0.898, with excellent values also accounting for main individual characteristics (age, gender, diagnosis, psychological distress levels, smoking status). Testing for relevant indexes, we found an excellent clinical utility of the NIAAA single-item test for screening true negative cases. Our findings support routine use of reliable methods for rapid screening in similar mental health settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
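The Clinical Utility Indexes quoted above are commonly computed as sensitivity × positive predictive value (CUI+) and specificity × negative predictive value (CUI−); the sketch below applies that assumed definition to a hypothetical 2x2 screening table, not the study data.

```python
def clinical_utility(tp, fp, fn, tn):
    """Positive and negative Clinical Utility Indexes from a 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens * ppv, spec * npv   # (CUI+, CUI-)

# Hypothetical counts: single-item screen vs. a DSM-5 reference diagnosis.
print(clinical_utility(tp=45, fp=20, fn=5, tn=180))
```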
Alternative Test Methods for Electronic Parts
NASA Technical Reports Server (NTRS)
Plante, Jeannette
2004-01-01
It is common practice within NASA to test electronic parts at the manufacturing lot level to demonstrate, statistically, that parts from the lot tested will not fail in service using generic application conditions. The test methods and the generic application conditions used have been developed over the years through cooperation between NASA, DoD, and industry in order to establish a common set of standard practices. These common practices, found in MIL-STD-883, MIL-STD-750, military part specifications, EEE-INST-002, and other guidelines are preferred because they are considered to be effective and repeatable and their results are usually straightforward to interpret. These practices can sometimes be unavailable to some NASA projects due to special application conditions that must be addressed, such as schedule constraints, cost constraints, logistical constraints, or advances in the technology that make the historical standards an inappropriate choice for establishing part performance and reliability. Alternate methods have begun to emerge and to be used by NASA programs to test parts individually or as part of a system, especially when standard lot tests cannot be applied. Four alternate screening methods will be discussed in this paper: Highly accelerated life test (HALT), forward voltage drop tests for evaluating wire-bond integrity, burn-in options during or after highly accelerated stress test (HAST), and board-level qualification.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
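For an ordinary least-squares fit, the PRESS (leave-one-out) residuals needed for this metric can be obtained from the hat matrix without refitting, and the metric itself is simply the larger of two standard deviations. The numpy sketch below illustrates that logic on synthetic data; it is not the balance-calibration software.

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Greater of: std of PRESS residuals (fit points) and std of residuals (confirmation points)."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T      # hat matrix
    resid_fit = y_fit - X_fit @ beta
    press = resid_fit / (1.0 - np.diag(H))                     # leave-one-out (PRESS) residuals
    resid_conf = y_conf - X_conf @ beta
    return max(press.std(ddof=1), resid_conf.std(ddof=1))

# Illustrative use: compare two candidate regressor sets and keep the smaller metric.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=60)
y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.05 * rng.normal(size=60)
fit, conf = slice(0, 45), slice(45, 60)
for cols in ([x**0, x], [x**0, x, x**2]):
    X = np.column_stack(cols)
    print(len(cols), search_metric(X[fit], y[fit], X[conf], y[conf]))
```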
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, Andrew P.; Sparn, Bethany F.; Jin, Xin
This document is the final report of a two-year development, test, and demonstration project entitled 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL) Integrated Network Test-bed for Energy Grid Research and Technology (INTEGRATE) initiative. The Electric Power Research Institute (EPRI) and a team of partners were selected by NREL to carry out a project to develop and test how smart, connected consumer devices can act to enable the use of more clean energy technologies on the electric power grid. The project team includes a set of leading companies that produce key products in relation to achieving this vision: thermostats, water heaters, pool pumps, solar inverters, electric vehicle supply equipment, and battery storage systems. A key requirement of the project was open access at the device level - a feature seen as foundational to achieving a future of widespread distributed generation and storage. The internal intelligence, standard functionality and communication interfaces utilized in this project result in the ability to integrate devices at any level, to work collectively at the level of the home/business, microgrid, community, distribution circuit or other. Collectively, the set of products serve as a platform on which a wide range of control strategies may be developed and deployed.
The triglyceride and glucose index is useful for recognising insulin resistance in children.
Rodríguez-Morán, M; Simental-Mendía, L E; Guerrero-Romero, F
2017-06-01
Although recognising insulin resistance (IR) in children is particularly important, the gold standard test used to diagnose it, the euglycaemic glucose clamp, is costly, invasive and is not routinely available in our clinical settings in Mexico. This study evaluated whether the triglyceride-glucose (TyG) index would provide a useful alternative. A total of 2779 school children aged seven to 17 years, from Durango, Mexico, were enrolled during 2015-2016. The gold standard euglycaemic-hyperinsulinemic clamp test was performed in a randomly selected subsample of 125 children, and diagnostic concordance between the TyG index and the homoeostasis model assessment of IR was evaluated in all of the 2779 enrolled children. The best cut-off values for recognising IR using the TyG index were 4.65 for prepubertal girls and boys, 4.75 for pubertal girls and 4.70 for pubertal boys. Concordance between the TyG index and the homoeostasis model assessment of IR was 0.910 and 0.902 for the prepubertal girls and boys, 0.932 for the pubertal girls and 0.925 for the pubertal boys. The TyG index was useful for recognising IR in both prepubertal and pubertal children and could provide a feasible alternative to the costly and invasive gold standard test for IR in resource-limited settings. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
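The abstract does not restate the index formula; consistent with the cut-off values quoted (≈4.6-4.8), the sketch below assumes the form TyG = ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL]) / 2, noting that other variants of the formula appear in the literature. The example values are hypothetical.

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG index, assumed here as ln(TG [mg/dL] x glucose [mg/dL]) / 2."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl) / 2.0

# Cut-off values for recognising insulin resistance, as reported in the abstract.
CUTOFFS = {"prepubertal": 4.65, "pubertal_girls": 4.75, "pubertal_boys": 4.70}

tyg = tyg_index(triglycerides_mg_dl=130, glucose_mg_dl=95)   # hypothetical child
print(round(tyg, 2), tyg >= CUTOFFS["prepubertal"])
```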
Niemeijer, Meindert; van Ginneken, Bram; Cree, Michael J; Mizutani, Atsushi; Quellec, Gwénolé; Sanchez, Clara I; Zhang, Bob; Hornero, Roberto; Lamard, Mathieu; Muramatsu, Chisako; Wu, Xiangqian; Cazuguel, Guy; You, Jane; Mayo, Agustín; Li, Qin; Hatanaka, Yuji; Cochener, Béatrice; Roux, Christian; Karray, Fakhri; Garcia, María; Fujita, Hiroshi; Abramoff, Michael D
2010-01-01
The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection, numerous methods have been published in the past, but none of these has been compared with the others on the same data. In this work we present the results of the first international microaneurysm detection competition, organized in the context of the Retinopathy Online Challenge (ROC), a multiyear online competition for various aspects of DR detection. For this competition, we compare the results of five different methods, produced by five different teams of researchers on the same set of data. The evaluation was performed in a uniform manner using an algorithm presented in this work. The set of data used for the competition consisted of 50 training images with available reference standard and 50 test images where the reference standard was withheld by the organizers (M. Niemeijer, B. van Ginneken, and M. D. Abràmoff). The results obtained on the test data were submitted through a website, after which standardized evaluation software was used to determine the performance of each of the methods. A human expert detected microaneurysms in the test set to allow comparison with the performance of the automatic methods. The overall results show that microaneurysm detection is a challenging task for both the automatic methods and the human expert. There is room for improvement, as the best-performing system does not reach the performance of the human expert. The data associated with the ROC microaneurysm detection competition will remain publicly available and the website will continue accepting submissions.
Norman, J Farley; Cheeseman, Jacob R; Baxter, Michael W; Thomason, Kelsey E; Adkins, Olivia C; Rogers, Connor E
2014-05-01
Younger (20-25 years of age) and older (61-79 years) adults were evaluated for their ability to visually discriminate length. Almost all experiments that have utilized the method of single stimuli to date have required participants to judge test stimuli relative to a single implicit standard (for a rare exception, see Morgan, On the scaling of size judgements by orientational cues, Vision Research, 1992, 32, 1433-1445). In the current experiments, we not only asked participants to judge lengths relative to a single implicit standard, but they also compared test stimuli to two different implicit standards within the same blocks of trials. We analyzed our participants' judgments to evaluate whether significant sequential dependencies occurred. We found that while individual younger and older adults possessed similar length difference thresholds and exhibited similar overall biases, the judgments of older adults within individual blocks of trials were more strongly biased (than younger adults) by preceding responses (i.e., their judgments on any given trial were more strongly affected by responses to previously viewed stimuli). In addition, the judgments of both younger and older adults were more strongly biased by preceding responses in the blocks of trials with multiple implicit standards. Overall, our results are consistent with the operation of the tracking mechanism described by Criterion-setting theory (Lages and Treisman, Spatial frequency discrimination: Visual long-term memory or criterion setting? Vision Research, 1998, 38, 557-572). Copyright © 2014 Elsevier Ltd. All rights reserved.
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing-distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite to use Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
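Test-retest reliability in this kind of design is commonly summarized as the correlation between session-one and session-two estimates of each processing component. The sketch below shows that calculation on hypothetical data; the study may well have used a different reliability coefficient.

```python
import numpy as np
from scipy import stats

# Hypothetical visual processing speed estimates (items/s) from two sessions.
session1 = np.array([21.3, 18.7, 25.1, 30.2, 16.9, 22.4, 27.8, 19.5])
session2 = np.array([20.8, 19.2, 24.3, 31.0, 17.5, 21.9, 28.4, 18.8])

r, p = stats.pearsonr(session1, session2)   # simple test-retest reliability estimate
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```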
Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X
2016-11-21
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing-distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite to use Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.
Quality Assurance of RNA Expression Profiling in Clinical Laboratories
Tang, Weihua; Hu, Zhiyuan; Muallem, Hind; Gulley, Margaret L.
2012-01-01
RNA expression profiles are increasingly used to diagnose and classify disease, based on expression patterns of as many as several thousand RNAs. To ensure quality of expression profiling services in clinical settings, a standard operating procedure incorporates multiple quality indicators and controls, beginning with preanalytic specimen preparation and proceeding through analysis, interpretation, and reporting. Before testing, histopathological examination of each cellular specimen, along with optional cell enrichment procedures, ensures adequacy of the input tissue. Other tactics include endogenous controls to evaluate adequacy of RNA and exogenous or spiked controls to evaluate run- and patient-specific performance of the test system, respectively. Unique aspects of quality assurance for array-based tests include controls for the pertinent outcome signatures that often supersede controls for each individual analyte, built-in redundancy for critical analytes or biochemical pathways, and software-supported scrutiny of abundant data by a laboratory physician who interprets the findings in a manner facilitating appropriate medical intervention. Access to high-quality reagents, instruments, and software from commercial sources promotes standardization and adoption in clinical settings, once an assay is vetted in validation studies as being analytically sound and clinically useful. Careful attention to the well-honed principles of laboratory medicine, along with guidance from government and professional groups on strategies to preserve RNA and manage large data sets, promotes clinical-grade assay performance. PMID:22020152
Experimental test of nonlocal realistic theories without the rotational symmetry assumption.
Paterek, Tomasz; Fedrizzi, Alessandro; Gröblacher, Simon; Jennewein, Thomas; Zukowski, Marek; Aspelmeyer, Markus; Zeilinger, Anton
2007-11-23
We analyze the class of nonlocal realistic theories that was originally considered by Leggett [Found. Phys. 33, 1469 (2003); doi:10.1023/A:1026096313729] and tested by us in a recent experiment [Nature (London) 446, 871 (2007); doi:10.1038/nature05677]. We derive an incompatibility theorem that works for finite numbers of polarizer settings and that does not require the previously assumed rotational symmetry of the two-particle correlation functions. The experimentally measured case involves seven different measurement settings. Using polarization-entangled photon pairs, we exclude this broader class of nonlocal realistic models by experimentally violating a new Leggett-type inequality by 80 standard deviations.
HIV Testing and Treatment with Correctional Populations: People, Not Prisoners
Seal, David Wyatt; Eldridge, Gloria D.; Zack, Barry; Sosman, James
2014-01-01
Institutional policies, practices, and norms can impede the delivery of ethical standard-of-care treatment for people with HIV in correctional settings. In this commentary, we focus on the fundamental issues that must be addressed to create an ethical environment in which best medical practices can be implemented when working with correctional populations. Thus, we consider ethical issues related to access to services, patient privacy, confidentiality, informed consent for testing and treatment, and issues related to the provision of services in an institutional setting in which maintenance of security is the primary mission. Medical providers must understand and navigate the dehumanization inherent in most correctional settings, competing life demands for incarcerated individuals, power dynamics within the correctional system, and the needs of family and significant others who remain in the community. PMID:20693739
Crisp, Ginny D; Burkhart, Jena Ivey; Esserman, Denise A; Weinberger, Morris; Roth, Mary T
2011-12-01
Medication is one of the most important interventions for improving the health of older adults, yet it has great potential for causing harm. Clinical pharmacists are well positioned to engage in medication assessment and planning. The Individualized Medication Assessment and Planning (iMAP) tool was developed to aid clinical pharmacists in documenting medication-related problems (MRPs) and associated recommendations. The purpose of our study was to assess the reliability and usability of the iMAP tool in classifying MRPs and associated recommendations in older adults in the ambulatory care setting. Three cases, representative of older adults seen in an outpatient setting, were developed. Pilot testing was conducted and a "gold standard" key developed. Eight eligible pharmacists consented to participate in the study. They were instructed to read each case, make an assessment of MRPs, formulate a plan, and document the information using the iMAP tool. Inter-rater reliability was assessed for each case, comparing the pharmacists' identified MRPs and recommendations to the gold standard. Consistency of categorization across reviewers was assessed using the κ statistic or percent agreement. The mean κ across the 8 pharmacists in classifying MRPs compared with the gold standard was 0.74 (range, 0.54-1.00) for case 1 and 0.68 (range, 0.36-1.00) for case 2, indicating substantial agreement. For case 3, percent agreement was 63% (range, 40%-100%). The mean κ across the 8 pharmacists when classifying recommendations compared with the gold standard was 0.87 (range, 0.58-1.00) for case 1 and 0.88 (range, 0.75-1.00) for case 2, indicating almost perfect agreement. For case 3, percent agreement was 68% (range, 40%-100%). Clinical pharmacists found the iMAP tool easy to use. The iMAP tool provides a reliable and standardized approach for clinical pharmacists to use in the ambulatory care setting to classify MRPs and associated recommendations. Future studies will explore the predictive validity of the tool on clinical outcomes such as health care utilization. Copyright © 2011 Elsevier HS Journals, Inc. All rights reserved.
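The agreement analysis described above (Cohen's kappa of each pharmacist's codes against a gold-standard key, plus percent agreement) can be reproduced with standard statistical tooling. The following Python sketch is illustrative only: the MRP category labels are hypothetical placeholders, not the actual iMAP coding scheme or study data.

    # Sketch: inter-rater agreement against a gold-standard key, as in the iMAP evaluation.
    # The category labels are hypothetical placeholders, not the actual iMAP codes.
    from sklearn.metrics import cohen_kappa_score

    gold  = ["dose_too_high", "needs_additional_therapy", "adverse_reaction", "nonadherence", "no_problem"]
    rater = ["dose_too_high", "needs_additional_therapy", "drug_interaction", "nonadherence", "no_problem"]

    kappa = cohen_kappa_score(gold, rater)                                    # chance-corrected agreement
    percent_agreement = sum(g == r for g, r in zip(gold, rater)) / len(gold)  # raw agreement
    print(f"kappa = {kappa:.2f}, percent agreement = {percent_agreement:.0%}")

By the usual Landis and Koch convention, kappa values of 0.61-0.80 are read as substantial agreement and values above 0.80 as almost perfect agreement, which is how the ranges reported above are interpreted.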
Food-Related Contact Dermatitis, Contact Urticaria, and Atopy Patch Test with Food.
Walter, Alexandra; Seegräber, Marlene; Wollenberg, Andreas
2018-06-07
A wide variety of foods may cause or aggravate skin diseases such as contact dermatitis, contact urticaria, or atopic dermatitis (AD), in both occupational and private settings. The mechanisms underlying allergic reactions to food, food additives, and spices may be immunologic or non-immunologic. The classification and understanding of these reactions is a complex field, and knowledge of the possible reaction patterns and appropriate diagnostic test methods is essential. In addition, certain foods may cause worsening of atopic dermatitis lesions in children. The atopy patch test (APT) is a well-established, clinically useful tool for assessing delayed-type reactions to protein allergens in patients and may be useful to detect protein allergens relevant for certain skin diseases. The APT may even detect sensitization against allergens in intrinsic atopic dermatitis patients, who show negative skin prick test and negative in vitro IgE test results against these allergens. Native foods, SPT solutions on filter paper, and purified allergens in petrolatum have been used for APT. The European Task Force on Atopic Dermatitis (ETFAD) has worked on standardizing this test in the context of AD patients, who are allergic to aeroallergens and food. This recommended, standardized technique involves test application at the upper back of children and adults; use of large, 12-mm Finn chambers; avoidance of any pre-treatment such as tape stripping or delipidation; standardized amounts of purified allergens in petrolatum; and use of the standardized ETFAD reading key. The APT may not be the best working or best standardized of all possible skin tests, but it is the best test that we currently have available in this niche.
Effect of cleaning status on accuracy and precision of oxygen flowmeters of various ages.
Fissekis, Stephanie; Hodgson, David S; Bello, Nora M
2017-07-01
To evaluate oxygen flowmeters for accuracy and precision, assess the effects of cleaning and assess conformity to the American Society for Testing Materials (ASTM) standards. Experimental study. The flow of oxygen flowmeters from 31 anesthesia machines aged 1-45 years was measured before and after cleaning using a volumetric flow analyzer set at 0.5, 1.0, 2.0, 3.0, and 4.0 L minute⁻¹. A general linear mixed models approach was used to assess flow accuracy and precision. Flowmeters 1 year of age delivered accurate mean oxygen flows at all settings regardless of cleaning status. Flowmeters ≥5 years of age underdelivered at flows of 3.0 and 4.0 L minute⁻¹. Flowmeters ≥12 years underdelivered at flows of 2.0, 3.0 and 4.0 L minute⁻¹ prior to cleaning. There was no evidence of any beneficial effect of cleaning on accuracy of flowmeters 5-12 years of age (p > 0.22), but the accuracy of flowmeters ≥15 years of age was improved by cleaning (p < 0.05). Regardless of age, cleaning increased precision, decreasing flow variability by approximately 17%. Nine of 31 uncleaned flowmeters did not meet ASTM standards. After cleaning, a different set of nine flowmeters did not meet standards, including three that had met standards prior to cleaning. Older flowmeters were more likely to underdeliver oxygen, especially at higher flows. Regardless of age, cleaning decreased flow variability, improving precision. However, flowmeters still may fail to meet ASTM standards, regardless of cleaning status. Cleaning anesthesia machine oxygen flowmeters improved precision for all tested machines and partially corrected inaccuracies in flowmeters ≥15 years old. A notable proportion of flowmeters did not meet ASTM standards. Cleaning did not ensure that they subsequently conformed to ASTM standards. We recommend annual flow output validation to identify whether flowmeters are acceptable for continued clinical use. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
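The "general linear mixed models approach" mentioned above can be sketched with a random intercept for each anesthesia machine and fixed effects for flow setting and cleaning status. The column names, file name, and exact model terms below are assumptions for illustration, not the authors' specification.

    # Sketch: mixed model for delivered flow, with repeated measurements nested within machines.
    # File and column names (machine, age_group, setting, cleaned, delivered) are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("flowmeter_measurements.csv")   # one row per measurement

    # Fixed effects: nominal flow setting, cleaning status, machine age group, and their interactions;
    # a random intercept per machine accounts for repeated measures on the same flowmeter.
    model = smf.mixedlm("delivered ~ C(setting) * cleaned * age_group", data=df, groups=df["machine"])
    result = model.fit()
    print(result.summary())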
Ronquillo, Jay G; Weng, Chunhua; Lester, William T
2017-11-17
Precision medicine involves three major innovations currently taking place in healthcare: electronic health records, genomics, and big data. A major challenge for healthcare providers, however, is understanding the readiness for practical application of initiatives like precision medicine. Our objective was to better understand the current state and challenges of precision medicine interoperability, using a national genetic testing registry as a starting point placed in the context of established interoperability formats. We performed an exploratory analysis of the National Institutes of Health Genetic Testing Registry. Relevant standards included Health Level Seven International Version 3 Implementation Guide for Family History, the Human Genome Organization Gene Nomenclature Committee (HGNC) database, and Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We analyzed the distribution of genetic testing laboratories, genetic test characteristics, and standardized genome/clinical code mappings, stratified by laboratory setting. There were a total of 25,472 genetic tests from 240 laboratories testing for approximately 3,632 distinct genes. Most tests focused on diagnosis, mutation confirmation, and/or risk assessment of germline mutations that could be passed to offspring. Genes were successfully mapped to all HGNC identifiers, but less than half of tests mapped to SNOMED CT codes, highlighting significant gaps when linking genetic tests to standardized clinical codes that explain the medical motivations behind test ordering. Conclusion: While precision medicine could potentially transform healthcare, successful practical and clinical application will first require the comprehensive and responsible adoption of interoperable standards, terminologies, and formats across all aspects of the precision medicine pipeline.
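The mapping gap reported above (complete HGNC coverage but under 50% SNOMED CT coverage) is essentially a per-setting coverage calculation over the registry records. A minimal sketch, assuming a hypothetical flattened export with one row per test; the real Genetic Testing Registry data model is richer and would need to be flattened first.

    # Sketch: coverage of standardized code mappings per laboratory setting.
    # The file name and columns (test_id, lab_setting, hgnc_ids, snomed_codes) are hypothetical.
    import pandas as pd

    tests = pd.read_csv("gtr_tests_flat.csv")

    def has_codes(cell):
        # True if the field contains at least one code.
        return isinstance(cell, str) and cell.strip() != ""

    coverage = (
        tests.assign(hgnc_mapped=tests["hgnc_ids"].map(has_codes),
                     snomed_mapped=tests["snomed_codes"].map(has_codes))
             .groupby("lab_setting")[["hgnc_mapped", "snomed_mapped"]]
             .mean()          # fraction of tests with at least one mapping, per setting
    )
    print(coverage.round(3))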
Standardized development of computer software. Part 1: Methods
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.
Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions
NASA Technical Reports Server (NTRS)
Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.
1996-01-01
NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and, hence, the consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the needs of AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine burning aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.
76 FR 78814 - National Voluntary Laboratory Accreditation Program; Operating Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... requirements for accreditation bodies accrediting conformity assessment bodies. The change will allow NVLAP... the human environment. Therefore, an environmental assessment or Environmental Impact Statement is not..., Laboratories, Measurement standards, Testing. For the reasons set forth in the preamble, title 15 of the Code...
Writing and the Seven Intelligences.
ERIC Educational Resources Information Center
Grow, Gerald
In "Frames of Mind," Howard Gardner replaces the standard view of intelligence with the idea that human beings have several distinct intelligences. Using an elaborate set of criteria, including evidence from studies of brain damage, prodigies, developmental patterns, cross-cultural comparisons, and various kinds of tests, Gardner…
A Population of Assessment Tasks
ERIC Educational Resources Information Center
Daro, Phil; Burkhardt, Hugh
2012-01-01
We propose the development of a "population" of high-quality assessment tasks that cover the performance goals set out in the "Common Core State Standards for Mathematics." The population will be published. Tests are drawn from this population as a structured random sample guided by a "balancing algorithm."
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-16
...): The capability to interface with external devices over a high bandwidth network (e.g., IEEE 802.11 (WiFi), MoCA, HPNA). For purposes of this specification, IEEE 802.3 wired Ethernet is not considered a...
Sommerville, C; Endris, R; Bell, T A; Ogawa, K; Buchmann, K; Sweeney, D
2016-03-30
This guideline is intended to assist in the planning and execution of studies designed to assess the efficacy of ectoparasiticides for fish. It is the first ectoparasite-specific guideline to deal with studies set in the aquatic environment and therefore provides details for the maintenance of environmental standards for finfish. Information is included on a range of pre-clinical study designs as well as clinical studies in commercial/production sites, set within a regulatory framework. It provides information on the study animals, their welfare, husbandry and environmental requirements during the study. The most commonly pathogenic ectoparasites are presented with relevant points regarding life history, host challenge and numeric evaluation. Preparation and presentation of both topical and oral test treatments is provided, together with guidance on data collection and analysis. The guideline provides a quality standard for efficacy studies on finfish, which will assist researchers and regulatory authorities worldwide and contribute to the wider objective of harmonisation of procedures.
Setting Emissions Standards Based on Technology Performance
In setting national emissions standards, EPA sets emissions performance levels rather than mandating use of a particular technology. The law mandates that EPA use numerical performance standards whenever feasible in setting national emissions standards.
NASA Technical Reports Server (NTRS)
Compton, E. C.
1986-01-01
Emittance tests were made on samples of Rene' 41, Haynes 188, and Inconel 625 superalloy metals in an evaluation of a standard test method for determining total hemispherical emittances of surfaces from 293 K to 1673 K. The intent of this evaluation was to address any problems encountered, check repeatability of measured emittances, and gain experience in use of the test procedure. Five test specimens were fabricated to prescribed test dimensions, and the surfaces were cleaned of oil and residue. Three of these specimens were without oxidized surfaces and two with oxidized surfaces. The oxidized specimens were Rene' 41 and Haynes 188. The tests were conducted in a vacuum where the samples were resistance-heated to various temperature levels ranging from 503 K to 1293 K. The calculated results for emittance, in the worst case, were repeatable to a maximum spread of ±4% from the mean of five sets of plotted data for each specimen.
International survey on D-dimer test reporting: a call for standardization.
Lippi, Giuseppe; Tripodi, Armando; Simundic, Ana-Maria; Favaloro, Emmanuel J
2015-04-01
D-dimer is the biochemical gold standard for diagnosing a variety of thrombotic disorders, but result reporting is heterogeneous in clinical laboratories. A specific five-item questionnaire was developed to gain a clear picture of the current standardization of D-dimer test results. The questionnaire was opened online (December 24, 2014-February 10, 2015) on the platform "Google Drive (Google Inc., Mountain View; CA)," and widely disseminated worldwide by newsletters and alerts. A total of 409 responses were obtained during the period of data capture, the largest of which were from Italy (136; 33%), Australia (55; 13%), Croatia (29; 7%), Serbia (26; 6%), and the United States (21; 5%). Most respondents belonged to laboratories in general hospitals (208; 51%), followed by laboratories in university hospitals (104; 26%), and the private sector (94; 23%). The majority of respondents (i.e., 246; 60%) indicated the use of fibrinogen equivalent unit for expressing D-dimer results, with significant heterogeneities across countries and health care settings. The highest prevalence of laboratories indicated they were using "ng/mL" (139; 34%), followed by "mg/L" (136; 33%), and "µg/L" (73; 18%), with significant heterogeneity across countries but not among different health care settings. Expectedly, the vast majority of laboratories (379; 93%) declared to be using a fixed cutoff rather than an age-adjusted threshold, with no significant heterogeneity across countries and health care settings. The results of this survey attest that at least 28 different combinations of measurement units are currently used to report D-dimer results worldwide, and this evidence underscores the urgent need for more effective international joint efforts aimed at promoting a worldwide standardization of D-dimer results reporting.
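One practical consequence of the heterogeneity described above is that comparing or pooling D-dimer values requires harmonizing both the magnitude unit and the unit type (fibrinogen equivalent units versus D-dimer units). The sketch below handles only the magnitude conversion and deliberately leaves the FEU/DDU distinction to the assay documentation; the function and lookup table are illustrative, not a reporting standard.

    # Sketch: normalize the magnitude unit of a reported D-dimer value to µg/L.
    # Conversion between FEU and D-dimer units is assay-dependent and is NOT handled here.
    FACTORS_TO_UG_PER_L = {
        "ng/mL": 1.0,     # 1 ng/mL equals 1 µg/L
        "µg/L": 1.0,
        "mg/L": 1000.0,   # 1 mg/L equals 1000 µg/L
        "µg/mL": 1000.0,
    }

    def to_ug_per_l(value: float, unit: str) -> float:
        # Convert a reported D-dimer value to µg/L (magnitude only).
        return value * FACTORS_TO_UG_PER_L[unit]

    print(to_ug_per_l(0.5, "mg/L"))    # 500.0
    print(to_ug_per_l(500, "ng/mL"))   # 500.0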
A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.
Low, Ariana; Kok, Si Ling; Khong, Yuet Mei; Chan, Sui Yung; Gokhale, Rajeev
2015-11-01
No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign the relative contributions to total variability from the operator, the sample, and the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system, with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times, possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during the product development stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
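The agreement analysis between the DTU and the USP method can be illustrated with a Bland-Altman-style calculation: the mean difference (bias) between paired disintegration times and its 95% limits of agreement. The numbers below are made up for illustration and are not the study's measurements.

    # Sketch: agreement between two disintegration-time methods (Bland-Altman style).
    # Paired times in seconds are illustrative values only.
    import numpy as np

    usp = np.array([32.0, 41.0, 28.0, 55.0, 47.0, 36.0])
    dtu = np.array([30.0, 38.0, 27.0, 50.0, 44.0, 35.0])

    diff = usp - dtu
    bias = diff.mean()                           # mean difference between methods
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    print(f"bias = {bias:.1f} s, limits of agreement = {loa[0]:.1f} to {loa[1]:.1f} s")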
Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul
2010-11-01
In this study, a series of GC calibration experiments were conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by the fixed standard volume and fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of internal standards can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can be moderately different from that derivable from standard gases. It thus suggests that the reliability of calibration approaches should be examined carefully, with consideration of the interactive relationships between the compound-specific properties and the operating conditions of the instrumental setup.
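The comparison between external and internal calibration rests on two simple regressions: detector response against loaded analyte mass (external), and response ratio against mass ratio relative to the internal standard (internal). A minimal sketch with invented peak areas; only the analyte name is taken from the abstract, and the numbers are illustrative.

    # Sketch: external vs. internal GC calibration for one carbonyl compound.
    # Peak areas and masses are invented for illustration, not data from the study.
    import numpy as np

    mass_ng      = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # acetaldehyde loaded on the sorbent tube
    area         = np.array([1.1e4, 2.3e4, 4.4e4, 9.1e4, 1.8e5])  # analyte detector response
    area_istd    = np.array([5.0e4, 5.2e4, 4.9e4, 5.1e4, 5.0e4])  # internal-standard response
    mass_istd_ng = 50.0                                            # fixed internal-standard mass

    # External calibration: area = slope * mass + intercept
    slope_ext, intercept_ext = np.polyfit(mass_ng, area, 1)

    # Internal calibration: (area / area_istd) = slope * (mass / mass_istd) + intercept
    slope_int, intercept_int = np.polyfit(mass_ng / mass_istd_ng, area / area_istd, 1)

    unknown_area, unknown_area_istd = 6.0e4, 5.1e4
    ext_estimate = (unknown_area - intercept_ext) / slope_ext
    int_estimate = ((unknown_area / unknown_area_istd) - intercept_int) / slope_int * mass_istd_ng
    print(f"external estimate: {ext_estimate:.1f} ng, internal estimate: {int_estimate:.1f} ng")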
A multi‐centre evaluation of nine rapid, point‐of‐care syphilis tests using archived sera
Herring, A J; Ballard, R C; Pope, V; Adegbola, R A; Changalucha, J; Fitzgerald, D W; Hook, E W; Kubanova, A; Mananwatte, S; Pape, J W; Sturm, A W; West, B; Yin, Y P; Peeling, R W
2006-01-01
Objectives To evaluate nine rapid syphilis tests at eight geographically diverse laboratory sites for their performance and operational characteristics. Methods Tests were compared “head to head” using locally assembled panels of 100 archived (50 positive and 50 negative) sera at each site using as reference standards the Treponema pallidum haemagglutination or the T pallidum particle agglutination test. In addition inter‐site variation, result stability, test reproducibility and test operational characteristics were assessed. Results All nine tests gave good performance relative to the reference standard with sensitivities ranging from 84.5–97.7% and specificities from 84.5–98%. Result stability was variable if result reading was delayed past the recommended period. All the tests were found to be easy to use, especially the lateral flow tests. Conclusions All the tests evaluated have acceptable performance characteristics and could make an impact on the control of syphilis. Tests that can use whole blood and do not require refrigeration were selected for further evaluation in field settings. PMID:17118953
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Nan; Zheng, Nina; Fridley, David
2012-02-28
Appliance energy efficiency standards and labeling (S&L) programs have been important policy tools for regulating the efficiency of energy-using products for over 40 years and continue to expand in terms of geographic and product coverage. The most common S&L programs include mandatory minimum energy performance standards (MEPS) that seek to push the market for efficient products, and energy information and endorsement labels that seek to pull the market. This study seeks to review and compare some of the earliest and most well-developed S&L programs in three countries and one region: the U.S. MEPS and ENERGY STAR, Australia MEPS and Energy Label, European Union MEPS and Ecodesign requirements and Energy Label and Japanese Top Runner programs. For each program, key elements of S&L programs are evaluated and comparative analyses across the programs undertaken to identify best practice examples of individual elements as well as cross-cutting factors for success and lessons learned in international S&L program development and implementation. The international review and comparative analysis identified several overarching themes and highlighted some common factors behind successful program elements. First, standard-setting and programmatic implementation can benefit significantly from a legal framework that stipulates a specific timeline or schedule for standard-setting and revision, product coverage and legal sanctions for non-compliance. Second, the different MEPS programs revealed similarities in targeting efficiency gains that are technically feasible and economically justified as the principle for choosing a standard level, in many cases at a level that no product on the current market could reach. Third, detailed survey data such as the U.S. Residential Energy Consumption Survey (RECS) and rigorous analyses provide a strong foundation for standard-setting while incorporating the participation of different groups of stakeholders further strengthen the process. Fourth, sufficient program resources for program implementation and evaluation are critical to the effectiveness of standards and labeling programs and cost-sharing between national and local governments can help ensure adequate resources and uniform implementation. Lastly, check-testing and punitive measures are important forms of enforcement while the cancellation of registration or product sales-based fines have also proven effective in reducing non-compliance. The international comparative analysis also revealed the differing degree to which the level of government decentralization has influenced S&L programs and while no single country has best practices in all elements of standards and labeling development and implementation, national examples of best practices for specific elements do exist. For example, the U.S. has exemplified the use of rigorous analyses for standard-setting and robust data source with the RECS database while Japan's Top Runner standard-setting principle has motivated manufacturers to exceed targets. In terms of standards implementation and enforcement, Australia has demonstrated success with enforcement given its long history of check-testing and enforcement initiatives while mandatory information-sharing between EU jurisdictions on compliance results is another important enforcement mechanism.
These examples show that it is important to evaluate not only the drivers of different paths of standards and labeling development, but also the country-specific context for best practice examples in order to understand how and why certain elements of specific S&L programs have been effective.
Lerna, Anna; Esposito, Dalila; Conson, Massimiliano; Russo, Luigi; Massagli, Angelo
2012-01-01
The Picture Exchange Communication System (PECS) is a common treatment choice for non-verbal children with autism. However, little empirical evidence is available on the usefulness of PECS in treating social-communication impairments in autism. To test the effects of PECS on social-communicative skills in children with autism, concurrently taking into account standardized psychometric data, standardized functional assessment of adaptive behaviour, and information on social-communicative variables coded in an unstructured setting. Eighteen preschool children (mean age = 38.78 months) were assigned to two intervention approaches, i.e. PECS and Conventional Language Therapy (CLT). Both PECS (Phases I-IV) and CLT were delivered three times per week, in 30-min sessions, for 6 months. Outcome measures were the following: Autism Diagnostic Observation Schedule (ADOS) domain scores for Communication and Reciprocal Social Interaction; Language and Personal-Social subscales of the Griffiths' Mental Developmental Scales (GMDS); Communication and Social Abilities domains of the Vineland Adaptive Behavior Scales (VABS); and several social-communicative variables coded in an unstructured setting. Results demonstrated that the two groups did not differ at Time 1 (pre-treatment assessment), whereas at Time 2 (post-test) the PECS group showed a significant improvement with respect to the CLT group on the VABS social domain score and on almost all the social-communicative abilities coded in the unstructured setting (i.e. joint attention, request, initiation, cooperative play, but not eye contact). These findings showed that PECS intervention (Phases I-IV) can improve social-communicative skills in children with autism. This improvement is especially evident in standardized measures of adaptive behaviour and measures derived from the observation of children in an unstructured setting. © 2012 Royal College of Speech and Language Therapists.
Y-balance test: a reliability study involving multiple raters.
Shaffer, Scott W; Teyhen, Deydre S; Lorenson, Chelsea L; Warren, Rick L; Koreerat, Christina M; Straseske, Crystal A; Childs, John D
2013-11-01
The Y-balance test (YBT) is one of the few field expedient tests that have shown predictive validity for injury risk in an athletic population. However, analysis of the YBT in a heterogeneous population of active adults (e.g., military, specific occupations) involving multiple raters with limited experience in a mass screening setting is lacking. The primary purpose of this study was to determine interrater test-retest reliability of the YBT in a military setting using multiple raters. Sixty-four service members (53 males, 11 females) actively conducting military training volunteered to participate. Interrater test-retest reliability of the maximal reach had intraclass correlation coefficients (2,1) of 0.80 to 0.85, with a standard error of measurement ranging from 3.1 to 4.2 cm for the 3 reach directions (anterior, posteromedial, and posterolateral). Interrater test-retest reliability of the average reach of 3 trials had intraclass correlation coefficients (2,3) ranging from 0.85 to 0.93, with an associated standard error of measurement ranging from 2.0 to 3.5 cm. The YBT showed good interrater test-retest reliability with an acceptable level of measurement error among multiple raters screening active duty service members. In addition, 31.3% (n = 20 of 64) of participants exhibited an anterior reach asymmetry of >4 cm, suggesting impaired balance symmetry and potentially increased risk for injury. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
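The standard error of measurement figures quoted above follow directly from the reliability coefficient and the between-subject spread, SEM = SD x sqrt(1 - ICC). A short worked example with assumed numbers chosen to fall in the range reported for the YBT.

    # Sketch: standard error of measurement from an ICC and the sample standard deviation.
    # The SD and ICC below are assumed values, not figures taken from the study.
    import math

    sd_reach_cm = 9.5   # between-subject SD of a reach direction (assumed)
    icc = 0.85          # interrater test-retest reliability (assumed)

    sem_cm = sd_reach_cm * math.sqrt(1.0 - icc)   # about 3.7 cm, comparable to the 3.1-4.2 cm reported
    print(f"SEM is approximately {sem_cm:.1f} cm")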
O'Neill, Brian
2009-04-01
Motor vehicle crashes result in some 1.2 million deaths and many more injuries worldwide each year, making them one of the biggest public health problems facing societies today. This article reviews the history of, and future potential for, one important countermeasure: designing vehicles that reduce occupant deaths and injuries. For many years, people had urged automakers to add design features to reduce crash injuries, but it was not until the mid-1960s that the idea of pursuing vehicle countermeasures gained any significant momentum. In 1966, the U.S. Congress passed the National Traffic and Motor Vehicle Safety Act, requiring the government to issue a comprehensive set of vehicle safety standards. This was the first broad set of requirements issued anywhere in the world, and within a few years similar standards were adopted in Europe and Australia. Early vehicle safety standards specified a variety of safety designs, resulting in cars being equipped with lap/shoulder belts, energy-absorbing steering columns, crash-resistant door locks, high-penetration-resistant windshields, etc. Later, the standards moved away from specifying particular design approaches and instead used crash tests and instrumented dummies to set limits on the potential for serious occupant injuries by crash mode. These newer standards paved the way for an approach that used the marketplace, in addition to government regulation, to improve vehicle safety designs: using crash tests and instrumented dummies to provide consumers with comparative safety ratings for new vehicles. The approach began in the late 1970s, when NHTSA started publishing injury measures from belted dummies in new passenger vehicles subjected to frontal barrier crash tests at speeds somewhat higher than specified in the corresponding regulation. This program became the world's first New Car Assessment Program (NCAP) and rated frontal crashworthiness by awarding stars (five stars being the best and one the worst) derived from head and chest injury measures recorded on driver and front-seat test dummies. NHTSA later added side crash tests and rollover ratings to the U.S. NCAP. Consumer crash testing spread worldwide in the 1990s. In 1995, the Insurance Institute for Highway Safety (IIHS) began using frontal offset crash tests to rate and compare frontal crashworthiness and later added side and rear crash assessments. Shortly after, Europe launched EuroNCAP to assess new car performance, including front, side, and front-end pedestrian tests. The influence of these consumer-oriented crash test programs on vehicle designs has been major. From the beginning, U.S. NCAP results prompted manufacturers to improve seat belt performance. Frontal offset tests from IIHS and EuroNCAP resulted in greatly improved front-end crumple zones and occupant compartments. Side impact tests have similarly resulted in improved side structures and accelerated the introduction of side impact airbags, especially those designed to protect occupants' heads. Vehicle safety designs, initially driven by regulations and later by consumer demand because of crash testing, have proven to be very successful public health measures. Since they were first introduced in the late 1960s, vehicle safety designs have saved hundreds of thousands of lives and prevented countless injuries worldwide. The designs that improved vehicle crashworthiness have been particularly effective.
Some newer crash avoidance designs also have the potential to be effective; for example, electronic stability control is already saving many lives in single-vehicle crashes. However, determining the actual effectiveness of these new technologies is a slow process and requires real-world crash experience, because there is no assessment equivalent of crash tests for crash avoidance designs.
A study on setting of the fatigue limit of temporary dental implants.
Kim, M H; Cho, E J; Lee, J W; Kim, E K; Yoo, S H; Park, C W
2017-07-01
A temporary dental implant is a medical device that is used temporarily to support a prosthesis, such as an artificial tooth, for restoring a patient's masticatory function during implant treatment. It is implanted in the oral cavity to substitute for the role of a tooth. Due to the aging and westernization of current Korean society, the number of tooth extraction and implantation procedures is increasing, leading to an increase in the use and development of temporary dental implants. Because an implant performs a masticatory function in place of a tooth, a dynamic load is repeatedly placed on the implant. Thus, fatigue is reported to be the most common cause of implant fracture. According to an investigation and analysis of current domestic and international standards, no standard for the fatigue of implant fixtures is separately specified. Although a test method for measuring fatigue is suggested in an ISO standard, it is a standard for permanent dental implants. Most of the test standards of Korean manufacturers and importers apply 250 N or more, based on the guidance for the safety and performance evaluation of dental implants. Therefore, this study was intended to determine the fatigue standard that can be applied to temporary dental implants when fatigue is measured according to the test method suggested in the permanent dental implant standard. The results indicate that suitable fatigue standards for temporary dental implants should be provided by each manufacturer rather than applying 250 N. This study will be useful for the establishment of fatigue standards and fatigue test methods by the manufacturers and importers of temporary dental implants.
The Impact of Setting the Standards of Health Promoting Hospitals on Hospital Indicators in Iran
Amiri, Mohammad; Khosravi, Ahmad; Riyahi, Leila
2016-01-01
Hospitals play a critical role in the health promotion of society. This study aimed to determine the impact of establishing standards of health promoting hospitals on hospital indicators in Shahroud. This applied, quasi-experimental study was conducted in 2013. Standards of health promoting hospitals were established as an intervention procedure in the Fatemiyeh hospital. Parameters of health promoting hospitals were compared in intervention and control hospitals before and after the intervention (6 months). The data were analyzed using chi-square and t-tests. With the establishment of standards for health promoting hospitals, standard scores in the intervention and control hospitals were found to be 72.26 ± 4.1 and 16.26 ± 7.5, respectively. The t-test showed a significant difference between the mean scores of the hospitals under study (P = 0.001). The chi-square test also showed a significant difference in patient satisfaction before and after the intervention, with patients' satisfaction being higher after the intervention (P = 0.001). It is difficult to comment on the short-term or long-term positive impacts of establishing standards of health promoting hospitals on all hospital indicators, but preliminary results show a positive impact of the implementation of the standards in the intervention hospital, which has led to the improvement of many indicators in the hospital. PMID:27959930
Effects of Space Radiation on Humoral and Cellular Immunity in Rhesus Monkeys.
1992-12-01
55318-1084) immunoplates that are routinely used for quantitative assays of human Ig levels. It seemed justified to use the human system to test the...of Ig between the irradiated and control monkeys of different ages. The tests were set up and read at 18 and 72 h by the same operator, taking careful...note of the lot number and the standard reference curves for each test kit. The samples were suitably diluted to obtain clear-cut reactions (i.e
Search for Lorentz Violation in a Short-Range Gravity Experiment
NASA Astrophysics Data System (ADS)
Bennett, D.; Skavysh, V.; Long, J.
2011-12-01
An experimental test of the Newtonian inverse square law at short range has been used to set limits on Lorentz violation in the pure gravity sector of the Standard-Model Extension. On account of the planar test mass geometry, nominally null with respect to 1/r^2 forces, the limits derived for the SME coefficients of Lorentz violation are on the order of s̄^JK ~ 10^4.
Laboratory and Field Evaluation of Rapid Setting Cementitious Materials for Large Crater Repair
2010-05-01
frame used within which to complete the repair was the current NATO standard of 4 hr. A total of 6 simulated craters were prepared, with each repair... Current practice for expedient runway repair...penalty. Numerous commercial products are available. A full-scale field test was conducted using rapid setting materials to repair simulated bomb craters
Tamhankar, Ashok J; Karnik, Shreyasee S; Stålsby Lundborg, Cecilia
2018-04-23
Antibiotic resistance, a consequence of antibiotic use, is a threat to health, with severe consequences for resource-constrained settings. If the determinants of human antibiotic use in India, a lower-middle-income country with one of the highest levels of antibiotic consumption in the world, could be understood, interventions could be developed that would also have implications for similar settings. Year-wise data for India for potential determinants and antibiotic consumption were sourced from publicly available databases for the years 2000-2010. Data were analyzed using Partial Least Squares regression, and the correlation between determinants and antibiotic consumption was evaluated, formulating 'Predictors' and 'Prediction models'. The 'prediction model' with the statistically most significant predictors (root mean square errors of prediction: train set, 377.0; test set, 297.0), formulated from a combination of Health infrastructure + Surface transport infrastructure (HISTI), predicted antibiotic consumption within the 95% confidence interval and estimated an antibiotic consumption of 11.6 standard units/person (14.37 billion standard units in total; standard units = number of doses sold in the country; a dose being a pill, capsule, or ampoule) for India for 2014. The HISTI model may become useful in predicting antibiotic consumption for countries/regions having circumstances and data similar to India, but without resources to measure actual data of antibiotic consumption.
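The modelling workflow described above, regressing annual antibiotic consumption on candidate determinants with partial least squares and scoring the model by root mean square error of prediction on held-back years, can be sketched as follows. The file layout, column names, and train/test split are assumptions for illustration; the actual HISTI predictor set is described only qualitatively above.

    # Sketch: PLS regression of annual antibiotic consumption on candidate determinants.
    # CSV layout, column names, and the year-based split are hypothetical.
    import pandas as pd
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    df = pd.read_csv("india_determinants_2000_2010.csv")   # one row per year
    predictors = ["health_infrastructure", "surface_transport_infrastructure"]
    target = "antibiotic_consumption_su"

    train = df[df["year"] <= 2007]
    test  = df[df["year"] > 2007]

    pls = PLSRegression(n_components=2)
    pls.fit(train[predictors], train[target])

    # Root mean square error of prediction on the fitted years and the held-back years.
    rmsep_train = mean_squared_error(train[target], pls.predict(train[predictors]).ravel()) ** 0.5
    rmsep_test  = mean_squared_error(test[target],  pls.predict(test[predictors]).ravel()) ** 0.5
    print(f"RMSEP train = {rmsep_train:.1f}, test = {rmsep_test:.1f}")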
Drugs and alcohol in civil aviation accident pilot fatalities from 2004-2008.
DOT National Transportation Integrated Search
2011-09-01
The FAA Office of Aerospace Medicine sets medical standards needed to protect the public and pilots from death or injury due to incapacitation of the pilot. As a part of this process, toxicology testing is performed by the FAA on almost every pil...
NASA Astrophysics Data System (ADS)
Kaus, Rüdiger
This chapter gives the background on the accreditation of testing and calibration laboratories according to ISO/IEC 17025 and sets out the requirements of this international standard. ISO 15189 describes similar requirements especially tailored for medical laboratories. Because of these similarities, ISO 15189 is not treated separately in this lecture.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-17
...; Comment Request--Omnidirectional Citizens Band Base Station Antennas AGENCY: Consumer Product Safety... antennas. The collection of information is in regulations setting forth the Safety Standard for Omnidirectional Citizens Band Base Station Antennas (16 CFR part 1204). These regulations establish testing and...
75 FR 58014 - Pipeline Safety: Information Collection Activity; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-23
... detection systems must comply with the standards set out in American Petroleum Institute (API) publication API 1130. API 1130 requires operators to record and retain certain information regarding the operation and testing of CPM systems. Compliance with API 1130, including its recordkeeping requirements...
VOLUMETRIC LEAK DETECTION IN LARGE UNDERGROUND STORAGE TANKS - VOLUME I
A set of experiments was conducted to determine whether volumetric leak detection systems presently used to test underground storage tanks (USTs) up to 38,000 L (10,000 gal) in capacity could meet EPA's regulatory standards for tank tightness and automatic tank gauging systems whe...
Exploration of a Reflective Practice Rubric
ERIC Educational Resources Information Center
Young, Karen; James, Kimberley; Noy, Sue
2016-01-01
Work integrated learning (WIL) educators using reflective practice to facilitate student learning require a set of standards that works within the traditional assessment frame of Higher Education, to ascertain the level at which reflective practice has been demonstrated. However, there is a paucity of tested assessment instruments that provide…
Diehl, Alessandra; Rassool, G Hussein; dos Santos, Manoel Antônio; Pillon, Sandra Cristina; Laranjeira, Ronaldo
2016-01-01
The aim of this study was to evaluate whether the prevalence of symptoms of sexual dysfunction identified in female drug users differs when assessed using a standardized scale versus a nonstandardized set of questions about sexual dysfunction. A cross-sectional study was conducted with two groups of substance-dependent women using the Drug Abuse Screening Test, the Short Alcohol Dependence Data questionnaire, the Fagerström Test for Nicotine Dependence for the evaluation of the severity of dependence, and the Arizona Sexual Experience Scale. In both groups, the severity of dependence and the prevalence of symptoms of sexual dysfunction were similar. The use of standardized and nonstandardized instruments to assess sexual dysfunction symptoms is an essential resource for the provision of good-quality care to this clientele.
Grol, R
1990-01-01
The Nederlands Huisartsen Genootschap (NHG), the college of general practitioners in the Netherlands, has begun a national programme of standard setting for the quality of care in general practice. When the standards have been drawn up and assessed they are disseminated via the journal Huisarts en Wetenschap. In a survey, carried out among a random sample of 10% of all general practitioners, attitudes towards national standard setting in general and towards the first set of standards (diabetes care) were studied. The response was 70% (453 doctors). A majority of the respondents said they were well informed about the national standard setting initiatives instigated by the NHG (71%) and about the content of the first standards (77%). The general practitioners had a positive attitude towards the setting of national standards for quality of care, and this was particularly true for doctors who were members of the NHG. Although a large majority of doctors said they agreed with most of the guidelines in the diabetes standards, fewer respondents were actually working to the guidelines, and some of the standards are certain to meet with a lot of resistance. A better knowledge of the standards and a more positive attitude to the process of national standard setting correlated with a more positive attitude to the guidelines formulated in the diabetes standards. The results could serve as a starting point for an exchange of views about standard setting in general practice in other countries. PMID:2265001