Issues and Methods for Standard-Setting.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Issues involved in standard setting along with methods for standard setting are reviewed, with specific reference to their relevance for criterion referenced testing. Definitions are given of continuum and state models, and traditional and normative standard setting procedures. Since continuum models are considered more appropriate for criterion…
Malau-Aduli, Bunmi Sherifat; Teague, Peta-Ann; D'Souza, Karen; Heal, Clare; Turner, Richard; Garne, David L; van der Vleuten, Cees
2017-12-01
A key issue underpinning the usefulness of the OSCE assessment to medical education is standard setting, but the majority of standard-setting methods remain challenging for performance assessment because they produce varying passing marks. Several studies have compared standard-setting methods; however, most of these studies are limited by their experimental scope, or use data on examinee performance at a single OSCE station or from a single medical school. This collaborative study between 10 Australian medical schools investigated the effect of standard-setting methods on OSCE cut scores and failure rates. This research used 5256 examinee scores from seven shared OSCE stations to calculate cut scores and failure rates using two different compromise standard-setting methods, namely the Borderline Regression and Cohen's methods. The results of this study indicate that Cohen's method yields similar outcomes to the Borderline Regression method, particularly for large examinee cohort sizes. However, with lower examinee numbers on a station, the Borderline Regression method resulted in higher cut scores and larger differences in failure rates. Because Cohen's method yields outcomes similar to those of the Borderline Regression method, its application for benchmarking purposes and in resource-limited settings is justifiable, particularly with large examinee numbers.
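For readers implementing the two compromise methods named in this abstract, the sketch below shows the usual arithmetic: Borderline Regression regresses station checklist scores on examiners' global ratings and takes the predicted score at the borderline grade, while Cohen's method takes a fixed proportion (often 60%) of the 95th-percentile score. The rating scale, the 60% multiplier, and the simulated data are illustrative assumptions, not values from the study.

```python
import numpy as np

def borderline_regression_cut(checklist_scores, global_ratings, borderline_rating=2):
    """Regress checklist scores on examiners' global ratings and read off the
    predicted score at the 'borderline' rating (a common formulation; the
    rating scale and the borderline code used here are assumptions)."""
    slope, intercept = np.polyfit(global_ratings, checklist_scores, deg=1)
    return intercept + slope * borderline_rating

def cohen_cut(checklist_scores, proportion=0.60, percentile=95):
    """Cohen's method: a fixed proportion of the score achieved by the
    95th-percentile examinee (proportion and percentile are the commonly
    cited defaults, used here as assumptions)."""
    return proportion * np.percentile(checklist_scores, percentile)

# Illustrative use on simulated station data
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=200)               # 1 = clear fail ... 5 = excellent
scores = 10 * ratings + rng.normal(0, 8, size=200)   # checklist scores

cut_blr = borderline_regression_cut(scores, ratings)
cut_cohen = cohen_cut(scores)
print(cut_blr, cut_cohen)
print((scores < cut_blr).mean(), (scores < cut_cohen).mean())  # failure rates
```

Failure rates then follow directly as the proportion of examinees scoring below the chosen cut, as in the last line above.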
A Comparison of Web-Based Standard Setting and Monitored Standard Setting.
ERIC Educational Resources Information Center
Harvey, Anne L.; Way, Walter D.
Standard setting, when carefully done, can be an expensive and time-consuming process. The modified Angoff method and the benchmark method, as utilized in this study, employ representative panels of judges to provide recommended passing scores to standard setting decision-makers. It has been considered preferable to have the judges meet in a…
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
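For readers wanting to reproduce the norm-referenced rule described here (mean minus one standard deviation of the raw scores), a minimal sketch follows; the cohort of marks is simulated, so the resulting cut score and pass rate are purely illustrative.

```python
import numpy as np

def norm_reference_cut(raw_scores, n_sd=1.0):
    """Norm-referenced standard: mean minus n_sd standard deviations,
    as applied to the 'raw' MCQ scores in the study."""
    scores = np.asarray(raw_scores, dtype=float)
    return scores.mean() - n_sd * scores.std(ddof=1)

# Illustrative cohort of 78 marks (simulated, not the study data)
rng = np.random.default_rng(1)
marks = rng.normal(65, 10, size=78).clip(0, 100)
cut = norm_reference_cut(marks)
pass_rate = (marks >= cut).mean()
print(f"cut = {cut:.1f}, pass rate = {pass_rate:.0%}")
```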
Condensed Mastery Profile Method for Setting Standards for Diagnostic Assessment Systems
ERIC Educational Resources Information Center
Clark, A. K.; Nash, B.; Karvonen, M.; Kingston, N.
2017-01-01
The purpose of this study was to develop a standard-setting method appropriate for use with a diagnostic assessment that produces profiles of student mastery rather than a single raw or scale score value. The condensed mastery profile method draws from established holistic standard-setting methods to use rounds of range finding and pinpointing to…
Diagnostic Profiles: A Standard Setting Method for Use with a Cognitive Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M.
2016-01-01
This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…
ERIC Educational Resources Information Center
Wood, Timothy J.; Humphrey-Murto, Susan M.; Norman, Geoffrey R.
2006-01-01
When setting standards, administrators of small-scale OSCEs often face several challenges, including a lack of resources, a lack of available expertise in statistics, and difficulty in recruiting judges. The Modified Borderline-Group Method is a standard setting procedure that compensates for these challenges by using physician examiners and is…
A Mapmark method of standard setting as implemented for the National Assessment Governing Board.
Schulz, E Matthew; Mitzel, Howard C
2011-01-01
This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.
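Since the Mapmark procedure builds on the bookmark method, it may help to see the core bookmark computation: a panelist's bookmark placement in a difficulty-ordered item booklet maps to the ability level at which that item is answered correctly with a chosen response probability (commonly 0.67), which is then translated to the reporting scale. The sketch below assumes a 2PL item response model and made-up item parameters; it is not the NAEP implementation.

```python
import numpy as np

def bookmark_theta(item_a, item_b, rp=0.67):
    """Ability (theta) at which a 2PL item is answered correctly with
    probability rp - the quantity a bookmark placement maps to.
    rp = 0.67 is a commonly used response probability; item parameters
    here are illustrative, not operational values."""
    return item_b + np.log(rp / (1 - rp)) / item_a

# Items ordered by difficulty (an 'ordered item booklet'); a panelist's
# bookmark after the 6th item implies the cut-off theta of that item.
a_params = np.full(10, 1.0)
b_params = np.linspace(-2.0, 2.5, 10)
bookmark_index = 5                       # zero-based: bookmark after item 6
cut_theta = bookmark_theta(a_params[bookmark_index], b_params[bookmark_index])
print(round(cut_theta, 2))
```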
Reliability and Validity of 10 Different Standard Setting Procedures.
ERIC Educational Resources Information Center
Halpin, Glennelle; Halpin, Gerald
Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…
Adopting Cut Scores: Post-Standard-Setting Panel Considerations for Decision Makers
ERIC Educational Resources Information Center
Geisinger, Kurt F.; McCormick, Carina M.
2010-01-01
Standard-setting studies utilizing procedures such as the Bookmark or Angoff methods are just one component of the complete standard-setting process. Decision makers ultimately must determine what they believe to be the most appropriate standard or cut score to use, employing the input of the standard-setting panelists as one piece of information…
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are often required, and they usually involve the difficult task of establishing cut scores. In contrast with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretations.
Comparison of Web-Based and Face-to-Face Standard Setting Using the Angoff Method
ERIC Educational Resources Information Center
Katz, Irvin R.; Tannenbaum, Richard J.
2014-01-01
Web-based standard setting holds promise for reducing the travel and logistical inconveniences of traditional, face-to-face standard setting meetings. However, because there are few published reports of setting standards via remote meeting technology, little is known about the practical potential of the approach, including technical feasibility of…
The Objective Borderline Method: A Probabilistic Method for Standard Setting
ERIC Educational Resources Information Center
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
Setting Standards for Minimum Competency Tests.
ERIC Educational Resources Information Center
Mehrens, William A.
Some general questions about minimum competency tests are discussed, and various methods of setting standards are reviewed with major attention devoted to those methods used for dichotomizing a continuum. Methods reviewed under the heading of Absolute Judgments of Test Content include Nedelsky's, Angoff's, Ebel's, and Jaeger's. These methods are…
Higher Education Faculty Engagement in a Modified Mapmark Standard Setting
ERIC Educational Resources Information Center
Horst, S. Jeanne; DeMars, Christine E.
2016-01-01
The Mapmark standard setting method was adapted to a higher education setting in which faculty leaders were highly involved. Eighteen university faculty members participated in a day-long standard setting for a general education communications test. In Round 1, faculty set initial cut-scores for each of four student learning objectives. In Rounds…
A Preliminary Investigation of the Direct Standard Setting Method.
ERIC Educational Resources Information Center
Jones, J. Patrick; And Others
Three studies assessed the psychometric characteristics of the Direct Standard Setting Method (DSSM). The Angoff technique was also used in each study. The DSSM requires judges to consider an examination 10 items at a time and determine the minimum items in that set a candidate should answer correctly to receive the credential. Nine judges set a…
ERIC Educational Resources Information Center
Lin, Jie
2006-01-01
The Bookmark standard-setting procedure was developed to address the perceived problems with the most popular method for setting cut-scores: the Angoff procedure (Angoff, 1971). The purposes of this article are to review the Bookmark procedure and evaluate it in terms of Berk's (1986) criteria for evaluating cut-score setting methods. The…
A Comparative Study of Standard-Setting Methods.
ERIC Educational Resources Information Center
Livingston, Samuel A.; Zieky, Michael J.
1989-01-01
The borderline group standard-setting method (BGSM), Nedelsky method (NM), and Angoff method (AM) were compared, using reading scores for 1,948 and mathematics scores for 2,191 sixth through ninth graders. The NM and AM were inconsistent with the BGSM. Passing scores were higher where students were more able. (SLD)
Combining the Best of Two Standard Setting Methods: The Ordered Item Booklet Angoff
ERIC Educational Resources Information Center
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S.
2014-01-01
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
Judgmental Standard Setting Using a Cognitive Components Model.
ERIC Educational Resources Information Center
McGinty, Dixie; Neel, John H.
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
Comparison of two methods of standard setting: the performance of the three-level Angoff method.
Jalili, Mohammad; Hejri, Sara M; Norcini, John J
2011-12-01
Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
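The cut-score arithmetic shared by the modified Angoff variants described here is simple: each judge's expected borderline score is the sum of his or her per-station probability estimates, and the panel standard is the mean of those sums. The sketch below illustrates this with a simulated judges-by-stations matrix; the panel size and ratings are assumptions, not the study's data.

```python
import numpy as np

def modified_angoff_cut(ratings):
    """ratings: judges x items matrix of probabilities that a borderline
    examinee answers each item (or performs each task) correctly.
    Each judge's cut is the sum of their ratings; the panel standard is
    the mean of the judges' cuts, expressed as a percentage of items."""
    ratings = np.asarray(ratings, dtype=float)
    per_judge_cut = ratings.sum(axis=1)    # expected borderline score per judge
    return per_judge_cut.mean() / ratings.shape[1] * 100.0

# Illustrative panel: 14 judges rating 14 stations (values are made up)
rng = np.random.default_rng(2)
panel = rng.uniform(0.3, 0.8, size=(14, 14))
print(f"Angoff cut score: {modified_angoff_cut(panel):.1f}%")
```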
A Critical Analysis of the Body of Work Method for Setting Cut-Scores
ERIC Educational Resources Information Center
Radwan, Nizam; Rogers, W. Todd
2006-01-01
The recent increase in the use of constructed-response items in educational assessment and the dissatisfaction with the nature of the decision that the judges must make using traditional standard-setting methods created a need to develop new and effective standard-setting procedures for tests that include both multiple-choice and…
The Effect of Various Factors on Standard Setting.
ERIC Educational Resources Information Center
Norcini, John J.; And Others
1988-01-01
Two studies of medical certification examinations were undertaken to assess standard setting using Angoff's Method. Results indicate that (1) specialization within broad content areas does not affect an expert's estimates of the performance of the borderline group; and (2) performance data should be provided during the standard-setting process.…
2015-01-01
Objectives: The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Methods: Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Results: Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. Conclusions: The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students. PMID:26520472
Variation in passing standards for graduation-level knowledge items at UK medical schools.
Taylor, Celia A; Gurnell, Mark; Melville, Colin R; Kluth, David C; Johnson, Neil; Wass, Val
2017-06-01
Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common 'one from five' single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences. A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013-2014; 60 in 2014-2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard for the common items set by each medical school using the Angoff or Ebel methods. Of 31 invited medical schools, 22 participated in 2013-2014 (71%) and 30 (97%) in 2014-2015. Schools used a mean of 49 and 53 common items in 2013-2014 and 2014-2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013-2014 and 0.218 in 2014-2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013-2014 and 6.5 percentage points in 2014-2015. There was a positive correlation between the relative standards set by schools in the 2 years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of examination in the curriculum did not have a statistically significant impact on standards. Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and develop methods to reduce it. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
ERIC Educational Resources Information Center
Pitoniak, Mary J.; Yeld, Nan
2013-01-01
Criterion-referenced assessments have become more common around the world, with performance standards being set to differentiate different levels of student performance. However, use of standard setting methods developed in the United States may be complicated by factors related to the political and educational contexts within another country. In…
An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2017-01-01
This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…
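A minimal sketch of the Hofstee construction discussed in this article is given below: judges supply minimum and maximum acceptable cut scores and failure rates, and the cut score is taken where the line through (minimum cut, maximum fail rate) and (maximum cut, minimum fail rate) crosses the empirical fail-rate curve. When the segment never crosses the curve, the cut score is undefined, which is the situation the article examines. All numbers in the example are assumptions.

```python
import numpy as np

def hofstee_cut(scores, c_min, c_max, f_min, f_max, grid=1000):
    """Hofstee compromise method: intersect the judges' constraint line,
    running from (c_min, f_max) to (c_max, f_min), with the empirical
    'fail rate as a function of cut score' curve. Returns None when the
    segment never crosses the curve (the undefined case discussed above)."""
    scores = np.asarray(scores, dtype=float)
    cuts = np.linspace(c_min, c_max, grid)
    # Observed fail rate if each candidate cut score were applied
    fail_curve = np.array([(scores < c).mean() for c in cuts])
    # Judges' line, parameterized over the same cut-score range
    line = f_max + (f_min - f_max) * (cuts - c_min) / (c_max - c_min)
    diff = fail_curve - line
    sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(sign_change) == 0:
        return None          # no intersection: cut score undefined
    return cuts[sign_change[0]]

# Illustrative use (all numbers are assumptions, not from the article)
rng = np.random.default_rng(3)
scores = rng.normal(70, 10, size=500)
print(hofstee_cut(scores, c_min=55, c_max=75, f_min=0.02, f_max=0.25))
```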
Spatial Data Transfer Standard (SDTS), part 3 : ISO 8211 encoding
DOT National Transportation Integrated Search
1997-11-20
The ISO 8211 encoding provides a representation of a Spatial Data Transfer Standard (SDTS) file set in a standardized method enabling the file set to be exported to or imported from different media by general purpose ISO 8211 software.
ERIC Educational Resources Information Center
Oladele, Babatunde
2017-01-01
The aim of the current study is to analyse the 2014 Post UTME scores of candidates at the University of Ibadan towards the establishment of cut-off scores using two methods of standard setting. Prospective candidates seeking admission to higher institutions are often denied admission through the Post UTME exercise. There is no single recommended…
Standard setting for OSCEs: trial of borderline approach.
Kilminster, Sue; Roberts, Trudie
2004-01-01
OSCE examinations were held in May and June 2002 for all third and fourth year and some fifth year medical students at the University of Leeds. There has been an arbitrary pass mark of 65% for these examinations. However, we recognise that it is important to adopt a systematic approach towards standard setting in all examinations, so we held a trial of the borderline approach to standard setting for third and fifth year examinations. This paper reports our findings. The results for the year 3 OSCE demonstrated that the borderline approach to standard setting is feasible and offers a method to ensure that the pass standard is both justifiable and credible. It is efficient, requiring much less time than other methods, and has the advantage of using the judgements of expert clinicians about actual practice. In addition, it offers a way of empowering clinicians because it uses their expertise.
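For illustration, the borderline-group arithmetic trialled here can be expressed in a few lines: pool the station scores of candidates whom the examiner graded as borderline and take a measure of their central tendency as the pass mark. The sketch below uses the median and made-up grades and scores; the grade labels are assumptions.

```python
import statistics

def borderline_group_cut(station_scores, global_grades, borderline_label="borderline"):
    """Borderline group method: the pass mark is the central tendency
    (here the median) of the scores of candidates the examiner graded
    as 'borderline'. Grade labels are assumptions for illustration."""
    borderline_scores = [s for s, g in zip(station_scores, global_grades)
                         if g == borderline_label]
    if not borderline_scores:
        raise ValueError("no candidates were graded borderline at this station")
    return statistics.median(borderline_scores)

# Illustrative station data (made up)
scores = [72, 65, 58, 81, 54, 60, 47, 69]
grades = ["pass", "pass", "borderline", "pass", "borderline",
          "borderline", "fail", "pass"]
print(borderline_group_cut(scores, grades))   # -> 58
```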
Coming Full Circle in Standard Setting: A Commentary on Wyse
ERIC Educational Resources Information Center
Skaggs, Gary
2013-01-01
The construct map is a particularly good way to approach instrument development, and this author states that he was delighted to read Adam Wyse's thoughts about how to use construct maps for standard setting. For a number of popular standard-setting methods, Wyse shows how typical feedback to panelists fits within a construct map framework.…
ERIC Educational Resources Information Center
Cramer, Stephen E.
A standard-setting procedure was developed for the Georgia Teacher Certification Testing Program as tests in 30 teaching fields were revised. A list of important characteristics of a standard-setting procedure was derived, drawing on the work of R. A. Berk (1986). The best method was found to be a highly formalized judgmental, empirical Angoff…
ERIC Educational Resources Information Center
Iyioke, Ifeoma Chika
2013-01-01
This dissertation describes a design for training, in accordance with probability judgment heuristics principles, for the Angoff standard setting method. The new training with instruction, practice, and feedback tailored to the probability judgment heuristics principles was called the Heuristic training and the prevailing Angoff method training…
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
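To make the notion of a complete-basis-set (CBS) limit concrete, the sketch below applies a generic two-point inverse-cube extrapolation to a pair of correlation energies. This is only an illustration of the idea; the paper itself uses aug-cc-pV7Z/8Z/9Z reference extrapolations and a Schwenke-style parameterized formula for the F12b results, and the numbers below are made up.

```python
def cbs_extrapolate(e_small, e_large, n_small, n_large):
    """Generic two-point 1/n^3 complete-basis-set extrapolation of correlation
    energies: E_CBS = (n_l^3 * E_l - n_s^3 * E_s) / (n_l^3 - n_s^3).
    This common inverse-cube form is used here only to illustrate the idea of
    a CBS limit; it is not the parameterized formula used in the paper."""
    return (n_large**3 * e_large - n_small**3 * e_small) / (n_large**3 - n_small**3)

# Illustrative correlation energies (hartree) for a triple/quadruple-zeta pair (made up)
print(cbs_extrapolate(e_small=-0.2750, e_large=-0.2840, n_small=3, n_large=4))
```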
Extracting insights from the shape of complex data using topology
Lum, P. Y.; Singh, G.; Lehman, A.; Ishkanov, T.; Vejdemo-Johansson, M.; Alagappan, M.; Carlsson, J.; Carlsson, G.
2013-01-01
This paper applies topological methods to study complex high dimensional data sets by extracting shapes (patterns) and obtaining insights about them. Our method combines the best features of existing standard methodologies such as principal component and cluster analyses to provide a geometric representation of complex data sets. Through this hybrid method, we often find subgroups in data sets that traditional methodologies fail to find. Our method also permits the analysis of individual data sets as well as the analysis of relationships between related data sets. We illustrate the use of our method by applying it to three very different kinds of data, namely gene expression from breast tumors, voting data from the United States House of Representatives and player performance data from the NBA, in each case finding stratifications of the data which are more refined than those produced by standard methods. PMID:23393618
Proficiency Standards and Cut-Scores for Language Proficiency Tests.
ERIC Educational Resources Information Center
Moy, Raymond H.
The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…
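The fluctuating-standard problem described here is easy to demonstrate: applying the same norm-referenced rule (for example, mean minus one standard deviation) to two cohorts of different ability produces different absolute cut scores, so identical performance can pass in one cohort and fail in the other. The sketch below uses simulated cohorts purely for illustration.

```python
import numpy as np

def curve_cut(scores, n_sd=1.0):
    # A typical norm-referenced rule: mean minus one standard deviation
    scores = np.asarray(scores, dtype=float)
    return scores.mean() - n_sd * scores.std(ddof=1)

rng = np.random.default_rng(4)
weak_cohort = rng.normal(60, 10, size=200)
strong_cohort = rng.normal(75, 10, size=200)

candidate_score = 62.0   # identical performance in both cohorts
for name, cohort in [("weak", weak_cohort), ("strong", strong_cohort)]:
    cut = curve_cut(cohort)
    verdict = "pass" if candidate_score >= cut else "fail"
    print(f"{name} cohort: cut = {cut:.1f} -> candidate would {verdict}")
```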
ERIC Educational Resources Information Center
Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.
2008-01-01
Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…
ERIC Educational Resources Information Center
Dochy, Filip; Kyndt, Eva; Baeten, Marlies; Pottier, Sofie; Veestraeten, Marlies
2009-01-01
The aim of this study was to examine the effect of different standard setting methods on the size and composition of the borderline group, on the discrimination between different types of students and on the types of students passing with one method but failing with another. A total of 107 university students were classified into 4 different types…
An update on 'dose calibrator' settings for nuclides used in nuclear medicine.
Bergeron, Denis E; Cessna, Jeffrey T
2018-06-01
Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.
A Comparison of Cut Scores Using Multiple Standard Setting Methods.
ERIC Educational Resources Information Center
Impara, James C.; Plake, Barbara S.
This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection methods to determine assembly quality, and misjudgments occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analysed and designed to provide guidance for locomotive wheel-set production based on the AAR standard.
Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W
2015-01-01
CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). BL-R and Wijnen methods show the highest convergent validity evidence among other methods on the defined criteria. Angoff and Mean-1.5SD demonstrated least convergent validity evidence. The three cluster variants showed substantial convergent validity with borderline methods. Although there was a high level of convergent validity of Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method is found to show the highest convergent validity evidences for OSCEs with other standard setting methods used in the present study. We also found that cluster analysis using mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
ERIC Educational Resources Information Center
Lanier, Paul; Kohl, Patrica L.; Benz, Joan; Swinger, Dawn; Moussette, Pam; Drake, Brett
2011-01-01
Objectives: The purpose of this study was to evaluate Parent-Child Interaction Therapy (PCIT) deployed in a community setting comparing in-home with the standard office-based intervention. Child behavior, parent stress, parent functioning, and attrition were examined. Methods: Using a quasi-experimental design, standardized measures at three time…
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
MacDougall, Margaret
2015-10-31
The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students.
Testing the statistical compatibility of independent data sets
NASA Astrophysics Data System (ADS)
Maltoni, M.; Schwetz, T.
2003-08-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
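The compatibility statistic described in this abstract can be evaluated in a few lines once the individual and joint chi-square minima are known: it is the joint minimum minus the sum of the individual minima, referred to a chi-square distribution whose degrees of freedom equal the summed parameter counts of the individual fits minus the number of jointly fitted parameters. The sketch below is a generic reading of that construction with made-up numbers; consult the paper for the formal derivation.

```python
from scipy.stats import chi2

def parameter_goodness_of_fit(chi2_global_min, chi2_individual_mins,
                              n_params_individual, n_params_global):
    """Compatibility test between independent data sets: the statistic is the
    global chi-square minimum minus the sum of the individual minima, compared
    to a chi-square distribution whose degrees of freedom are the summed
    parameter counts of the individual fits minus the number of parameters in
    the joint fit (a sketch of the construction described above)."""
    stat = chi2_global_min - sum(chi2_individual_mins)
    dof = sum(n_params_individual) - n_params_global
    return stat, dof, chi2.sf(stat, dof)

# Illustrative numbers (made up): two data sets, each sensitive to both
# parameters of a two-parameter joint fit
stat, dof, p = parameter_goodness_of_fit(
    chi2_global_min=12.3, chi2_individual_mins=[3.1, 2.7],
    n_params_individual=[2, 2], n_params_global=2)
print(stat, dof, p)
```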
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
The Explication of Quality Standards in Self-Evaluation
ERIC Educational Resources Information Center
Bronkhorst, Larike H.; Baartman, Liesbeth K. J.; Stokking, Karel M.
2012-01-01
Education aiming at students' competence development asks for new assessment methods. The quality of these methods needs to be assured using adapted quality criteria and accompanying standards. As such standards are not widely available, this study sets out to examine what level of compliance with quality criteria stakeholders consider…
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.; Chang, Lei
In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…
Will the "Real" Proficiency Standard Please Stand Up?
ERIC Educational Resources Information Center
Baron, Joan Boykoff; And Others
Connecticut's experience with four different standard-setting methods regarding multiple choice proficiency tests is described. The methods include Angoff, Nedelsky, Borderline Group, and Contrasting Groups Methods. All Connecticut ninth graders were administered proficiency tests in reading, language arts, and mathematics. As soon as final test…
Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming
2017-07-24
Conformal prediction has been proposed as a more rigorous way to define prediction confidence compared to other application domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region potentially with multiple predicted labels, which contrasts to the single valued (regression) or single label (classification) output predictions by standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP) which combines the Mondrian inductive conformal prediction with cross-fold calibration sets has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets that have various imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound similarity based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model dependent metrics.
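As background for the MCCP approach, the sketch below shows a generic Mondrian (class-conditional) inductive conformal classifier: nonconformity scores are calibrated separately per class, and a label enters the prediction region when its conformal p-value exceeds the significance level. The data structures, scores, and significance level are illustrative assumptions; this is not the study's implementation.

```python
import numpy as np

def mondrian_icp_predict(cal_scores_by_class, test_scores_by_class, epsilon=0.2):
    """Minimal Mondrian (class-conditional) inductive conformal classifier.
    cal_scores_by_class: dict mapping each class label to the nonconformity
    scores of calibration examples of that class.
    test_scores_by_class: dict mapping each class label to the test example's
    nonconformity score when that label is hypothesized.
    Returns the prediction region: every label whose conformal p-value
    exceeds the significance level epsilon."""
    region = set()
    for label, cal_scores in cal_scores_by_class.items():
        cal = np.asarray(cal_scores, dtype=float)
        test_score = test_scores_by_class[label]
        # p-value: fraction of same-class calibration scores at least as
        # nonconforming as the test score (plus the test example itself)
        p = (np.sum(cal >= test_score) + 1) / (len(cal) + 1)
        if p > epsilon:
            region.add(label)
    return region

# Illustrative use with made-up nonconformity scores for an imbalanced problem
cal = {"active": [0.1, 0.4, 0.35, 0.2],
       "inactive": [0.05, 0.1, 0.2, 0.15, 0.3, 0.25]}
test = {"active": 0.3, "inactive": 0.6}
print(mondrian_icp_predict(cal, test, epsilon=0.2))   # -> {'active'}
```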
Intrajudge Consistency Using the Angoff Standard-Setting Method.
ERIC Educational Resources Information Center
Plake, Barbara S.; Impara, James C.
This study investigated the intrajudge consistency of Angoff-based item performance estimates. The examination used was a certification examination in an emergency medicine specialty. Ten expert panelists rated the same 24 items twice during an operational standard setting study. Results indicate that the panelists were highly consistent, in terms…
Vialaret, Jérôme; Picas, Alexia; Delaby, Constance; Bros, Pauline; Lehmann, Sylvain; Hirtz, Christophe
2018-06-01
Hepcidin-25 peptide is a biomarker which is known to have considerable clinical potential for diagnosing iron-related diseases. Developing analytical methods for the absolute quantification of hepcidin is still a real challenge, however, due to the sensitivity, specificity and reproducibility issues involved. In this study, we compare and discuss two MS-based assays for quantifying hepcidin, which differ only in terms of the type of liquid chromatography (nano LC/MS versus standard LC/MS) involved. The same sample preparation, the same internal standards and the same MS analyzer were used with both approaches. In the field of proteomics, nano LC chromatography is generally known to be more sensitive and less robust than standard LC methods. In this study, we established that the performance of the standard LC method is equivalent to that of our previously developed nano LC method. As the analytical performances were very similar in both cases, the standard-flow platform provides the more suitable alternative for accurately determining hepcidin in clinical settings. Copyright © 2018 Elsevier B.V. All rights reserved.
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
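For orientation, a simplified permutation-based ARTP sketch is given below: for each truncation point, the statistic is the sum of the smallest log p-values, per-point significance is judged against permutation replicates, and the adaptive step takes the best (minimum) per-point p-value, whose significance is assessed against the same replicates. The truncation points and data are assumptions, and the code is an illustrative reading of the method rather than the authors' implementation.

```python
import numpy as np

def artp_pvalue(pvals_obs, pvals_perm, truncation_points=(1, 5, 10)):
    """Simplified adaptive rank truncated product (ARTP) test.
    pvals_obs: observed per-SNP p-values for the set (length m).
    pvals_perm: B x m array of per-SNP p-values from permuted phenotypes.
    For each truncation point k, the statistic is the sum of the k smallest
    log p-values; its significance and the final adaptive minimum are both
    judged against the same permutation replicates."""
    pvals_obs = np.asarray(pvals_obs, dtype=float)
    pvals_perm = np.asarray(pvals_perm, dtype=float)
    all_p = np.vstack([pvals_obs, pvals_perm])          # row 0 = observed
    sorted_logp = np.sort(np.log(all_p), axis=1)
    ks = [k for k in truncation_points if k <= all_p.shape[1]]
    # Rank truncated product statistic for every replicate and every k
    stats = np.column_stack([sorted_logp[:, :k].sum(axis=1) for k in ks])
    n_rep = stats.shape[0]
    per_k_p = np.empty_like(stats)
    for j in range(stats.shape[1]):
        order = stats[:, j].argsort()                   # smaller = more significant
        ranks = np.empty(n_rep)
        ranks[order] = np.arange(1, n_rep + 1)
        per_k_p[:, j] = ranks / n_rep
    min_p = per_k_p.min(axis=1)                         # adaptive step: best k per replicate
    return float((min_p <= min_p[0]).mean())            # final ARTP p-value

# Illustrative use with simulated null p-values (20 SNPs, 999 permutations)
rng = np.random.default_rng(5)
obs = rng.uniform(size=20)
perm = rng.uniform(size=(999, 20))
print(artp_pvalue(obs, perm))
```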
Open-source platform to benchmark fingerprints for ligand-based virtual screening
2013-01-01
Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
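The kind of retrospective evaluation such a benchmarking platform performs can be illustrated with a toy example: compute a similarity (here Tanimoto on sets of on-bits) between a query fingerprint and each active and decoy, then summarize the ranking with a simple AUC. The fingerprints and the evaluation measure below are illustrative assumptions and do not reproduce the platform's code or data sets.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints stored as sets of 'on' bits."""
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

def rank_auc(query_fp, actives, decoys):
    """Rank actives and decoys by similarity to the query and return the
    area under the ROC curve (probability that a random active outranks
    a random decoy) - one simple retrospective evaluation measure."""
    a_scores = [tanimoto(query_fp, fp) for fp in actives]
    d_scores = [tanimoto(query_fp, fp) for fp in decoys]
    wins = sum((a > d) + 0.5 * (a == d) for a in a_scores for d in d_scores)
    return wins / (len(a_scores) * len(d_scores))

# Toy fingerprints (sets of on-bit indices); values are purely illustrative
query = {1, 4, 9, 16, 25}
actives = [{1, 4, 9, 16, 30}, {1, 4, 9, 25, 40}]
decoys = [{2, 3, 5, 7, 11}, {4, 8, 15, 16, 23}, {6, 10, 14, 18, 22}]
print(rank_auc(query, actives, decoys))
```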
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocks. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard" has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...
Niosh analytical methods for Set G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-12-01
Industrial Hygiene sampling and analytical monitoring methods validated under the joint NIOSH/OSHA Standards Completion Program for Set G are contained herein. Monitoring methods for the following compounds are included: butadiene, heptane, ketene, methyl cyclohexane, octachloronaphthalene, pentachloronaphthalene, petroleum distillates, propylene dichloride, turpentine, dioxane, hexane, LPG, naphtha(coal tar), octane, pentane, propane, and stoddard solvent.
Screening and Evaluation Tool (SET) Users Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincock, Layne
This document is the users guide to using the Screening and Evaluation Tool (SET). SET is a tool for comparing multiple fuel cycle options against a common set of criteria and metrics. It does this using standard multi-attribute utility decision analysis methods.
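The 'standard multi-attribute utility decision analysis' mentioned here reduces, in its simplest form, to a weighted sum of normalized metric utilities per option. The sketch below is a generic illustration with hypothetical options, metrics, and weights, not the SET tool's actual criteria.

```python
def multi_attribute_utility(option_scores, weights):
    """Weighted-sum multi-attribute utility: each option is scored on a common
    set of metrics (already normalized to 0-1 utilities here) and the weights
    reflect the relative importance of each criterion. A generic sketch of the
    standard method named above, not the SET tool's code."""
    return {name: sum(weights[m] * u for m, u in metrics.items())
            for name, metrics in option_scores.items()}

# Illustrative comparison of two hypothetical options against three metrics
weights = {"waste": 0.4, "cost": 0.35, "resource_use": 0.25}
options = {
    "option_A": {"waste": 0.7, "cost": 0.5, "resource_use": 0.6},
    "option_B": {"waste": 0.4, "cost": 0.8, "resource_use": 0.7},
}
print(multi_attribute_utility(options, weights))
```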
A Body of Work Standard-Setting Method with Construct Maps
ERIC Educational Resources Information Center
Wyse, Adam E.; Bunch, Michael B.; Deville, Craig; Viger, Steven G.
2014-01-01
This article describes a novel variation of the Body of Work method that uses construct maps to overcome problems of transparency, rater inconsistency, and scores gaps commonly occurring with the Body of Work method. The Body of Work method with construct maps was implemented to set cut-scores for two separate K-12 assessment programs in a large…
16 CFR 1633.3 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FLAMMABILITY (OPEN FLAME) OF MATTRESS SETS The Standard § 1633.3 General requirements. (a) Summary of test method. The test method set forth in § 1633.7 measures the flammability (fire test response... allowing it to burn freely under well-ventilated, controlled environmental conditions. The flaming ignition...
ERIC Educational Resources Information Center
Hardison, Chaitra M.; Vilamovska, Anna-Marie
2009-01-01
The Collegiate Learning Assessment (CLA) is a measure of how much students' critical thinking improves after attending college or university. This report illustrates how institutions can set their own standards on the CLA using a method that is appropriate for the CLA's unique characteristics. The authors examined evidence of reliability and…
ERIC Educational Resources Information Center
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-01-01
The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…
ERIC Educational Resources Information Center
Reid, Jerry B.
This report investigates an area of uncertainty in using the Angoff method for setting standards, namely whether or not a judge's conceptualizations of borderline group performance are realistic. Ratings are usually made with reference to the performance of this hypothetical group, therefore the Angoff method's success is dependent on this point.…
A Comparison of Approaches for Setting Proficiency Standards.
ERIC Educational Resources Information Center
Koffler, Stephen L.
This research compared the cut-off scores estimated from an empirical procedure (Contrasting group method) to those determined from a more theoretical process (Nedelsky method). A methodological and statistical framework was also provided for analysis of the data to obtain the most appropriate standard using the empirical procedure. Data were…
40 CFR 92.5 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: (1) ASTM material. The following table sets forth material from the American Society for Testing and...., Philadelphia, PA 19103. The table follows: Document number and name 40 CFR part 92 reference ASTM D 86-95, Standard Test Method for Distillation of Petroleum Products § 92.113 ASTM D 93-94, Standard Test Methods...
Defining a standard set of patient-centered outcomes for men with localized prostate cancer.
Martin, Neil E; Massey, Laura; Stowell, Caleb; Bangma, Chris; Briganti, Alberto; Bill-Axelson, Anna; Blute, Michael; Catto, James; Chen, Ronald C; D'Amico, Anthony V; Feick, Günter; Fitzpatrick, John M; Frank, Steven J; Froehner, Michael; Frydenberg, Mark; Glaser, Adam; Graefen, Markus; Hamstra, Daniel; Kibel, Adam; Mendenhall, Nancy; Moretti, Kim; Ramon, Jacob; Roos, Ian; Sandler, Howard; Sullivan, Francis J; Swanson, David; Tewari, Ashutosh; Vickers, Andrew; Wiegel, Thomas; Huland, Hartwig
2015-03-01
Value-based health care has been proposed as a unifying force to drive improved outcomes and cost containment. To develop a standard set of multidimensional patient-centered health outcomes for tracking, comparing, and improving localized prostate cancer (PCa) treatment value. We convened an international working group of patients, registry experts, urologists, and radiation oncologists to review existing data and practices. The group defined a recommended standard set representing who should be tracked, what should be measured and at what time points, and what data are necessary to make meaningful comparisons. Using a modified Delphi method over a series of teleconferences, the group reached consensus for the Standard Set. We recommend that the Standard Set apply to men with newly diagnosed localized PCa treated with active surveillance, surgery, radiation, or other methods. The Standard Set includes acute toxicities occurring within 6 mo of treatment as well as patient-reported outcomes tracked regularly out to 10 yr. Patient-reported domains of urinary incontinence and irritation, bowel symptoms, sexual symptoms, and hormonal symptoms are included, and the recommended measurement tool is the Expanded Prostate Cancer Index Composite Short Form. Disease control outcomes include overall, cause-specific, metastasis-free, and biochemical relapse-free survival. Baseline clinical, pathologic, and comorbidity information is included to improve the interpretability of comparisons. We have defined a simple, easily implemented set of outcomes that we believe should be measured in all men with localized PCa as a crucial first step in improving the value of care. Measuring, reporting, and comparing identical outcomes across treatments and treatment centers will provide patients and providers with information to make informed treatment decisions. We defined a set of outcomes that we recommend being tracked for every man being treated for localized prostate cancer. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.
2016-02-16
Appendix G, the Performance Rating Method in ASHRAE Standard 90.1 has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond code program can use this methodology and merely set the appropriate PCI target for their needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond code programs.
Schneller, Mikkel B; Pedersen, Mogens T; Gupta, Nidhi; Aadahl, Mette; Holtermann, Andreas
2015-03-13
We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry for estimating the total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen participants performed a standardized and semi-standardized protocol comprising seven daily life activity types while having their EE measured by indirect calorimetry. Simultaneously, physical activity was quantified by an ActivPAL3, two ActiGraph GT3X+ devices and an Actiheart. EE was estimated by the standard ActivPAL3 software (ActivPAL), ActiGraph GT3X+ (ActiGraph) and Actiheart (Actiheart), and by a combination of activity type recognition via Acti4 software and activity counts per minute (CPM) from either a hip- or thigh-worn ActiGraph GT3X+ (AGhip + Acti4 and AGthigh + Acti4). At the group level, physical activity EE estimated by Actiheart (MSE = 2.05) and AGthigh + Acti4 (MSE = 0.25) did not differ significantly from EE measured by indirect calorimetry, whereas ActiGraph, ActivPAL and AGhip + Acti4 significantly underestimated it. AGthigh + Acti4 and Actiheart explained 77% and 45%, respectively, of the individual variation in physical activity EE measured by indirect calorimetry. This study concludes that combining accelerometer data from a thigh-worn ActiGraph GT3X+ with activity type recognition improved the accuracy of activity-specific EE estimation against indirect calorimetry in semi-standardized settings compared with previously validated methods using CPM only.
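As a rough illustration of the branched estimation strategy described above (an activity-type classifier selects an activity-specific model that is then driven by counts per minute), the following minimal Python sketch uses hypothetical activity labels and coefficients; it is not the Acti4 software or the study's actual models.

```python
import numpy as np

# Hypothetical per-activity linear models mapping counts-per-minute (CPM) to
# energy expenditure (EE, in METs); coefficients are illustrative only and
# are not taken from the Acti4 software or the study above.
ACTIVITY_MODELS = {
    "sitting":  {"intercept": 1.2, "slope": 0.0002},
    "standing": {"intercept": 1.5, "slope": 0.0004},
    "walking":  {"intercept": 2.0, "slope": 0.0010},
}

def estimate_ee(activity_labels, cpm_values):
    """Estimate minute-by-minute EE from activity type plus CPM (branched model)."""
    ee = []
    for activity, cpm in zip(activity_labels, cpm_values):
        model = ACTIVITY_MODELS[activity]
        ee.append(model["intercept"] + model["slope"] * cpm)
    return np.array(ee)

# Example: three minutes of thigh-worn accelerometer data
labels = ["sitting", "walking", "walking"]
cpm = [150, 2800, 3500]
print(estimate_ee(labels, cpm))  # per-minute EE estimates in METs
```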
ERIC Educational Resources Information Center
Roberts, William L.; Boulet, John; Sandella, Jeanne
2017-01-01
When the safety of the public is at stake, it is particularly relevant for licensing and credentialing exam agencies to use defensible standard setting methods to categorize candidates into competence categories (e.g., pass/fail). The aim of this study was to gather evidence to support change to the Comprehensive Osteopathic Medical Licensing-USA…
ERIC Educational Resources Information Center
Eckes, Thomas
2017-01-01
This paper presents an approach to standard setting that combines the prototype group method (PGM; Eckes, 2012) with a receiver operating characteristic (ROC) analysis. The combined PGM-ROC approach is applied to setting cut scores on a placement test of English as a foreign language (EFL). To implement the PGM, experts first named learners whom…
de Roos, Paul; Bloem, Bastiaan R.; Kelley, Thomas A.; Antonini, Angelo; Dodel, Richard; Hagell, Peter; Marras, Connie; Martinez-Martin, Pablo; Mehta, Shyamal H.; Odin, Per; Chaudhuri, Kallol Ray; Weintraub, Daniel; Wilson, Bil; Uitti, Ryan J.
2017-01-01
Background Parkinson’s disease (PD) is a progressive neurodegenerative condition that is expected to double in prevalence due to demographic shifts. Value-based healthcare is a proposed strategy to improve outcomes and decrease costs. To move towards an actual value-based health care system, condition-specific outcomes that are meaningful to patients are essential. Objective Propose a global consensus standard set of outcome measures for PD. Methods Established methods for outcome measure development were applied, as outlined and used previously by the International Consortium for Health Outcomes Measurement (ICHOM). An international group, representing both patients and experts from the fields of neurology, psychiatry, nursing, and existing outcome measurement efforts, was convened. The group participated in six teleconferences over a six-month period, reviewed existing data and practices, and ultimately proposed a standard set of measures by which patients should be tracked, and how often data should be collected. Results The standard set applies to all cases of idiopathic PD, and includes assessments of motor and non-motor symptoms, ability to work, PD-related health status, and hospital admissions. Baseline demographic and clinical variables are included to enable case mix adjustment. Conclusions The Standard Set is now ready for use and pilot testing in the clinical setting. Ultimately, we believe that using the set of outcomes proposed here will allow clinicians and scientists across the world to document, report, and compare PD-related outcomes in a standardized fashion. Such international benchmarks will improve our understanding of the disease course and allow for identification of ‘best practices’, ultimately leading to better informed treatment decisions. PMID:28671140
Standardized methods for photography in procedural dermatology using simple equipment.
Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina
2017-04-01
Photography is an important tool in dermatology. Reproducing the settings of before photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips to obtain photographs, both for clinical practice and research procedural dermatology, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.
Methods and Strategies: Ask the Right Question
ERIC Educational Resources Information Center
Kracl, Carrie; Harshbarger, Dena
2017-01-01
Preparing 21st century learners is the goal of both the "Common Core State Standards" ("CCSS") and the "Next Generation Science Standards" ("NGSS"). These two sets of standards jointly illuminate the need to teach scientific and literacy skills that will more appropriately prepare elementary students to…
Implementing standard setting into the Conjoint MAFP/FRACGP Part 1 examination - Process and issues.
Chan, S C; Mohd Amin, S; Lee, T W
2016-01-01
The College of General Practitioners of Malaysia and the Royal Australian College of General Practitioners held the first Conjoint Member of the College of General Practitioners (MCGP)/Fellow of Royal Australian College of General Practitioners (FRACGP) examination in 1982, later renamed the Conjoint MAFP/FRACGP examinations. The examination assesses competency for safe independent general practice and as family medicine specialists in Malaysia. Therefore, a defensible standard set pass mark is imperative to separate the competent from the incompetent. This paper discusses the process and issues encountered in implementing standard setting to the Conjoint Part 1 examination. Critical to success in standard setting were judges' understanding of the process of the modified Angoff method, defining the borderline candidate's characteristics and the composition of judges. These were overcome by repeated hands-on training, provision of detailed guidelines and careful selection of judges. In December 2013, 16 judges successfully standard set the Part 1 Conjoint examinations, with high inter-rater reliability: Cronbach's alpha coefficient 0.926 (Applied Knowledge Test), 0.921 (Key Feature Problems).
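To make the standard-setting arithmetic concrete, the sketch below shows how a modified Angoff cut score and a Cronbach's alpha for inter-rater reliability are commonly computed; the judge ratings are illustrative and are not data from the Conjoint examination.

```python
import numpy as np

# Rows = judges, columns = items; each value is a judge's estimate of the
# probability that a borderline candidate answers the item correctly.
# Values are illustrative only.
ratings = np.array([
    [0.60, 0.45, 0.70, 0.55],
    [0.65, 0.50, 0.75, 0.50],
    [0.55, 0.40, 0.65, 0.60],
])

# Modified Angoff cut score: sum over items of the mean judge rating,
# expressed here as a percentage of the maximum test score.
cut_score = ratings.mean(axis=0).sum() / ratings.shape[1] * 100
print(f"Angoff cut score: {cut_score:.1f}%")

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix of items (rows) by raters (columns)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    rater_vars = scores.var(axis=0, ddof=1).sum()   # variance of each rater
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of item totals
    return k / (k - 1) * (1 - rater_vars / total_var)

# Judges act as the "raters": transpose so rows are items, columns are judges.
print(f"Inter-rater reliability (alpha): {cronbach_alpha(ratings.T):.3f}")
```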
Guo-Qiang, Zhang; Yan, Huang; Licong, Cui
2017-01-01
We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes were constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% changes with 26.4% precision with a G-score of 0.373. Even though this G-score is significantly lower in comparison to those in information retrieval, it breaks new ground in that such evaluations have never performed before in the highly discovery-oriented setting of OQA. PMID:29854262
Pisa, Pedro T; Landais, Edwige; Margetts, Barrie; Vorster, Hester H; Friedenreich, Christine M; Huybrechts, Inge; Martin-Prevel, Yves; Branca, Francesco; Lee, Warren T K; Leclercq, Catherine; Jerling, Johann; Zotor, Francis; Amuna, Paul; Al Jawaldeh, Ayoub; Aderibigbe, Olaide Ruth; Amoussa, Waliou Hounkpatin; Anderson, Cheryl A M; Aounallah-Skhiri, Hajer; Atek, Madjid; Benhura, Chakare; Chifamba, Jephat; Covic, Namukolo; Dary, Omar; Delisle, Hélène; El Ati, Jalila; El Hamdouchi, Asmaa; El Rhazi, Karima; Faber, Mieke; Kalimbira, Alexander; Korkalo, Liisa; Kruger, Annamarie; Ledo, James; Machiweni, Tatenda; Mahachi, Carol; Mathe, Nonsikelelo; Mokori, Alex; Mouquet-Rivier, Claire; Mutie, Catherine; Nashandi, Hilde Liisa; Norris, Shane A; Onabanjo, Oluseye Olusegun; Rambeloson, Zo; Saha, Foudjo Brice U; Ubaoji, Kingsley Ikechukwu; Zaghloul, Sahar; Slimani, Nadia
2018-01-02
To carry out an inventory on the availability, challenges, and needs of dietary assessment (DA) methods in Africa as a pre-requisite to provide evidence, and set directions (strategies) for implementing common dietary methods and support web-research infrastructure across countries. The inventory was performed within the framework of the "Africa's Study on Physical Activity and Dietary Assessment Methods" (AS-PADAM) project. It involves international institutional and African networks. An inventory questionnaire was developed and disseminated through the networks. Eighteen countries responded to the dietary inventory questionnaire. Various DA tools were reported in Africa; 24-Hour Dietary Recall and Food Frequency Questionnaire were the most commonly used tools. Few tools were validated and tested for reliability. Face-to-face interview was the common method of administration. No computerized software or other new (web) technologies were reported. No tools were standardized across countries. The lack of comparable DA methods across represented countries is a major obstacle to implement comprehensive and joint nutrition-related programmes for surveillance, programme evaluation, research, and prevention. There is a need to develop new or adapt existing DA methods across countries by employing related research infrastructure that has been validated and standardized in other settings, with the view to standardizing methods for wider use.
ERIC Educational Resources Information Center
Boursicot, Katharine A. M.; Roberts, Trudie E.; Pell, Godfrey
2006-01-01
While Objective Structured Clinical Examinations (OSCEs) have become widely used to assess clinical competence at the end of undergraduate medical courses, the method of setting the passing score varies greatly, and there is no agreed best methodology. While there is an assumption that the passing standard at graduation is the same at all medical…
Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui
2010-01-01
Multiplicative scatter correction (MSC) preprocessing was used to effectively remove noise introduced into the original spectra by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, and the number of principal components was determined by cross-validation. The calculated principal components were used as inputs to the artificial neural network model, which was used to relate the chlorophyll content of winter wheat to the reflectance spectrum and thereby predict chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, with a standard deviation (SD) of 0.187 and a relative standard deviation (RSD) of 5.18%. The correlation coefficient (r) of the prediction set was 0.9600, with an SD of 0.145 and an RSD of 4.21%. This indicates that the MSC-ANN algorithm can effectively remove noise introduced into the original spectra by environmental physical factors and can establish an accurate model for predicting the chlorophyll content of living leaves, offering a replacement for the classical method that meets the need for fast analysis of agricultural products.
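A minimal sketch of the multiplicative scatter correction step described above, assuming spectra are stored as a NumPy array (rows = samples, columns = wavelengths); this is a generic MSC implementation, not the authors' code.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction.

    Each spectrum is regressed against a reference (by default the mean
    spectrum); the fitted intercept and slope are then removed so that
    additive and multiplicative scatter effects are reduced.
    """
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, spec in enumerate(spectra):
        slope, intercept = np.polyfit(ref, spec, 1)   # spec ~ intercept + slope*ref
        corrected[i] = (spec - intercept) / slope
    return corrected

# Example with synthetic spectra (5 samples, 200 wavelengths)
base = np.sin(np.linspace(0, 3, 200))
raw = np.array([1.0 + 0.1 * k + (1.0 + 0.05 * k) * base for k in range(5)])
print(msc(raw).shape)  # (5, 200)
```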
[Work quota setting and man-hour productivity estimation in pathologists].
Svistunov, V V; Makarov, S V; Makarova, A E
The paper considers the development and current state of regulations on work quota setting and remuneration for pathologists. Based on the current staffing standards for morbid anatomy departments (units), the authors present a method to calculate the workload of pathologists. The essence of the proposed method is demonstrated using a specific example.
Cross-cultural dataset for the evolution of religion and morality project.
Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph
2016-11-08
A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.
Modified Confidence Intervals for the Mean of an Autoregressive Process.
1985-08-01
There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our…
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation to the International Organization for Standardization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for its intended purpose. Moreover, the required performance characteristic tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and measurement uncertainty), contamination, sample stability, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by ETAAS. In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
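The sketch below illustrates, under generic assumptions, a few of the validation calculations named above (calibration linearity, LOD/LOQ from calibration residuals using the common 3.3σ/S and 10σ/S conventions, and precision as a coefficient of variation); it is not the authors' protocol and the data are invented.

```python
import numpy as np

# Illustrative calibration data: lead concentration (ug/L) vs. absorbance.
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
absorbance = np.array([0.002, 0.051, 0.103, 0.198, 0.401])

# Linearity: ordinary least squares fit and coefficient of determination.
slope, intercept = np.polyfit(conc, absorbance, 1)
fitted = intercept + slope * conc
residuals = absorbance - fitted
r_squared = 1 - residuals.var() / absorbance.var()

# LOD/LOQ from the residual standard deviation of the calibration line
# (assumed ICH-style 3.3*sigma/S and 10*sigma/S conventions).
sigma = residuals.std(ddof=2)          # ddof=2: two fitted parameters
lod = 3.3 * sigma / slope
loq = 10 * sigma / slope

# Precision: coefficient of variation of replicate measurements of a control.
replicates = np.array([39.1, 40.2, 38.8, 40.5, 39.7])
cv_percent = replicates.std(ddof=1) / replicates.mean() * 100

print(f"R^2={r_squared:.4f}  LOD={lod:.2f}  LOQ={loq:.2f}  CV={cv_percent:.1f}%")
```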
Adapting and Evaluating a Rapid, Low-Cost Method to Enumerate Flies in the Household Setting
Wolfe, Marlene K.; Dentz, Holly N.; Achando, Beryl; Mureithi, MaryAnne; Wolfe, Tim; Null, Clair; Pickering, Amy J.
2017-01-01
Diarrhea is a leading cause of death among children under 5 years of age worldwide. Flies are important vectors of diarrheal pathogens in settings lacking networked sanitation services. There is no standardized method for measuring fly density in households; many methods are cumbersome and unvalidated. We adapted a rapid, low-cost fly enumeration technique previously developed for industrial settings, the Scudder fly grill, for field use in household settings. We evaluated its performance in comparison to a sticky tape fly trapping method at latrine and food preparation areas among households in rural Kenya. The grill method was more sensitive; it detected the presence of any flies at 80% (433/543) of sampling locations versus 64% (348/543) of locations by the sticky tape. We found poor concordance between the two methods, suggesting that standardizing protocols is important for comparison of fly densities between studies. Fly species identification was feasible with both methods; however, the sticky tape trap allowed for more nuanced identification. Both methods detected a greater presence of bottle flies near latrines compared with food preparation areas (P < 0.01). The grill method detected more flies at the food preparation area compared with near the latrine (P = 0.014) while the sticky tape method detected no difference. We recommend the Scudder grill as a sensitive fly enumeration tool that is rapid and low cost to implement. PMID:27956654
Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher
2017-01-01
ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with varying case-control ratios. Here, we investigate the power loss incurred by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics computed from combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power lost by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
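For context, the following sketch shows the standard fixed-effect combination of per-study score statistics (sum of scores divided by the square root of the summed information), which is the baseline that improved meta-score-statistics refine; the values are illustrative.

```python
import numpy as np
from scipy import stats

def meta_score_test(scores, informations):
    """Standard fixed-effect meta-analysis of score statistics.

    scores[i] is the per-study score statistic U_i for a variant and
    informations[i] is its variance (Fisher information) V_i.
    The combined statistic is sum(U_i) / sqrt(sum(V_i)) ~ N(0, 1) under H0.
    """
    scores = np.asarray(scores, dtype=float)
    informations = np.asarray(informations, dtype=float)
    z = scores.sum() / np.sqrt(informations.sum())
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Three hypothetical studies contributing score statistics for one variant.
z, p = meta_score_test(scores=[4.1, -1.2, 6.3], informations=[10.0, 8.5, 14.2])
print(f"z={z:.3f}, p={p:.3g}")
```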
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... Fuel Economy (CAFE) standards for light vehicles since 1978 under the statutory authority of the Energy... 19, 2007, amended EPCA and mandated that NHTSA, in consultation with EPA, set fuel economy standards... agency to implement test methods, measurement metrics, fuel economy standards, and compliance and...
Time-Course Gene Set Analysis for Longitudinal Gene Expression Data
Hejblum, Boris P.; Skinner, Jason; Thiébaut, Rodolphe
2015-01-01
Gene set analysis methods, which consider predefined groups of genes in the analysis of genomic data, have been successfully applied for analyzing gene expression data in cross-sectional studies. The time-course gene set analysis (TcGSA) introduced here is an extension of gene set analysis to longitudinal data. The proposed method relies on random effects modeling with maximum likelihood estimates. It allows the use of all available repeated measurements while dealing with unbalanced data due to missing at random (MAR) measurements. TcGSA is a hypothesis-driven method that identifies a priori defined gene sets with significant expression variations over time, taking into account the potential heterogeneity of expression within gene sets. When biological conditions are compared, the method indicates whether the time patterns of gene sets differ significantly according to these conditions. The interest of the method is illustrated by its application to two real-life datasets: an HIV therapeutic vaccine trial (DALIA-1 trial), and data from a recent study on influenza and pneumococcal vaccines. In the DALIA-1 trial, TcGSA revealed a significant change in gene expression over time within 69 gene sets during vaccination, while a standard univariate individual gene analysis corrected for multiple testing as well as a standard Gene Set Enrichment Analysis (GSEA) for time series both failed to detect any significant pattern change over time. When applied to the second illustrative data set, TcGSA identified 4 gene sets that were ultimately found to be linked to the influenza vaccine as well, although previous analyses had associated them only with the pneumococcal vaccine. In our simulation study TcGSA exhibits good statistical properties, and an increased power compared to other approaches for analyzing time-course expression patterns of gene sets. The method is made available for the community through an R package. PMID:26111374
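A much-simplified sketch of the underlying idea (a mixed-effects model for repeated measurements compared against a null model without a time effect via a likelihood ratio test) is given below using statsmodels; it is not the TcGSA model itself, which additionally models heterogeneity across the genes within a set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Toy longitudinal expression data for the genes of one gene set:
# repeated measurements per patient at several time points.
rng = np.random.default_rng(1)
patients, times, genes = 8, [0, 7, 14, 28], 5
rows = []
for p in range(patients):
    slope = rng.normal(0.05, 0.02)                 # patient-specific time trend
    for t in times:
        for g in range(genes):
            rows.append({"patient": p, "time": t, "gene": g,
                         "expr": 5 + slope * t + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Mixed models fitted by maximum likelihood: with and without a time effect.
m1 = smf.mixedlm("expr ~ time", data, groups=data["patient"]).fit(reml=False)
m0 = smf.mixedlm("expr ~ 1", data, groups=data["patient"]).fit(reml=False)

# Likelihood ratio test for a time-varying expression pattern in this gene set.
lr = 2 * (m1.llf - m0.llf)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR={lr:.2f}, p={p_value:.3g}")
```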
ERIC Educational Resources Information Center
Lee, Jaekyung
2010-01-01
This study examines potential consequences of the discrepancies between national and state performance standards for school funding in Kentucky and Maine. Applying the successful schools observation method and cost function analysis method to integrated data-sets that match schools' eight-grade mathematics test performance measures to district…
Methods for Environments and Contaminants: Criteria Air Pollutants
EPA’s Office of Air Quality Planning and Standards (OAQPS) has set primary (health-based) National Ambient Air Quality Standards (NAAQS) for six common air pollutants, often referred to as criteria air pollutants (or simply criteria pollutants).
Geometric representation methods for multi-type self-defining remote sensing data sets
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1980-01-01
Efficient and convenient representation of remote sensing data is highly important for effective utilization. The task of merging different data types is currently handled by treating each case as an individual problem. A description is provided of work carried out to standardize the multidata merging process. The basic concept of the new approach is the self-defining data set (SDDS). The creation of a standard is proposed under which data of interest in a large number of earth resources remote sensing applications would be in a format that allows convenient and automatic merging. Attention is given to the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.
Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG
NASA Astrophysics Data System (ADS)
Stoudenmire, Edwin Miles; White, Steven
We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with that of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incoporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
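A sketch of the two commonly quoted estimators for these scenarios (min/median/max and quartiles/median) is given below; the constants follow the usual statement of this method and should be verified against the paper before use.

```python
from scipy import stats

def mean_sd_from_range(a, m, b, n):
    """Estimate the sample mean and SD from the minimum (a), median (m),
    maximum (b) and sample size n (the 'min/median/max' scenario); constants
    follow the usual statement of the method and should be checked against
    the original paper."""
    mean = (a + 2 * m + b) / 4
    sd = (b - a) / (2 * stats.norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_iqr(q1, m, q3, n):
    """Estimate the sample mean and SD from the quartiles and median
    (the 'interquartile range' scenario)."""
    mean = (q1 + m + q3) / 3
    sd = (q3 - q1) / (2 * stats.norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

print(mean_sd_from_range(a=2.0, m=5.0, b=11.0, n=50))
print(mean_sd_from_iqr(q1=3.5, m=5.0, q3=7.0, n=50))
```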
Advancing Resident Assessment in Graduate Medical Education
Swing, Susan R.; Clyman, Stephen G.; Holmboe, Eric S.; Williams, Reed G.
2009-01-01
Background The Outcome Project requires high-quality assessment approaches to provide reliable and valid judgments of the attainment of competencies deemed important for physician practice. Intervention The Accreditation Council for Graduate Medical Education (ACGME) convened the Advisory Committee on Educational Outcome Assessment in 2007–2008 to identify high-quality assessment methods. The assessments selected by this body would form a core set that could be used by all programs in a specialty to assess resident performance and enable initial steps toward establishing national specialty databases of program performance. The committee identified a small set of methods for provisional use and further evaluation. It also developed frameworks and processes to support the ongoing evaluation of methods and the longer-term enhancement of assessment in graduate medical education. Outcome The committee constructed a set of standards, a methodology for applying the standards, and grading rules for their review of assessment method quality. It developed a simple report card for displaying grades on each standard and an overall grade for each method reviewed. It also described an assessment system of factors that influence assessment quality. The committee proposed a coordinated, national-level infrastructure to support enhancements to assessment, including method development and assessor training. It recommended the establishment of a new assessment review group to continue its work of evaluating assessment methods. The committee delivered a report summarizing its activities and 5 related recommendations for implementation to the ACGME Board in September 2008. PMID:21975993
The Development of Clinical Document Standards for Semantic Interoperability in China
Yang, Peng; Pan, Feng; Wan, Yi; Tu, Haibo; Tang, Xuejun; Hu, Jianping
2011-01-01
Objectives This study is aimed at developing a set of data groups (DGs) to be employed as reusable building blocks for the construction of the eight most common clinical documents used in China's general hospitals in order to achieve their structural and semantic standardization. Methods The Diagnostics knowledge framework, the related approaches taken from the Health Level Seven (HL7), the Integrating the Healthcare Enterprise (IHE), and the Healthcare Information Technology Standards Panel (HITSP) and 1,487 original clinical records were considered together to form the DG architecture and data sets. The internal structure, content, and semantics of each DG were then defined by mapping each DG data set to a corresponding Clinical Document Architecture data element and matching each DG data set to the metadata in the Chinese National Health Data Dictionary. By using the DGs as reusable building blocks, standardized structures and semantics regarding the clinical documents for semantic interoperability were able to be constructed. Results Altogether, 5 header DGs, 48 section DGs, and 17 entry DGs were developed. Several issues regarding the DGs, including their internal structure, identifiers, data set names, definitions, length and format, data types, and value sets, were further defined. Standardized structures and semantics regarding the eight clinical documents were structured by the DGs. Conclusions This approach of constructing clinical document standards using DGs is a feasible standard-driven solution useful in preparing documents possessing semantic interoperability among the disparate information systems in China. These standards need to be validated and refined through further study. PMID:22259722
An Empirical Comparison of Variable Standardization Methods in Cluster Analysis.
ERIC Educational Resources Information Center
Schaffer, Catherine M.; Green, Paul E.
1996-01-01
The common marketing research practice of standardizing the columns of a persons-by-variables data matrix prior to clustering the entities corresponding to the rows was evaluated with 10 large-scale data sets. Results indicate that the column standardization practice may be problematic for some kinds of data that marketing researchers used for…
A novel Python program for implementation of quality control in the ELISA.
Wetzel, Hanna N; Cohen, Cinder; Norman, Andrew B; Webster, Rose P
2017-09-01
The use of semi-quantitative assays such as the enzyme-linked immunosorbent assay (ELISA) requires stringent quality control of the data. However, such quality control is often lacking in academic settings due to unavailability of software and knowledge. Therefore, our aim was to develop methods to easily implement Levey-Jennings quality control methods. For this purpose, we created a program written in Python (a programming language with an open-source license) and tested it using a training set of ELISA standard curves quantifying the Fab fragment of an anti-cocaine monoclonal antibody in mouse blood. A colorimetric ELISA was developed using a goat anti-human anti-Fab capture method. Mouse blood samples spiked with the Fab fragment were tested against a standard curve of known concentrations of Fab fragment in buffer over a period of 133days stored at 4°C to assess stability of the Fab fragment and to generate a test dataset to assess the program. All standard curves were analyzed using our program to batch process the data and to generate Levey-Jennings control charts and statistics regarding the datasets. The program was able to identify values outside of two standard deviations, and this identification of outliers was consistent with the results of a two-way ANOVA. This program is freely available, which will help laboratories implement quality control methods, thus improving reproducibility within and between labs. We report here successful testing of the program with our training set and development of a method for quantification of the Fab fragment in mouse blood. Copyright © 2017 Elsevier B.V. All rights reserved.
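A minimal sketch of the Levey-Jennings logic described above (flagging control values outside the mean ± 2 SD and drawing a simple control chart) is shown below; the function and variable names are hypothetical and this is not the authors' program.

```python
import numpy as np
import matplotlib.pyplot as plt

def levey_jennings(values, mean=None, sd=None, k=2):
    """Flag control values outside mean +/- k*SD (Levey-Jennings rule).

    If no established target mean/SD is supplied, they are estimated from
    the data themselves (acceptable only for an initial characterisation run).
    """
    values = np.asarray(values, dtype=float)
    mean = values.mean() if mean is None else mean
    sd = values.std(ddof=1) if sd is None else sd
    outliers = np.abs(values - mean) > k * sd
    return mean, sd, outliers

# Illustrative control values from repeated standard-curve runs.
controls = [98.2, 101.5, 99.7, 104.9, 100.3, 93.1, 100.8]
mean, sd, flags = levey_jennings(controls)

# Simple control chart: points, centre line, and +/- 2 SD limits.
plt.plot(controls, "o-")
plt.axhline(mean, color="green")
for sign in (-2, 2):
    plt.axhline(mean + sign * sd, color="red", linestyle="--")
plt.title("Levey-Jennings control chart (sketch)")
plt.savefig("lj_chart.png")
print("Flagged runs:", np.flatnonzero(flags))
```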
Quantitative Technique for Comparing Simulant Materials through Figures of Merit
NASA Technical Reports Server (NTRS)
Rickman, Doug; Hoelzer, Hans; Fourroux, Kathy; Owens, Charles; McLemore, Carole; Fikes, John
2007-01-01
The 1989 workshop report, Workshop on Production and Uses of Simulated Lunar Materials, and the NASA Technical Publication, Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage, both identified and reinforced a need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon and set out for Mars, a set of early requirements has been developed for simulant materials, and the initial methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the simulants needed for NASA's Exploration mission. As an extension of the requirements document, a method to evaluate new and current simulants has been rigorously defined through the mathematics of Figures of Merit (FoM). Requirements and techniques have been developed that allow a simulant provider to compare their product to a standard reference material through Figures of Merit. The standard reference material may be physical material, such as the Apollo core samples, or material properties predicted for any landing site. The simulant provider is not restricted to providing a single "high fidelity" simulant, which may be costly to produce; the provider can now develop "lower fidelity" simulants for engineering applications such as drilling and mobility.
Building an Evaluation Scale using Item Response Theory.
Lalor, John P; Wu, Hao; Yu, Hong
2016-11-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
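For readers unfamiliar with IRT, the sketch below shows the two-parameter logistic (2PL) item response function on which such models are built; the item parameters are invented, and the actual fitting procedure (e.g., marginal maximum likelihood or MCMC) is omitted.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """2PL item response function: P(correct | ability theta) for an item
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Two hypothetical RTE items: an easy, weakly discriminating item and a
# harder, highly discriminating one.
abilities = np.linspace(-3, 3, 7)
print(irt_2pl(abilities, a=0.8, b=-1.0))
print(irt_2pl(abilities, a=2.0, b=0.5))

# An NLP system's 'ability' can then be scored by the theta value that best
# explains its pattern of correct/incorrect responses across calibrated items.
```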
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
The Recognizability and Localizability of Auditory Alarms: Setting Global Medical Device Standards.
Edworthy, Judy; Reid, Scott; McDougall, Siné; Edworthy, Jonathan; Hall, Stephanie; Bennett, Danielle; Khan, James; Pye, Ellen
2017-11-01
Objective Four sets of eight audible alarms matching the functions specified in IEC 60601-1-8 were designed using known principles from auditory cognition with the intention that they would be more recognizable and localizable than those currently specified in the standard. Background The audible alarms associated with IEC 60601-1-8, a global medical device standard, are known to be difficult to learn and retain, and there have been many calls to update them. There are known principles of design and cognition that might form the basis of more readily recognizable alarms. There is also scope for improvement in the localizability of the existing alarms. Method Four alternative sets of alarms matched to the functions specified in IEC 60601-1-8 were tested for recognizability and localizability and compared with the alarms currently specified in the standard. Results With a single exception, all prototype sets of alarms outperformed the current IEC set on both recognizability and localizability. Within the prototype sets, auditory icons were the most easily recognized, but the other sets, using word rhythms and simple acoustic metaphors, were also more easily recognized than the current alarms. With the exception of one set, all prototype sets were also easier to localize. Conclusion Known auditory cognition and perception principles were successfully applied to an existing audible alarm problem. Application This work constitutes the first (benchmarking) phase of replacing the alarms currently specified in the standard. The design principles used for each set demonstrate the relative ease with which different alarm types can be recognized and localized.
A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.
ERIC Educational Resources Information Center
Benson, Jeri; Wilson, Michael
Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…
Delay correlation analysis and representation for vital complaint VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
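The grouping step described in this abstract (collect, sort, deduplicate, and group delay values into correlation sets) can be illustrated with the toy Python sketch below; the tuples stand in for values parsed from a VHDL standard delay format (SDF) file, and no real SDF parser is included.

```python
from collections import defaultdict

# Toy (generic_name, rise_delay, fall_delay) triples standing in for values
# collected from a VHDL standard delay format (SDF) file; a real flow would
# parse the SDF file itself.
delays = [
    ("tpd_A_Y", 0.12, 0.15),
    ("tpd_B_Y", 0.12, 0.15),
    ("tpd_C_Y", 0.20, 0.22),
    ("tpd_A_Y", 0.12, 0.15),   # duplicate record
]

def correlation_sets(records):
    """Sort, deduplicate, and group delay records whose (rise, fall) values
    match, mirroring the grouping step described in the abstract."""
    unique = sorted(set(records))                      # sort + remove duplicates
    groups = defaultdict(list)
    for name, rise, fall in unique:
        groups[(rise, fall)].append(name)              # same values -> same set
    return dict(groups)

for values, names in correlation_sets(delays).items():
    print(values, "->", names)
```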
40 CFR 63.11226 - Affirmative defense for violation of emission standards during malfunction.
Code of Federal Regulations, 2014 CFR
2014-07-01
... enforce the standards set forth in § 63.11201 you may assert an affirmative defense to a claim for civil... shall also specify, using best monitoring methods and engineering judgment, the amount of any emissions...
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contain samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). Standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the main focus was on determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After completing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
Beyond the New Architectures - Enabling Rapid System Configurations
NASA Technical Reports Server (NTRS)
Smith, Dan
2009-01-01
This presentation slide document reviews attempts to integrate systems and create common standards for missions. A primary example is telemetry and command sets for satellites. The XML Telemetric and Command Exchange (XTCE) format exists, but it is not easy to implement, and there is a need for a new standard. The document proposes a method to achieve such a standard and describes the benefits of adopting it.
Scoring and setting pass/fail standards for an essay certification examination in nurse-midwifery.
Fullerton, J T; Greener, D L; Gross, L J
1992-03-01
Examination for certification or licensure of health professionals (credentialing) in the United States is almost exclusively of the multiple choice format. The certification examination for entry into the practice of the profession of nurse-midwifery has, however, used a modified essay format throughout its twenty-year history. The examination has recently undergone a revision in the method for score interpretation and for pass/fail decision-making. The revised method, described in this paper, has important implications for all health professional credentialing agencies which use modified essay, oral or practical methods of competency assessment. This paper describes criterion-referenced scoring, the process of constructing the essay items, the methods for assuring validity and reliability for the examination, and the manner of standard setting. In addition, two alternative methods for increasing the validity of the pass/fail decision are evaluated, and the rationale for decision-making about marginal candidates is described.
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.
1987-01-01
To promote the evaluation of existing and emerging unsteady aerodynamic codes and methods for applying them to aeroelastic problems, especially for the transonic range, a limited number of aerodynamic configurations and experimental dynamic response data sets are to be designated by the AGARD Structures and Materials Panel as standards for comparison. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. This report presents the information needed to perform flutter calculations for the first candidate standard configuration for dynamic response along with the related experimental flutter data.
Chaumeau, V; Andolina, C; Fustec, B; Tuikue Ndam, N; Brengues, C; Herder, S; Cerqueira, D; Chareonviriyaphap, T; Nosten, F; Corbel, V
2016-01-01
Quantitative real-time polymerase chain reaction (qrtPCR) has brought significant improvements to the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such adaptation can affect the sensitivity of the PCR. We therefore compared the sensitivity of five primer sets with different molecular targets on blood stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers for detecting blood stages were not necessarily the best ones for detecting sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, depending on the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies.
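The absolute-quantification step mentioned above can be sketched as a simple standard-curve fit (Ct versus log10 of known copy number) that is then inverted for unknown samples; the values below are illustrative and are not data from this study.

```python
import numpy as np

# Dilution series of a sporozoite DNA standard: known copies per reaction
# and the observed quantification cycles (Ct). Values are illustrative.
quantities = np.array([1e5, 1e4, 1e3, 1e2, 1e1])
ct_values = np.array([18.1, 21.5, 24.9, 28.2, 31.7])

# Standard curve: Ct = slope * log10(quantity) + intercept
slope, intercept = np.polyfit(np.log10(quantities), ct_values, 1)
efficiency = 10 ** (-1.0 / slope) - 1          # amplification efficiency

def quantify(ct):
    """Invert the standard curve to estimate the starting quantity from a Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"Unknown sample at Ct=26.0 -> {quantify(26.0):.0f} copies")
```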
Cooper, J J; Brayford, M J; Laycock, P A
2014-08-01
A new method is described for determining the setting times of small amounts of high-value bone cements. The test was developed to measure how the setting times of a commercially available synthetic calcium sulfate cement (Stimulan, Biocomposites, UK) in two forms (standard and Rapid Cure) vary with the addition of clinically relevant antibiotics. The importance of being able to quantify these setting times accurately is discussed. The results demonstrate that this new method, which is shown to correlate with the Vicat needle, gives reliable and repeatable data, with additional benefits described in the article. The majority of the antibiotics mixed were found to retard the setting reaction of the calcium sulfate cement.
Gradient augmented level set method for phase change simulations
NASA Astrophysics Data System (ADS)
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Kitamura, Aya; Kawai, Yasuhiko
2015-01-01
A laminated alginate impression for edentulous patients is simple and time efficient compared with the border molding technique. The purpose of this study was to examine the clinical applicability of the laminated alginate impression by measuring the effects of different water/powder (W/P) ratios, mixing methods, and bonding methods in the secondary alginate impression. Three W/P ratios (the manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×) and 1.75-fold (1.75×) water amounts) were mixed by manual and automatic mixing methods. Initial and complete setting time, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, the tensile bond strength between the primary and secondary impressions was measured for the following surface treatments: air blow only (A), surface baking (B), and alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times were significantly shortened with automatic mixing for all W/P ratios (p<0.05). Permanent deformation decreased and elastic deformation increased as the W/P ratio increased, regardless of the mixing method. Elastic deformation was significantly reduced for 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties remained within JIS standards. For all W/P ratios, AB showed significantly higher bonding strength than A and B (p<0.01). The increased mixing water (1.5× and 1.75×) kept the setting time within JIS standards, suggesting its applicability in the clinical setting. The use of an automatic mixing device decreased elastic strain and shortened the setting time. For the secondary impression, application of adhesive to the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Dimech, Wayne; Karakaltsas, Marina; Vincini, Giuseppe A
2018-05-25
A general trend towards conducting infectious disease serology testing in centralized laboratories means that quality control (QC) principles used for clinical chemistry testing are applied to infectious disease testing. However, no systematic assessment of methods used to establish QC limits has been applied to infectious disease serology testing. A total of 103 QC data sets, obtained from six different infectious disease serology analytes, were parsed through standard methods for establishing statistical control limits, including guidelines from Public Health England, the USA Clinical and Laboratory Standards Institute (CLSI), the German Richtlinien der Bundesärztekammer (RiliBÄK) and Australian QConnect. The percentage of QC results failing each method was compared. The percentage of data sets having more than 20% of QC results failing Westgard rules when the first 20 results were used to calculate the mean ± 2 standard deviations (SD) ranged from 3 (2.9%) for the R4S rule to 66 (64.1%) for the 10X rule, whereas the percentage ranged from 0 (0%) for R4S to 32 (40.5%) for 10X when the first 100 results were used to calculate the mean ± 2 SD. By contrast, the percentage of data sets with >20% failing the RiliBÄK control limits was 25 (24.3%). Only two data sets (1.9%) had more than 20% of results outside the QConnect Limits. QConnect Limits were more applicable for monitoring infectious disease serology testing than the UK Public Health, CLSI and RiliBÄK approaches, as the alternatives to QConnect Limits reported an unacceptably high percentage of failures across the 103 data sets.
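A minimal sketch of the kind of control-limit calculation compared in the study: mean ± 2 SD limits are derived from the first 20 QC results, and a simple 10X rule check is applied to the series. The simulated QC values and the 10X implementation details are assumptions, not the study's own procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
qc = rng.normal(loc=1.0, scale=0.05, size=300)  # simulated QC results for one analyte

def control_limits(results, n_baseline=20):
    """Mean +/- 2 SD limits estimated from the first n_baseline QC results."""
    baseline = results[:n_baseline]
    mean, sd = baseline.mean(), baseline.std(ddof=1)
    return mean, mean - 2 * sd, mean + 2 * sd

def failures_10x(results, mean):
    """Count results belonging to a run of 10 consecutive values on one side of the mean (10X rule)."""
    side = np.sign(results - mean)
    fails = np.zeros(len(results), dtype=bool)
    run = 0
    for i in range(1, len(side)):
        run = run + 1 if side[i] == side[i - 1] and side[i] != 0 else 0
        if run >= 9:                      # 10 consecutive points on the same side
            fails[i - 9:i + 1] = True
    return fails.sum()

mean, lo, hi = control_limits(qc, n_baseline=20)
out_2sd = np.sum((qc < lo) | (qc > hi))
print(f"mean = {mean:.3f}, limits = ({lo:.3f}, {hi:.3f})")
print(f"{out_2sd} results outside +/-2 SD, {failures_10x(qc, mean)} flagged by the 10X rule")
```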
SETs: stand evaluation tools: II. tree value conversion standards for hardwood sawtimber
Joseph J. Mendel; Paul S. DeBald; Martin E. Dale
1976-01-01
Tree quality index tables are presented for 12 important hardwood species of the oak-hickory forest. From these, tree value conversion standards are developed for each species, log grade, merchantable height, and diameter at breast height. The method of calculating tree value conversion standards and adapting them to different conditions is explained. A computer...
How Do Examiners and Examinees Think About Role-Playing of Standardized Patients in an OSCE Setting?
ERIC Educational Resources Information Center
Sadeghi, Majid; Taghva, Arsia; Mirsepassi, Gholamreza; Hassanzadeh, Mehdi
2007-01-01
Objective: The use of standardized patients in Objective Structured Clinical Examinations in the assessment of psychiatric residents has increased in recent years. The aim of this study is to investigate the experience of psychiatry residents and examiners with standardized patients in Iran. Method: Final-year residents in psychiatry participated…
Lin, Long-Ze; Harnly, James M
2008-11-12
A screening method using LC-DAD-ESI/MS was developed for the identification of common hydroxycinnamoylquinic acids based on direct comparison with standards. A complete standard set for mono-, di-, and tricaffeoylquinic acid isomers was assembled from commercially available standards, compounds positively identified in common plants (artichokes, asparagus, coffee bean, honeysuckle flowers, sweet potato, and Vernonia amygdalina leaves), and chemically modified standards. Four C18 reversed-phase columns were tested using the standardized profiling method (based on LC-DAD-ESI/MS) for 30 phenolic compounds, and their elution order and retention times were evaluated. Using only two columns under standardized LC conditions and the collected phenolic compound database, it was possible to separate all of the hydroxycinnamoylquinic acid conjugates and to identify 28 and 18 hydroxycinnamoylquinic acids in arnica flowers (Arnica montana L.) and burdock roots (Arctium lappa L.), respectively. Of these, 22 are reported for the first time.
Hayashi, Nobuyuki; Arai, Ritsuko; Tada, Setsuzo; Taguchi, Hiroshi; Ogawa, Yutaka
2007-01-01
Primer sets for a loop-mediated isothermal amplification (LAMP) method were developed to specifically identify each of the four Brettanomyces/Dekkera species, Dekkera anomala, Dekkera bruxellensis, Dekkera custersiana and Brettanomyces naardenensis. Each primer set was designed with target sequences in the ITS region of the four species and could specifically amplify the target DNA of isolates from beer, wine and soft drinks. Furthermore, the primer sets differentiated strains of the target species from strains belonging to other species, even within the genus Brettanomyces/Dekkera. Moreover, the LAMP method with these primer sets could detect about 1 × 10^1 cfu/ml of Brettanomyces/Dekkera yeasts from suspensions in distilled water, wine and beer. This LAMP method with primer sets for the identification of Brettanomyces/Dekkera yeasts is advantageous in terms of specificity, sensitivity and ease of operation compared with standard PCR methods.
Growth rate measurement in free jet experiments
NASA Astrophysics Data System (ADS)
Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent
2017-07-01
An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore the dynamics of viscoelastic liquid jets in the future.
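The statistical idea, that the ensemble standard deviation of repeated jet profiles grows exponentially in the linear regime, can be sketched as follows with emulated profiles. The imposed growth rate, jet velocity, and amplitudes are placeholders, and axial position is converted to time through an assumed constant velocity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Emulated set of jet radius profiles r(z) from repeated, nominally identical runs.
z = np.linspace(0.0, 0.05, 500)           # axial position (m)
n_runs, r0, sigma = 40, 1.0e-3, 25.0      # nominal radius (m), imposed growth rate (1/s)
u = 1.0                                   # assumed local jet velocity (m/s): t = z / u
profiles = np.array([
    r0 + 1e-6 * np.cos(2 * np.pi * z / 0.004 + rng.uniform(0, 2 * np.pi))
         * np.exp(sigma * z / u)
    for _ in range(n_runs)
])

# The standard deviation over the ensemble grows like exp(sigma * t) at each position.
std_r = profiles.std(axis=0, ddof=1)
t = z / u

# A linear fit of log(std) versus time in the growth region gives the growth rate.
mask = std_r > 1e-8
growth_rate, log_amp = np.polyfit(t[mask], np.log(std_r[mask]), 1)
print(f"estimated growth rate: {growth_rate:.1f} 1/s (imposed: {sigma} 1/s)")
print(f"inferred initial perturbation level: {np.exp(log_amp):.2e} m")
```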
This data set contains the method performance results. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persistent Pollutant (...
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
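For orientation only, the sketch below compares several of the named classifier families on a single synthetic dataset using the Matthews Correlation Coefficient with scikit-learn; it is not the ChEMBL pipeline, and the data, descriptors, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import matthews_corrcoef

# Stand-in for a bioactivity matrix: rows = compound descriptors, labels = active/inactive.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mcc = matthews_corrcoef(y_te, model.predict(X_te))
    print(f"{name:>20s}: MCC = {mcc:.3f}")
```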
Huang, Biao; Zhao, Yongcun
2014-01-01
Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
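A stripped-down sketch of turning simulated realizations into an exceedance probability at one location: each realization of Cd and pH is compared against a pH- and land-use-dependent threshold, and the probability is the fraction of realizations that exceed it. The thresholds and distributions below are hypothetical, not the Chinese composite SEQS values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Cd thresholds (mg/kg) that depend on pH class and land use,
# loosely mimicking the structure of a composite soil environmental quality standard.
def cd_threshold(ph, land_use):
    base = {"farmland": 0.30, "residential": 0.60}[land_use]
    if ph < 6.5:
        return base
    elif ph <= 7.5:
        return base * 2.0
    return base * 3.0

# Simulated realizations at one location (e.g. from sequential Gaussian simulation):
n_real = 500
cd_real = rng.lognormal(mean=np.log(0.4), sigma=0.35, size=n_real)   # Cd realizations
ph_real = rng.normal(loc=6.8, scale=0.4, size=n_real)                # pH realizations
land_use = "farmland"                                                # site-specific land use

# Exceedance probability = fraction of realizations above their own threshold.
exceed = np.array([cd > cd_threshold(ph, land_use) for cd, ph in zip(cd_real, ph_real)])
print(f"P(Cd exceeds composite standard) = {exceed.mean():.2f}")
```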
de-Azevedo-Vaz, Sergio Lins; Oenning, Anne Caroline Costa; Felizardo, Marcela Graciano; Haiter-Neto, Francisco; de Freitas, Deborah Queiroz
2015-04-01
The objective of this study is to assess the accuracy of the vertical tube shift method in identifying the relationship between the mandibular canal (MC) and third molars. Two examiners assessed image sets of 173 lower third molar roots (55 patients) using forced consensus. The image sets comprised two methods: PERI, two periapical radiographs (taken at 0° and -30°), and PAN, a panoramic radiograph (vertical angulation of -8°) and a periapical radiograph taken at a vertical angulation of -30°. Cone beam computed tomography (CBCT) was the reference standard in the study. The responses were recorded for position (buccal, in-line with apex and lingual) and contact (present or absent). The McNemar-Bowker and McNemar tests were used to determine if the PERI and PAN methods would disagree with the reference standard (α = 5 %). The PERI and PAN methods disagreed with the reference standard for both position and contact (p < 0.05). The vertical tube shift method was not accurate in determining the relationship between lower third molars and the MC. The vertical tube shift is not a reliable method for predicting the relationship between lower third molars and the MC.
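The McNemar comparison against the CBCT reference can be sketched as below with a continuity-corrected chi-square statistic; the 2x2 counts are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 2x2 paired table: rows = CBCT (reference), columns = tube-shift method,
# for the presence/absence of contact between the root and the mandibular canal.
#                 method: contact   method: no contact
table = np.array([[60,               22],    # CBCT: contact
                  [ 7,               84]])   # CBCT: no contact

b, c = table[0, 1], table[1, 0]              # discordant pairs
stat = (abs(b - c) - 1) ** 2 / (b + c)       # McNemar chi-square with continuity correction
p_value = chi2.sf(stat, df=1)
print(f"McNemar chi2 = {stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the method's calls disagree systematically with the reference.
```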
Implementing Model-Check for Employee and Management Satisfaction
NASA Technical Reports Server (NTRS)
Jones, Corey; LaPha, Steven
2013-01-01
This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-score and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
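A compact sketch of the six normalization settings (three methods, applied either per sample or per gene) feeding an RBF-kernel SVM, using scikit-learn on stand-in data; the expression matrix, labels, and cross-validation setup are assumptions rather than the TCGA-COAD analysis.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, normalize
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(300, 1000))  # stand-in for an FPKM-UQ matrix
y = rng.integers(0, 2, size=300)                          # stand-in for vital status labels

def normalized(X, method, axis):
    """axis=0 normalizes features (genes); axis=1 normalizes samples (files)."""
    if method == "scaling":                               # min-max scaling
        data = X if axis == 0 else X.T
        out = MinMaxScaler().fit_transform(data)
        return out if axis == 0 else out.T
    if method == "zscore":                                # standardization
        data = X if axis == 0 else X.T
        out = StandardScaler().fit_transform(data)
        return out if axis == 0 else out.T
    if method == "unit":                                  # vector normalization to unit length
        return normalize(X, norm="l2", axis=axis)
    raise ValueError(method)

for method in ("scaling", "zscore", "unit"):
    for axis, label in ((1, "samples"), (0, "genes")):
        acc = cross_val_score(SVC(kernel="rbf"), normalized(X, method, axis), y, cv=5).mean()
        print(f"{method:>8s} over {label:7s}: CV accuracy = {acc:.3f}")
```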
This data set contains the method performance results for CTEPP-OH. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persisten...
UK audit of glomerular filtration rate measurement from plasma sampling in 2013.
Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn
2014-11-01
An audit was carried out into UK glomerular filtration rate (GFR) calculation. The results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied. Supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% in 2001), with 80% using the recommended averaged Brochner-Mortensen correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially in terms of the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit. Further standardization could reduce variability still more. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
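One common slope-intercept calculation chain, a mono-exponential fit of the late plasma samples, the averaged Brochner-Mortensen correction, and Haycock body-surface-area normalization, is sketched below. The correction and BSA coefficients are the commonly quoted adult values and the sample data are invented; both should be checked against the BNMS guidelines before any real use.

```python
import numpy as np

def gfr_slope_intercept(times_min, conc_mbq_per_l, injected_mbq, height_cm, weight_kg):
    """Slope-intercept GFR from late plasma samples (a simplified sketch)."""
    t = np.asarray(times_min, dtype=float)
    c = np.asarray(conc_mbq_per_l, dtype=float)

    # Mono-exponential fit C(t) = C0 * exp(-k t) to the late samples.
    k, log_c0 = np.polyfit(t, np.log(c), 1)
    k, c0 = -k, np.exp(log_c0)

    # Uncorrected clearance: dose divided by the area under the fitted exponential.
    clearance = injected_mbq * k / c0                 # litres/min
    gfr = clearance * 1000.0                          # ml/min

    # Averaged Brochner-Mortensen correction for the missed fast exponential
    # (adult coefficients as commonly quoted; verify against local guidelines).
    gfr_bm = 0.990778 * gfr - 0.001218 * gfr ** 2

    # Haycock body surface area formula, then normalize to 1.73 m^2.
    bsa = 0.024265 * weight_kg ** 0.5378 * height_cm ** 0.3964
    return gfr_bm, gfr_bm * 1.73 / bsa

gfr, gfr_norm = gfr_slope_intercept([120, 180, 240, 300], [1.22, 0.78, 0.50, 0.32],
                                    injected_mbq=40.0, height_cm=175, weight_kg=80)
print(f"GFR = {gfr:.1f} ml/min, normalized = {gfr_norm:.1f} ml/min/1.73m2")
```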
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
Computing tools for implementing standards for single-case designs.
Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E
2015-11-01
In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.
Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents
NASA Astrophysics Data System (ADS)
Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.
2017-12-01
We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L(q) is obtained by taking the limit noise-amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
Kimura, Shinya; Sato, Toshihiko; Ikeda, Shunya; Noda, Mitsuhiko; Nakayama, Takeo
2010-01-01
Health insurance claims (ie, receipts) record patient health care treatments and expenses and, although created for the health care payment system, are potentially useful for research. Combining different types of receipts generated for the same patient would dramatically increase the utility of these receipts. However, technical problems, including standardization of disease names and classifications, and anonymous linkage of individual receipts, must be addressed. In collaboration with health insurance societies, all information from receipts (inpatient, outpatient, and pharmacy) was collected. To standardize disease names and classifications, we developed a computer-aided post-entry standardization method using a disease name dictionary based on International Classification of Diseases (ICD)-10 classifications. We also developed an anonymous linkage system by using an encryption code generated from a combination of hash values and stream ciphers. Using different sets of the original data (data set 1: insurance certificate number, name, and sex; data set 2: insurance certificate number, date of birth, and relationship status), we compared the percentage of successful record matches obtained by using data set 1 to generate key codes with the percentage obtained when both data sets were used. The dictionary's automatic conversion of disease names successfully standardized 98.1% of approximately 2 million new receipts entered into the database. The percentage of anonymous matches was higher for the combined data sets (98.0%) than for data set 1 (88.5%). The use of standardized disease classifications and anonymous record linkage substantially contributed to the construction of a large, chronologically organized database of receipts. This database is expected to aid in epidemiologic and health services research using receipt information.
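The anonymous-linkage idea can be sketched with a keyed hash: the same normalized identifying fields always map to the same irreversible code, so receipts can be linked without storing the identifiers. This uses a plain HMAC rather than the paper's specific hash-plus-stream-cipher construction, and the field choices and secret key are assumptions.

```python
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-linkage-authority"   # assumption: shared secret, kept off-site

def linkage_code(insurance_no: str, date_of_birth: str, sex: str) -> str:
    """Derive an irreversible, reproducible key code from identifying fields."""
    normalized = "|".join(field.strip().upper() for field in (insurance_no, date_of_birth, sex))
    return hmac.new(SECRET_KEY, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# The same patient appearing on an inpatient, outpatient, and pharmacy receipt
# produces the same code, so the records can be linked without storing identifiers.
print(linkage_code(" 12345678 ", "1960-04-01", "F"))
print(linkage_code("12345678", "1960-04-01", "f"))    # identical output after normalization
```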
Inter‐station intensity standardization for whole‐body MR data
Staring, Marius; Reijnierse, Monique; Lelieveldt, Boudewijn P. F.; van der Geest, Rob J.
2016-01-01
Purpose To develop and validate a method for performing inter‐station intensity standardization in multispectral whole‐body MR data. Methods Different approaches for mapping the intensity of each acquired image stack into the reference intensity space were developed and validated. The registration strategies included: “direct” registration to the reference station (Strategy 1), “progressive” registration to the neighboring stations without (Strategy 2), and with (Strategy 3) using information from the overlap regions of the neighboring stations. For Strategy 3, two regularized modifications were proposed and validated. All methods were tested on two multispectral whole‐body MR data sets: a multiple myeloma patients data set (48 subjects) and a whole‐body MR angiography data set (33 subjects). Results For both data sets, all strategies showed significant improvement of intensity homogeneity with respect to vast majority of the validation measures (P < 0.005). Strategy 1 exhibited the best performance, closely followed by Strategy 2. Strategy 3 and its modifications were performing worse, in majority of the cases significantly (P < 0.05). Conclusions We propose several strategies for performing inter‐station intensity standardization in multispectral whole‐body MR data. All the strategies were successfully applied to two types of whole‐body MR data, and the “direct” registration strategy was concluded to perform the best. Magn Reson Med 77:422–433, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26834001
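A much-simplified sketch of inter-station intensity standardization: a linear intensity mapping for the moving station is estimated from mean and standard deviation statistics in the overlap region with its neighbor. The real method works with registered multispectral data; the arrays below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for two neighboring stations with an overlap of 20 slices.
ref_overlap = rng.gamma(shape=4.0, scale=50.0, size=(20, 64, 64))        # reference station
bias, gain  = 30.0, 1.4
mov_overlap = gain * ref_overlap + bias + rng.normal(0, 5, ref_overlap.shape)
mov_station = gain * rng.gamma(4.0, 50.0, size=(60, 64, 64)) + bias      # full moving station

# Estimate a linear intensity mapping a*x + b so the moving overlap matches the reference.
a = ref_overlap.std() / mov_overlap.std()
b = ref_overlap.mean() - a * mov_overlap.mean()
mov_standardized = a * mov_station + b

print(f"estimated mapping: x -> {a:.3f} * x + {b:.1f}")
print(f"overlap means after mapping: ref = {ref_overlap.mean():.1f}, "
      f"mov = {(a * mov_overlap + b).mean():.1f}")
```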
A Dialogic Construction of Ethical Standards for the Teaching Profession
ERIC Educational Resources Information Center
Smith, Deirdre Mary
2013-01-01
In Ontario, Canada, both the educational community and the public, which is understood to include parents, students and citizens of the province, participated in a multi-phased, longitudinal, dialogic inquiry to develop a set of ethical standards for the teaching profession. Collective discovery methods, syntheses, and validation of ethical…
Methods and Strategies: Science Notebooks as Learning Tools
ERIC Educational Resources Information Center
Fulton, Lori
2017-01-01
Writing in science is a natural way to integrate science and literacy and meet the goals set by the "Next Generation Science Standards" ("NGSS") and the "Common Core State Standards" ("CCSS"), which call for learners to be engaged with the language of science. This means that students should record…
40 CFR 406.11 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.11 Specialized definitions... and methods of analysis set forth in 40 CFR part 401 shall apply to this subpart. (b) The term corn shall mean the shelled corn delivered to a plant before processing. (c) The term standard bushel shall...
Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...
Evaluation of reference crop evapotranspiration methods in arid, semi-arid and humid regions
USDA-ARS?s Scientific Manuscript database
It is necessary to find a simpler method in different climatic regions to calculate reference crop evapotranspiration (ETo) since the application of the FAO-56 Penman-Monteith method is often restricted due to unavailability of a full weather data set. Seven ETo methods, the de facto standard FAO-56...
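One of the simpler temperature-based alternatives commonly evaluated against FAO-56 Penman-Monteith is the Hargreaves-Samani equation, sketched below; whether it was among the seven methods in this manuscript is not stated here, and the extraterrestrial radiation term is assumed to be supplied separately.

```python
def eto_hargreaves(tmax_c: float, tmin_c: float, ra_mm_per_day: float) -> float:
    """Hargreaves-Samani reference crop evapotranspiration (mm/day).

    tmax_c, tmin_c : daily maximum and minimum air temperature (deg C)
    ra_mm_per_day  : extraterrestrial radiation expressed as equivalent evaporation
                     (mm/day), assumed here to come from tables or a separate
                     calculation from latitude and day of year.
    """
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

# Example: a warm summer day at a mid-latitude site (illustrative values only).
print(f"ETo = {eto_hargreaves(tmax_c=32.0, tmin_c=18.0, ra_mm_per_day=16.0):.2f} mm/day")
```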
Methodology issues in implementation science.
Newhouse, Robin; Bobay, Kathleen; Dykes, Patricia C; Stevens, Kathleen R; Titler, Marita
2013-04-01
Putting evidence into practice at the point of care delivery requires an understanding of implementation strategies that work, in what context and how. To identify methodological issues in implementation science using 4 studies as cases and make recommendations for further methods development. Four cases are presented and methodological issues identified. For each issue raised, evidence on the state of the science is described. Issues in implementation science identified include diverse conceptual frameworks, potential weaknesses in pragmatic study designs, and the paucity of standard concepts and measurement. Recommendations to advance methods in implementation include developing a core set of implementation concepts and metrics, generating standards for implementation methods including pragmatic trials, mixed methods designs, complex interventions and measurement, and endorsing reporting standards for implementation studies.
NASA Astrophysics Data System (ADS)
Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.
2018-03-01
Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. The Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than those for the external standard method. The artificial neural network method also gives better performance compared with the external standard method for the average and maximum relative errors, average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method gives better performance than the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.
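To illustrate the two calibration approaches in general terms, the sketch below contrasts a univariate external-standard (straight-line) calibration with a small neural-network calibration on synthetic intensity-concentration data; the data, network size, and mild nonlinearity are assumptions, not the published LIBS measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Synthetic calibration set: line intensities versus known concentration, with a mild
# nonlinearity standing in for matrix and self-absorption effects.
conc = np.linspace(0.5, 10.0, 20)
intensity = 120.0 * conc - 3.0 * conc ** 2 + rng.normal(0, 15, conc.size)

# External standard method: straight-line calibration, then inversion.
slope, intercept = np.polyfit(conc, intensity, 1)
pred_external = (intensity - intercept) / slope

# ANN calibration: learn concentration directly from intensity.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0))
ann.fit(intensity.reshape(-1, 1), conc)
pred_ann = ann.predict(intensity.reshape(-1, 1))

for name, pred in (("external standard", pred_external), ("ANN", pred_ann)):
    rel_err = np.abs(pred - conc) / conc
    print(f"{name:>17s}: mean relative error = {rel_err.mean():.1%}")
```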
A Review of Functional Analysis Methods Conducted in Public School Classroom Settings
ERIC Educational Resources Information Center
Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.
2016-01-01
The use of functional behavior assessments (FBAs) to address problem behavior in classroom settings has increased as a result of education legislation and long-standing evidence supporting function-based interventions. Although functional analysis remains the standard for identifying behavior--environment functional relations, this component is…
An accelerated training method for back propagation networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty lies in the method of extracting, from the set of input data, a set of features that can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
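A minimal sketch of the preprocessing idea, projecting centered inputs onto the right singular vectors of the input matrix so the projections have maximal spread, is given below; the back-propagation network itself is omitted, and the data are random stand-ins.

```python
import numpy as np

def project_onto_singular_vectors(X, n_components=None):
    """Project centered input vectors onto the right singular vectors of the input matrix.

    The projections lie along mutually orthogonal directions in which the spread
    (standard deviation) of the training inputs is largest, which can make the
    presentation of the data to a back-propagation network better conditioned.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are orthonormal singular vectors of the centered input matrix.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = n_components if n_components is not None else Xc.shape[1]
    return Xc @ Vt[:k].T, Vt[:k], mean     # transformed inputs + transform for new data

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16)) @ rng.normal(size=(16, 16))   # correlated training inputs
Z, Vt_k, mean = project_onto_singular_vectors(X, n_components=8)
print("std of projections along each retained direction:", np.round(Z.std(axis=0), 2))
```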
Schargus, Marc; Grehn, Franz; Glaucocard Workgroup
2008-12-01
To evaluate existing international IT-based ophthalmological medical data projects, and to define a glaucoma data set based on existing international standards of medical and ophthalmological documentation. To develop the technical environment for easy data mining and data exchange in different countries in Europe. Existing clinical and IT-based projects for documentation of medical data in general medicine and ophthalmology were analyzed to create new data sets for medical documentation in glaucoma patients. Different types of data transfer methods were evaluated to find the best method of data exchange between ophthalmologists in different European countries. Data sets from existing IT projects showed a wide variability in specifications, use of codes, terms and graphical data (perimetry, optic nerve analysis etc.) in glaucoma patients. New standardized digital datasets for glaucoma patients were defined, based on existing standards, which can be used by general ophthalmologists for follow-up examinations and for glaucoma specialists to perform teleconsultation, also across country borders. Datasets are available in different languages. Different types of data exchange methods using secure medical data transfer by internet, USB stick and smartcard were tested for different countries with regard to legal acceptance, practicability and technical realization (e.g. compatibility with EMR systems). By creating new standardized glaucoma specific cross-national datasets, it is now possible to develop an electronic glaucoma patient record system for data storage and transfer based on internet, smartcard or USB stick. The digital data can be used for referrals and for teleconsultation of glaucoma specialists in order to optimize glaucoma treatment. This should lead to an increase of quality in glaucoma care, and prevent expenses in health care costs by unnecessary repeated examinations.
2012-01-01
Background Optimization of the clinical care process by integration of evidence-based knowledge is one of the active components in care pathways. When studying the impact of a care pathway by using a cluster-randomized design, standardization of the care pathway intervention is crucial. This methodology paper describes the development of the clinical content of an evidence-based care pathway for in-hospital management of chronic obstructive pulmonary disease (COPD) exacerbation in the context of a cluster-randomized controlled trial (cRCT) on care pathway effectiveness. Methods The clinical content of a care pathway for COPD exacerbation was developed based on recognized process design and guideline development methods. Subsequently, based on the COPD case study, a generalized eight-step method was designed to support the development of the clinical content of an evidence-based care pathway. Results A set of 38 evidence-based key interventions and a set of 24 process and 15 outcome indicators were developed in eight different steps. Nine Belgian multidisciplinary teams piloted both the set of key interventions and indicators. The key intervention set was judged by the teams as being valid and clinically applicable. In addition, the pilot study showed that the indicators were feasible for the involved clinicians and patients. Conclusions The set of 38 key interventions and the set of process and outcome indicators were found to be appropriate for the development and standardization of the clinical content of the COPD care pathway in the context of a cRCT on pathway effectiveness. The developed eight-step method may facilitate multidisciplinary teams caring for other patient populations in designing the clinical content of their future care pathways. PMID:23190552
Standard methods for sampling freshwater fishes: Opportunities for international collaboration
Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.
2017-01-01
With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.
A Low-Cost Inkjet-Printed Glucose Test Strip System for Resource-Poor Settings.
Gainey Wilson, Kayla; Ovington, Patrick; Dean, Delphine
2015-06-12
The prevalence of diabetes is increasing in low-resource settings; however, accessing glucose monitoring is extremely difficult and expensive in these regions. Work is being done to address the multitude of issues surrounding diabetes care in low-resource settings, but an affordable glucose monitoring solution has yet to be presented. An inkjet-printed test strip solution is being proposed as a solution to this problem. The use of a standard inkjet printer is being proposed as a manufacturing method for low-cost glucose monitoring test strips. The printer cartridges are filled with enzyme and dye solutions that are printed onto filter paper. The result is a colorimetric strip that turns a blue/green color in the presence of blood glucose. Using a light-based spectroscopic reading, the strips show a linear color change with an R^2 = 0.99 using glucose standards and an R^2 = 0.93 with bovine blood. Initial testing with bovine blood indicates that the strip accuracy is comparable to the International Organization for Standardization (ISO) standard 15197 for glucose testing in the 0-350 mg/dL range. However, further testing with human blood will be required to confirm this. A visible color gradient was observed with both the glucose standard and bovine blood experiment, which could be used as a visual indicator in cases where an electronic glucose meter was unavailable. These results indicate that an inkjet-printed filter paper test strip is a feasible method for monitoring blood glucose levels. The use of inkjet printers would allow for local manufacturing to increase supply in remote regions. This system has the potential to address the dire need for glucose monitoring in low-resource settings. © 2015 Diabetes Technology Society.
Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912
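A generic sketch of the modeling step, logistic regression and an SVM evaluated with 10-fold cross-validation and ROC AUC on labeled events, is shown below using scikit-learn; the feature matrix is synthetic, and the class balance, preprocessing, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-in for 1291 labeled access events with 26 features (suspicious = 1, appropriate = 0).
X, y = make_classification(n_samples=1291, n_features=26, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "LR":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC over 10 folds = {auc.mean():.3f} (+/- {auc.std():.3f})")
```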
Recommendations for evaluation of computational methods
NASA Astrophysics Data System (ADS)
Jain, Ajay N.; Nicholls, Anthony
2008-03-01
The field of computational chemistry, particularly as applied to drug design, has become increasingly important in terms of the practical application of predictive modeling to pharmaceutical research and development. Tools for exploiting protein structures or sets of ligands known to bind particular targets can be used for binding-mode prediction, virtual screening, and prediction of activity. A serious weakness within the field is a lack of standards with respect to quantitative evaluation of methods, data set preparation, and data set sharing. Our goal should be to report new methods or comparative evaluations of methods in a manner that supports decision making for practical applications. Here we propose a modest beginning, with recommendations for requirements on statistical reporting, requirements for data sharing, and best practices for benchmark preparation and usage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative, in certain situations, to the standard maximum likelihood method, which has some undesirable characteristics when estimating the parameters of the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
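In the spirit of such weighted-order-statistic estimators, the sketch below chooses the threshold of a three-parameter lognormal by maximizing the Shapiro-Wilk statistic of the shifted logarithms and then estimates the remaining parameters from them; this follows the general idea rather than the paper's exact estimators, and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import shapiro, lognorm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_gamma, true_sigma, true_scale = 10.0, 0.6, 5.0
x = lognorm.rvs(true_sigma, loc=true_gamma, scale=true_scale, size=200, random_state=rng)

def neg_shapiro_w(gamma):
    """Negative Shapiro-Wilk W of log(x - gamma); invalid thresholds are penalized."""
    if gamma >= x.min():
        return np.inf
    w, _ = shapiro(np.log(x - gamma))
    return -w

# Maximize W over the threshold parameter (search below the sample minimum).
res = minimize_scalar(neg_shapiro_w, bounds=(x.min() - 20.0, x.min() - 1e-6), method="bounded")
gamma_hat = res.x
logs = np.log(x - gamma_hat)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
print(f"gamma = {gamma_hat:.2f} (true {true_gamma}), "
      f"mu = {mu_hat:.2f} (true {np.log(true_scale):.2f}), "
      f"sigma = {sigma_hat:.2f} (true {true_sigma})")
```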
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, R; Gordon, I; Ghebremedhin, A
2014-06-01
Purpose: To determine the proton output factors for an SRS cone set using standardized apertures and varied range compensators (bolus blanks); specifically, to determine the best method for modeling the bolus gap factor (BGF) and eliminate the need for patient specific calibrations. Methods: A Standard Imaging A-16 chamber was placed in a Plastic Water phantom to measure the change in dose/MU with different treatment combinations for a proton SRS cone, using standardized apertures and range compensators. Measurements were made with all apertures in the SRS cone set, with four different range compensator thicknesses and five different air gaps between the end of the SRS cone and the surface of the phantom. The chamber was located at iso-center and maintained at a constant depth at the center of modulation for all measurements. Each aperture was placed in the cone to measure the change in MU needed to maintain constant dose at the chamber, as the air gap was increased with different thicknesses of bolus. Results: The dose/MU varied significantly with decreasing aperture size, increasing bolus thickness, or increasing air gap. The measured data was fitted with the lowest order polynomials that accurately described the data, to create a model for determining the change in output for any potential combination of devices used to treat a patient. For a given standardized aperture, the BGF could be described by its constituent factors: the bolus thickness factor (BTF) and the nozzle extension factor (NEF). Conclusion: The methods used to model the dose at the calibration point could be used to accurately predict the change in output for SRS proton beams due to the BGF, eliminating the need for patient specific calibrations. This method for modeling SRS treatments could also be applied to model other treatments using passively scattered proton beams.
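A toy version of the fitting step, a low-order polynomial surface in bolus thickness and air gap (with a cross term standing in for the BTF × NEF product structure) fitted by least squares, is sketched below; the measurement values are invented and the chosen polynomial orders are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical measurements for one standardized aperture:
# bolus thickness (cm), air gap (cm), and relative output (dose/MU normalized to reference).
thickness = np.repeat([0.0, 1.0, 2.0, 3.0], 5)
air_gap   = np.tile([5.0, 10.0, 15.0, 20.0, 25.0], 4)
output    = (1.0 - 0.015 * thickness) * (1.0 - 0.004 * air_gap - 0.00005 * air_gap ** 2)
output   += rng.normal(0, 0.002, output.size)

# Low-order polynomial model of the bolus gap factor, fitted by linear least squares.
A = np.column_stack([np.ones_like(output), thickness,
                     air_gap, air_gap ** 2,
                     thickness * air_gap])
coef, *_ = np.linalg.lstsq(A, output, rcond=None)

def predicted_output(t, g):
    return coef @ np.array([1.0, t, g, g ** 2, t * g])

print("fitted coefficients:", np.round(coef, 5))
print(f"predicted relative output at t = 1.5 cm, gap = 12 cm: {predicted_output(1.5, 12.0):.3f}")
```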
Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting.
Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice
2016-01-01
In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were converted into concordance rates using a best possible medication history as the reference (information obtained from interviews with patients, prescribers and community pharmacists). The algorithm-based method obtained a higher average concordance rate than the standard method, with concordance rates of 90.2% [CI95% 85.8-94.3] versus 24.6% [CI95% 15.3-34.4], respectively (p<0.01). Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population, without using substantial resources. Its implementation is an effective first step in the medication reconciliation process, which has been recognized as a very important component of patients' drug safety.
A Standardized Mean Difference Effect Size for Single Case Designs
ERIC Educational Resources Information Center
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.
2012-01-01
Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on…
40 CFR 89.6 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
... set forth the material that has been incorporated by reference in this part. (1) ASTM material. The... 19428-2959. Document number and name 40 CFR part 89 reference ASTM D86-97: “Standard Test Method for Distillation of Petroleum Products at Atmospheric Pressure” Appendix A to Subpart D. ASTM D93-97: “Standard...
Test Anxiety and High-Stakes Test Performance between School Settings: Implications for Educators
ERIC Educational Resources Information Center
von der Embse, Nathaniel; Hasson, Ramzi
2012-01-01
With the enactment of standards-based accountability in education, high-stakes tests have become the dominant method for measuring school effectiveness and student achievement. Schools and educators are under increasing pressure to meet achievement standards. However, there are variables which may interfere with the authentic measurement of…
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Public notice of changes in Statewide methods and... SERVICES Payment Methods: General Provisions § 447.205 Public notice of changes in Statewide methods and... this section, the agency must provide public notice of any significant proposed change in its methods...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Public notice of changes in Statewide methods and... SERVICES Payment Methods: General Provisions § 447.205 Public notice of changes in Statewide methods and... this section, the agency must provide public notice of any significant proposed change in its methods...
Development of a core set of outcome measures for OAB treatment.
Foust-Wright, Caroline; Wissig, Stephanie; Stowell, Caleb; Olson, Elizabeth; Anderson, Anita; Anger, Jennifer; Cardozo, Linda; Cotterill, Nikki; Gormley, Elizabeth Ann; Toozs-Hobson, Philip; Heesakkers, John; Herbison, Peter; Moore, Kate; McKinney, Jessica; Morse, Abraham; Pulliam, Samantha; Szonyi, George; Wagg, Adrian; Milsom, Ian
2017-12-01
Standardized measures enable the comparison of outcomes across providers and treatments, giving valuable information for improving care quality and efficacy. The aim of this project was to define a minimum standard set of outcome measures and case-mix factors for evaluating the care of patients with overactive bladder (OAB). The International Consortium for Health Outcomes Measurement (ICHOM) convened an international working group (WG) of leading clinicians and patients to engage in a structured method for developing a core outcome set. Consensus was determined by a modified Delphi process, and discussions were supported by both literature review and patient input. The standard set measures outcomes of care for adults seeking treatment for OAB, excluding residents of long-term care facilities. The WG focused on the treatment outcome domains identified as most important to patients: symptom burden and bother, physical functioning, emotional health, impact of symptoms and treatment on quality of life, and success of treatment. Demographic information and case-mix factors that may affect these outcomes were also included. The standardized outcome set for evaluating clinical care is appropriate for use by all health providers caring for patients with OAB, regardless of specialty or geographic location, and provides key data for quality improvement activities and research.
Heffner, John E; Brower, Kathleen; Ellis, Rosemary; Brown, Shirley
2004-07-01
The high cost of computerized physician order entry (CPOE) and physician resistance to standardized care have delayed implementation. An intranet-based order set system can provide some of CPOE's benefits and offer opportunities to acculturate physicians toward standardized care. INTRANET CLINICIAN ORDER FORMS (COF): The COF system at the Medical University of South Carolina (MUSC) allows caregivers to enter and print orders through the intranet at points of care and to access decision support resources. Work on COF began in March 2000 with transfer of 25 MUSC paper-based order set forms to an intranet site. Physician groups developed additional order sets, which number more than 200. Web traffic increased progressively during a 24-month period, peaking at more than 6,400 hits per month to COF. Decision support tools improved compliance with Centers for Medicare & Medicaid Services core indicators. Clinicians demonstrated a willingness to develop and use order sets and decision support tools posted on the COF site. COF provides a low-cost method for preparing caregivers and institutions to adopt CPOE and standardization of care. The educational resources, relevant links to external resources, and communication alerts will all link to CPOE, thereby providing a head start in CPOE implementation.
Assembling Appliances Standards from a Basket of Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siderious, Hans-Paul; Meier, Alan
2014-08-11
Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.
Levnajić, Zoran; Mezić, Igor
2015-05-01
We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithms for visualization of periodic and quasi-periodic sets in the phase space. The complement of periodic phase space partition contains chaotic zone, and we show how to identify it. The range of method's applicability is illustrated using well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
Fu, J; Li, L; Yang, X Q; Zhu, M J
2011-01-01
Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated by measuring absorbance at 507 nm after staining with Cd-ninhydrin solution, using three calibration curves: curve A, prepared from a series of leucine standards of known concentration; curve B, prepared from three sets of leucine standards of known concentration spiked with three concentrations of inactivated crude enzyme extract; and curve C, prepared from three sets of leucine standards of known concentration spiked with three concentrations of active crude enzyme extract. The results indicated that a pure amino acid standard curve is not a suitable way to determine carboxypeptidase activity in a complex mixture and probably leads to overestimated activity. Adding crude extract to the pure amino acid standard curve gave results significantly different from the pure standard curve method (p < 0.05), whereas there was no significant difference in measured enzyme activity (p > 0.05) between the curves spiked with active and with inactivated crude extract when an appropriate dilution factor was used. It was concluded that adding crude enzyme extract to the calibration is needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
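The equivalence noted above has a simple computational consequence: z-scoring each vector makes squared Euclidean distance a monotone function of Pearson correlation distance, so an ordinary Euclidean k-means reproduces correlation-distance clustering. The sketch below is an illustrative NumPy/scikit-learn example on synthetic data, not the BoostKCP implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def zscore_rows(X):
    """Standardize each row to zero mean and unit (population) variance."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / sd

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # 1000 vectors of dimension 50 (synthetic)
Z = zscore_rows(X)

# Squared Euclidean distance between z-scored rows equals 2*d*(1 - Pearson r),
# so Euclidean k-means on Z gives the Pearson-correlation-distance clustering.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)

# sanity check of the identity for one pair of rows
r = np.corrcoef(X[0], X[1])[0, 1]
d2 = np.sum((Z[0] - Z[1]) ** 2)
assert np.isclose(d2, 2 * X.shape[1] * (1 - r))
```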
Congruence of Standard Setting Methods for a Nursing Certification Examination.
ERIC Educational Resources Information Center
Fabrey, Lawrence J.; Raymond, Mark R.
The American Nurses' Association certification provides professional recognition beyond licensure to nurses who pass an examination. To determine the passing score as it would be set by a representative peer group, a survey was mailed to a random sample of 200 recently certified nurses. Three questions were asked: (1) what percentage of examinees…
A pilot study of solar water disinfection in the wilderness setting.
Tedeschi, Christopher M; Barsi, Christopher; Peterson, Shane E; Carey, Kevin M
2014-09-01
Solar disinfection of water has been shown to be an effective treatment method in the developing world, but not specifically in a wilderness or survival setting. The current study sought to evaluate the technique using materials typically available in a wilderness or backcountry environment. Untreated surface water from a stream in rural Costa Rica was disinfected using the solar disinfection (SODIS) method, using both standard containers as well as containers and materials more readily available to a wilderness traveler. Posttreatment samples using polyethylene terephthalate (PET) bottles, as well as Nalgene and Platypus water containers, showed similarly decreased levels of Escherichia coli and total coliforms. The SODIS technique may be applicable in the wilderness setting using tools commonly available in the backcountry. In this limited trial, specific types of containers common in wilderness settings demonstrated similar performance to the standard containers. With further study, solar disinfection in appropriate conditions may be included as a viable treatment option for wilderness water disinfection. Copyright © 2014 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
Reporting Qualitative Research: Standards, Challenges, and Implications for Health Design.
Peditto, Kathryn
2018-04-01
This Methods column describes the existing reporting standards for qualitative research, their application to health design research, and the challenges to implementation. Intended for both researchers and practitioners, this article provides multiple perspectives on both reporting and evaluating high-quality qualitative research. Two popular reporting standards exist for reporting qualitative research: the Consolidated Criteria for Reporting Qualitative Research (COREQ) and the Standards for Reporting Qualitative Research (SRQR). Though compiled using similar procedures, they differ in their criteria and the methods to which they apply. Creating and applying reporting criteria is inherently difficult due to the undefined and fluctuating nature of qualitative research when compared to quantitative studies. Qualitative research is expansive and occasionally controversial, spanning many different methods of inquiry and epistemological approaches. A "one-size-fits-all" standard for reporting qualitative research can be restrictive, but COREQ and SRQR both serve as valuable tools for developing responsible qualitative research proposals, effectively communicating research decisions, and evaluating submissions. Ultimately, tailoring a set of standards specific to health design research and its frequently used methods would ensure quality research and aid reviewers in their evaluations.
NASA Astrophysics Data System (ADS)
Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.
2016-10-01
Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many are forced to use antiquated methods derived from small sample sizes, which given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiner's offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
Evaluating Sleep Disturbance: A Review of Methods
NASA Technical Reports Server (NTRS)
Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.
Dobecki, Marek
2012-01-01
This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in the laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is set up. The validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, covering both sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
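As a rough illustration of the uncertainty-budget step mentioned above, the sketch below combines relative standard uncertainty components in quadrature and expands the result with a coverage factor of 2. The component names and values are assumptions for illustration, not figures from the paper.

```python
import math

# Hypothetical relative standard uncertainty components of an air-monitoring method.
components = {
    "air sampling (flow rate, time)": 0.05,
    "analytical recovery":            0.04,
    "calibration standards":          0.03,
    "precision (repeatability)":      0.06,
}

# Combine in quadrature, then expand with coverage factor k = 2 (~95 % coverage).
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
k = 2
relative_expanded_uncertainty = 100 * k * u_combined
print(f"relative expanded uncertainty: {relative_expanded_uncertainty:.1f} %")
# The result would then be compared against the applicable limit value.
```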
Zolg, Daniel Paul; Wilhelm, Mathias; Yu, Peng; Knaute, Tobias; Zerweck, Johannes; Wenschuh, Holger; Reimer, Ulf; Schnatbaum, Karsten; Kuster, Bernhard
2017-11-01
Beyond specific applications, such as the relative or absolute quantification of peptides in targeted proteomic experiments, synthetic spike-in peptides are not yet systematically used as internal standards in bottom-up proteomics. A number of retention time standards have been reported that enable chromatographic aligning of multiple LC-MS/MS experiments. However, only few peptides are typically included in such sets limiting the analytical parameters that can be monitored. Here, we describe PROCAL (ProteomeTools Calibration Standard), a set of 40 synthetic peptides that span the entire hydrophobicity range of tryptic digests, enabling not only accurate determination of retention time indices but also monitoring of chromatographic separation performance over time. The fragmentation characteristics of the peptides can also be used to calibrate and compare collision energies between mass spectrometers. The sequences of all selected peptides do not occur in any natural protein, thus eliminating the need for stable isotope labeling. We anticipate that this set of peptides will be useful for multiple purposes in individual laboratories but also aiding the transfer of data acquisition and analysis methods between laboratories, notably the use of spectral libraries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chow, Clara K.; Corsi, Daniel J.; Lock, Karen; Madhavan, Manisha; Mackie, Pam; Li, Wei; Yi, Sun; Wang, Yang; Swaminathan, Sumathi; Lopez-Jaramillo, Patricio; Gomez-Arbelaez, Diego; Avezum, Álvaro; Lear, Scott A.; Dagenais, Gilles; Teo, Koon; McKee, Martin; Yusuf, Salim
2014-01-01
Background Previous research has shown that environments with features that encourage walking are associated with increased physical activity. Existing methods to assess the built environment using geographical information systems (GIS) data, direct audit or large surveys of the residents face constraints, such as data availability and comparability, when used to study communities in countries in diverse parts of the world. The aim of this study was to develop a method to evaluate features of the built environment of communities using a standard set of photos. In this report we describe the method of photo collection, photo analysis instrument development and inter-rater reliability of the instrument. Methods/Principal Findings A minimum of 5 photos were taken per community in 86 communities in 5 countries according to a standard set of instructions from a designated central point of each community by researchers at each site. A standard pro forma derived from reviewing existing instruments to assess the built environment was developed and used to score the characteristics of each community. Photo sets from each community were assessed independently by three observers in the central research office according to the pro forma and the inter-rater reliability was compared by intra-class correlation (ICC). Overall 87% (53 of 60) items had an ICC of ≥0.70, 7% (4 of 60) had an ICC between 0.60 and 0.70 and 5% (3 of 60) items had an ICC ≤0.50. Conclusions/Significance Analysis of photos using a standardized protocol as described in this study offers a means to obtain reliable and reproducible information on the built environment in communities in very diverse locations around the world. The collection of the photographic data required minimal training and the analysis demonstrated high reliability for the majority of items of interest. PMID:25369366
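For readers unfamiliar with the reliability statistic reported above, the sketch below computes a two-way random-effects, single-rater intraclass correlation, ICC(2,1), from an items-by-raters table. The scores are hypothetical, and this abstract does not specify which ICC variant the study used.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)
    for an (items x raters) array of scores."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    row_means, col_means = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between items
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    mse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# three observers scoring the same five photo items (hypothetical scores)
scores = [[3, 3, 2], [1, 1, 1], [4, 3, 4], [2, 2, 2], [5, 4, 5]]
print(round(icc_2_1(scores), 2))
```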
Benchmark data sets for structure-based computational target prediction.
Schomburg, Karen T; Rarey, Matthias
2014-08-25
Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking still face many challenges, the greatest probably being the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering comparison and demonstration of method improvements cumbersome. We therefore propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods, i.e., a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and any information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring early recognition of enrichment, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used for method evaluation of our new inverse screening method iRAISE. The small data set reveals the method's capability and limitations in selectively distinguishing between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieved excellent or good enrichment in 55% of cases, a median AUC of 0.67, and RMSDs below 2.0 Å for 74% of cases, and it predicted the first true target within the top 2% of the protein data set of about 8000 structures in 59 out of 72 cases.
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Assessing Lower Limb Alignment: Comparison of Standard Knee Xray vs Long Leg View
Zampogna, Biagio; Vasta, Sebastiano; Amendola, Annunziato; Uribe-Echevarria Marbach, Bastian; Gao, Yubo; Papalia, Rocco; Denaro, Vincenzo
2015-01-01
Background High tibial osteotomy (HTO) is a well-established and commonly utilized technique in medial knee osteoarthritis secondary to varus malalignment. Accurate measurement of the preoperative limb alignment and of the amount of correction required is essential when planning limb realignment surgery. The hip-knee-ankle angle (HKA) measured on a full-length weightbearing (FLWB) X-ray in the standing position is considered the gold standard, since it allows reliable and accurate measurement of the mechanical axis of the whole lower extremity. In general practice, alignment is often evaluated on standard anteroposterior weightbearing (APWB) X-rays as the angle between the femoral and tibial anatomic axes (FTa). It is therefore of value to establish whether measuring the anatomical axis from a limited APWB view is an effective measure of knee alignment, especially in patients undergoing osteotomy about the knee. Methods Three independent observers measured the preoperative and postoperative FTa with the standard method (FTa1) and with the circles method (FTa2) on APWB X-rays, and the HKA on FLWB X-rays, at three different time points separated by a two-week period. Intra-observer and inter-observer reliabilities and the comparison and relationship between anatomical and mechanical alignment were calculated. Results Intra- and interclass correlation coefficients for all three methods indicated excellent reliability, with all values above 0.80. Using paired Student's t-tests, the comparison of HKA versus FTa1 and FTa2 showed a statistically significant difference (p<.0001) for both the preoperative and postoperative sets of values. The correlation between the HKA and FTa1 was poor for the preoperative set (R=0.26) and fair for the postoperative set (R=0.53), while the new circles method showed a higher correlation in both the preoperative (R=0.71) and postoperative sets (R=0.79). Conclusions Intra-observer reliability was high for HKA, FTa1 and FTa2 on APWB X-rays in the pre- and postoperative setting. Inter-rater reliability was higher for HKA and FTa2 compared with FTa1. The femorotibial angle as measured on APWB X-rays with the traditional method (FTa1) correlates weakly with the HKA and, based on these findings, should not be used in everyday practice. FTa2 showed better, although not excellent, correlation with the HKA. Level of Evidence Level III, retrospective study. PMID:26361444
Pyregov, A V; Ovechkin, A Iu; Petrov, S V
2012-01-01
Results of a prospective randomized comparative study of two total hemoglobin estimation methods are presented: laboratory testing and a continuous noninvasive technique using multiwave spectrophotometry on the Masimo Rainbow SET. The research was carried out in two stages: the first stage (gynecology) included 67 patients, and the second stage (obstetrics) included 44 patients during and after Cesarean section. The standard deviation of the noninvasive total hemoglobin estimates from the absolute (invasive) values was 7.2% and 4.1%, and the within-sample standard deviation was 5.2% and 2.7% in gynecologic operations and surgical delivery, respectively, confirming the lack of reliable differences between the indicators. The method of continuous noninvasive total hemoglobin estimation with multiwave spectrophotometry based on Masimo Rainbow SET technology can be recommended for use in obstetrics and gynecology.
A Preliminary Rubric Design to Evaluate Mixed Methods Research
ERIC Educational Resources Information Center
Burrows, Timothy J.
2013-01-01
With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Microbial contamination of mobile phones in a health care setting in Alexandria, Egypt
Selim, Heba Sayed; Abaza, Amani Farouk
2015-01-01
Aim: This study aimed at investigating the microbial contamination of mobile phones in a hospital setting. Methods: Swab samples were collected from 40 mobile phones of patients and health care workers at the Alexandria University Students’ Hospital. They were tested for their bacterial contamination at the microbiology laboratory of the High Institute of Public Health. Quantification of bacteria was performed using both surface spread and pour plate methods. Isolated bacterial agents were identified using standard microbiological methods. Methicillin-resistant Staphylococcus aureus was identified by disk diffusion method described by Bauer and Kirby. Isolated Gram-negative bacilli were tested for being extended spectrum beta lactamase producers using the double disk diffusion method according to the Clinical and Laboratory Standards Institute recommendations. Results: All of the tested mobile phones (100%) were contaminated with either single or mixed bacterial agents. The most prevalent bacterial contaminants were methicillin-resistant S. aureus and coagulase-negative staphylococci representing 53% and 50%, respectively. The mean bacterial count was 357 CFU/ml, while the median was 13 CFU/ml using the pour plate method. The corresponding figures were 2,192 and 1,720 organisms/phone using the surface spread method. Conclusions: Mobile phones usage in hospital settings poses a risk of transmission of a variety of bacterial agents including multidrug-resistant pathogens as methicillin-resistant S. aureus. The surface spread method is an easy and useful tool for detection and estimation of bacterial contamination of mobile phones. PMID:25699226
Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander
2009-07-01
Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).
Spectral gene set enrichment (SGSE).
Frost, H Robert; Li, Zhigang; Moore, Jason H
2015-03-03
Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and sample PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
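The p-value combination step can be sketched as a weighted Stouffer (Z-method) calculation. How exactly SGSE scales the PC variance by the Tracy-Widom p-value is not spelled out here, so the weight formula below (variance multiplied by one minus the Tracy-Widom p-value) and all input numbers are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def weighted_z_combine(p_values, weights):
    """Stouffer's weighted Z-method: combine one-sided p-values with given weights."""
    z = norm.isf(np.asarray(p_values, float))          # p-value -> z-score
    w = np.asarray(weights, float)
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return norm.sf(z_comb)                             # combined p-value

# Hypothetical inputs: PC-level enrichment p-values for one gene set,
# variance explained by each PC, and Tracy-Widom p-values for the eigenvalues.
pc_pvals = [0.01, 0.20, 0.55, 0.90]
pc_var   = [0.40, 0.25, 0.10, 0.05]
tw_pvals = [1e-6, 1e-3, 0.20, 0.60]
weights  = [v * (1 - p) for v, p in zip(pc_var, tw_pvals)]   # assumed scaling

print(weighted_z_combine(pc_pvals, weights))
```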
Al-Ahmad, Ali; Zou, Peng; Solarte, Diana Lorena Guevara; Hellwig, Elmar; Steinberg, Thorsten; Lienkamp, Karen
2014-01-01
Bacterial infection of biomaterials is a major concern in medicine, and different kinds of antimicrobial biomaterial have been developed to deal with this problem. To test the antimicrobial performance of these biomaterials, the airborne bacterial assay is used, which involves the formation of biohazardous bacterial aerosols. We here describe a new experimental set-up which allows safe handling of such pathogenic aerosols, and standardizes critical parameters of this otherwise intractable and strongly user-dependent assay. With this new method, reproducible, thorough antimicrobial data (number of colony forming units and live-dead-stain) was obtained. Poly(oxonorbornene)-based Synthetic Mimics of Antimicrobial Peptides (SMAMPs) were used as antimicrobial test samples. The assay was able to differentiate even between subtle sample differences, such as different sample thicknesses. With this new set-up, the airborne bacterial assay was thus established as a useful, reliable, and realistic experimental method to simulate the contamination of biomaterials with bacteria, for example in an intraoperative setting.
Confidence Interval Coverage for Cohen's Effect Size Statistic
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2006-01-01
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
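A minimal sketch of the percentile (PERC) bootstrap interval for a standardized mean difference, one of the three methods compared above; the simulated data and settings are arbitrary, and this is not Kelley's code.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def percentile_bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample within groups, recompute d, take quantiles."""
    rng = np.random.default_rng(seed)
    boots = [cohens_d(rng.choice(x, len(x), replace=True),
                      rng.choice(y, len(y), replace=True)) for _ in range(n_boot)]
    return tuple(np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

x = np.random.default_rng(1).normal(0.5, 1, 40)   # hypothetical group scores
y = np.random.default_rng(2).normal(0.0, 1, 40)
print(cohens_d(x, y), percentile_bootstrap_ci(x, y))
```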
7 CFR 277.14 - Procurement standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... qualitative nature of the material, product or service desired and, when necessary, shall set forth those... technical resources. (g) Procurement methods. State agency procurements made in whole or in part with program funds shall be by one of the following methods: (1) Small purchase procedures are those relatively...
Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.
2013-01-01
We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87–78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now our results suggest comparable results are possible for detection of lampreys by using backpack electrofishing with salmonid- or lamprey-specific settings.
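The detection probabilities discussed above can be approximated with a simple closed form, assuming independent individuals and a constant per-pass capture probability. The baseline capture probability below is an assumed value, not an estimate from the study; the odds-ratio conversion merely applies the reported 2.66 to that assumed baseline.

```python
def apply_odds_ratio(p, odds_ratio):
    """Convert a baseline probability to a new probability via an odds ratio."""
    odds = p / (1.0 - p)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

def detection_probability(p_capture, n_individuals, n_passes):
    """P(capture at least one larva) with constant, independent per-pass capture
    probability; a simplification of the removal-sampling model in the text."""
    p_never_captured = (1.0 - p_capture) ** n_passes
    return 1.0 - p_never_captured ** n_individuals

p_std = 0.20                               # assumed per-pass capture prob., salmonid settings
p_lam = apply_odds_ratio(p_std, 2.66)      # lamprey-specific settings (odds ratio from text)

# detection probability with 10 larvae present and two electrofishing passes
print(detection_probability(p_std, 10, 2), detection_probability(p_lam, 10, 2))
```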
Evaluating Different Standard-Setting Methods in an ESL Placement Testing Context
ERIC Educational Resources Information Center
Shin, Sun-Young; Lidster, Ryan
2017-01-01
In language programs, it is crucial to place incoming students into appropriate levels to ensure that course curriculum and materials are well targeted to their learning needs. Deciding how and where to set cutscores on placement tests is thus of central importance to programs, but previous studies in educational measurement disagree as to which…
Advocating the Broad Use of the Decision Tree Method in Education
ERIC Educational Resources Information Center
Gomes, Cristiano Mauro Assis; Almeida, Leandro S.
2017-01-01
Predictive studies have been widely undertaken in the field of education to provide strategic information about the extensive set of processes related to teaching and learning, as well as about what variables predict certain educational outcomes, such as academic achievement or dropout. As in any other area, there is a set of standard techniques…
Automated feature detection and identification in digital point-ordered signals
Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.
1998-01-01
A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, verification of the features is made using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator feature judgement.
Niemeijer, Meindert; van Ginneken, Bram; Cree, Michael J; Mizutani, Atsushi; Quellec, Gwénolé; Sanchez, Clara I; Zhang, Bob; Hornero, Roberto; Lamard, Mathieu; Muramatsu, Chisako; Wu, Xiangqian; Cazuguel, Guy; You, Jane; Mayo, Agustín; Li, Qin; Hatanaka, Yuji; Cochener, Béatrice; Roux, Christian; Karray, Fakhri; Garcia, María; Fujita, Hiroshi; Abramoff, Michael D
2010-01-01
The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection numerous methods have been published in the past, but none of these has been compared with the others on the same data. In this work we present the results of the first international microaneurysm detection competition, organized in the context of the Retinopathy Online Challenge (ROC), a multiyear online competition for various aspects of DR detection. For this competition, we compare the results of five different methods, produced by five different teams of researchers on the same set of data. The evaluation was performed in a uniform manner using an algorithm presented in this work. The set of data used for the competition consisted of 50 training images with available reference standard and 50 test images where the reference standard was withheld by the organizers (M. Niemeijer, B. van Ginneken, and M. D. Abràmoff). The results obtained on the test data were submitted through a website, after which standardized evaluation software was used to determine the performance of each of the methods. A human expert detected microaneurysms in the test set to allow comparison with the performance of the automatic methods. The overall results show that microaneurysm detection is a challenging task for both the automatic methods as well as the human expert. There is room for improvement as the best performing system does not reach the performance of the human expert. The data associated with the ROC microaneurysm detection competition will remain publicly available and the website will continue accepting submissions.
Evaluation of extraction methods for ochratoxin A detection in cocoa beans employing HPLC.
Mishra, Rupesh K; Catanante, Gaëlle; Hayat, Akhtar; Marty, Jean-Louis
2016-01-01
Cocoa is an important ingredient for the chocolate industry and for many food products. However, it is prone to contamination by ochratoxin A (OTA), which is highly toxic and potentially carcinogenic to humans. In this work, four different extraction methods were tested and compared based on their recoveries. The best protocol established involves an organic solvent-free extraction method for the detection of OTA in cocoa beans using 1% sodium hydrogen carbonate (NaHCO3) in water within 30 min. The extraction method is rapid (compared with existing methods), simple, reliable and practical to perform without complex experimental set-ups. The cocoa samples were freshly extracted and cleaned up using an immunoaffinity column (IAC) for HPLC analysis with a fluorescence detector. Under the optimised conditions, the limit of detection (LOD) and limit of quantification (LOQ) for OTA were 0.62 and 1.25 ng ml(-1), respectively, in standard solutions. The method could successfully quantify OTA in naturally contaminated samples. Moreover, good recoveries of OTA of up to 86.5% were obtained in artificially spiked cocoa samples, with a maximum relative standard deviation (RSD) of 2.7%. The proposed extraction method could determine OTA at a level of 1.5 µg kg(-1), below the maximum limit of 2 µg kg(-1) set by the European Union for cocoa. In addition, an efficiency comparison of IAC and molecularly imprinted polymer (MIP) columns was also performed and evaluated.
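The abstract does not state how the LOD and LOQ were derived; a common ICH-style approach estimates them from the residual standard deviation and slope of a linear calibration, as sketched below with hypothetical OTA calibration data.

```python
import numpy as np

# Hypothetical OTA calibration: standard concentrations (ng/mL) vs. fluorescence response.
conc   = np.array([0.5, 1.0, 2.5, 5.0, 10.0])
signal = np.array([12.0, 25.1, 61.8, 124.5, 250.3])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)              # residual SD (two fitted parameters)

# ICH-style estimates (an assumption; the study may have used another convention).
print("LOD:", round(3.3 * sigma / slope, 3), "ng/mL")
print("LOQ:", round(10.0 * sigma / slope, 3), "ng/mL")
```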
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
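A minimal sketch of the log-probability regression idea described above: regress the logarithms of the uncensored concentrations on their normal scores, impute the censored fraction from the fitted line, and compute summary statistics from the completed sample. The plotting-position formula and the example data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def log_probability_regression(observed, n_censored):
    """Fill in values below the detection limit from a lognormal fit to the
    uncensored observations, then return the mean and standard deviation."""
    obs = np.sort(np.asarray(observed, float))
    n = len(obs) + n_censored
    # Blom-type plotting positions for all n ranks; censored values occupy the lowest ranks.
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    z = norm.ppf(pp)
    slope, intercept = np.polyfit(z[n_censored:], np.log(obs), 1)
    filled = np.exp(intercept + slope * z[:n_censored])   # imputed censored values
    full = np.concatenate([filled, obs])
    return full.mean(), full.std(ddof=1)

# hypothetical data: 12 detected concentrations, 8 samples below the detection limit
detected = [0.6, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 4.0, 5.2, 7.5]
print(log_probability_regression(detected, n_censored=8))
```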
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements of the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
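Of the correction methods listed above, regression calibration is the simplest to sketch: fit E[X | W] on a validation subset where both the modeled exposure W and a reference measurement X are available, then use the calibrated exposure in the health model. The simulated data, linear health model, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_val = 2000, 200
X = rng.normal(10, 2, n)             # "true" exposure (unobserved in the main study)
W = X + rng.normal(0, 1.5, n)        # modeled, error-prone exposure
Y = 0.3 * X + rng.normal(0, 1, n)    # continuous health outcome, true slope 0.3

# Step 1: calibration model E[X | W] fitted on a validation subset only.
idx = rng.choice(n, n_val, replace=False)
a, b = np.polyfit(W[idx], X[idx], 1)
X_hat = a * W + b

# Step 2: health model using the calibrated exposure instead of W.
beta_naive = np.polyfit(W, Y, 1)[0]
beta_rc    = np.polyfit(X_hat, Y, 1)[0]
print(f"naive: {beta_naive:.3f}  regression-calibrated: {beta_rc:.3f}  true: 0.300")
```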
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method to extend the existing approach to training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
Evaluating environmental survivability of optical coatings
NASA Astrophysics Data System (ADS)
Joseph, Shay; Yadlovker, Doron; Marcovitch, Orna; Zipin, Hedva
2009-05-01
In this paper we report ongoing research to correlate optical coating survivability with military (MIL) standards. For this purpose 8 different types of coatings were deposited on 1" substrates of sapphire, multi-spectral ZnS (MS-ZnS), germanium, silicon and BK7. All coatings underwent MIL standard evaluation as defined by customer specifications and passed successfully. Two other sets were left to age for 12 months at two different locations, one near central Tel-Aviv and one by the shoreline of the Mediterranean Sea. A third set was aged for 2000 hours in a special environmental chamber simulating conditions of temperature, humidity and ultra-violet (UV) radiation simultaneously. Measurements of optical transmission before and after aging from all 3 sets reveal, in some cases, major transmission loss indicating severe coating damage. The different aging methods and their relation to the MIL standards are discussed in detail. The most pronounced conclusion is that MIL standards alone are not sufficient for predicting the lifetime of an external coated optical element and are only useful for certifying the coating process and comparing coatings.
Fujinaga, Aiichiro; Uchiyama, Iwao; Morisawa, Shinsuke; Yoneda, Minoru; Sasamoto, Yuzuru
2012-01-01
In Japan, environmental standards for contaminants in groundwater and in leachate from soil are set with the assumption that they are used for drinking water over a human lifetime. Where there is neither a well nor groundwater used for drinking, the standard is thus too severe. Therefore, remediation based on these standards incurs excessive effort and cost. In contrast, the environmental-assessment procedure used in the United States and the Netherlands considers the site conditions (land use, existing wells, etc.); however, a risk assessment is required for each site. Therefore, this study proposes a new framework for judging contamination in Japan by considering the merits of the environmental standards used and a method for risk assessment. The framework involves setting risk-based concentrations that are attainable remediation goals for contaminants in soil and groundwater. The framework was then applied to a model contaminated site for risk management, and the results are discussed regarding the effectiveness and applicability of the new methodology. © 2011 Society for Risk Analysis.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D
2016-10-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method with a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations of initial data sets for two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We then display features of the conformal factor for different masses, spins and linear momenta.
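The Newton-Raphson step mentioned above can be sketched generically: each iteration solves a linearized system by LU decomposition. The toy two-equation system below is purely illustrative and is not related to the constraint equations solved by the relativity code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for a nonlinear system F(x) = 0, with each linear
    step solved via LU decomposition of the Jacobian J(x)."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        dx = lu_solve(lu_factor(J(x)), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# toy system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_raphson(F, J, [2.0, 0.3]))
```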
In 2008, the United States Environmental Protection Agency (USEPA) set a new National Ambient Air Quality Standard (NAAQS) for lead (Pb) in total suspended particulate matter (Pb-TSP) which called for significant decreases in the allowable limits. The Federal Reference Method (FR...
Statistical Process Control: Going to the Limit for Quality.
ERIC Educational Resources Information Center
Training, 1987
1987-01-01
Defines the concept of statistical process control, a quality control method used especially in manufacturing. Generally, concept users set specific standard levels that must be met. Makes the point that although employees work directly with the method, management is responsible for its success within the plant. (CH)
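As an illustration of the kind of standard levels such a method sets, the sketch below computes Shewhart-style three-sigma control limits for a small series of measurements; the article itself gives no formulas, so this is a generic example with made-up data.

```python
import numpy as np

measurements = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1])

center = measurements.mean()                         # center line
sigma = measurements.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma    # upper/lower control limits

out_of_control = (measurements > ucl) | (measurements < lcl)
print(f"CL={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}  any out of control: {out_of_control.any()}")
```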
An improved set of standards for finding cost for cost-effectiveness analysis.
Barnett, Paul G
2009-07-01
Guidelines have helped standardize methods of cost-effectiveness analysis, allowing different interventions to be compared and enhancing the generalizability of study findings. There is agreement that all relevant services be valued from the societal perspective using a long-term time horizon and that more exact methods be used to cost services most affected by the study intervention. Guidelines are not specific enough with respect to costing methods, however. The literature was reviewed to identify the problems associated with the 4 principal methods of cost determination. Microcosting requires direct measurement and is ordinarily reserved to cost novel interventions. Analysts should include nonwage labor cost, person-level and institutional overhead, and the cost of development, set-up activities, supplies, space, and screening. Activity-based cost systems have promise of finding accurate costs of all services provided, but are not widely adopted. Quality must be evaluated and the generalizability of cost estimates to other settings must be considered. Administrative cost estimates, chiefly cost-adjusted charges, are widely used, but the analyst must consider items excluded from the available system. Gross costing methods determine quantity of services used and employ a unit cost. If the intervention will affect the characteristics of a service, the method should not assume that the service is homogeneous. Questions are posed for future reviews of the quality of costing methods. The analyst must avoid inappropriate assumptions, especially those that bias the analysis by exclusion of costs that are affected by the intervention under study.
Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.
2013-01-01
Objective Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158
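The following is a minimal numpy sketch, not the authors' code, of the balance measures discussed above: a weighted standardized mean difference that can be applied either to an individual covariate or to a prognostic (disease-risk) score. All function and variable names are illustrative.

```python
# Minimal sketch (illustrative names): weighted standardized mean difference,
# usable for a single covariate or for a prognostic (disease-risk) score.
import numpy as np

def weighted_mean(x, w):
    return np.sum(w * x) / np.sum(w)

def std_mean_diff(x, treat, w=None):
    """x: covariate or prognostic score; treat: 1 = treated, 0 = control;
    w: optional propensity-score weights (e.g., weighting by the odds)."""
    if w is None:
        w = np.ones_like(x, dtype=float)
    t, c = treat == 1, treat == 0
    diff = weighted_mean(x[t], w[t]) - weighted_mean(x[c], w[c])
    pooled_sd = np.sqrt((np.var(x[t], ddof=1) + np.var(x[c], ddof=1)) / 2.0)
    return diff / pooled_sd

# Prognostic-score balance: fit an outcome model in the control group only,
# predict the score for everyone, then assess balance on the predictions, e.g.
#   prog = control_outcome_model.predict(X)
#   balance = std_mean_diff(prog, treat, weights)
```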
Selection of representative embankments based on rough set - fuzzy clustering method
NASA Astrophysics Data System (ADS)
Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song
2018-02-01
Selecting representative unit embankments is a prerequisite for the comprehensive evaluation of embankment safety. After the levee is divided into unit embankments, the influencing factors and the classification of the unit embankments are drafted. Based on rough set - fuzzy clustering, the influencing factors of each unit embankment are measured by quantitative and qualitative indexes. A fuzzy similarity matrix of the unit embankments is constructed, and its fuzzy equivalence matrix is calculated by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankments are selected from the resulting classes.
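Below is a minimal numpy sketch of the clustering machinery described above, assuming a fuzzy similarity matrix over the unit embankments has already been built from the normalized quantitative and qualitative indexes; the function names and the lambda-cut routine are illustrative, not the paper's code.

```python
# Fuzzy equivalence clustering sketch: square the similarity matrix (max-min
# composition) until it stops changing, then cluster with a lambda-cut.
import numpy as np

def max_min_compose(a, b):
    # [i, j] = max_k min(a[i, k], b[k, j])
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def fuzzy_equivalence(similarity, tol=1e-9):
    r = similarity
    while True:
        r2 = max_min_compose(r, r)
        if np.allclose(r2, r, atol=tol):
            return r2
        r = r2

def lambda_cut_clusters(r_eq, lam):
    # Units whose equivalence value >= lam fall into the same class.
    labels = -np.ones(r_eq.shape[0], dtype=int)
    cluster = 0
    for i in range(r_eq.shape[0]):
        if labels[i] == -1:
            labels[r_eq[i] >= lam] = cluster
            cluster += 1
    return labels
```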
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
Requirements and Techniques for Developing and Measuring Simulant Materials
NASA Technical Reports Server (NTRS)
Rickman, Doug; Owens, Charles; Howard, Rick
2006-01-01
The 1989 workshop report entitled Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage identified and reinforced a need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and provides a clear comparison of that characteristic for both the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
NASA Astrophysics Data System (ADS)
Fauzi, Ilham; Muharram Hasby, Fariz; Irianto, Dradjad
2018-03-01
Although the government is able to make mandatory standards that must be obeyed by industry, the respective industries themselves often have difficulty fulfilling the requirements described in those standards. This is especially true in many small and medium sized enterprises that lack the capital required to invest in standard-compliant equipment and machinery. This study aims to develop a set of measurement tools for evaluating the level of readiness of production technology with respect to the requirements of a product standard, based on the quality function deployment (QFD) method. By combining the QFD methodology, the UNESCAP Technometric model [9] and the Analytic Hierarchy Process (AHP), this model is used to measure a firm’s capability to fulfill a government standard in the toy-making industry. Expert opinions from both the governmental officers responsible for setting and implementing standards and the industry practitioners responsible for managing manufacturing processes are collected and processed to find the technological capabilities that should be improved by the firm to fulfill the existing standard. This study showed that the proposed model can be used successfully to measure the gap between the requirements of the standard and the readiness of the technoware technological component in a particular firm.
Spectral irradiance standard for the ultraviolet - The deuterium lamp
NASA Technical Reports Server (NTRS)
Saunders, R. D.; Ott, W. R.; Bridges, J. M.
1978-01-01
A set of deuterium lamps is calibrated as spectral irradiance standards in the 200-350-nm spectral region utilizing both a high accuracy tungsten spectral irradiance standard and a newly developed argon mini-arc spectral radiance standard. The method which enables a transfer from a spectral radiance to a spectral irradiance standard is described. The following characteristics of the deuterium lamp irradiance standard are determined: sensitivity to alignment; dependence on input power and solid angle; reproducibility; and stability. The absolute spectral radiance is also measured in the 167-330-nm region. Based upon these measurements, values of the spectral irradiance below 200 nm are obtained through extrapolation.
Standard setting: the crucial issues. A case study of accounting & auditing.
Nowakowski, J R
1982-01-01
A study of standard-setting efforts in accounting and auditing is reported. The study reveals four major areas of concern in a professional standard-setting effort: (1) issues related to the rationale for setting standards, (2) issues related to the standard-setting board and its support structure, (3) issues related to the content of standards and rules for generating them, and (4) issues that deal with how standards are put to use. Principles derived from the study of accounting and auditing are provided to illuminate and assess standard-setting efforts in evaluation.
Boosting standard order sets utilization through clinical decision support.
Li, Haomin; Zhang, Yinsheng; Cheng, Haixia; Lu, Xudong; Duan, Huilong
2013-01-01
Well-designed standard order sets have the potential to integrate and coordinate care by communicating best practices through multiple disciplines, levels of care, and services. However, several challenges affect the benefits expected from standard order sets. To boost standard order set utilization, a problem-oriented knowledge delivery solution was proposed in this study to facilitate access to standard order sets and evaluation of their treatment effects. In this solution, standard order sets were created along with diagnostic rule sets that can trigger a CDS-based reminder to help clinicians quickly discover hidden clinical problems and the corresponding standard order sets during ordering. Those rule sets also provide indicators for targeted evaluation of standard order sets during treatment. A prototype system was developed based on this solution and will be presented at Medinfo 2013.
ERIC Educational Resources Information Center
Trief, Ellen; Cascella, Paul W.; Bruce, Susan M.
2013-01-01
Introduction: The study reported in this article tracked the learning rate of 43 children with multiple disabilities and visual impairments who had limited to no verbal language across seven months of classroom-based intervention using a standardized set of tangible symbols. Methods: The participants were introduced to tangible symbols on a daily…
1997-02-13
Analytical method fragment: determination of p-aminopropiophenone (PAPP) in dog plasma, with guaifenesin (WR 000,302) as internal standard. Injection volume: 40-80 µl; run time: 14 min (PAPP: 10.7 min; guaifenesin internal standard: 8.5 min); detector wavelength: 316 nm.
ERIC Educational Resources Information Center
Eliason, Norma Lynn
2014-01-01
The effects of incorporating an online social networking platform, hosted through Wikispace, as a method to potentially improve the performance of middle school students on standardized math assessments were investigated in this study. A principal strategy for any educational setting may provide an instructional approach that improves the delivery of…
Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
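A hedged sketch of the modeling step, not the authors' implementation: train logistic regression and SVM classifiers on labeled access-log events with 10-fold cross-validation and report the area under the ROC curve. The feature matrix and labels below are random placeholders standing in for the 26 extracted features and the privacy officers' gold-standard labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def evaluate(X, y):
    """X: (n_events, n_features) access-log features; y: 1 = suspicious."""
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(kernel="rbf", probability=True),
    }
    for name, model in models.items():
        # Out-of-fold predicted probabilities from 10-fold cross-validation.
        probs = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
        print(name, "AUC:", round(roc_auc_score(y, probs), 3))

# Placeholder data: 1291 labeled events with 26 features, as in the study design.
rng = np.random.default_rng(0)
evaluate(rng.normal(size=(1291, 26)), rng.integers(0, 2, size=1291))
```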
38 CFR 21.218 - Methods of furnishing supplies.
Code of Federal Regulations, 2010 CFR
2010-07-01
... facility has designated a supplier. Prior authorization of supplies by the case manager is required, except for standard sets of books, tools, or supplies which the facility requires all trainees or employees...
Assessment of opacimeter calibration according to International Standard Organization 10155.
Gomes, J F
2001-01-01
This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements per each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve is to be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.
Assessing Lower Limb Alignment: Comparison of Standard Knee Xray vs Long Leg View.
Zampogna, Biagio; Vasta, Sebastiano; Amendola, Annunziato; Uribe-Echevarria Marbach, Bastian; Gao, Yubo; Papalia, Rocco; Denaro, Vincenzo
2015-01-01
High tibial osteotomy (HTO) is a well-established and commonly utilized technique in medial knee osteoarthritis secondary to varus malalignment. Accurate measurement of the preoperative limb alignment and the amount of correction required are essential when planning limb realignment surgery. The hip-knee-ankle angle (HKA) measured on a full length weightbearing (FLWB) X-ray in the standing position is considered the gold standard, since it allows for reliable and accurate measurement of the mechanical axis of the whole lower extremity. In general practice, alignment is often evaluated on standard anteroposterior weightbearing (APWB) X-rays, as the angle between the femoral and tibial anatomic axes (FTa). It is, therefore, of value to establish whether measuring the anatomical axis from limited APWB X-rays is an effective measure of knee alignment, especially in patients undergoing osteotomy about the knee. Three independent observers measured preoperative and postoperative FTa with the standard method (FTa1) and with the circles method (FTa2) on APWB X-rays, and the HKA on FLWB X-rays, at three different time-points separated by a two-week period. Intra-observer and inter-observer reliabilities and the comparison and relationship between anatomical and mechanical alignment were calculated. Intra- and interclass coefficients for all three methods indicated excellent reliability, with all values above 0.80. Using paired Student's t-tests, the comparison of HKA versus FTa1 and FTa2 showed a statistically significant difference (p<.0001) for both the pre-operative and post-operative sets of values. The correlation between the HKA and FTa1 was poor for the preoperative set (R=0.26) and fair for the postoperative one (R=0.53), while the new circles method showed a higher correlation in both the preoperative (R=0.71) and postoperative sets (R=0.79). Intra-observer reliability was high for HKA, FTa1 and FTa2 on APWB X-rays in the pre- and post-operative setting. Inter-rater reliability was higher for HKA and FTa2 compared to FTa1. The femoro-tibial angle as measured on APWB X-rays with the traditional method (FTa1) has a weak correlation with the HKA and, based on these findings, should not be used in everyday practice. The FTa2 showed better correlation with the HKA, although not excellent. Level III, Retrospective study.
Tastan, Sevinc; Linch, Graciele C. F.; Keenan, Gail M.; Stifter, Janet; McKinney, Dawn; Fahey, Linda; Dunn Lopez, Karen; Yao, Yingwei; Wilkie, Diana J.
2014-01-01
Objective To determine the state of the science for the five standardized nursing terminology sets in terms of level of evidence and study focus. Design Systematic Review. Data sources Keyword search of PubMed, CINAHL, and EMBASE databases from 1960s to March 19, 2012 revealed 1,257 publications. Review Methods From abstract review we removed duplicate articles, those not in English or with no identifiable standardized nursing terminology, and those with a low-level of evidence. From full text review of the remaining 312 articles, eight trained raters used a coding system to record standardized nursing terminology names, publication year, country, and study focus. Inter-rater reliability confirmed the level of evidence. We analyzed coded results. Results On average there were 4 studies per year between 1985 and 1995. The yearly number increased to 14 for the decade between 1996–2005, 21 between 2006–2010, and 25 in 2011. Investigators conducted the research in 27 countries. By evidence level for the 312 studies 72.4% were descriptive, 18.9% were observational, and 8.7% were intervention studies. Of the 312 reports, 72.1% focused on North American Nursing Diagnosis-International, Nursing Interventions Classification, Nursing Outcome Classification, or some combination of those three standardized nursing terminologies; 9.6% on Omaha System; 7.1% on International Classification for Nursing Practice; 1.6% on Clinical Care Classification/Home Health Care Classification; 1.6% on Perioperative Nursing Data Set; and 8.0% on two or more standardized nursing terminology sets. There were studies in all 10 foci categories including those focused on concept analysis/classification infrastructure (n = 43), the identification of the standardized nursing terminology concepts applicable to a health setting from registered nurses’ documentation (n = 54), mapping one terminology to another (n = 58), implementation of standardized nursing terminologies into electronic health records (n = 12), and secondary use of electronic health record data (n = 19). Conclusions Findings reveal that the number of standardized nursing terminology publications increased primarily since 2000 with most focusing on North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification. The majority of the studies were descriptive, qualitative, or correlational designs that provide a strong base for understanding the validity and reliability of the concepts underlying the standardized nursing terminologies. There is evidence supporting the successful integration and use in electronic health records for two standardized nursing terminology sets: (1) the North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification set; and (2) the Omaha System set. Researchers, however, should continue to strengthen standardized nursing terminology study designs to promote continuous improvement of the standardized nursing terminologies and use in clinical practice. PMID:24412062
Pattison, Kira M.; Brooks, Dina; Cameron, Jill I.
2015-01-01
Background The use of standardized assessment tools is an element of evidence-informed rehabilitation, but physical therapists report administering these tools inconsistently poststroke. An in-depth understanding of physical therapists' approaches to walking assessment is needed to develop strategies to advance assessment practice. Objectives The objective of this study was to explore the methods physical therapists use to evaluate walking poststroke, reasons for selecting these methods, and the use of assessment results in clinical practice. Design A qualitative descriptive study involving semistructured telephone interviews was conducted. Methods Registered physical therapists assessing a minimum of 10 people with stroke per year in Ontario, Canada, were purposively recruited from acute care, rehabilitation, and outpatient settings. Interviews were audiotaped and transcribed verbatim. Transcripts were coded line by line by the interviewer. Credibility was optimized through triangulation of analysts, audit trail, and collection of field notes. Results Study participants worked in acute care (n=8), rehabilitation (n=11), or outpatient (n=9) settings and reported using movement observation and standardized assessment tools to evaluate walking. When selecting methods to evaluate walking, physical therapists described being influenced by a hierarchy of factors. Factors included characteristics of the assessment tool, the therapist, the workplace, and patients, as well as influential individuals or organizations. Familiarity exerted the primary influence on adoption of a tool into a therapist's assessment repertoire, whereas patient factors commonly determined daily use. Participants reported using the results from walking assessments to communicate progress to the patient and health care professionals. Conclusions Multilevel factors influence physical therapists' adoption and daily administration of standardized tools to assess walking. Findings will inform knowledge translation efforts aimed at increasing the standardized assessment of walking poststroke. PMID:25929532
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and select the optimal thresholds for a greater number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series were used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observation were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD, and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise by different interpretations of basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and methods (5) and (6), which generally performed worst, are discussed in the paper.
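A minimal sketch of method (7) above, assuming daily minimum/maximum temperatures and an observed phenology day index are available for each season; the GDD formulation and helper names are illustrative.

```python
import numpy as np

def daily_gdd(tmin, tmax, base):
    # One common formulation: daily mean temperature minus the base, floored at 0.
    return np.maximum((tmin + tmax) / 2.0 - base, 0.0)

def rmse_for_base(base, seasons):
    """seasons: list of (tmin_series, tmax_series, observed_day_index).
    Predict each year's date as the day cumulative GDD reaches the mean
    requirement, then score the base temperature by RMSE (method 7)."""
    reqs = [np.cumsum(daily_gdd(tmin, tmax, base))[obs] for tmin, tmax, obs in seasons]
    mean_req = np.mean(reqs)
    errs = []
    for tmin, tmax, obs in seasons:
        cum = np.cumsum(daily_gdd(tmin, tmax, base))
        errs.append(int(np.argmax(cum >= mean_req)) - obs)
    return float(np.sqrt(np.mean(np.square(errs))))

# Choose the threshold that minimizes RMSE over a candidate grid, e.g.:
# best_base = min(np.arange(-6.0, 8.0, 0.1), key=lambda b: rmse_for_base(b, seasons))
```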
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
Drake-Lee, A B; Skinner, D; Hawthorne, M; Clarke, R
2009-10-01
'High stakes' postgraduate medical examinations should conform to current educational standards. In the UK and Ireland, national assessments in surgery are devised and managed through the examination structure of the Royal Colleges of Surgeons. Their efforts are not reported in the medical education literature. In the current paper, we aim to clarify this process. The aims were to replace the clinical section of the Diploma of Otorhinolaryngology with an Objective Structured Clinical Examination, and to set the level of the assessment at one year of postgraduate training in the specialty. After 'blueprinting' against the whole curriculum, an Objective Structured Clinical Examination comprising 25 stations was divided into six clinical stations and 19 other stations exploring written case histories, instruments, test results, written communication skills and interpretation skills. The pass mark was set using a modified borderline method and other methods, and statistical analysis of the results was performed. The results of nine examinations between May 2004 and May 2008 are presented. The pass mark varied between 68 and 82 per cent. Internal consistency was good, with a Cronbach's alpha value of 0.99 for all examinations and split-half statistics varying from 0.96 to 0.99. Different standard-setting methods gave similar pass marks. We have developed a summative Objective Structured Clinical Examination for doctors training in otorhinolaryngology, reported herein. The objectives and standards of setting a high-quality assessment were met.
A Multi-Center Space Data System Prototype Based on CCSDS Standards
NASA Technical Reports Server (NTRS)
Rich, Thomas M.
2016-01-01
Deep space missions beyond Earth orbit will require new methods of data communications in order to compensate for increasing RF propagation delay. The Consultative Committee for Space Data Systems (CCSDS) standard protocols Spacecraft Monitor & Control (SM&C), Asynchronous Message Service (AMS), and Delay/Disruption Tolerant Networking (DTN) provide such a method. The maturity level of this protocol set is, however, insufficient for mission inclusion at this time. This prototype is intended to provide experience which will raise the Technology Readiness Level (TRL) of these protocols.
Data-optimized source modeling with the Backwards Liouville Test–Kinetic method
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...
2017-09-14
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution were used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source
NASA Astrophysics Data System (ADS)
Spina, G.; Lantieri, M.
2014-01-01
In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless fs factor of the source at the position of the sample is needed. A novel method to evaluate fs for a 57Co source is proposed. The method uses the standard transmission experimental set-up and requires no measurements other than those that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. Firstly, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metal iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same life-time, with the maximum velocity of the transducer set at the same value as for the MCS spectrum. Secondly, fs is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is presented, too.
Adamiak, Paul; Vanderkooi, Otto G; Kellner, James D; Schryvers, Anthony B; Bettinger, Julie A; Alcantara, Joenel
2014-06-03
Multi-locus sequence typing (MLST) is a portable, broadly applicable method for classifying bacterial isolates at an intra-species level. This methodology provides clinical and scientific investigators with a standardized means of monitoring evolution within bacterial populations. MLST uses the DNA sequences from a set of genes such that each unique combination of sequences defines an isolate's sequence type. In order to reliably determine the sequence of a typing gene, matching sequence reads for both strands of the gene must be obtained. This study assesses the ability of both the standard, and an alternative set of, Streptococcus pneumoniae MLST primers to completely sequence, in both directions, the required typing alleles. The results demonstrated that for five (aroE, recP, spi, xpt, ddl) of the seven S. pneumoniae typing alleles, the standard primers were unable to obtain the complete forward and reverse sequences. This is due to the standard primers annealing too closely to the target regions, and current sequencing technology failing to sequence the bases that are too close to the primer. The alternative primer set described here, which includes a combination of primers proposed by the CDC and several designed as part of this study, addresses this limitation by annealing to highly conserved segments further from the target region. This primer set was subsequently employed to sequence type 105 S. pneumoniae isolates collected by the Canadian Immunization Monitoring Program ACTive (IMPACT) over a period of 18 years. The inability of several of the standard S. pneumoniae MLST primers to fully sequence the required region was consistently observed and is the result of a shift in sequencing technology occurring after the original primers were designed. The results presented here introduce clear documentation describing this phenomenon into the literature, and provide additional guidance, through the introduction of a widely validated set of alternative primers, to research groups seeking to undertake S. pneumoniae MLST based studies.
Voxel classification based airway tree segmentation
NASA Astrophysics Data System (ADS)
Lo, Pechin; de Bruijne, Marleen
2008-03-01
This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
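A hedged scikit-learn sketch of the classification step, not the authors' pipeline: select a subset of local appearance features and classify voxels with a KNN model trained on labels from the imperfect region-growing segmentations. Feature computation is assumed to have happened elsewhere, and all names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector

def airway_probability(X_train, y_train, X_new, k=15, n_features=10):
    """X_train: (n_voxels, n_features) multi-scale appearance features;
    y_train: 1 = airway, 0 = background (from rough region-growing results);
    X_new: features of the voxels to classify."""
    knn = KNeighborsClassifier(n_neighbors=k)
    # Greedy selection of an informative feature subset from the larger pool.
    selector = SequentialFeatureSelector(knn, n_features_to_select=n_features)
    selector.fit(X_train, y_train)
    knn.fit(selector.transform(X_train), y_train)
    # Posterior probability of the airway class for each new voxel.
    return knn.predict_proba(selector.transform(X_new))[:, 1]
```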
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavromatis, K; Ivanova, N; Barry, Kerrie
2007-01-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo to generate parameter sets which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
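A rough Metropolis-style sketch of the sampling idea described above, not NREL's implementation: propose sets of mass, drag area, and rolling-resistance coefficient, compare the modeled road load against the measured load, and accept or reject using the probability ratio. The simplified road-load equation, noise level, and step sizes are assumptions.

```python
import numpy as np

RHO, G = 1.2, 9.81  # assumed air density (kg/m^3) and gravity (m/s^2)

def road_load(params, v, a):
    m, cda, crr = params  # mass, drag area Cd*A, rolling-resistance coefficient
    return m * a + 0.5 * RHO * cda * v**2 + crr * m * G

def log_prob(params, v, a, f_meas, sigma=200.0):
    if np.any(np.asarray(params) <= 0):
        return -np.inf  # reject non-physical parameters
    resid = f_meas - road_load(params, v, a)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(v, a, f_meas, start, step, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    chain = [np.asarray(start, dtype=float)]
    current = log_prob(chain[-1], v, a, f_meas)
    for _ in range(n_iter):
        proposal = chain[-1] + rng.normal(0.0, step)
        lp = log_prob(proposal, v, a, f_meas)
        # Accept with probability equal to the (posterior) probability ratio.
        if np.log(rng.uniform()) < lp - current:
            chain.append(proposal)
            current = lp
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)  # samples approximate the parameter distribution
```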
16 CFR 309.10 - Alternative vehicle fuel rating.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Analysis of Natural Gas by Gas Chromatography.” For the purposes of this section, fuel ratings for the... methods set forth in ASTM D 1946-90, “Standard Practice for Analysis of Reformed Gas by Gas Chromatography... the principal component of compressed natural gas are to be determined in accordance with test methods...
16 CFR 309.10 - Alternative vehicle fuel rating.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Analysis of Natural Gas by Gas Chromatography.” For the purposes of this section, fuel ratings for the... methods set forth in ASTM D 1946-90, “Standard Practice for Analysis of Reformed Gas by Gas Chromatography... the principal component of compressed natural gas are to be determined in accordance with test methods...
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
ERIC Educational Resources Information Center
Maclennan, Ian
1977-01-01
Suggests that there exists a "finite" number of elementary concepts and distinguishable modes of thinking, that all human beings tend to acquire the same set of elements of thinking and the same strategies with which to understand and control their physical environment, and that the method of analysis used here is a standard scientific method.…
Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method
ERIC Educational Resources Information Center
Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev
2018-01-01
The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
A SAS Interface for Bayesian Analysis with WinBUGS
ERIC Educational Resources Information Center
Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki
2008-01-01
Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…
A Comparison of the Effectiveness of Two Design Methodologies in a Secondary School Setting.
ERIC Educational Resources Information Center
Cannizzaro, Brenton; Boughton, Doug
1998-01-01
Examines the effectiveness of the analysis-synthesis and generator-conjecture-analysis models of design education. Concludes that the generator-conjecture-analysis design method produced student design products of a slightly higher standard than the analysis-synthesis design method. Discusses the findings in more detail and considers implications.…
ERIC Educational Resources Information Center
Garrido, Mariquita; Payne, David A.
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data (n=20), and without data available (n=19); and the Modified Angoff method with (n=19), and without (n=19) item data available to judges. The Angoff method required free response percentage estimates (0-100) percent,…
Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) User's Guide
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
This report is a User's Guide for the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES). ProDiMES is a standard benchmarking problem and a set of evaluation metrics to enable the comparison of candidate aircraft engine gas path diagnostic methods. This Matlab (The Mathworks, Inc.)-based software tool enables users to independently develop and evaluate diagnostic methods. Additionally, a set of blind test case data is also distributed as part of the software. This will enable the side-by-side comparison of diagnostic approaches developed by multiple users. The User's Guide describes the various components of ProDiMES, and provides instructions for the installation and operation of the tool.
Hu, Qiyue; Peng, Zhengwei; Kostrowicki, Jaroslav; Kuki, Atsuo
2011-01-01
Pfizer Global Virtual Library (PGVL) of 10^13 readily synthesizable molecules offers a tremendous opportunity for lead optimization and scaffold hopping in drug discovery projects. However, mining into a chemical space of this size presents a challenge for the concomitant design informatics due to the fact that standard molecular similarity searches against a collection of explicit molecules cannot be utilized, since no chemical information system could create and manage more than 10^8 explicit molecules. Nevertheless, by accepting a tolerable level of false negatives in search results, we were able to bypass the need for full 10^13 enumeration and enabled the efficient similarity search and retrieval into this huge chemical space for practical usage by medicinal chemists. In this report, two search methods (LEAP1 and LEAP2) are presented. The first method uses PGVL reaction knowledge to disassemble the incoming search query molecule into a set of reactants and then uses reactant-level similarities into actual available starting materials to focus on a much smaller sub-region of the full virtual library compound space. This sub-region is then explicitly enumerated and searched via a standard similarity method using the original query molecule. The second method uses a fuzzy mapping onto candidate reactions and does not require exact disassembly of the incoming query molecule. Instead Basis Products (or capped reactants) are mapped into the query molecule and the resultant asymmetric similarity scores are used to prioritize the corresponding reactions and reactant sets. All sets of Basis Products are inherently indexed to specific reactions and specific starting materials. This again allows focusing on a much smaller sub-region for explicit enumeration and subsequent standard product-level similarity search. A set of validation studies were conducted. The results have shown that the level of false negatives for the disassembly-based method is acceptable when the query molecule can be recognized for exact disassembly, and the fuzzy reaction mapping method based on Basis Products has an even better performance in terms of lower false-negative rate because it is not limited by the requirement that the query molecule needs to be recognized by any disassembly algorithm. Both search methods have been implemented and accessed through a powerful desktop molecular design tool (see ref. (33) for details). The chapter will end with a comparison of published search methods against large virtual chemical space.
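To make the reactant-level similarity step concrete, here is an illustrative RDKit sketch, not PGVL code: rank available starting materials by Tanimoto similarity of Morgan fingerprints to a fragment of the query molecule, so that only the most similar reactants feed the explicit enumeration and product-level search. All names are assumptions, and handling of invalid SMILES is omitted.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def fingerprint(smiles):
    # Morgan (circular) fingerprint; invalid SMILES are not handled here.
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def top_reactants(query_fragment_smiles, reactant_smiles, n=100):
    """Rank available reactants by similarity to a fragment of the query molecule."""
    query_fp = fingerprint(query_fragment_smiles)
    scored = [(DataStructs.TanimotoSimilarity(query_fp, fingerprint(s)), s)
              for s in reactant_smiles]
    return sorted(scored, reverse=True)[:n]
```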
32 CFR 719.138 - Fees of civilian witnesses.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Method of Payment. The fees and mileage of a civilian witness shall be paid by the disbursing officer of... whose testimony is determined not to meet the standards of relevancy and materiality set forth in...
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
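A minimal numpy sketch of the slice-wise calibration idea, with hypothetical array names rather than the published implementation: for each axial slice, fit a least-squares line mapping CBCT voxel values to the deformably registered planning-CT values, then apply that per-slice linear map to the CBCT slice.

```python
import numpy as np

def patient_specific_calibration(cbct, registered_ct):
    """cbct, registered_ct: 3D arrays (slices, rows, cols) aligned by DIR."""
    calibrated = np.empty_like(cbct, dtype=float)
    for k in range(cbct.shape[0]):
        x = cbct[k].ravel().astype(float)
        y = registered_ct[k].ravel().astype(float)
        slope, intercept = np.polyfit(x, y, 1)  # per-slice least-squares line
        calibrated[k] = slope * cbct[k] + intercept
    return calibrated
```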
ERIC Educational Resources Information Center
Goh, Shaun K. Y.; Tham, Elaine K. H.; Magiati, Iliana; Sim, Litwee; Sanmugam, Shamini; Qiu, Anqi; Daniel, Mary L.; Broekman, Birit F. P.; Rifkin-Graboi, Anne
2017-01-01
Purpose: The purpose of this study was to improve standardized language assessments among bilingual toddlers by investigating and removing the effects of bias due to unfamiliarity with cultural norms or a distributed language system. Method: The Expressive and Receptive Bayley-III language scales were adapted for use in a multilingual country…
Improving Pharmacy Student Communication Outcomes Using Standardized Patients.
Gillette, Chris; Rudolph, Michael; Rockich-Winston, Nicole; Stanton, Robert; Anderson, H Glenn
2017-08-01
Objective. To examine whether standardized patient encounters led to an improvement in a student pharmacist-patient communication assessment compared to traditional active-learning activities within a classroom setting. Methods. A quasi-experimental study was conducted with second-year pharmacy students in a drug information and communication skills course. Student patient communication skills were assessed using high-stakes communication assessment. Results. Two hundred and twenty students' data were included. Students were significantly more likely to have higher scores on the communication assessment when they had higher undergraduate GPAs, were female, and taught using standardized patients. Similarly, students were significantly more likely to pass the assessment on the first attempt when they were female and when they were taught using standardized patients. Conclusion. Incorporating standardized patients within a communication course resulted in improved scores as well as first-time pass rates on a communication assessment than when using different methods of active learning.
Core Outcome Set-STAndards for Development: The COS-STAD recommendations.
Kirkham, Jamie J; Davis, Katherine; Altman, Douglas G; Blazeby, Jane M; Clarke, Mike; Tunis, Sean; Williamson, Paula R
2017-11-01
The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]). An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1-9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD contains 11 minimum standards that are the minimum design recommendations for all COS development projects. The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process. The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using reasonable methods.
Setting Emissions Standards Based on Technology Performance
In setting national emissions standards, EPA sets emissions performance levels rather than mandating the use of a particular technology. The law requires that EPA use numerical performance standards whenever feasible.
Cortesi, Marilisa; Bandiera, Lucia; Pasini, Alice; Bevilacqua, Alessandro; Gherardi, Alessandro; Furini, Simone; Giordano, Emanuele
2017-01-01
Quantifying gene expression at single cell level is fundamental for the complete characterization of synthetic gene circuits, due to the significant impact of noise and inter-cellular variability on the system's functionality. Commercial set-ups that allow the acquisition of fluorescent signal at single cell level (flow cytometers or quantitative microscopes) are expensive apparatuses that are hardly affordable by small laboratories. A protocol that makes a standard optical microscope able to acquire quantitative, single-cell fluorescence data from a bacterial population transformed with synthetic gene circuitry is presented. Single cell fluorescence values, acquired with a microscope set-up and processed with custom-made software, are compared with results that were obtained with a flow cytometer in a bacterial population transformed with the same gene circuitry. The high correlation between data from the two experimental set-ups, with a correlation coefficient computed over the tested dynamic range > 0.99, proves that a standard optical microscope, when coupled with appropriate software for image processing, might be used for quantitative single-cell fluorescence measurements. The calibration of the set-up, together with its validation, is described. The experimental protocol described in this paper makes quantitative measurement of single cell fluorescence accessible to laboratories equipped with standard optical microscope set-ups. Our method allows for an affordable measurement/quantification of intercellular variability, and a better understanding of this phenomenon will improve our comprehension of cellular behaviors and the design of synthetic gene circuits. All the required software is freely available to the synthetic biology community (MUSIQ: Microscope flUorescence SIngle cell Quantification).
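An illustrative sketch with assumed variable names of the kind of cross-platform comparison reported above: because cells are not matched one-to-one between instruments, the sketch correlates matched quantiles of the two single-cell fluorescence distributions, which may differ from the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def compare_setups(microscope_values, cytometer_values, n_quantiles=100):
    """Both inputs: 1D arrays of per-cell fluorescence from the two set-ups."""
    q = np.linspace(0.01, 0.99, n_quantiles)
    m_q = np.quantile(microscope_values, q)
    c_q = np.quantile(cytometer_values, q)
    r, p = stats.pearsonr(m_q, c_q)  # correlation over the tested dynamic range
    return r, p
```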
Method of calculation overall equipment effectiveness in fertilizer factory
NASA Astrophysics Data System (ADS)
Siregar, I.; Muchtar, M. A.; Rahmat, R. F.; Andayani, U.; Nasution, T. H.; Sari, R. M.
2018-02-01
This research was conducted at a fertilizer company in Sumatra that produces fertilizer in large quantities to meet consumer demand. The company cannot avoid issues related to the performance and effectiveness of its machinery and equipment: because the machines run every day without a break, not all products meet the quality standards set by the company. Therefore, to measure and improve the performance of the machines in the Urea-1 Plant unit as a whole, the Overall Equipment Effectiveness (OEE) method was used; OEE is an important element of Total Productive Maintenance (TPM) for measuring the effectiveness of machines so that actions can be taken to maintain that level. In July, August and September, OEE values were above the standard set at 85%. Meanwhile, in October, November and December, the OEE values did not reach the standard. The low OEE values were due to a lack of machine availability for production, caused by machine shutdowns long enough to reduce the available production time.
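A minimal sketch of the standard OEE calculation (availability × performance × quality) used above; the example figures are invented for illustration only.

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    run_time = planned_time - downtime
    availability = run_time / planned_time                     # uptime share
    performance = ideal_cycle_time * total_count / run_time    # speed vs ideal
    quality = good_count / total_count                         # in-spec share
    return availability * performance * quality

# Hypothetical month: 720 h planned, 80 h downtime, 0.02 h per tonne ideal,
# 30,000 t produced, 29,000 t within specification -> OEE below the 85% target.
print(round(oee(720, 80, 0.02, 30000, 29000), 3))  # ~0.806
```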
Keikha, Leila; Farajollah, Seyede Sedigheh Seied; Safdari, Reza; Ghazisaeedi, Marjan; Mohammadzadeh, Niloofar
2018-01-01
Background In developing countries such as Iran, international standards offer good sources to survey and use for appropriate planning in the domain of electronic health records (EHRs). Therefore, in this study, HL7 and ASTM standards were considered as the main sources from which to extract EHR data. Objective The objective of this study was to propose a hospital data set for a national EHR consisting of data classes and data elements by adjusting data sets extracted from the standards and paper-based records. Method This comparative study was carried out in 2017 by studying the contents of the paper-based records approved by the health ministry in Iran and the international ASTM and HL7 standards in order to extract a minimum hospital data set for a national EHR. Results As a result of studying the standards and paper-based records, a total of 526 data elements in 174 classes were extracted. An examination of the data indicated that the highest number of extracted data came from the free text elements, both in the paper-based records and in the standards related to the administrative data. The major sources of data extracted from ASTM and HL7 were the E1384 and Hl7V.x standards, respectively. In the paper-based records, data were extracted from 19 forms sporadically. Discussion By declaring the confidentiality of information, the ASTM standards acknowledge the issue of confidentiality of information as one of the main challenges of EHR development, and propose new types of admission, such as teleconference, tele-video, and home visit, which are inevitable with the advent of new technology for providing healthcare and treating diseases. Data related to finance and insurance, which were scattered in different categories by three organizations, emerged as the financial category. Documenting the role and responsibility of the provider by adding the authenticator/signature data element was deemed essential. Conclusion Not only using well-defined and standardized data, but also adapting EHR systems to the local facilities and the existing social and cultural conditions, will facilitate the development of structured data sets. PMID:29618962
Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates
NASA Astrophysics Data System (ADS)
Rodionov, Andrey
An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
Progress in the development of paper-based diagnostics for low-resource point-of-care settings
Byrnes, Samantha; Thiessen, Gregory; Fu, Elain
2014-01-01
This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361
Setting the standard, implementation and auditing within haemodialysis.
Jones, J
1997-01-01
With an ever increasing awareness of the need to deliver a quality of care that is measurable in Nursing, the concept of Standards provides an ideal tool (1). Standards operate outside the boundaries of policies and procedures to provide an audit tool of authenticity and flexibility. Within our five Renal Units, while we felt confident that we were delivering an excellent standard of care to our patients and continually trying to improve upon it, what we really needed was a method of measuring this current level of care and highlighting key areas where we could offer improvement.
Hydrogen Field Test Standard: Laboratory and Field Performance
Pope, Jodie G.; Wright, John D.
2015-01-01
The National Institute of Standards and Technology (NIST) developed a prototype field test standard (FTS) that incorporates three test methods that could be used by state weights and measures inspectors to periodically verify the accuracy of retail hydrogen dispensers, much as gasoline dispensers are tested today. The three field test methods are: 1) gravimetric, 2) Pressure, Volume, Temperature (PVT), and 3) master meter. The FTS was tested in NIST's Transient Flow Facility with helium gas and in the field at a hydrogen dispenser location. All three methods agree within 0.57 % and 1.53 % for all test drafts of helium gas in the laboratory setting and of hydrogen gas in the field, respectively. The time required to perform six test drafts is similar for all three methods, ranging from 6 h for the gravimetric and master meter methods to 8 h for the PVT method. The laboratory tests show that 1) it is critical to wait for thermal equilibrium to achieve density measurements in the FTS that meet the desired uncertainty requirements for the PVT and master meter methods; in general, we found a wait time of 20 minutes introduces errors < 0.1 % and < 0.04 % in the PVT and master meter methods, respectively, and 2) buoyancy corrections are important for the lowest uncertainty gravimetric measurements. The field tests show that sensor drift can become the largest component of uncertainty, one not present in the laboratory setting. The scale was calibrated after it was set up at the field location. Checks of the calibration throughout testing showed drift of 0.031 %. Calibration of the master meter and the pressure sensors prior to travel to the field location and upon return showed significant drifts in their calibrations: 0.14 % and up to 1.7 %, respectively. This highlights the need for better sensor selection and/or more robust sensor testing prior to putting the FTS into field service. All three test methods are capable of being successfully performed in the field and give equivalent answers if proper sensors without drift are used. PMID:26722192
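As a hedged illustration of the PVT idea (not NIST's documented procedure), dispensed mass can be estimated from the change in real-gas density in a vessel of known volume; the compressibility factors and all numbers below are placeholders.

```python
# Sketch of a PVT-style mass determination: m = rho(P, T) * V with a real-gas density.
# Z values and all numeric inputs are illustrative placeholders, not NIST data.

R = 8.314462618  # J/(mol K)
M_H2 = 2.016e-3  # kg/mol

def density(p_pa, t_k, z):
    """Real-gas density from pressure, temperature and compressibility factor Z."""
    return p_pa * M_H2 / (z * R * t_k)

def dispensed_mass(volume_m3, p1, t1, z1, p2, t2, z2):
    """Mass received by a collection vessel between states 1 (before) and 2 (after)."""
    return volume_m3 * (density(p2, t2, z2) - density(p1, t1, z1))

m = dispensed_mass(volume_m3=0.05,
                   p1=1.0e6, t1=295.0, z1=1.006,
                   p2=35.0e6, t2=300.0, z2=1.22)
print(f"estimated dispensed mass: {m:.3f} kg")
```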
Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.
Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark
2016-03-16
The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
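To make the role of the training set concrete, the sketch below shows a generic principal component regression on synthetic voltammograms: the training data fix the current-concentration relationship, which is then applied to an unknown scan. It is not the authors' in vivo analysis pipeline.

```python
# Generic principal component regression (PCR) sketch with synthetic voltammograms.
# Illustrates why the training set fixes the current-concentration relationship;
# it is not the authors' in vivo analysis code.
import numpy as np

rng = np.random.default_rng(0)
n_points = 100                                # samples across the scan-potential window
template = np.exp(-0.5 * ((np.arange(n_points) - 40) / 6.0) ** 2)  # toy CV shape

# Training set: voltammograms at known concentrations (+ noise).
conc_train = np.array([0.25, 0.5, 1.0, 2.0, 4.0])          # e.g. uM, illustrative
X_train = np.outer(conc_train, template) + 0.02 * rng.standard_normal((5, n_points))

# PCR: project onto the leading principal components, then regress concentration.
Xc = X_train - X_train.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                          # retained components
scores = Xc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(np.column_stack([scores, np.ones(len(scores))]),
                           conc_train, rcond=None)

def predict(cv):
    sc = (cv - X_train.mean(axis=0)) @ Vt[:k].T
    return sc @ coef[:k] + coef[k]

unknown = 1.5 * template + 0.02 * rng.standard_normal(n_points)
print(f"predicted concentration: {predict(unknown):.2f} (true 1.50)")
```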
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Ye, Bixiong; E, Xueli; Zhang, Lan
2015-01-01
To optimize non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water. Several methods including drinking water quality exceed the standard, the risk of exceeding standard, the frequency of detecting concentrations below the detection limit, water quality comprehensive index evaluation method, and attribute reduction algorithm of rough set theory were applied, redundancy factor of water quality indicators were eliminated, control factors that play a leading role in drinking water safety were found. Optimization results showed in 62 unconventional water quality monitoring indicators of urban drinking water, 42 water quality indicators could be optimized reduction by comprehensively evaluation combined with attribute reduction of rough set. Optimization of the water quality monitoring indicators and reduction of monitoring indicators and monitoring frequency could ensure the safety of drinking water quality while lowering monitoring costs and reducing monitoring pressure of the sanitation supervision departments.
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-05-01
The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school we established the pass/fail cut-off scores by the abovementioned three methods. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, robust empirical basis and, no less importantly, is simple to use.
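For reference, the Regression (borderline regression) Method mentioned above is commonly computed by regressing station checklist scores on examiners' global grades and reading off the predicted score at the borderline grade; a minimal sketch with invented data follows. The OBM itself is not reproduced here.

```python
# Borderline regression sketch: cut score = predicted checklist score at the
# "borderline" global grade. Grades and scores below are invented examples.
import numpy as np

# Global grades coded numerically: 0=fail, 1=borderline, 2=pass, 3=good, 4=excellent
grades = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 4])
scores = np.array([8, 11, 12, 14, 15, 16, 18, 19, 22, 23], dtype=float)  # out of 25

slope, intercept = np.polyfit(grades, scores, deg=1)
BORDERLINE = 1
cut_score = slope * BORDERLINE + intercept
print(f"cut score = {cut_score:.2f} / 25")
```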
New Primary Standards for Establishing SI Traceability for Moisture Measurements in Solid Materials
NASA Astrophysics Data System (ADS)
Heinonen, M.; Bell, S.; Choi, B. Il; Cortellessa, G.; Fernicola, V.; Georgin, E.; Hudoklin, D.; Ionescu, G. V.; Ismail, N.; Keawprasert, T.; Krasheninina, M.; Aro, R.; Nielsen, J.; Oğuz Aytekin, S.; Österberg, P.; Skabar, J.; Strnad, R.
2018-01-01
A European research project METefnet addresses a fundamental obstacle to improving energy-intensive drying process control: due to ambiguous reference analysis methods and insufficient methods for estimating uncertainty in moisture measurements, the achievable accuracy in the past was limited and measurement uncertainties were largely unknown. This paper reports the developments in METefnet that provide a sound basis for the SI traceability: four new primary standards for realizing the water mass fraction were set up, analyzed and compared to each other. The operation of these standards is based on combining sample weighing with different water vapor detection techniques: cold trap, chilled mirror, electrolytic and coulometric Karl Fischer titration. The results show that an equivalence of 0.2 % has been achieved between the water mass fraction realizations and that the developed methods are applicable to a wide range of materials.
Melanins and melanogenesis: methods, standards, protocols.
d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke
2013-09-01
Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Workshop on standards in biomass for energy and chemicals: proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milne, T.A.
1984-11-01
In the course of reviewing standards literature, visiting prominent laboratories and research groups, attending biomass meetings and corresponding widely, a whole set of standards needs was identified, the most prominent of which were: biomass standard reference materials, research materials and sample banks; special collections of microorganisms, clonal material, algae, etc.; standard methods of characterization of substrates and biomass fuels; standard tests and methods for the conversion and end-use of biomass; standard protocols for the description, harvesting, preparation, storage, and measurement of productivity of biomass materials in the energy context; glossaries of terms; development of special tests for assay of enzymatic activity and related processes. There was also a recognition of the need for government, professional and industry support of consensus standards development and the dissemination of information on standards. Some 45 biomass researchers and managers met with key NBS staff to identify and prioritize standards needs. This was done through three working panels: the Panel on Standard Reference Materials (SRM's), Research Materials (RM's), and Sample Banks; the Panel on Production and Characterization; and the Panel on Tests and Methods for Conversion and End Use. This report gives a summary of the action items in standards development recommended unanimously by the workshop attendees. The proceedings of the workshop, and an appendix, contain an extensive written record of the findings of the workshop panelists and others regarding presently existing standards and standards issues and needs. Separate abstracts have been prepared for selected papers for inclusion in the Energy Database.
Kay, Jack F
2016-05-01
The Codex Committee on Residues of Veterinary Drugs in Food (CCRVDF) fulfils a number of functions revolving around standard setting. The core activities of the CCRVDF include agreeing priorities for assessing veterinary drug residues, recommending maximum residue limits for veterinary drugs in foods of animal origin, considering methods of sampling and analyses, and developing codes of practice. Draft standards are developed and progress through an agreed series of steps common to all Codex Alimentarius Commission Committees. Meetings of the CCRVDF are held at approximately 18-month intervals. To ensure effective progress is made with meetings at this frequency, the CCRVDF makes use of a number of management tools. These include circular letters to interested parties, physical and electronic drafting groups between plenary sessions, meetings of interested parties immediately prior to sessions, as well as break out groups within sessions and detailed discussions within the CCRVDF plenary sessions. A range of these approaches is required to assist advances within the standards setting process and can be applied to other Codex areas and international standard setting more generally. Copyright © 2016 John Wiley & Sons, Ltd.
Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao
2014-04-01
Near infrared spectroscopy (NIR) was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) over different storage periods, with TVB-N serving as the freshness index. By comparing the correlation coefficients and standard errors of the calibration and validation sets of models built with different pretreatment methods (applied singly or in combination), different modeling methods and different wavelength regions, the best TVB-N models for market-sold ice-stored large yellow croaker were established for rapid freshness prediction. The best performance was obtained using normalization by closure (Ncl) with 1st derivative (Dbl) and normalization to unit length (Nle) with 1st derivative as pretreatments, partial least squares (PLS) as the modeling method, and the wavelength regions of 5 000-7 144 and 7 404-10 000 cm(-1). The calibration model gave a correlation coefficient of 0.992 with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999 with a standard error of prediction of 0.990. Combining several pretreatment methods and selecting the best wavelength region gave good results, and the approach has promising applications for freshness detection and quality evaluation of large yellow croaker in the market.
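The modelling step (PLS on pretreated spectra, reported via correlation coefficients and standard errors of calibration and prediction) can be sketched generically with scikit-learn; the synthetic spectra and component count below are placeholders, not the paper's optimized model.

```python
# Generic PLS calibration sketch for NIR spectra (synthetic data), illustrating the
# correlation coefficient and standard errors reported in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_cal, n_val, n_wavenumbers = 40, 15, 300
spectra_cal = rng.standard_normal((n_cal, n_wavenumbers))
true_coef = rng.standard_normal(n_wavenumbers) * 0.05
tvbn_cal = spectra_cal @ true_coef + rng.normal(0, 0.5, n_cal)   # illustrative TVB-N values
spectra_val = rng.standard_normal((n_val, n_wavenumbers))
tvbn_val = spectra_val @ true_coef + rng.normal(0, 0.5, n_val)

pls = PLSRegression(n_components=5).fit(spectra_cal, tvbn_cal)

def report(name, X, y):
    pred = pls.predict(X).ravel()
    r = np.corrcoef(y, pred)[0, 1]
    se = np.sqrt(np.mean((y - pred) ** 2))
    print(f"{name}: r = {r:.3f}, standard error = {se:.3f}")

report("calibration", spectra_cal, tvbn_cal)
report("validation ", spectra_val, tvbn_val)
```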
Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei
2012-12-01
The aim was to develop software for standardizing optical density, normalizing the procedures and results of standardization and thereby solving several problems that arise when standardizing indirect ELISA results. The software was designed around the I-STOD method, with operational settings addressing the problems one might encounter during standardization, and was developed using the Matlab GUI toolset. It was tested on serum detection results from people living in schistosomiasis japonica endemic areas. I-STOD V1.0 (Windows XP/7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine its operational performance. The results indicated that the software successfully overcame several problems, including the reliability of the standard curve, the applicable range of samples, and the determination of dilutions for samples outside that range, so that I-STOD could be performed more conveniently and standardization results were more consistent. I-STOD V1.0 is professional software based on the I-STOD method; it is easy to operate and effectively standardizes indirect ELISA test results.
Al-Harbi, L M; El-Mossalamy, E H; Obaid, A Y; Al-Jedaani, A H
2014-01-01
Charge transfer complexes of substituted aryl Schiff bases as donors with picric acid and m-dinitrobenzene as acceptors were investigated computationally, using Configuration Interaction Singles Hartree-Fock (CIS-HF) with the standard 6-31G* basis set and Time-Dependent Density-Functional Theory (TD-DFT) with the standard 6-31G** basis set, together with infrared, visible and nuclear magnetic resonance spectra. The optimized geometries and vibrational frequencies were evaluated. Energies and oscillator strengths were calculated from the CIS-HF and TD-DFT results. Electronic properties of the set of charge transfer complexes, such as HOMO and LUMO energies and band gaps, were studied by TD-DFT with the Becke-Lee-Yang-Parr (B3LYP) exchange-correlation functional and by CIS-HF. The ionization potential Ip and electron affinity EA were calculated by the PM3, HF and DFT methods, and the coulombic force was calculated theoretically using the CIS-HF and TD-DFT methods. This study confirms that the theoretical calculation of vibrational frequencies for the aryl Schiff base complexes with m-dinitrobenzene and picric acid is quite useful for vibrational assignment and for predicting new vibrational frequencies. Copyright © 2013 Elsevier B.V. All rights reserved.
Ashby, R
1994-01-01
CEC Directives have been implemented for plastics materials and articles intended to come into contact with foodstuffs. These introduce limits upon the overall migration from plastics into food and food simulants. In addition, specific migration limits or composition limits for free monomer in the final article, have been set for some monomers. Agreed test methods are required to allow these Directives to be respected. CEN, the European Committee for Standardization, has created a working group to develop suitable test methods. This is 'Working Group 5, Chemical Methods of Test', of CEN Technical Committee TC 194, Utensils in contact with food. This group has drafted a ten part standard for determining overall migration into aqueous and fatty food simulants by total immersion, by standard cell, by standard pouch and by filling. This draft standard has been approved by CEN TC 194 for circulation for public comment as a provisional standard, i.e. as an ENV. Further parts of this standard are in preparation for determining overall migration at high temperatures, etc. Simultaneously, Working Group 5 is cooperating with the BCR (Community Bureau of Reference) to produce reference materials with certified values of overall migration. CEN TC 194 Working Group 5 is also drafting methods for monomers subject to limitation in Directive 90/128/EEC. Good progress is being made on the monomers of highest priority but it is recognized that developing methods for all the monomers subject to limitation would take many years. Therefore, collaboration with the BCR, the Council of Europe and others is taking place to accelerate method development.
Grol, R
1990-01-01
The Nederlands Huisartsen Genootschap (NHG), the college of general practitioners in the Netherlands, has begun a national programme of standard setting for the quality of care in general practice. When the standards have been drawn up and assessed they are disseminated via the journal Huisarts en Wetenschap. In a survey, carried out among a randomized sample of 10% of all general practitioners, attitudes towards national standard setting in general and to the first set of standards (diabetes care) were studied. The response was 70% (453 doctors). A majority of the respondents said they were well informed about the national standard setting initiatives instigated by the NHG (71%) and about the content of the first standards (77%). The general practitioners had a positive attitude towards the setting of national standards for quality of care, and this was particularly true for doctors who were members of the NHG. Although a large majority of doctors said they agreed with most of the guidelines in the diabetes standards fewer respondents were actually working to the guidelines and some of the standards are certain to meet with a lot of resistance. A better knowledge of the standards and a more positive attitude to the process of national standard setting correlated with a more positive attitude to the guidelines formulated in the diabetes standards. The results could serve as a starting point for an exchange of views about standard setting in general practice in other countries. PMID:2265001
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
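Of the three approaches, the bootstrap is the simplest to sketch: resample the cycle data with replacement, recompute the factorial moments and the derived mass for each resample, and take the spread as the uncertainty. The mass function below is a placeholder, not the point-model equations.

```python
# Bootstrap sketch for the statistical uncertainty of a quantity derived from
# cycle-by-cycle multiplicity data. `mass_from_cycles` is a stand-in for the
# full point-model calculation, not the actual NMC equations.
import numpy as np

rng = np.random.default_rng(42)

def factorial_moments(counts):
    """First three sampled factorial moments of the triggered count distribution."""
    c = np.asarray(counts, dtype=float)
    return c.mean(), (c * (c - 1)).mean(), (c * (c - 1) * (c - 2)).mean()

def mass_from_cycles(cycles):
    m1, m2, m3 = factorial_moments(cycles)
    return m1 + 0.1 * m2 - 0.01 * m3          # placeholder mapping, illustration only

cycles = rng.poisson(lam=3.0, size=1000)       # synthetic per-cycle triggered counts

boot = np.array([mass_from_cycles(rng.choice(cycles, size=cycles.size, replace=True))
                 for _ in range(2000)])
print(f"mass estimate = {mass_from_cycles(cycles):.3f} "
      f"+/- {boot.std(ddof=1):.3f} (bootstrap 1-sigma)")
```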
Myllymaa, Sami; Muraja-Murro, Anu; Westeren-Punnonen, Susanna; Hukkanen, Taina; Lappalainen, Reijo; Mervaala, Esa; Töyräs, Juha; Sipilä, Kirsi; Myllymaa, Katja
2016-12-01
Recently, a number of portable devices designed for full polysomnography at home have appeared. However, current scalp electrodes used for electroencephalograms are not practical for patient self-application. The aim of this study was to evaluate the suitability of recently introduced forehead electroencephalogram electrode set and supplementary chin electromyogram electrodes for sleep staging. From 31 subjects (10 male, 21 female; age 31.3 ± 11.8 years), sleep was recorded simultaneously with a forehead electroencephalogram electrode set and with a standard polysomnography setup consisting of six recommended electroencephalogram channels, two electrooculogram channels and chin electromyogram. Thereafter, two experienced specialists scored each recording twice, based on either standard polysomnography or forehead recordings. Sleep variables recorded with the forehead electroencephalogram electrode set and separate chin electromyogram electrodes were highly consistent with those obtained with the standard polysomnography. There were no statistically significant differences in total sleep time, sleep efficiency or sleep latencies. However, compared with the standard polysomnography, there was a significant increase in the amount of stage N1 and N2, and a significant reduction in stage N3 and rapid eye movement sleep. Overall, epoch-by-epoch agreement between the methods was 79.5%. Inter-scorer agreement for the forehead electroencephalogram was only slightly lower than that for standard polysomnography (76.1% versus 83.2%). Forehead electroencephalogram electrode set as supplemented with chin electromyogram electrodes may serve as a reliable and simple solution for recording total sleep time, and may be adequate for measuring sleep architecture. Because this electrode concept is well suited for patient's self-application, it may offer a significant advancement in home polysomnography. © 2016 European Sleep Research Society.
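As a small illustration of the epoch-by-epoch agreement figures quoted above, the sketch below computes percent agreement and Cohen's kappa between two hypnograms; the stage sequences are invented.

```python
# Epoch-by-epoch agreement between two hypnograms (invented example data).
import numpy as np

stages = ["W", "N1", "N2", "N3", "R"]
rng = np.random.default_rng(7)
ref = rng.choice(stages, size=900)                          # standard PSG scoring
test = np.where(rng.random(900) < 0.8, ref, rng.choice(stages, size=900))

agreement = np.mean(ref == test)

# Cohen's kappa: observed vs. chance agreement from the marginal distributions.
p_obs = agreement
p_chance = sum(np.mean(ref == s) * np.mean(test == s) for s in stages)
kappa = (p_obs - p_chance) / (1 - p_chance)
print(f"epoch-by-epoch agreement = {agreement:.1%}, kappa = {kappa:.2f}")
```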
Kahn, S; Pelgrim, W
2010-08-01
The missions of the World Organisation for Animal Health (OIE) include the design of surveillance and control methods for infectious transboundary animal diseases (including zoonoses), the provision of guarantees concerning animal health and animal production food safety, and the setting of standards for, and promotion of, animal welfare. The OIE role in setting standards for the sanitary safety of international trade in animals and animal products is formally recognised in the World Trade Organization (WTO) Agreement on the Application of Sanitary and Phytosanitary Measures (the SPS Agreement). While the primary focus of the OIE is on animal diseases and zoonoses, the OIE has also been working within the WTO framework to examine possible contributions the organisation can make to achieving the goals of the Convention on Biological Diversity, particularly to preventing the global spread of invasive alien species (IAS). However, at the present time, setting standards for invasive species (other than those connected to the cause and distribution of diseases listed by the OIE) is outside the OIE mandate. Any future expansion of the OIE mandate would need to be decided by its Members and resources (expertise and financial contributions) for an extended standard-setting work programme secured. The other international standard-setting organisations referenced by the SPS Agreement are the International Plant Protection Convention (IPPC) and the Codex Alimentarius Commission (CAC). The IPPC mandate and work programme address IAS and the protection of biodiversity. The CAC is not involved in this field.
A simple web-based tool to compare freshwater fish data collected using AFS standard methods
Bonar, Scott A.; Mercado-Silva, Norman; Rahr, Matt; Torrey, Yuta T.; Cate, Averill
2016-01-01
The American Fisheries Society (AFS) recently published Standard Methods for Sampling North American Freshwater Fishes. Enlisting the expertise of 284 scientists from 107 organizations throughout Canada, Mexico, and the United States, this text was developed to facilitate comparisons of fish data across regions or time. Here we describe a user-friendly web tool that automates among-sample comparisons in individual fish condition, population length-frequency distributions, and catch per unit effort (CPUE) data collected using AFS standard methods. Currently, the web tool (1) provides instantaneous summaries of almost 4,000 data sets of condition, length frequency, and CPUE of common freshwater fishes collected using standard gears in 43 states and provinces; (2) is easily appended with new standardized field data to update subsequent queries and summaries; (3) compares fish data from a particular water body with continent, ecoregion, and state data summaries; and (4) provides additional information about AFS standard fish sampling including benefits, ongoing validation studies, and opportunities to comment on specific methods. The web tool—programmed in a PHP-based Drupal framework—was supported by several AFS Sections, agencies, and universities and is freely available from the AFS website and fisheriesstandardsampling.org. With widespread use, the online tool could become an important resource for fisheries biologists.
Image characterization metrics for muon tomography
NASA Astrophysics Data System (ADS)
Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt
2014-05-01
Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography. These methods include tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate metric applicability. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function for the plate. These image metrics provided a useful tool to quantify the basic imaging properties for muon tomography.
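The metric definitions above translate directly into code; the sketch below applies the uniformity, contrast and SNR definitions to synthetic values (FWHM estimation is omitted). The phantom numbers are invented.

```python
# Sketch of the uniformity, contrast and SNR definitions given in the abstract,
# applied to a synthetic reconstruction (values are invented).
import numpy as np

rng = np.random.default_rng(3)
water = rng.normal(loc=100.0, scale=5.0, size=(40, 40, 40))   # water background VOI
cube_peaks = np.array([900.0, 850.0, 920.0, 880.0])           # peak signals of W cubes

uniformity = water.std() / water.mean()
contrast = cube_peaks / water.mean()
snr = cube_peaks / water.std()

print(f"uniformity = {uniformity:.3f}")
print(f"contrast   = {contrast.round(1)}")
print(f"SNR        = {snr.round(1)}")
```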
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
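The two transformations proposed above are simple to state; the sketch below applies the arcsine square root and the Freeman-Tukey double arcsine to study-level sensitivities before they would enter a bivariate model. Counts are invented and the model fitting itself is omitted.

```python
# Variance-stabilizing transformations for study-level proportions (invented counts).
import numpy as np

tp = np.array([45, 30, 58, 12])       # true positives per study
diseased = np.array([50, 40, 65, 15]) # diseased subjects per study
sens = tp / diseased

arcsine_sqrt = np.arcsin(np.sqrt(sens))

# Freeman-Tukey double arcsine of x events out of n
freeman_tukey = 0.5 * (np.arcsin(np.sqrt(tp / (diseased + 1)))
                       + np.arcsin(np.sqrt((tp + 1) / (diseased + 1))))

print("sensitivity          :", sens.round(3))
print("arcsine sqrt         :", arcsine_sqrt.round(3))
print("Freeman-Tukey arcsine:", freeman_tukey.round(3))
```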
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
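As a hedged illustration of the allocation problem (not either of the paper's two algorithms), the sketch below performs a greedy marginal-returns allocation across slices, which is optimal when the per-slice rate-distortion curves are convex.

```python
# Greedy marginal-returns bit allocation across slices (illustrative only; optimal
# for convex per-slice rate-distortion curves, not the paper's two algorithms).
import heapq

# Per-slice operating points (rate in bits, distortion in MSE), sorted by rate.
# Numbers are invented.
rd_curves = [
    [(0, 100.0), (1000, 40.0), (2000, 18.0), (3000, 10.0)],
    [(0, 300.0), (1000, 90.0), (2000, 35.0), (3000, 20.0)],
    [(0, 50.0),  (1000, 30.0), (2000, 22.0), (3000, 18.0)],
]
budget = 4000  # total bits available

alloc = [0] * len(rd_curves)          # index of chosen operating point per slice
heap = []                             # (-distortion_drop_per_bit, slice)
for s, curve in enumerate(rd_curves):
    dr = curve[1][0] - curve[0][0]
    dd = curve[0][1] - curve[1][1]
    heapq.heappush(heap, (-dd / dr, s))

spent = 0
while heap:
    neg_slope, s = heapq.heappop(heap)
    curve, i = rd_curves[s], alloc[s]
    step = curve[i + 1][0] - curve[i][0]
    if spent + step > budget:
        continue                      # cannot afford this step; try other slices
    spent += step
    alloc[s] = i + 1
    if alloc[s] + 1 < len(curve):     # push the next step for this slice
        dr = curve[alloc[s] + 1][0] - curve[alloc[s]][0]
        dd = curve[alloc[s]][1] - curve[alloc[s] + 1][1]
        heapq.heappush(heap, (-dd / dr, s))

rates = [rd_curves[s][alloc[s]][0] for s in range(len(rd_curves))]
dists = [rd_curves[s][alloc[s]][1] for s in range(len(rd_curves))]
print("bits per slice:", rates, "total:", sum(rates))
print("distortions   :", dists, "total MSE:", sum(dists))
```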
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
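A minimal version of the ABC idea: draw candidate (mean, SD) pairs from a prior, simulate pseudo-data, and keep the pairs whose simulated summaries (median, min, max) fall close to the reported ones. The priors, distance and tolerance below are illustrative choices, not the authors' implementation.

```python
# Minimal ABC-rejection sketch: estimate mean and SD from a reported median, min
# and max (n known). Priors, tolerance and distance are illustrative choices only.
import numpy as np

rng = np.random.default_rng(5)
n, rep_median, rep_min, rep_max = 50, 10.0, 2.0, 25.0   # reported summaries (invented)

def summaries(x):
    return np.array([np.median(x), x.min(), x.max()])

target = np.array([rep_median, rep_min, rep_max])
accepted = []
for _ in range(200_000):
    mu = rng.uniform(0, 30)          # flat priors over a plausible range
    sigma = rng.uniform(0.1, 15)
    sim = rng.normal(mu, sigma, n)   # assumed normal data-generating model
    if np.linalg.norm(summaries(sim) - target) < 2.5:   # acceptance tolerance
        accepted.append((mu, sigma))

if accepted:
    acc = np.array(accepted)
    print(f"accepted {len(acc)} draws")
    print(f"mean ~ {acc[:, 0].mean():.2f}, SD ~ {acc[:, 1].mean():.2f}")
```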
[Study on quality standards of decoction pieces of salt Alpinia].
Li, Wenbing; Hu, Changjiang; Long, Lanyan; Huang, Qinwan; Xie, Xiuqiong
2010-12-01
The aim was to establish quality criteria for decoction pieces of salt Alpinia. Decoction pieces of salt Alpinia were measured for moisture, total ash, acid-insoluble ash, water extract and volatile oils according to the procedures of the Chinese Pharmacopoeia 2010. The content of nootkatone was determined by HPLC, and that of NaCl by a chloride ion electrode method. Results for moisture, total ash, acid-insoluble ash, water extract and volatile oils were obtained for 10 batches of decoction pieces of salt Alpinia, and the HPLC and chloride ion electrode methods were established. This research established a sound quality standard for decoction pieces of salt Alpinia.
Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis
Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.
2011-01-01
Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples, the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93, for the analysis of a certified reference material, using the standardized methodologies. Conclusion The results of this study demonstrate that the development and use of standardized protocols for F analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184
A combined representation method for use in band structure calculations. 1: Method
NASA Technical Reports Server (NTRS)
Friedli, C.; Ashcroft, N. W.
1975-01-01
A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.
Spin-Neto, R; Gotfredsen, E; Wenzel, A
2015-01-01
To suggest a standardized method to assess the variation in voxel value distribution in patient-simulated CBCT data sets and the effect of time between exposures (TBE). Additionally, a measurement of reproducibility, Aarhus measurement of reproducibility (AMORe), is introduced, which could be used for quality assurance purposes. Six CBCT units were tested [Cranex(®) 3D/CRAN (Soredex Oy, Tuusula, Finland); Scanora(®) 3D/SCAN (Soredex Oy); NewTom™ 5G/NEW5 (QR srl, Verona, Italy); i-CAT/ICAT (Imaging Sciences International, Hatfield, PA); 3D Accuitomo FPD80/ACCU (Morita, Kyoto, Japan); and NewTom VG/NEWV (QR srl)]. Two sets of volumetric data of a wax-embedded dry human skull (containing a titanium implant) were acquired by each CBCT unit at two sessions on separate days. Each session consisted of 21 exposures: 1 "initial" followed by a 30-min interval (initial data set), 10 acquired with 30-min TBE (data sets 1-10) and 10 acquired with 15-min TBE (data sets 11-20). CBCT data were exported as Digital Imaging and Communications in Medicine files and converted to text files containing x, y and z positions and grey shade for each voxel. Subtractions were performed voxel-by-voxel in two set-ups: (1) between two consecutive data sets and (2) between any subsequent data set and data set 1. The mean grey shade variation for each voxel was calculated for each unit/session. The largest mean grey shade variation was found in subtraction set-up 2 (27-447 shades of grey, depending on the unit). Considering subtraction set-up 1, the highest variation was seen for NEW5, between data set 1 and the initial data set. Discrepancies in voxel value distribution were found by comparing the initial examination of the day with the subsequent examinations. TBE had no predictable effect on the variation of CBCT-derived voxel values. AMORe ranged between 0 and 64.
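The subtraction analysis reduces to a voxel-by-voxel difference of grey shades between two volumes; a minimal sketch with synthetic arrays is shown below (DICOM import and text-file export are omitted).

```python
# Voxel-by-voxel grey-shade difference between two co-registered CBCT volumes
# (synthetic data; DICOM import/export omitted).
import numpy as np

rng = np.random.default_rng(11)
shape = (64, 64, 64)
data_set_1 = rng.integers(0, 4096, size=shape).astype(np.int32)
data_set_2 = data_set_1 + rng.normal(0, 30, size=shape).astype(np.int32)  # repeat scan

diff = np.abs(data_set_2 - data_set_1)
print(f"mean grey shade variation per voxel: {diff.mean():.1f}")
print(f"voxels differing by > 100 shades: {(diff > 100).mean():.2%}")
```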
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
[Expert investigation on food safety standard system framework construction in China].
He, Xiang; Yan, Weixing; Fan, Yongxiang; Zeng, Biao; Peng, Zhen; Sun, Zhenqiu
2013-09-01
The aim was to investigate the food safety standard framework among food safety experts, summarize the basic elements and principles of the food safety standard system, and provide policy advice for the food safety standards framework. A survey was carried out among 415 experts from government, professional institutions and the food industry/enterprises using the National Food Safety Standard System Construction Consultation Questionnaire designed in the name of the Secretariat of the National Food Safety Standard Committee. Experts in each group offered differing advice on the principles for food product standards, food additive product standards, food-related product standards, hygienic practice and test methods. The results both reflect the experts' awareness of the current state of food safety standards work and provide advice for the future setting and revision of food safety standards. Through this expert investigation, the framework and guiding principles of the food safety standard system were established.
1989-08-01
Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -A ln U, where A denotes the exponential scale parameter. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, by inverting the conditional survivor function exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β). Further variates are generated using a standard normal transformation and the inverse transform method. (Appendix: distributions supported by the model.)
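A hedged Python sketch of these inverse-transform draws follows; the conditional Weibull inversion uses the survivor function quoted above, and all parameter values are illustrative.

```python
# Inverse-transform sampling for the conditional distributions described above:
# exponential residual time and a Weibull conditioned on survival past x.
# A reconstruction for illustration; notation (gamma, eta, beta) follows the text.
import numpy as np

rng = np.random.default_rng(13)

def conditional_exponential(eta, size):
    """Residual time of an exponential: memoryless, so s = -eta * ln(U)."""
    u = rng.random(size)
    return -eta * np.log(u)

def conditional_weibull(x, gamma, eta, beta, size):
    """Additional time s given survival past x, for a 3-parameter Weibull.
    Inverts S(s | x) = exp(-[((x + s - gamma)/eta)**beta - ((x - gamma)/eta)**beta])."""
    u = rng.random(size)
    base = ((x - gamma) / eta) ** beta
    return gamma - x + eta * (base - np.log(u)) ** (1.0 / beta)

print(conditional_exponential(eta=2.0, size=3))
print(conditional_weibull(x=5.0, gamma=1.0, eta=3.0, beta=1.8, size=3))
```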
Weighted mining of massive collections of p-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
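Princessp's convex optimization of the weights is beyond a short sketch, but the downstream use of such weights can be illustrated with a weighted Benjamini-Hochberg step in which each p-value is divided by its mean-one weight. The weights and p-values below are invented; this is not the Princessp optimizer.

```python
# Weighted Benjamini-Hochberg sketch: prioritized hypotheses get larger (mean-one)
# weights, and each p-value is divided by its weight before the usual BH step.
# Weights and p-values are invented; this is not the Princessp optimizer itself.
import numpy as np

rng = np.random.default_rng(8)
m = 10_000
p = rng.uniform(size=m)
p[:200] = rng.uniform(0, 1e-3, size=200)        # a block of true signals

weights = np.ones(m)
weights[:2000] = 3.0                            # upweight a prioritized subset...
weights *= m / weights.sum()                    # ...then renormalize to mean one

def weighted_bh(pvals, w, alpha=0.05):
    q = pvals / w
    order = np.argsort(q)
    thresh = alpha * np.arange(1, len(q) + 1) / len(q)
    passed = np.nonzero(q[order] <= thresh)[0]
    if passed.size == 0:
        return np.zeros(len(q), dtype=bool)
    k = passed.max()
    rejected = np.zeros(len(q), dtype=bool)
    rejected[order[:k + 1]] = True
    return rejected

print("rejections (weighted)  :", weighted_bh(p, weights).sum())
print("rejections (unweighted):", weighted_bh(p, np.ones(m)).sum())
```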
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases, and various data models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standardized data storage. The overall purpose of this work is to set up a common data platform that provides information sharing services. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains, and a logical data model is then set up as a standard format for designing data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Several levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and the comprehensive data dictionary manages system operation and security. (3) An extension of the system's data management functions based on the data dictionary. The data item constraint input function uses the standard term and code dictionary to obtain standardized input; the attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields; and the model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in south-east China, with satisfactory results.
Measuring impact crater depth throughout the solar system
Robbins, Stuart J.; Watters, Wesley A.; Chappelow, John E.; Bray, Veronica J.; Daubar, Ingrid J.; Craddock, Robert A.; Beyer, Ross A.; Landis, Margaret E.; Ostrach, Lillian; Tornabene, Livio L.; Riggs, Jamie D.; Weaver, Brian P.
2018-01-01
One important, almost ubiquitous, tool for understanding the surfaces of solid bodies throughout the solar system is the study of impact craters. While measuring a distribution of crater diameters and locations is an important tool for a wide variety of studies, so too is measuring a crater's “depth.” Depth can inform numerous studies including the strength of a surface and modification rates in the local environment. There is, however, no standard data set, definition, or technique to perform this data‐gathering task, and the abundance of different definitions of “depth” and methods for estimating that quantity can lead to misunderstandings in and of the literature. In this review, we describe a wide variety of data sets and methods to analyze those data sets that have been, are currently, or could be used to derive different types of crater depth measurements. We also recommend certain nomenclature in doing so to help standardize practice in the field. We present a review section of all crater depths that have been published on different solar system bodies which shows how the field has evolved through time and how some common assumptions might not be wholly accurate. We conclude with several recommendations for researchers which could help different data sets to be more easily understood and compared.
da Silva, Kátia Regina; Costa, Roberto; Crevelari, Elizabeth Sartori; Lacerda, Marianna Sobral; de Moraes Albertini, Caio Marcos; Filho, Martino Martinelli; Santana, José Eduardo; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo; Barros, Jacson V.
2013-01-01
Background The ability to apply standard and interoperable solutions for implementing and managing medical registries, as well as to aggregate, reproduce, and access data sets from legacy formats and platforms to advanced standard formats and operating systems, is crucial for both clinical healthcare and biomedical research settings. Purpose Our study describes a reproducible, highly scalable, standard framework for a device registry implementation addressing both local data quality components and global linking problems. Methods and Results We developed a device registry framework involving the following steps: (1) data standards definition and representation of the research workflow, (2) development of electronic case report forms using REDCap (Research Electronic Data Capture), (3) data collection according to the clinical research workflow, (4) data augmentation by enriching the registry database with local electronic health records, governmental databases and linked open data collections, (5) data quality control, and (6) data dissemination through the registry Web site. Our registry adopted all applicable standardized data elements proposed by the American College of Cardiology/American Heart Association Clinical Data Standards, as well as variables derived from randomized trials of cardiac devices and from the Clinical Data Interchange Standards Consortium. Local interoperability was established between REDCap and data derived from the Electronic Health Record system. The original data set was also augmented by incorporating the reimbursed values paid by the Brazilian government during a hospitalization for pacemaker implantation. By linking our registry to the open data collection repository Linked Clinical Trials (LinkedCT), we found 130 clinical trials that are potentially correlated with our pacemaker registry. Conclusion This study demonstrates how standard and reproducible solutions can be applied in the implementation of medical registries to constitute a re-usable framework. Such an approach has the potential to facilitate data integration between healthcare and research settings and provides a useful framework for other biomedical registries. PMID:23936257
Mining Hierarchies and Similarity Clusters from Value Set Repositories.
Peterson, Kevin J; Jiang, Guoqian; Brue, Scott M; Shen, Feichen; Liu, Hongfang
2017-01-01
A value set is a collection of permissible values used to describe a specific conceptual domain for a given purpose. By helping to establish a shared semantic understanding across use cases, these artifacts are important enablers of interoperability and data standardization. As the size of repositories cataloging these value sets expand, knowledge management challenges become more pronounced. Specifically, discovering value sets applicable to a given use case may be challenging in a large repository. In this study, we describe methods to extract implicit relationships between value sets, and utilize these relationships to overlay organizational structure onto value set repositories. We successfully extract two different structurings, hierarchy and clustering, and show how tooling can leverage these structures to enable more effective value set discovery.
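One simple way to realize the two structurings described above is to treat each value set as a set of codes: strict containment yields hierarchy edges and Jaccard similarity yields cluster candidates. The sketch below is a generic illustration with invented value sets, not the authors' method.

```python
# Toy extraction of hierarchy (containment) and similarity clusters (Jaccard)
# from a small value set repository; value sets and codes are invented.
value_sets = {
    "diabetes_broad":  {"E10", "E11", "E13", "O24"},
    "diabetes_type2":  {"E11"},
    "pregnancy_dm":    {"O24"},
    "hypertension":    {"I10", "I11", "I12"},
    "hypertensive_hd": {"I11", "I12"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Hierarchy: strict containment implies a parent/child edge.
for child, c_codes in value_sets.items():
    for parent, p_codes in value_sets.items():
        if child != parent and c_codes < p_codes:
            print(f"{parent} contains {child}")

# Similarity clusters: pairs above a Jaccard threshold.
names = list(value_sets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = jaccard(value_sets[a], value_sets[b])
        if sim >= 0.5:
            print(f"cluster candidates: {a} ~ {b} (Jaccard {sim:.2f})")
```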
Setting the bar: Standards for ecosystem services
Polasky, Stephen; Tallis, Heather; Reyers, Belinda
2015-01-01
Progress in ecosystem service science has been rapid, and there is now a healthy appetite among key public and private sector decision makers for this science. However, changing policy and management is a long-term project, one that raises a number of specific practical challenges. One impediment to broad adoption of ecosystem service information is the lack of standards that define terminology, acceptable data and methods, and reporting requirements. Ecosystem service standards should be tailored to specific use contexts, such as national income and wealth accounts, corporate sustainability reporting, land-use planning, and environmental impact assessments. Many standard-setting organizations already exist, and the research community will make the most headway toward rapid uptake of ecosystem service science by working directly with these organizations. Progress has been made in aligning with existing organizations in areas such as product certification and sustainability reporting, but a major challenge remains in mainstreaming ecosystem service information into core public and private use contexts, such as agricultural and energy subsidy design, national income accounts, and corporate accounts. PMID:26082540
Setting the bar: Standards for ecosystem services.
Polasky, Stephen; Tallis, Heather; Reyers, Belinda
2015-06-16
Progress in ecosystem service science has been rapid, and there is now a healthy appetite among key public and private sector decision makers for this science. However, changing policy and management is a long-term project, one that raises a number of specific practical challenges. One impediment to broad adoption of ecosystem service information is the lack of standards that define terminology, acceptable data and methods, and reporting requirements. Ecosystem service standards should be tailored to specific use contexts, such as national income and wealth accounts, corporate sustainability reporting, land-use planning, and environmental impact assessments. Many standard-setting organizations already exist, and the research community will make the most headway toward rapid uptake of ecosystem service science by working directly with these organizations. Progress has been made in aligning with existing organizations in areas such as product certification and sustainability reporting, but a major challenge remains in mainstreaming ecosystem service information into core public and private use contexts, such as agricultural and energy subsidy design, national income accounts, and corporate accounts.
Creation of three-dimensional craniofacial standards from CBCT images
NASA Astrophysics Data System (ADS)
Subramanyan, Krishna; Palomo, Martin; Hans, Mark
2006-03-01
Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of three-dimensional CBCT imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct, surface-based 3D standards to allow analysis of morphometric changes seen in the craniofacial complex. To create the 3D surface standards, we implemented a series of steps: (1) converting bi-plane 2D tracings into sets of splines, (2) converting the 2D spline curves from bi-plane projection into 3D space curves, (3) creating a labeled template of facial and skeletal shapes, and (4) creating 3D average surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, which provides high-resolution, isotropic CT volume images, along with the digitized lateral and frontal Bolton Standards (male, female, and average tracings) from ages 3 to 18 years, and converted the tracings into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in young populations and will provide a reference for correcting facial anomalies in dental medicine.
Internal audit in a microbiology laboratory.
Mifsud, A J; Shafi, M S
1995-01-01
AIM--To set up a programme of internal laboratory audit in a medical microbiology laboratory. METHODS--A model of laboratory based process audit is described. Laboratory activities were examined in turn by specimen type. Standards were set using laboratory standard operating procedures; practice was observed using a purpose designed questionnaire and the data were analysed by computer; performance was assessed at laboratory audit meetings; and the audit circle was closed by re-auditing topics after an interval. RESULTS--Improvements in performance scores (objective measures) and in staff morale (subjective impression) were observed. CONCLUSIONS--This model of process audit could be applied, with amendments to take local practice into account, in any microbiology laboratory. PMID:7665701
Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H
2017-04-01
Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
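As a companion to the worked example the authors mention, here is a minimal sketch of one g method, the parametric g-formula (standardization), on simulated data; the variable names and the data-generating process are invented and not drawn from the paper.

```python
# Minimal sketch of the parametric g-formula (standardization) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
L = rng.binomial(1, 0.5, n)                    # confounder
A = rng.binomial(1, 0.3 + 0.4 * L)             # exposure depends on L
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.2 * L)   # outcome depends on A and L
df = pd.DataFrame({"L": L, "A": A, "Y": Y})

# Fit an outcome model, then standardize over the observed distribution of L.
model = sm.GLM.from_formula("Y ~ A + L", data=df, family=sm.families.Binomial()).fit()
risk1 = model.predict(df.assign(A=1)).mean()   # everyone exposed
risk0 = model.predict(df.assign(A=0)).mean()   # everyone unexposed
print("standardized risk difference:", risk1 - risk0)
```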
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.
Glover, Jack L; Hudson, Lawrence T
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
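A rough sketch of the core idea, not the authors' implementation: a straight wire maps to a localized peak in Radon space, so thresholding the sinogram peak gives an adjustable, objective detection score. The synthetic image, noise level, and threshold below are illustrative assumptions.

```python
# Illustrative sketch: a thin wire appears as a bright point in Radon space,
# so a threshold on the sinogram peak can score wire detectability.
import numpy as np
from skimage.transform import radon

img = np.zeros((128, 128))
img[:, 64] = 1.0                                   # synthetic one-pixel-wide "wire"
img += 0.05 * np.random.default_rng(1).normal(size=img.shape)  # attenuation noise

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)   # each column is one projection angle

peak = sinogram.max()
background = np.median(sinogram)
detected = peak > background + 5 * sinogram.std()  # sensitivity threshold is adjustable
print("wire detected:", detected)
```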
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
Glover, Jack L.; Hudson, Lawrence T.
2016-01-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
NASA Astrophysics Data System (ADS)
Glover, Jack L.; Hudson, Lawrence T.
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.
Registration of segmented histological images using thin plate splines and belief propagation
NASA Astrophysics Data System (ADS)
Kybic, Jan
2014-03-01
We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
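A hedged sketch of the general hard-negative selection idea (not the paper's exact procedure): train one stage, score the remaining negatives, and pass only the highest-scoring false positives to the next stage. The classifier choice, feature dimensions, and sample counts are illustrative.

```python
# Keep only the most informative false positives from one stage as negatives for the next.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(200, 10))
X_neg = rng.normal(0.0, 1.0, size=(20000, 10))      # negatives dwarf positives

stage = AdaBoostClassifier(n_estimators=50).fit(
    np.vstack([X_pos, X_neg[:200]]),
    np.r_[np.ones(200), np.zeros(200)],
)

# Score the remaining negatives; the highest-scoring ones are the false positives
# this stage finds hardest, so they feed the training set of the next stage.
scores = stage.decision_function(X_neg[200:])
hard_negatives = X_neg[200:][np.argsort(scores)[-200:]]
print(hard_negatives.shape)
```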
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
7 CFR 28.107 - Original cotton standards and reserve sets.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Original cotton standards and reserve sets. 28.107... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Practical Forms of Cotton Standards § 28.107 Original cotton standards and reserve sets. (a...
Development of a new model to engage patients and clinicians in setting research priorities.
Pollock, Alex; St George, Bridget; Fenton, Mark; Crowe, Sally; Firkins, Lester
2014-01-01
Equitable involvement of patients and clinicians in setting research and funding priorities is ethically desirable and can improve the quality, relevance and implementation of research. Survey methods used in previous priority setting projects to gather treatment uncertainties may not be sufficient to facilitate responses from patients and their lay carers for some health care topics. We aimed to develop a new model to engage patients and clinicians in setting research priorities relating to life after stroke, and to explore the use of this model within a James Lind Alliance (JLA) priority setting project. We developed a model to facilitate involvement through targeted engagement and assisted involvement (FREE TEA model). We implemented both standard surveys and the FREE TEA model to gather research priorities (treatment uncertainties) from people affected by stroke living in Scotland. We explored and configured the number of treatment uncertainties elicited from different groups by the two approaches. We gathered 516 treatment uncertainties from stroke survivors, carers and health professionals. We achieved approximately equal numbers of contributions; 281 (54%) from stroke survivors/carers; 235 (46%) from health professionals. For stroke survivors and carers, 98 (35%) treatment uncertainties were elicited from the standard survey and 183 (65%) at FREE TEA face-to-face visits. This contrasted with the health professionals for whom 198 (84%) were elicited from the standard survey and only 37 (16%) from FREE TEA visits. The FREE TEA model has implications for future priority setting projects and user-involvement relating to populations of people with complex health needs. Our results imply that reliance on standard surveys may result in poor and unrepresentative involvement of patients, thereby favouring the views of health professionals.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
Flexible Method for Inter-object Communication in C++
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Gould, Jack J.
1994-01-01
A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
Neuss, Michael N; Gilmore, Terry R; Belderson, Kristin M; Billett, Amy L; Conti-Kalchik, Tara; Harvey, Brittany E; Hendricks, Carolyn; LeFebvre, Kristine B; Mangu, Pamela B; McNiff, Kristen; Olsen, MiKaela; Schulmeister, Lisa; Von Gehr, Ann; Polovich, Martha
2016-12-01
Purpose To update the ASCO/Oncology Nursing Society (ONS) Chemotherapy Administration Safety Standards and to highlight standards for pediatric oncology. Methods The ASCO/ONS Chemotherapy Administration Safety Standards were first published in 2009 and updated in 2011 to include inpatient settings. A subsequent 2013 revision expanded the standards to include the safe administration and management of oral chemotherapy. A joint ASCO/ONS workshop with stakeholder participation, including that of the Association of Pediatric Hematology Oncology Nurses and American Society of Pediatric Hematology/Oncology, was held on May 12, 2015, to review the 2013 standards. An extensive literature search was subsequently conducted, and public comments on the revised draft standards were solicited. Results The updated 2016 standards presented here include clarification and expansion of existing standards to include pediatric oncology and to introduce new standards: most notably, two-person verification of chemotherapy preparation processes, administration of vinca alkaloids via minibags in facilities in which intrathecal medications are administered, and labeling of medications dispensed from the health care setting to be taken by the patient at home. The standards were reordered and renumbered to align with the sequential processes of chemotherapy prescription, preparation, and administration. Several standards were separated into their respective components for clarity and to facilitate measurement of adherence to a standard. Conclusion As oncology practice has changed, so have chemotherapy administration safety standards. Advances in technology, cancer treatment, and education and training have prompted the need for periodic review and revision of the standards. Additional information is available at http://www.asco.org/chemo-standards .
Li, Hui
2009-03-01
To construct standardized growth data and curves based on weight, length/height, and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in the nine cities (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) of China was performed in 2005, and from this survey, data from 69 760 healthy urban boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. Anthropometric data were collected with rigorous methods and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and a cubic-spline smoothing technique, was chosen for fitting the raw data according to the study design and data features, and standardized values for any percentile or standard deviation were obtained from the L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflected the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentile or standard deviation curves, including the chi-square test, which was used as a reference to evaluate the goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age were produced for boys and girls aged 0-7 years. The Chinese child growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represent the growth level of healthy, well-nourished Chinese children. The sample was very large and nationally representative, the data were of high quality, and the smoothing method is internationally accepted. The new Chinese growth charts are recommended for use in China as the Chinese child growth standards for the 21st century.
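For readers unfamiliar with the LMS approach, the following sketch shows how smoothed L (Box-Cox power), M (median), and S (coefficient of variation) values map to percentile curves and Z-scores; the numeric parameters are invented, not taken from the Chinese reference data.

```python
# Sketch of the LMS (Box-Cox) formulation; L, M, S values below are illustrative only.
import math
from scipy.stats import norm

def lms_percentile(L, M, S, p):
    """Measurement value at the p-th percentile for given L (power), M (median), S (CV)."""
    z = norm.ppf(p)
    if abs(L) < 1e-8:                      # limiting form as L -> 0
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x under the same parameters."""
    if abs(L) < 1e-8:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical smoothed parameters for one age point.
print(lms_percentile(L=-0.35, M=9.5, S=0.11, p=0.97))   # e.g. the 97th centile
print(lms_zscore(10.8, L=-0.35, M=9.5, S=0.11))
```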
Methods for Linking Item Parameters.
1981-08-01
within and across data sets; all proportion-correct distributions were quite platykurtic. Biserial item-total correlations had relatively consistent...would produce a distribution of a parameters which had a larger mean and standard deviation, was more positively skewed, and was somewhat more platykurtic
Code of Federal Regulations, 2011 CFR
2011-01-01
... Metal Halide Lamp Ballasts and Fixtures Energy Conservation Standards § 431.329 Enforcement. Process for Metal Halide Lamp Ballasts. This section sets forth procedures DOE will follow in pursuing alleged... with the following statistical sampling procedures for metal halide lamp ballasts, with the methods...
Cost Effectiveness Of Selected Roadway Dust Control Methods For Eagle River, Alaska
DOT National Transportation Integrated Search
1988-01-01
The U.S. Environmental Protection Agency has set air quality standards for airborne particulates with diameters equal to or less than ten microns (PM10 particulates). These particulates have been correlated with respiratory illnesses. The primary sta...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.
With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.
Eshun-Wilson, Ingrid; Havers, Fiona; Nachega, Jean B; Prozesky, Hans W; Taljaard, Jantjie J; Zeier, Michele D; Cotton, Mark; Simon, Gary; Soentjens, Patrick
2011-01-01
Objective Standardized case definitions have recently been proposed by the International Network for the Study of HIV-associated IRIS (INSHI) for use in resource-limited settings. We evaluated paradoxical TB-associated IRIS in a large cohort from a TB endemic setting with the use of these case definitions. Design A retrospective cohort analysis. Methods We reviewed records from 1250 South African patients who initiated anti-retroviral therapy (ART) over a five-year period. Results 333 (27%) of the patients in the cohort had prevalent TB at the initiation of ART. Of 54 possible paradoxical TB-associated IRIS cases, 35 fulfilled the INSHI case definitions (11% of TB cases). Conclusions INSHI standardised case definitions were used successfully in identifying paradoxical TB-associated IRIS in this cohort and resulted in a similar proportion of TB IRIS cases (11%) as that reported in previous studies from resource-limited settings (8-13%). This case definition should be evaluated prospectively. PMID:20160249
Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.; ...
2017-12-11
With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.
Grabenhenrich, L B; Reich, A; Bellach, J; Trendelenburg, V; Sprikkelman, A B; Roberts, G; Grimshaw, K E C; Sigurdardottir, S; Kowalski, M L; Papadopoulos, N G; Quirce, S; Dubakiene, R; Niggemann, B; Fernández-Rivas, M; Ballmer-Weber, B; van Ree, R; Schnadt, S; Mills, E N C; Keil, T; Beyer, K
2017-03-01
The conduct of oral food challenges as the preferred diagnostic standard for food allergy (FA) was harmonized over the last years. However, documentation and interpretation of challenge results, particularly in research settings, are not sufficiently standardized to allow valid comparisons between studies. Our aim was to develop a diagnostic toolbox to capture and report clinical observations in double-blind placebo-controlled food challenges (DBPCFC). A group of experienced allergists, paediatricians, dieticians, epidemiologists and data managers developed generic case report forms and standard operating procedures for DBPCFCs and piloted them in three clinical centres. The follow-up of the EuroPrevall/iFAAM birth cohort and other iFAAM work packages applied these methods. A set of newly developed questionnaire or interview items capture the history of FA. Together with sensitization status, this forms the basis for the decision to perform a DBPCFC, following a standardized decision algorithm. A generic form including details about severity and timing captures signs and symptoms observed during or after the procedures. In contrast to the commonly used dichotomous outcome FA vs no FA, the allergy status is interpreted in multiple categories to reflect the complexity of clinical decision-making. The proposed toolbox sets a standard for improved documentation and harmonized interpretation of DBPCFCs. By a detailed documentation and common terminology for communicating outcomes, these tools hope to reduce the influence of subjective judgment of supervising physicians. All forms are publicly available for further evolution and free use in clinical and research settings. © 2016 The Authors. Allergy Published by John Wiley & Sons Ltd.
Kalter, Henry D; Perin, Jamie; Black, Robert E
2016-06-01
Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0.16-0.33; best kappa = 0.29, 0.23-0.35; child: best CCC = 0.40, 0.19-0.45; best kappa = 0.29, 0.07-0.35). Expert algorithms in a hierarchy offer an accessible, automated method for assigning VA causes of death. Overall population-level accuracy is similar to that of more complex machine learning methods, but without need for a training data set from a prior validation study.
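The two headline metrics can be written compactly; the sketch below follows the commonly cited definitions (CSMF accuracy normalized by the worst possible error, and per-cause chance-corrected concordance), with toy cause fractions that are not the study's data.

```python
# Hedged sketch of the population- and individual-level metrics named in the abstract.
import numpy as np

def csmf_accuracy(true_csmf, pred_csmf):
    """1 - sum|true - pred| / (2 * (1 - min(true)))."""
    true_csmf, pred_csmf = np.asarray(true_csmf), np.asarray(pred_csmf)
    return 1.0 - np.abs(true_csmf - pred_csmf).sum() / (2.0 * (1.0 - true_csmf.min()))

def chance_corrected_concordance(true_labels, pred_labels, n_causes):
    """Per-cause sensitivity corrected for chance assignment among n_causes, averaged."""
    true_labels, pred_labels = np.asarray(true_labels), np.asarray(pred_labels)
    cccs = []
    for c in np.unique(true_labels):
        sens = np.mean(pred_labels[true_labels == c] == c)
        cccs.append((sens - 1.0 / n_causes) / (1.0 - 1.0 / n_causes))
    return float(np.mean(cccs))

print(csmf_accuracy([0.4, 0.35, 0.25], [0.45, 0.30, 0.25]))   # toy fractions
```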
Assessment of an undergraduate psychiatry course in an African setting
Baig, Benjamin J; Beaglehole, Anna; Stewart, Robert C; Boeing, Leonie; Blackwood, Douglas H; Leuvennink, Johan; Kauye, Felix
2008-01-01
Background International reports recommend improvement in the amount and quality of training for mental health workers in low- and middle-income countries. The Scotland-Malawi Mental Health Education Project (SMMHEP) has been established to support the teaching of psychiatry to medical students in the University of Malawi. While such supportive medical educational initiatives appear, anecdotally, to be of value, little quantitative evidence exists to demonstrate whether they can deliver comparable educational standards. This study aimed to assess the effectiveness of an undergraduate psychiatry course given by UK psychiatrists in Malawi by studying University of Malawi and Edinburgh University medical students' performance on an MCQ examination paper. Methods An undergraduate psychiatry course followed by an MCQ exam was delivered by the SMMHEP to 57 Malawi medical students. The same MCQ exam was given to 71 Edinburgh University medical students who subsequently sat their own Edinburgh University examination. Results There were no significant differences between Edinburgh students' performance on the Malawi exam and their own Edinburgh University exam (p = 0.65). This suggests that the Malawi exam is of a comparable standard to the Edinburgh exam. Malawi students' marks ranged from 52.4% to 84.6%. Importantly, 84.4% of Malawi students scored above 60% on their exam, which would equate to a hypothetical pass by UK university standards. Conclusion The support of an undergraduate course in an African setting by high-income-country specialists can attain a high percentage pass rate by UK standards. Although didactic teaching has been surpassed by more novel educational methods, in resource-poor countries it remains an effective and cost-effective method of attaining an important educational standard. PMID:18430237
Towards Formal Implementation of PUS Standard
NASA Astrophysics Data System (ADS)
Ilić, D.
2009-05-01
In an effort to promote the reuse of on-board and ground systems, ESA developed a standard for packet telemetry and telecommand - PUS. It defines a set of standard service models with the corresponding structures of the associated telemetry and telecommand packets. Various missions can then choose to implement those standard PUS services that best conform to their specific requirements. In this paper we propose a formal development (based on the Event-B method) of reusable service patterns, which can be instantiated for concrete applications. Our formal models allow us to formally express and verify specific service properties, including validation of various telecommand and telemetry packet structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neymark, J.; Kennedy, M.; Judkoff, R.
This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Harrison, Robert J.; Dixon, David A.
A parallel implementation of the spin-free one-electron Douglas-Kroll(-Hess) Hamiltonian (DKH) in NWChem is discussed. An efficient and accurate method to calculate DKH gradients is introduced. It is shown that the use of standard (non-relativistic) contracted basis sets can produce erroneous results for elements beyond the first row. The generation of DKH contracted cc-pVXZ (X = D, T, Q, 5) basis sets for H, He, B - Ne, Al - Ar, and Ga - Br will be discussed.
The importance of dew in watershed-management research
James W. Hornbeck
1964-01-01
Many studies, using various methods, have been made of dew deposition to determine its importance as a source of moisture. For example, Duvdevani (1947) used an optical method in which dew collected on a wooden block was compared with a set of standardized photographs of dew. Potvin (1949) exposed diamond-shaped glass plates at 45º to ground level, so that...
NASA Astrophysics Data System (ADS)
Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.
2010-12-01
Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools allows for publicizing the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called Spatial Data Access Tool (SDAT) that utilizes OGC Web services standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows for users to visualize the data set prior to download. Google Earth visualizations of the data set are also provided through SDAT. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set had ~10 fold increase in download through OGC Web service in comparison to the conventional FTP and WWW method of access. The increase in download suggests that users are not only finding the data sets they need but also able to consume them readily in the format they need.
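As an illustration of the kind of standards-based access described above, the following sketch issues a plain OGC WMS 1.1.1 GetMap request with the requests library; the endpoint URL and layer name are placeholders, not actual ORNL DAAC services.

```python
# Illustrative WMS GetMap request using only standard OGC WMS 1.1.1 parameters.
# The endpoint and layer name are hypothetical placeholders.
import requests

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "example_biomass_layer",      # hypothetical layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "-180,-90,180,90",
    "width": 1024,
    "height": 512,
    "format": "image/png",
}
resp = requests.get("https://example.org/wms", params=params, timeout=60)
resp.raise_for_status()
with open("coverage_preview.png", "wb") as fh:
    fh.write(resp.content)
```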
A Goniometry Paradigm Shift to Measure Burn Scar Contracture in Burn Patients
2017-10-01
test more extensively a recently designed Revised Goniometry (RG) method and compare it to Standard Goniometry (SG) used to measure burn scar...joint angle measurements will be found between SG techniques compared to RG techniques which incorporate CKM and CFU principles. Specific Aim 1: To... compare the average reduction in joint range of motion measured with the standard GM measurements to a newly conceived set of revised GM measurements in
Wu, Chunwei; Guan, Qingxiao; Wang, Shumei; Rong, Yueying
2017-01-01
Root of Panax ginseng C. A. Mey (Renshen in Chinese) is a famous Traditional Chinese Medicine. Ginsenosides are the major bioactive components. However, the shortage and high cost of some ginsenoside reference standards make quality control of P. ginseng difficult. A method, single standard for determination of multicomponents (SSDMC), was developed for the simultaneous determination of nine ginsenosides in P. ginseng (ginsenosides Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3 and Rd). The analytes were separated on an Inertsil ODS-3 C18 column (250 mm × 4.6 mm, 5 μm) with gradient elution of acetonitrile and water. The flow rate was 1 mL/min and the detection wavelength was set at 203 nm. The feasibility and accuracy of SSDMC were checked against the external standard method, and various high-performance liquid chromatography (HPLC) instruments and chromatographic conditions were investigated to verify its applicability. Using ginsenoside Rg1 as the internal reference substance, the contents of the other eight ginsenosides were calculated from conversion factors (F) by HPLC. The method was validated for linearity (r2 ≥ 0.9990), precision (relative standard deviation [RSD] ≤ 2.9%), accuracy (97.5%-100.8%, RSD ≤ 1.6%), repeatability, and stability. There was no significant difference between the SSDMC method and the external standard method. The new SSDMC method can be considered an ideal means of analyzing components for which reference standards are not readily available. In summary, an SSDMC method was established by HPLC for the simultaneous determination of nine ginsenosides in Panax ginseng (Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, Rd); various chromatographic conditions were investigated to verify the applicability of the conversion factors, and the feasibility and accuracy of SSDMC were checked against the external standard method. Abbreviations used: DRT: Different value of retention time; F: Conversion factor; HPLC: High-performance liquid chromatography; LOD: Limit of detection; LOQ: Limit of quantitation; PD: Percent difference; PPD: 20(S)-protopanaxadiol; PPT: 20(S)-protopanaxatriol; RSD: Relative standard deviation; SSDMC: Single standard for determination of multicomponents; TCM: Traditional Chinese Medicine.
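One common way to formulate the single-standard idea is through relative response factors; the sketch below assumes that reading of the method, and the calibration slopes and peak areas are invented numbers, not values from the paper.

```python
# A common formulation of the single-standard (conversion-factor) idea, hedged:
# only the reference compound is calibrated; other analytes use a relative factor.
def relative_factor(k_ref, k_analyte):
    """F = k_ref / k_analyte, the relative response factor vs. the reference standard."""
    return k_ref / k_analyte

def concentration_from_single_standard(area_analyte, k_ref, F):
    """Estimate an analyte's concentration from its peak area, the reference slope, and F."""
    return area_analyte * F / k_ref

k_rg1 = 1250.0   # hypothetical calibration slope for ginsenoside Rg1 (area per mg/mL)
k_re = 1090.0    # would normally come from a one-time full calibration
F_re = relative_factor(k_rg1, k_re)
print(concentration_from_single_standard(area_analyte=2180.0, k_ref=k_rg1, F=F_re))
```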
Tracking the hyoid bone in videofluoroscopic swallowing studies
NASA Astrophysics Data System (ADS)
Kellen, Patrick M.; Becker, Darci; Reinhardt, Joseph M.; van Daele, Douglas
2008-03-01
Difficulty swallowing, or dysphagia, has become a growing problem. Swallowing complications can lead to malnutrition, dehydration, respiratory infection, and even death. The current gold standard for analyzing and diagnosing dysphagia is the videofluoroscopic barium swallow study. In these studies, a fluoroscope is used to image the patient ingesting barium solutions of different volumes and viscosities. The hyoid bone anchors many key muscles involved in swallowing and plays a key role in the process. Abnormal hyoid bone motion during a swallow can indicate swallowing dysfunction. Currently in clinical settings, hyoid bone motion is assessed qualitatively, which can be subject to intra-rater and inter-rater bias. This paper presents a semi-automatic method for tracking the hyoid bone that makes quantitative analysis feasible. The user defines a template of the hyoid on one frame, and this template is tracked across subsequent frames. The matching phase is optimized by predicting the position of the template based on kinematics. An expert speech pathologist marked the position of the hyoid on each frame of ten studies to serve as the gold standard. Results from performing Bland-Altman analysis at a 95% confidence interval showed a bias of 0.0+/-0.08 pixels in x and -0.08+/-0.09 pixels in y between the manually-defined gold standard and the proposed method. The average Pearson's correlation between the gold standard and the proposed method was 0.987 in x and 0.980 in y. This paper also presents a method for automatically establishing a patient-centric coordinate system for the interpretation of hyoid motion. This coordinate system corrects for upper body patient motion during the study and identifies superior-inferior and anterior-posterior motion components. These tools make the use of quantitative hyoid motion analysis feasible in clinical and research settings.
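A minimal sketch of semi-automatic template tracking with a kinematic (constant-velocity) prediction of the search window, in the spirit of the method described; frames are assumed to be 8-bit grayscale arrays, and the function name, window size, and use of OpenCV are illustrative choices, not the authors' code.

```python
# Track a user-defined template across frames, predicting its position from velocity.
# Assumes the search window stays inside the frame; boundary handling is omitted.
import numpy as np
import cv2

def track_template(frames, template, start_xy, search=20):
    """Return the template's top-left (x, y) position in each frame."""
    positions = [np.asarray(start_xy, dtype=float)]
    velocity = np.zeros(2)
    th, tw = template.shape
    for frame in frames[1:]:
        predicted = positions[-1] + velocity          # kinematic prediction
        x0, y0 = np.maximum(predicted.astype(int) - search, 0)
        window = frame[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
        scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best_xy = cv2.minMaxLoc(scores)      # best_xy is (x, y) within the window
        new_pos = np.array([x0 + best_xy[0], y0 + best_xy[1]], dtype=float)
        velocity = new_pos - positions[-1]
        positions.append(new_pos)
    return positions
```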
Realization of the medium and high vacuum primary standard in CENAM, Mexico
NASA Astrophysics Data System (ADS)
Torres-Guzman, J. C.; Santander, L. A.; Jousten, K.
2005-12-01
A medium and high vacuum primary standard, based on the static expansion method, has been set up at Centro Nacional de Metrología (CENAM), Mexico. This system has four volumes and covers a measuring range of 1 × 10⁻⁵ Pa to 1 × 10³ Pa of absolute pressure. As part of its realization, a characterization was performed, which included volume calibrations, several tests and a bilateral key comparison. To determine the expansion ratios, two methods were applied: the gravimetric method and the method with a linearized spinning rotor gauge. The outgassing ratios for the whole system were also determined. A comparison was performed with Physikalisch-Technische Bundesanstalt (comparison SIM-Euromet.M.P-BK3). By means of this comparison, a link has been achieved with the Euromet comparison (Euromet.M.P-K1.b). As a result, it is concluded that the value obtained at CENAM is equivalent to the Euromet reference value, and therefore the design, construction and operation of CENAM's SEE-1 vacuum primary standard were successful.
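The static expansion principle itself reduces to a simple product of volume ratios (ideal gas, isothermal); the volumes in this back-of-the-envelope sketch are invented and are not CENAM's actual expansion ratios.

```python
# Back-of-the-envelope sketch of the static expansion principle (ideal gas, isothermal).
def expanded_pressure(p_start, stages):
    """Each stage is (V_small, V_total); pressure scales by V_small / V_total."""
    p = p_start
    for v_small, v_total in stages:
        p *= v_small / v_total
    return p

# Two hypothetical expansions starting from 1000 Pa in a 0.5 L starter volume.
stages = [(0.5, 50.5), (1.0, 101.0)]
print(expanded_pressure(1000.0, stages))   # roughly 0.1 Pa
```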
SU-F-BRD-10: Lung IMRT Planning Using Standardized Beam Bouquet Templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, L; Wu, Q J.; Yin, F
2014-06-15
Purpose: We investigate the feasibility of choosing from a small set of standardized templates of beam bouquets (i.e., entire beam configuration settings) for lung IMRT planning to improve planning efficiency and quality consistency, and also to facilitate automated planning. Methods: A set of beam bouquet templates is determined by learning from the beam angle settings in 60 clinical lung IMRT plans. A k-medoids cluster analysis method is used to classify the beam angle configuration into clusters. The value of the average silhouette width is used to determine the ideal number of clusters. The beam arrangements in each medoid of the resulting clusters are taken as the standardized beam bouquet for the cluster, with the corresponding case taken as the reference case. The resulting set of beam bouquet templates was used to re-plan 20 cases randomly selected from the database and the dosimetric quality of the plans was evaluated against the corresponding clinical plans by a paired t-test. The template for each test case was manually selected by a planner based on the match between the test and reference cases. Results: The dosimetric parameters (mean±S.D. in percentage of prescription dose) of the plans using 6 beam bouquet templates and those of the clinical plans, respectively, and the p-values (in parenthesis) are: lung Dmean: 18.8±7.0, 19.2±7.0 (0.28), esophagus Dmean: 32.0±16.3, 34.4±17.9 (0.01), heart Dmean: 19.2±16.5, 19.4±16.6 (0.74), spinal cord D2%: 47.7±18.8, 52.0±20.3 (0.01), PTV dose homogeneity (D2%-D99%): 17.1±15.4, 20.7±12.2 (0.03). The esophagus Dmean, cord D2% and PTV dose homogeneity are statistically better in the plans using the standardized templates, but the improvements (<5%) may not be clinically significant. The other dosimetric parameters are not statistically different. Conclusion: It's feasible to use a small number of standardized beam bouquet templates (e.g. 6) to generate plans with quality comparable to that of clinical plans. Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant by Varian Medical System.
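The clustering step can be illustrated with a naive k-medoids plus average silhouette width; the distance matrix below is random placeholder data rather than the clinical beam-angle distances, and the implementation is a toy PAM-style loop, not the analysis used in the abstract.

```python
# Toy illustration: choose a cluster count by average silhouette width over k-medoids.
import numpy as np
from sklearn.metrics import silhouette_score

def kmedoids(D, k, n_iter=100, seed=0):
    """Tiny PAM-style k-medoids on a precomputed, symmetric distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(n_iter):
        new = np.array([
            np.flatnonzero(labels == c)[
                np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))
            ]
            for c in range(k)
        ])
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
        labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

rng = np.random.default_rng(1)
X = rng.random((60, 60))
D = (X + X.T) / 2.0                 # placeholder symmetric "beam-angle" distances
np.fill_diagonal(D, 0.0)

for k in range(2, 9):
    _, labels = kmedoids(D, k)
    print(k, round(silhouette_score(D, labels, metric="precomputed"), 3))
```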
Audit activity and quality of completed audit projects in primary care in Staffordshire.
Chambers, R; Bowyer, S; Campbell, I
1995-01-01
OBJECTIVES--To survey audit activity in primary care and determine which practice factors are associated with completed audit; to survey the quality of completed audit projects. DESIGN--From April 1992 to June 1993 a team from the medical audit advisory group visited all general practices; a research assistant visited each practice to study the best audit project. Data were collected in structured interviews. SETTING--Staffordshire, United Kingdom. SUBJECTS--All 189 general practices. MAIN MEASURES--Audit activity using Oxford classification system. Quality of best audit project by assessing choice of topic; participation of practice staff; setting of standards; methods of data collection and presentation of results; whether a plan to make changes resulted from the audit; and whether changes led to the set standards being achieved. RESULTS--Audit information was available from 169 practices (89%). 44(26%) practices had carried out at least one full audit; 40(24%) had not started audit. Mean scores with the Oxford classification system were significantly higher with the presence of a practice manager (2.7(95% confidence interval 2.4 to 2.9) v 1.2(0.7 to 1.8), p < 0.0001) and with computerisation (2.8(2.5 to 3.1) v 1.4 (0.9 to 2.0), p < 0.0001), organised notes (2.6(2.1 to 3.0) v 1.7(7.2 to 2.2), p = 0.03), being a training practice (3.5(3.2 to 3.8) v 2.1(1.8 to 2.4), p < 0.0001), and being a partnership (2.8(2.6 to 3.0) v 1.5(1.1 to 2.0), p < 0.0001). Standards had been set in 62 of the 71 projects reviewed. Data were collected prospectively in 36 projects and retrospectively in 35. 16 projects entailed taking samples from a study population and 55 from the whole population. 50 projects had a written summary. Performance was less than the standards set or expected in 56 projects. 62 practices made changes as a result of the audit. 35 of the 53 that had reviewed the changes found that the original standards had been reached. CONCLUSIONS--Evaluation of audit in primary care should include evaluation of the methods used, whether deficiencies were identified, and whether changes were implemented to resolve any problems found. PMID:10153426
RAMESES publication standards: meta-narrative reviews
2013-01-01
Background Meta-narrative review is one of an emerging menu of new approaches to qualitative and mixed-method systematic review. A meta-narrative review seeks to illuminate a heterogeneous topic area by highlighting the contrasting and complementary ways in which researchers have studied the same or a similar topic. No previous publication standards exist for the reporting of meta-narrative reviews. This publication standard was developed as part of the RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) project. The project's aim is to produce preliminary publication standards for meta-narrative reviews. Methods We (a) collated and summarized existing literature on the principles of good practice in meta-narrative reviews; (b) considered the extent to which these principles had been followed by published reviews, thereby identifying how rigor may be lost and how existing methods could be improved; (c) used a three-round online Delphi method with an interdisciplinary panel of national and international experts in evidence synthesis, meta-narrative reviews, policy and/or publishing to produce and iteratively refine a draft set of methodological steps and publication standards; (d) provided real-time support to ongoing meta-narrative reviews and the open-access RAMESES online discussion list so as to capture problems and questions as they arose; and (e) synthesized expert input, evidence review and real-time problem analysis into a definitive set of standards. Results We identified nine published meta-narrative reviews, provided real-time support to four ongoing reviews and captured questions raised in the RAMESES discussion list. Through analysis and discussion within the project team, we summarized the published literature, and common questions and challenges into briefing materials for the Delphi panel, comprising 33 members. Within three rounds this panel had reached consensus on 20 key publication standards, with an overall response rate of 90%. Conclusion This project used multiple sources to draw together evidence and expertise in meta-narrative reviews. For each item we have included an explanation for why it is important and guidance on how it might be reported. Meta-narrative review is a relatively new method for evidence synthesis and as experience and methodological developments occur, we anticipate that these standards will evolve to reflect further theoretical and methodological developments. We hope that these standards will act as a resource that will contribute to improving the reporting of meta-narrative reviews. To encourage dissemination of the RAMESES publication standards, this article is co-published in the Journal of Advanced Nursing and is freely accessible on Wiley Online Library (http://www.wileyonlinelibrary.com/journal/jan). Please see related articles http://www.biomedcentral.com/1741-7015/11/21 and http://www.biomedcentral.com/1741-7015/11/22 PMID:23360661
Comparison of 1-step and 2-step methods of fitting microbiological models.
Jewell, Keith
2012-11-15
Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially) with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions and it is shown that 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be straightforwardly applied to any data set for which the 2-step method can be used and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
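The contrast between the two fitting strategies is easy to reproduce on simulated data. The sketch below uses a simple log-linear primary model with a Ratkowsky-type square-root secondary model; the model forms, parameter values, and noise level are illustrative assumptions, not the data sets analyzed in the paper.

```python
# 1-step vs 2-step fitting of a primary/secondary growth model on simulated data.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, logN0, mu):                     # primary model: log10 count vs time
    return logN0 + mu * t

def sqrt_mu(T, b, Tmin):                      # secondary model: sqrt(rate) vs temperature
    return b * (T - Tmin)

rng = np.random.default_rng(0)
temps = np.array([10.0, 15.0, 20.0, 25.0])
t = np.linspace(0, 24, 9)
b_true, Tmin_true, logN0_true = 0.05, 2.0, 3.0

# Simulated observations at every (temperature, time) pair.
T_all = np.repeat(temps, len(t))
t_all = np.tile(t, len(temps))
y_all = growth(t_all, logN0_true, sqrt_mu(T_all, b_true, Tmin_true) ** 2)
y_all += rng.normal(0, 0.15, size=y_all.shape)

# Two-step: fit a growth rate per temperature, then fit the secondary model to those rates.
mus = [curve_fit(growth, t, y_all[T_all == T])[0][1] for T in temps]
(b2, Tmin2), _ = curve_fit(sqrt_mu, temps, np.sqrt(mus))

# One-step: fit the combined primary+secondary model to all observations at once.
def combined(X, logN0, b, Tmin):
    T, time = X
    return growth(time, logN0, sqrt_mu(T, b, Tmin) ** 2)

(logN0_1, b1, Tmin1), _ = curve_fit(combined, (T_all, t_all), y_all, p0=[3, 0.05, 2])
print("2-step:", b2, Tmin2, "  1-step:", b1, Tmin1)
```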
Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method
NASA Astrophysics Data System (ADS)
Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.
2017-11-01
Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve colour calibration over the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + z_s) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
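A very loose sketch of the point-set idea (emphatically not the actual SPM algorithm): iteratively estimate the colour-space translation that aligns an uncalibrated point cloud to a reference cloud through nearest-neighbour correspondences. The synthetic two-colour data and offset are invented.

```python
# Iterative translation-only alignment of two point clouds via nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 2))                 # reference locus in two colours
offset_true = np.array([0.08, -0.05])                 # simulated zero-point offset
uncalibrated = reference[rng.choice(500, 300)] + offset_true + rng.normal(0, 0.02, (300, 2))

shift = np.zeros(2)
tree = cKDTree(reference)
for _ in range(20):
    _, idx = tree.query(uncalibrated - shift)         # current correspondences
    shift = np.mean(uncalibrated - reference[idx], axis=0)
print("estimated zero-point offset:", shift.round(3))
```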
Demands Upon Children Regarding Quality of Achievement: Standard Setting in Preschool Classrooms.
ERIC Educational Resources Information Center
Potter, Ellen F.
Focusing particularly on messages transmitted by socializing agents in preschool settings, this exploratory study investigates (1) the incidence of communication events in which standards for achievement are expressed, (2) the nature of the standards, and (3) variations across settings in the nature of standard-setting events. The relationship of…
Science and Art of Setting Performance Standards and Cutoff Scores in Kinesiology
ERIC Educational Resources Information Center
Zhu, Weimo
2013-01-01
Setting standards and cutoff scores is essential to any measurement and evaluation practice. Two evaluation frameworks, norm-referenced (NR) and criterion-referenced (CR), have often been used for setting standards. Although setting fitness standards based on the NR evaluation is relatively easy as long as a nationally representative sample can be…
Standard Setting: A Systematic Approach to Interpreting Student Learning.
ERIC Educational Resources Information Center
DeMars, Christine E.; Sundre, Donna L.; Wise, Steven L.
2002-01-01
Describes workshops designed to set standards for freshman technological literacy at James Madison University (Virginia). Results indicated that about 30% of incoming freshmen could meet the standards set initially; by the end of the year, an additional 50-60% could meet them. Provides recommendations for standard setting in a general education…
Sample handling for mass spectrometric proteomic investigations of human sera.
West-Nielsen, Mikkel; Høgdall, Estrid V; Marchiori, Elena; Høgdall, Claus K; Schou, Christian; Heegaard, Niels H H
2005-08-15
Proteomic investigations of sera are potentially of value for diagnosis, prognosis, choice of therapy, and disease activity assessment by virtue of discovering new biomarkers and biomarker patterns. Much debate focuses on the biological relevance and the need for identification of such biomarkers while less effort has been invested in devising standard procedures for sample preparation and storage in relation to model building based on complex sets of mass spectrometric (MS) data. Thus, development of standardized methods for collection and storage of patient samples together with standards for transportation and handling of samples are needed. This requires knowledge about how sample processing affects MS-based proteome analyses and thereby how nonbiological biased classification errors are avoided. In this study, we characterize the effects of sample handling, including clotting conditions, storage temperature, storage time, and freeze/thaw cycles, on MS-based proteomics of human serum by using principal components analysis, support vector machine learning, and clustering methods based on genetic algorithms as class modeling and prediction methods. Using spiking to artificially create differentiable sample groups, this integrated approach yields data that--even when working with sample groups that differ more than may be expected in biological studies--clearly demonstrate the need for comparable sampling conditions for samples used for modeling and for the samples that are going into the test set group. Also, the study emphasizes the difference between class prediction and class comparison studies as well as the advantages and disadvantages of different modeling methods.
Standardized development of computer software. Part 1: Methods
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.
A Mixed QM/MM Scoring Function to Predict Protein-Ligand Binding Affinity
Hayik, Seth A.; Dunbrack, Roland; Merz, Kenneth M.
2010-01-01
Computational methods for predicting protein-ligand binding free energy continue to be popular as a potential cost-cutting method in the drug discovery process. However, accurate predictions are often difficult to make, as estimates must be made for certain electronic and entropic terms in conventional force-field-based scoring functions. Mixed quantum mechanics/molecular mechanics (QM/MM) methods allow electronic effects for a small region of the protein to be calculated, treating the remaining atoms as a fixed-charge background for the active site. Such a semi-empirical QM/MM scoring function has been implemented in AMBER using DivCon and tested on a set of 23 metalloprotein-ligand complexes, where QM/MM methods provide a particular advantage in the modeling of the metal ion. The binding affinity of this set of proteins can be calculated with an R² of 0.64 and a standard deviation of 1.88 kcal/mol without fitting, and an R² of 0.71 and a standard deviation of 1.69 kcal/mol with fitted weighting of the individual scoring terms. In this study we explore various methods to calculate terms in the binding free energy equation, including entropy estimates and minimization standards. From these studies we found that using the rotatable-bond estimate of ligand entropy results in a reasonable R² of 0.63 without fitting. We also found that using the ESCF energy of the proteins without minimization resulted in an R² of 0.57 when using the rotatable-bond entropy estimate. PMID:21221417
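The "with fitted weighting" comparison amounts to fitting per-term weights against experimental affinities and summarising with R² and a residual standard deviation; the sketch below shows that generic step on made-up numbers, not the AMBER/DivCon quantities from the paper.

```python
import numpy as np

# Hypothetical scoring-term matrix: rows = complexes, columns = energy terms
# (e.g. interaction energy, solvation, a ligand-entropy estimate); values are made up.
rng = np.random.default_rng(2)
n_complexes, n_terms = 23, 3
terms = rng.normal(0.0, 2.0, (n_complexes, n_terms))
dG_exp = terms @ np.array([0.6, 0.3, 0.8]) + rng.normal(0.0, 1.5, n_complexes)

def summarize(pred, obs):
    r2 = np.corrcoef(pred, obs)[0, 1] ** 2      # squared Pearson correlation
    sd = (obs - pred).std(ddof=1)               # spread of the prediction errors
    return r2, sd

# "Without fitting": the score is the plain sum of the terms.
score_plain = terms.sum(axis=1)

# "With fitting": weights (plus an offset) fitted to experiment by least squares.
X = np.column_stack([terms, np.ones(n_complexes)])
coef, *_ = np.linalg.lstsq(X, dG_exp, rcond=None)
score_fitted = X @ coef

print("plain sum :", summarize(score_plain, dG_exp))
print("fitted    :", summarize(score_fitted, dG_exp))
```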
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimization of probability of detection (POD) demonstration experiments using the point estimate method. The optimization aims to achieve an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is taken as a conservative estimate of the flaw size with minimum 90% probability of detection at 95% confidence, denoted α90/95PE. The paper investigates how the range of flaw sizes relates to α90, the 90% probability flaw size, in order to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median (or average) of the 29 flaw sizes and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw-size tolerance about the mean flaw size, and flaw-size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
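The binomial arithmetic behind the 29-flaw, zero-miss demonstration can be sketched as follows; the PPD curve and the 29-of-29 confidence statement are standard binomial results, while the flaw-size optimization discussed in the paper is not reproduced here.

```python
from scipy.stats import binom

n = 29  # flaws in the demonstration set, all detections required (29-of-29 rule)

# Probability of passing the demonstration (PPD) as a function of the true POD
# at the demonstrated flaw size: PPD(p) = p**n for a zero-miss criterion.
for pod in (0.90, 0.95, 0.98):
    print(f"true POD {pod:.2f}: PPD = {pod ** n:.3f}")

# The 90/95 claim: passing 29 of 29 rules out POD <= 0.90 at ~95% confidence,
# because a technique with POD exactly 0.90 passes only ~4.7% of the time.
print("P(pass | POD = 0.90) =", binom.pmf(n, n, 0.90))   # == 0.90**29 ~ 0.047
```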
Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M
2017-10-01
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
NASA Astrophysics Data System (ADS)
Lehman, Landon; Martin, Adam
2016-02-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well-defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 N_f = 1 operators.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
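A compact sketch of the spectral (FFT-based) approach to the second method is given below: at each frequency the cross-spectral density matrix is Cholesky-factored and applied to complex Gaussian draws, and an inverse FFT returns the correlated series. The scaling convention and the example two-series coherence matrix are assumptions, not the thesis' exact formulation.

```python
import numpy as np

def simulate_correlated_series(csd, n, dt=1.0, rng=None):
    """Generate m zero-mean Gaussian series (columns) of length n whose
    cross-spectral density approximates csd(f), an m x m Hermitian matrix."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n, dt)
    m = csd(freqs[1]).shape[0]
    spec = np.zeros((freqs.size, m), dtype=complex)
    for k, f in enumerate(freqs[1:], start=1):          # skip the DC bin
        L = np.linalg.cholesky(csd(f))                   # factor S(f) = L L^H
        z = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
        spec[k] = L @ z * np.sqrt(n / (2 * dt))          # assumed scaling convention
    return np.fft.irfft(spec, n=n, axis=0)

# Example: two series with unit power spectra and frequency-independent coherence 0.6
def csd(f):
    return np.array([[1.0, 0.6], [0.6, 1.0]])

x = simulate_correlated_series(csd, n=4096, rng=3)
print("sample correlation:", np.corrcoef(x.T)[0, 1])     # should be near 0.6
```

Because each frequency bin is handled independently, the whole simulation is dominated by FFT cost, which is the source of the speed advantage reported for the second method.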
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.
2016-01-01
A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance) for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and patient plasma samples (n = 16) were tested. Quantitative accuracy (compared to nominal values) and variability were determined from the viral standard results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for the clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets appeared to have a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with some showing poorer correlations when testing samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685
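Digital PCR concentrations are conventionally derived from the fraction of positive partitions through a Poisson correction; the sketch below applies that standard formula with hypothetical partition counts and volumes and is not the vendor software's implementation.

```python
import math

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl):
    """Poisson estimate of target concentration from a digital PCR run."""
    p = n_positive / n_total                      # fraction of positive partitions
    lam = -math.log(1.0 - p)                      # mean copies per partition
    return lam / (partition_volume_nl * 1e-3)     # copies per microlitre

# hypothetical run: 20,000 partitions of 0.85 nL, 3,200 of them positive
print(f"{dpcr_copies_per_ul(3200, 20000, 0.85):.1f} copies/uL")
```

At low viral loads, few partitions are positive and the Poisson estimate becomes noisier, which is consistent with the greater variation reported above for low-viral-load samples.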
Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco
2018-03-01
This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
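A generic scikit-learn sketch of this kind of classifier comparison is shown below on synthetic data rather than the Hodgkin's lymphoma cohort; the feature counts and cross-validation settings are illustrative, and the Logic Learning Machine is not included because it is not an open scikit-learn estimator.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 130 samples, many features, only a few of them informative.
X, y = make_classification(n_samples=130, n_features=500, n_informative=10,
                           random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="linear", C=1.0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)          # scale, then classify
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:>13s}: {scores.mean():.3f} +/- {scores.std():.3f}")
```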
Crapanzano, John P.; Heymann, Jonas J.; Monaco, Sara; Nassar, Aziza; Saqi, Anjali
2014-01-01
Background: In the recent past, algorithms and recommendations to standardize the morphological, immunohistochemical and molecular classification of lung cancers on cytology specimens have been proposed, and several organizations have recommended cell blocks (CBs) as the preferred modality for molecular testing. Based on the literature, there are several different techniques available for CB preparation, suggesting that there is no standard. The aim of this study was to conduct a survey of CB preparation techniques utilized in various practice settings and analyze current issues, if any. Materials and Methods: A single E-mail with a link to an electronic survey was distributed to members of the American Society of Cytopathology and other pathologists. Questions pertaining to the participants' practice setting and CBs (volume, method, quality and satisfaction) were included. Results: Of 95 respondents, 90/95 (94%) completed the survey and comprise the study group. Most participants practice in a community hospital/private practice (44%) or academic center (41%). On average, 14 CBs (range 0-50; median 10) are prepared by a laboratory daily. Over 10 methods are utilized: plasma thrombin (33%), HistoGel (27%), the Cellient automated cell block system (8%), and others (31%). Forty of 90 (44%) respondents are either unsatisfied or only sometimes satisfied with their CB quality, with low cellular yield being the leading cause of dissatisfaction. There was no statistically significant association between the three most common CB preparation methods and satisfaction with quality. Discussion: Many are dissatisfied with their current method of CB preparation, and there is no consistent method to prepare CBs. In today's era of personalized medicine, with an increasing array of molecular tests being applied to cytological specimens, there is a need for a standardized protocol for CB optimization to enhance cellularity. PMID:24799951
Galaviz, Karla I.; Lobelo, Felipe; Joy, Elizabeth; Heath, Gregory W.; Hutber, Adrian; Estabrooks, Paul
2018-01-01
Introduction Exercise is Medicine (EIM) is an initiative that seeks to integrate physical activity assessment, prescription, and patient referral as a standard in patient care. Methods to assess this integration have lagged behind its implementation. Purpose and Objectives The purpose of this work is to provide a pragmatic framework to guide health care systems in assessing the implementation and impact of EIM. Evaluation Methods A working group of experts from health care, public health, and implementation science convened to develop an evaluation model based on the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework. The working group aimed to provide pragmatic guidance on operationalizing EIM across the different RE-AIM dimensions based on data typically available in health care settings. Results The Reach of EIM can be determined by the number and proportion of patients that were screened for physical inactivity, received brief counseling and/or a physical activity prescription, and were referred to physical activity resources. Effectiveness can be assessed through self-reported changes in physical activity, cardiometabolic biometric factors, incidence/burden of chronic disease, as well as health care utilization and costs. Adoption includes assessing the number and representativeness of health care settings that adopt any component of EIM, and Implementation involves assessing the extent to which health care teams implement EIM in their clinic. Finally, Maintenance involves assessing the long-term effectiveness (patient level) and sustained implementation (clinic level) of EIM in a given health care setting. Implications for Public Health The availability of a standardized, pragmatic, evaluation framework is critical in determining the impact of implementing EIM as a standard of care across health care systems. PMID:29752803
Liu, Fang; Ren, Dequan; Guo, De-an; Pan, Yifeng; Zhang, Huzhe; Hu, Ping
2008-03-01
In this paper, a new method for liquid chromatographic fingerprinting of saponins in Gynostemma pentaphyllum (THUNB.) MAKINO was developed. The G. pentaphyllum powder was defatted by Soxhlet extraction with petroleum ether, and gypenosides were then extracted from the residue with methanol by sonication. Column chromatography with macroporous resin was then used to separate and enrich the gypenosides. HPLC fingerprint analysis of the gypenosides fraction was performed on a C18 column with isocratic elution of 34% acetonitrile for 60 min at 0.8 ml/min; the injection volume was 20 microl and the detection wavelength was 203 nm. To compensate for the lack of standard compounds, an internal standard of ginsenoside Rb2 was added to the gypenosides fingerprint profile. The relative retention time (RRT) and relative peak area (RPA) of the gypenoside peaks in the fingerprint were calculated by setting ginsenoside Rb2 as the marker compound. The relative standard deviations (RSDs) of the RRTs of five common peaks vs. ginsenoside Rb2 in precision, repeatability and stability tests were less than 1%, and the RSDs of the RPAs were less than 5%. The validation data showed that the proposed internal-standard fingerprint method for G. pentaphyllum saponins is adequate, valid and applicable. Finally, three batches of crude drug samples collected from Shanxi province were tested by following the established method.
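The RRT, RPA, and RSD calculations described above reduce to simple ratios; the sketch below applies them to made-up peak tables, with the first row standing in for the ginsenoside Rb2 internal-standard peak.

```python
import numpy as np

# Hypothetical peak tables from replicate injections:
# column 0 = retention time (min), column 1 = peak area; row 0 is ginsenoside Rb2.
runs = [
    np.array([[22.4, 1050.0], [10.1, 320.0], [15.7, 610.0], [30.2, 150.0]]),
    np.array([[22.6, 1032.0], [10.2, 315.0], [15.9, 602.0], [30.5, 147.0]]),
    np.array([[22.5, 1068.0], [10.1, 327.0], [15.8, 623.0], [30.3, 153.0]]),
]

def rsd(values):
    values = np.asarray(values)
    return 100.0 * values.std(ddof=1) / values.mean()

rrt = np.array([run[1:, 0] / run[0, 0] for run in runs])   # relative retention times
rpa = np.array([run[1:, 1] / run[0, 1] for run in runs])   # relative peak areas

for i in range(rrt.shape[1]):
    print(f"peak {i + 1}: RSD(RRT) = {rsd(rrt[:, i]):.2f}%, "
          f"RSD(RPA) = {rsd(rpa[:, i]):.2f}%")
```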
NASA Astrophysics Data System (ADS)
Denny, Ellen G.; Gerst, Katharine L.; Miller-Rushing, Abraham J.; Tierney, Geraldine L.; Crimmins, Theresa M.; Enquist, Carolyn A. F.; Guertin, Patricia; Rosemartin, Alyssa H.; Schwartz, Mark D.; Thomas, Kathryn A.; Weltzin, Jake F.
2014-05-01
Phenology offers critical insights into the responses of species to climate change; shifts in species' phenologies can result in disruptions to the ecosystem processes and services upon which human livelihood depends. To better detect such shifts, scientists need long-term phenological records covering many taxa and across a broad geographic distribution. To date, phenological observation efforts across the USA have been geographically limited and have used different methods, making comparisons across sites and species difficult. To facilitate coordinated cross-site, cross-species, and geographically extensive phenological monitoring across the nation, the USA National Phenology Network has developed in situ monitoring protocols standardized across taxonomic groups and ecosystem types for terrestrial, freshwater, and marine plant and animal taxa. The protocols include elements that allow enhanced detection and description of phenological responses, including assessment of phenological "status", or the ability to track presence-absence of a particular phenophase, as well as standards for documenting the degree to which phenological activity is expressed in terms of intensity or abundance. Data collected by this method can be integrated with historical phenology data sets, enabling the development of databases for spatial and temporal assessment of changes in status and trends of disparate organisms. To build a common, spatially, and temporally extensive multi-taxa phenological data set available for a variety of research and science applications, we encourage scientists, resources managers, and others conducting ecological monitoring or research to consider utilization of these standardized protocols for tracking the seasonal activity of plants and animals.
Denny, Ellen G.; Gerst, Katharine L.; Miller-Rushing, Abraham J.; Tierney, Geraldine L.; Crimmins, Theresa M.; Enquist, Carolyn A.F.; Guertin, Patricia; Rosemartin, Alyssa H.; Schwartz, Mark D.; Thomas, Kathryn A.; Weltzin, Jake F.
2014-01-01
Phenology offers critical insights into the responses of species to climate change; shifts in species’ phenologies can result in disruptions to the ecosystem processes and services upon which human livelihood depends. To better detect such shifts, scientists need long-term phenological records covering many taxa and across a broad geographic distribution. To date, phenological observation efforts across the USA have been geographically limited and have used different methods, making comparisons across sites and species difficult. To facilitate coordinated cross-site, cross-species, and geographically extensive phenological monitoring across the nation, the USA National Phenology Network has developed in situ monitoring protocols standardized across taxonomic groups and ecosystem types for terrestrial, freshwater, and marine plant and animal taxa. The protocols include elements that allow enhanced detection and description of phenological responses, including assessment of phenological “status”, or the ability to track presence–absence of a particular phenophase, as well as standards for documenting the degree to which phenological activity is expressed in terms of intensity or abundance. Data collected by this method can be integrated with historical phenology data sets, enabling the development of databases for spatial and temporal assessment of changes in status and trends of disparate organisms. To build a common, spatially, and temporally extensive multi-taxa phenological data set available for a variety of research and science applications, we encourage scientists, resources managers, and others conducting ecological monitoring or research to consider utilization of these standardized protocols for tracking the seasonal activity of plants and animals.
Santos, Sara; Oliveira, Manuela; Amorim, António; van Asch, Barbara
2014-11-01
The grapevine (Vitis vinifera subsp. vinifera) is one of the most important agricultural crops worldwide. A long interest in the historical origins of ancient and cultivated current grapevines, as well as the need to establish phylogenetic relationships and parentage, solve homonymies and synonymies, fingerprint cultivars and clones, and assess the authenticity of plants and wines has encouraged the development of genetic identification methods. STR analysis is currently the most commonly used method for these purposes. A large dataset of grapevines genotypes for many cultivars worldwide has been produced in the last decade using a common set of recommended dinucleotide nuclear STRs. This type of marker has been replaced by long core-repeat loci in standardized state-of-the-art human forensic genotyping. The first steps toward harmonized grapevine genotyping have already been taken to bring the genetic identification methods closer to human forensic STR standards by previous authors. In this context, we bring forward a set of basic suggestions that reinforce the need to (i) guarantee trueness-to-type of the sample; (ii) use the long core-repeat markers; (iii) verify the specificity and amplification consistency of PCR primers; (iv) sequence frequent alleles and use these standardized allele ladders; (v) consider mutation rates when evaluating results of STR-based parentage and pedigree analysis; (vi) genotype large and representative samples in order to obtain allele frequency databases; (vii) standardize genotype data by establishing allele nomenclature based on repeat number to facilitate information exchange and data compilation. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Intra-operative adjustment of standard planes in C-arm CT image data.
Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana
2016-03-01
With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest but the mobility of the C-arm causes the need for a time-consuming manual adjustment. In this article, we present an automatic plane adjustment at the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) leads also to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (standard setting of device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we developed the first steps for a fully automatic assistance system for the assessment of C-arm CT images.
Denny, Ellen G; Gerst, Katharine L; Miller-Rushing, Abraham J; Tierney, Geraldine L; Crimmins, Theresa M; Enquist, Carolyn A F; Guertin, Patricia; Rosemartin, Alyssa H; Schwartz, Mark D; Thomas, Kathryn A; Weltzin, Jake F
2014-05-01
Phenology offers critical insights into the responses of species to climate change; shifts in species' phenologies can result in disruptions to the ecosystem processes and services upon which human livelihood depends. To better detect such shifts, scientists need long-term phenological records covering many taxa and across a broad geographic distribution. To date, phenological observation efforts across the USA have been geographically limited and have used different methods, making comparisons across sites and species difficult. To facilitate coordinated cross-site, cross-species, and geographically extensive phenological monitoring across the nation, the USA National Phenology Network has developed in situ monitoring protocols standardized across taxonomic groups and ecosystem types for terrestrial, freshwater, and marine plant and animal taxa. The protocols include elements that allow enhanced detection and description of phenological responses, including assessment of phenological "status", or the ability to track presence-absence of a particular phenophase, as well as standards for documenting the degree to which phenological activity is expressed in terms of intensity or abundance. Data collected by this method can be integrated with historical phenology data sets, enabling the development of databases for spatial and temporal assessment of changes in status and trends of disparate organisms. To build a common, spatially, and temporally extensive multi-taxa phenological data set available for a variety of research and science applications, we encourage scientists, resources managers, and others conducting ecological monitoring or research to consider utilization of these standardized protocols for tracking the seasonal activity of plants and animals.
Ganchev, Philip; Malehorn, David; Bigbee, William L.; Gopalakrishnan, Vanathi
2013-01-01
We present a novel framework for integrative biomarker discovery from related but separate data sets created in biomarker profiling studies. The framework takes prior knowledge in the form of interpretable, modular rules, and uses them during the learning of rules on a new data set. The framework consists of two methods of transfer of knowledge from source to target data: transfer of whole rules and transfer of rule structures. We evaluated the methods on three pairs of data sets: one genomic and two proteomic. We used standard measures of classification performance and three novel measures of amount of transfer. Preliminary evaluation shows that whole-rule transfer improves classification performance over using the target data alone, especially when there is more source data than target data. It also improves performance over using the union of the data sets. PMID:21571094
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
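The contrast between a model-based and a robust (sandwich-type) variance can be sketched for the simplest case of an inverse-variance weighted pooled estimate; this illustrates only the generic idea and is not the Riley working model or the robust estimator proposed in the paper.

```python
import numpy as np

def pooled_with_variances(effects, variances):
    """Inverse-variance pooled estimate with model-based and robust variances."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * y) / np.sum(w)
    var_model = 1.0 / np.sum(w)                                        # trusts the model
    var_robust = np.sum(w ** 2 * (y - pooled) ** 2) / np.sum(w) ** 2   # sandwich-type
    return pooled, var_model, var_robust

# toy study-level estimates and within-study variances
effects = [0.30, 0.10, 0.45, 0.25, 0.05, 0.38]
variances = [0.04, 0.03, 0.05, 0.02, 0.04, 0.03]
est, vm, vr = pooled_with_variances(effects, variances)
print(f"pooled = {est:.3f}, model SE = {vm ** 0.5:.3f}, robust SE = {vr ** 0.5:.3f}")
```

The robust variance uses the observed scatter of the study estimates around the pooled value, so it remains asymptotically valid when the assumed weights (the "model") are wrong, which is the property the paper exploits.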
Aronson, Jeffrey K
2016-01-01
Objective To examine how misspellings of drug names could impede searches for published literature. Design Database review. Data source PubMed. Review methods The study included 30 drug names that are commonly misspelt on prescription charts in hospitals in Birmingham, UK (test set), and 30 control names randomly chosen from a hospital formulary (control set). The following definitions were used: standard names—the international non-proprietary names, variant names—deviations in spelling from standard names that are not themselves standard names in English language nomenclature, and hidden reference variants—variant spellings that identified publications in textword (tw) searches of PubMed or other databases, and which were not identified by textword searches for the standard names. Variant names were generated from standard names by applying letter substitutions, omissions, additions, transpositions, duplications, deduplications, and combinations of these. Searches were carried out in PubMed (30 June 2016) for “standard name[tw]” and “variant name[tw] NOT standard name[tw].” Results The 30 standard names of drugs in the test set gave 325 979 hits in total, and 160 hidden reference variants gave 3872 hits (1.17%). The standard names of the control set gave 470 064 hits, and 79 hidden reference variants gave 766 hits (0.16%). Letter substitutions (particularly i to y and vice versa) and omissions together accounted for 2924 (74%) of the variants. Amitriptyline (8530 hits) yielded 18 hidden reference variants (179 (2.1%) hits). Names ending in “in,” “ine,” or “micin” were commonly misspelt. Failing to search for hidden reference variants of “gentamicin,” “amitriptyline,” “mirtazapine,” and “trazodone” would miss at least 19 systematic reviews. A hidden reference variant related to Christmas, “No-el”, was rare; variants of “X-miss” were rarer. Conclusion When performing searches, researchers should include misspellings of drug names among their search terms. PMID:27974346
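The variant-generation rules described in the review methods (substitutions, omissions, additions, transpositions, duplications, deduplications) can be sketched with plain string manipulation; the snippet below implements a subset of those rules, and the example name is only illustrative.

```python
def spelling_variants(name):
    """Generate simple misspelling candidates of a drug name (subset of rules)."""
    name = name.lower()
    variants = set()
    # letter omissions
    variants.update(name[:i] + name[i + 1:] for i in range(len(name)))
    # adjacent transpositions
    variants.update(name[:i] + name[i + 1] + name[i] + name[i + 2:]
                    for i in range(len(name) - 1))
    # i <-> y substitutions (reported as one of the most productive rules)
    for i, ch in enumerate(name):
        if ch == "i":
            variants.add(name[:i] + "y" + name[i + 1:])
        elif ch == "y":
            variants.add(name[:i] + "i" + name[i + 1:])
    # letter duplications
    variants.update(name[:i + 1] + name[i] + name[i + 1:] for i in range(len(name)))
    variants.discard(name)
    return sorted(variants)

print(len(spelling_variants("amitriptyline")), "candidate variants")
print(spelling_variants("amitriptyline")[:5])
```

Each candidate would then be searched as "variant[tw] NOT standard name[tw]" to find hidden reference variants, as described above.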
76 FR 54293 - Review of National Ambient Air Quality Standards for Carbon Monoxide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
...This rule is being issued at this time as required by a court order governing the schedule for completion of this review of the air quality criteria and the national ambient air quality standards (NAAQS) for carbon monoxide (CO). Based on its review, the EPA concludes the current primary standards are requisite to protect public health with an adequate margin of safety, and is retaining those standards. After review of the air quality criteria, EPA further concludes that no secondary standard should be set for CO at this time. EPA is also making changes to the ambient air monitoring requirements for CO, including those related to network design, and is updating, without substantive change, aspects of the Federal reference method.
Increasing patient safety and efficiency in transfusion therapy using formal process definitions.
Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L
2007-01-01
The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.
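One lightweight way to make the distinction concrete is to encode a drastically simplified transfusion step sequence as an explicit process definition that names its exception paths; the toy sketch below is illustrative only and is not the formal process language or the actual transfusion process used by the authors.

```python
# Each step maps to (next step on success, next step on the named exception).
PROCESS = {
    "verify_order":         ("check_patient_id", "contact_prescriber"),
    "check_patient_id":     ("verify_blood_product", "stop_and_report"),
    "verify_blood_product": ("administer", "return_to_blood_bank"),
    "administer":           ("monitor_patient", "manage_reaction"),
    "monitor_patient":      ("complete_documentation", "manage_reaction"),
}
TERMINAL = {"complete_documentation", "stop_and_report",
            "contact_prescriber", "return_to_blood_bank", "manage_reaction"}

def run(outcomes):
    """Walk the process given a dict of step -> True (success) / False (exception)."""
    step = "verify_order"
    trace = [step]
    while step not in TERMINAL:
        ok = outcomes.get(step, True)
        step = PROCESS[step][0 if ok else 1]
        trace.append(step)
    return trace

# normal path, and a path where the bedside identity check fails
print(run({}))
print(run({"check_patient_id": False}))
```

Unlike a flow chart, the definition is complete by construction: every step must declare what happens on failure, which is what makes exhaustive analysis of exceptional situations possible.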
Graham, Tanya; Rose, Diana; Murray, Joanna; Ashworth, Mark; Tylee, André
2014-01-01
Objectives To develop user-generated quality standards for young people with mental health problems in primary care using a participatory research model. Methods 50 young people aged 16–25 from community settings and primary care participated in focus groups and interviews about their views and experiences of seeking help for mental health problems in primary care, cofacilitated by young service users and repeated to ensure respondent validation. A second group of young people, also aged 16–25, who had sought help for any mental health problem from primary care or secondary care within the last 5 years, were trained as focus-group cofacilitators (n=12); they developed the quality standards from the qualitative data and participated in four nominal groups (n=28). Results 46 quality standards were developed and ranked by young service users. Agreement was defined as 100% of scores within a two-point region. Group consensus existed for 16 quality standards representing the following aspects of primary care: better advertising and information (three); improved competence through mental health training and skill mix within the practice (two); alternatives to medication (three); improved referral protocols (three); and specific questions and reassurances (five). Alternatives to medication and specific questions and reassurances are aspects of quality that have not been previously reported. Conclusions We have demonstrated the feasibility of using participatory research methods to develop user-generated quality standards. The development of patient-generated quality standards may offer a more formal method of incorporating the views of service users into quality improvement initiatives. This method can be adapted to generate quality standards applicable to other patient groups. PMID:24920648
Aaltonen, T; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Lockyer, N S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; 
Margaroli, F; Marino, C; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Fernandez, P Movilla; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S
2010-12-17
We search for the standard model Higgs boson produced with a Z boson in 4.1 fb⁻¹ of integrated luminosity collected with the CDF II detector at the Tevatron. In events consistent with the decay of the Higgs boson to a bottom-quark pair and the Z boson to electrons or muons, we set 95% credibility level upper limits on the ZH production cross section multiplied by the H → bb branching ratio. Improved analysis methods enhance signal sensitivity by 20% relative to previous searches. At a Higgs boson mass of 115 GeV/c² we set a limit of 5.9 times the standard model cross section.
Neural response in obsessive-compulsive washers depends on individual fit of triggers
Baioui, Ali; Pilgramm, Juliane; Merz, Christian J.; Walter, Bertram; Vaitl, Dieter; Stark, Rudolf
2013-01-01
Background: Patients with obsessive-compulsive disorder (OCD) have highly idiosyncratic triggers. To fully understand which role this idiosyncrasy plays in the neurobiological mechanisms behind OCD, it is necessary to elucidate the impact of individualization regarding the applied investigation methods. This functional magnetic resonance imaging (fMRI) study explores the neural correlates of contamination/washing-related OCD with a highly individualized symptom provocation paradigm. Additionally, it is the first study to directly compare individualized and standardized symptom provocation. Methods: Nineteen patients with washing compulsions created individual OCD hierarchies, which later served as instructions to photograph their own individualized stimulus sets. The patients and 19 case-by-case matched healthy controls participated in a symptom provocation fMRI experiment with individualized and standardized stimulus sets created for each patient. Results: OCD patients compared to healthy controls displayed stronger activation in the basal ganglia (nucleus accumbens, nucleus caudatus, pallidum) for individualized symptom provocation. Using standardized symptom provocation, this group comparison led to stronger activation in the nucleus caudatus. The direct comparison of between-group effects for both symptom provocation approaches revealed stronger activation of the orbitofronto-striatal network for individualized symptom provocation. Conclusions: The present study provides insight into the differential impact of individualized and standardized symptom provocation on the orbitofronto-striatal network of OCD washers. Behavioral and neural responses imply a higher symptom-specificity of individualized symptom provocation. PMID:23630478
Recommendations for Selecting Drug-Drug Interactions for Clinical Decision Support
Tilson, Hugh; Hines, Lisa E.; McEvoy, Gerald; Weinstein, David M.; Hansten, Philip D.; Matuszewski, Karl; le Comte, Marianne; Higby-Baker, Stefanie; Hanlon, Joseph T.; Pezzullo, Lynn; Vieson, Kathleen; Helwig, Amy L.; Huang, Shiew-Mei; Perre, Anthony; Bates, David W.; Poikonen, John; Wittie, Michael A.; Grizzle, Amy J.; Brown, Mary; Malone, Daniel C.
2016-01-01
Purpose To recommend principles for including drug-drug interactions (DDIs) in clinical decision support. Methods A conference series was conducted to improve clinical decision support (CDS) for DDIs. The Content Workgroup met monthly by webinar from January 2013 to February 2014, with two in-person meetings to reach consensus. The workgroup consisted of 20 experts in pharmacology, drug information, and CDS from academia, government agencies, health information (IT) vendors, and healthcare organizations. Workgroup members addressed four key questions: (1) What process should be used to develop and maintain a standard set of DDIs?; (2) What information should be included in a knowledgebase of standard DDIs?; (3) Can/should a list of contraindicated drug pairs be established?; and (4) How can DDI alerts be more intelligently filtered? Results To develop and maintain a standard set of DDIs for CDS in the United States, we recommend a transparent, systematic, and evidence-driven process with graded recommendations by a consensus panel of experts and oversight by a national organization. We outline key DDI information needed to help guide clinician decision-making. We recommend judicious classification of DDIs as contraindicated, as only a small set of drug combinations are truly contraindicated. Finally, we recommend more research to identify methods to safely reduce repetitive and less relevant alerts. Conclusion A systematic ongoing process is necessary to select DDIs for alerting clinicians. We anticipate that our recommendations can lead to consistent and clinically relevant content for interruptive DDIs, and thus reduce alert fatigue and improve patient safety. PMID:27045070
Data preprocessing methods of FT-NIR spectral data for the classification of cooking oil
NASA Astrophysics Data System (ADS)
Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli
2014-12-01
This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. This work therefore investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling, and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods affect exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into training and test sets using the Duplex method, with the number of samples in each class kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was then employed as the variable-selection method to determine which variables are significant for the classification models. Data pre-processing was evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance; the effects of row scaling, column standardisation, and single scaling with SNV are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
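A generic sketch of the pre-processing chain described above (Savitzky-Golay derivative, SNV row scaling, column standardisation, then PCA) is shown below on synthetic spectra; the window length, polynomial order, and data dimensions are placeholders, not the settings used in the study.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)
n_samples, n_wavenumbers = 40, 600
spectra = np.cumsum(rng.normal(0, 0.01, (n_samples, n_wavenumbers)), axis=1) + 1.0

# Savitzky-Golay first derivative along the wavenumber axis
deriv = savgol_filter(spectra, window_length=15, polyorder=2, deriv=1, axis=1)

# SNV: centre and scale each spectrum (row) individually
snv = (deriv - deriv.mean(axis=1, keepdims=True)) / deriv.std(axis=1, keepdims=True)

# Column standardisation: centre and scale each wavenumber across samples
col = (snv - snv.mean(axis=0)) / snv.std(axis=0)

# PCA scores via SVD (columns are already centred)
U, s, Vt = np.linalg.svd(col, full_matrices=False)
scores = U * s
explained = s ** 2 / np.sum(s ** 2)
print("first two PCs explain", explained[:2].round(3))
print("PC score matrix shape:", scores.shape)
```

Swapping the scaling steps in or out of this chain is exactly the kind of comparison whose effect the abstract reports via mSW and %CC.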
21 CFR 10.95 - Participation in outside standard-setting activities.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., professional societies, and academic institutions. (4) An FDA employee appointed as the liaison representative... associations, professional societies, and academic institutions. (5) An FDA employee appointed as the liaison... validation of analytical methods for regulatory use, drafting uniform laws and regulations, and the...
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double-row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question was whether the new anchor and technique compare in biomechanical strength with standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The New Double Row Equivalent method demonstrated increased survival and greater ultimate strength (415 N) compared with the other test groups, with contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.
Niu, J L; Burnett, J
2001-06-01
Methods, standards, and regulations aimed at reducing indoor air pollution from building materials are critically reviewed. These are classified as content control and emission control. Methods and standards can be found in both of these classes. In the regulatory domain, only content control is enforced in some countries and regions, and asbestos is the only building material banned from building use. The controlled pollutants include heavy metals, radon, formaldehyde, and volatile organic compounds (VOCs). Emission rate control based upon environmental chamber testing is largely a matter of voluntary product labeling and ranking, and mainly targets formaldehyde and VOC emissions. It is suggested that radon emission from building materials should be subject to similar emission rate control. A comprehensive set of criteria and a credit-awarding scheme that encourages the use of low-emission building materials is synthesized, and how this scheme can be practiced in building design is proposed and discussed.
Earth Science for Educators: Preparing 7-12 Teachers for Standards-based, Inquiry Instruction
NASA Astrophysics Data System (ADS)
Sloan, H.
2002-05-01
"Earth Science for Educators" is an innovative, standards-based, graduate level teacher education curriculum that presents science content and pedagogic technique in parallel. The curriculum calls upon the resources and expertise of the American Museum of Natural History (AMNH) to prepare novice New York City teachers for teaching Earth Science. One of the goals of teacher education is to assure and facilitate science education reform through preparation of K-12 teachers who understand and are able to implement standard-based instruction. Standards reflect not only the content knowledge students are expected to attain but also the science skills and dispositions towards science they are expected to develop. Melding a list of standards with a curriculum outline to create inquiry-based classroom instruction that reaches a very diverse population of learners is extremely challenging. "Earth Science for Educators" helps novice teachers make the link between standards and practice by constantly connecting standards with instruction they receive and activities they carry out. Development of critical thinking and enthusiasm for inquiry is encouraged through engaging experience and contact with scientists and their work. Teachers are taught Earth systems science content through modeling of a wide variety of instruction and assessment methods based upon authentic scientific inquiry and aimed at different learning styles. Use of fieldwork and informal settings, such as the Museum, familiarizes novice teachers with ways of drawing on community resources for content and instructional settings. Metacognitive reflection that articulates standards, practice, and the teachers' own learning experience help draw out teachers' insights into their students' learning. The innovation of bring science content together with teaching methods is key to preparing teachers for standards-based, inquiry instruction. This curriculum was successfully piloted with a group of 28 novice teachers as part of the AMNH-City University of New York partnership and the CUNY Teaching Opportunity Program Scholarship. Reactions and feedback from program coordinators and teachers have been extremely positive during the year and a half since its implementation.
Mushi, Martha Fidelis; Paterno, Laurent; Tappe, Dennis; Deogratius, Anna Pendo; Seni, Jeremiah; Moremi, Nyambura; Mirambo, Mariam Mwijuma; Mshana, Stephen Eliatosha
2014-01-01
Introduction: Campylobacter species are recognized as a major cause of acute gastroenteritis in humans throughout the world. The diagnosis is mainly based on stool culture. This study was done to evaluate the effectiveness of staining methods (Gram stain using 0.3% carbol fuchsin as counter stain, and 1% carbol fuchsin direct stain) versus culture as the gold standard. Methods: A total of 300 children attending Bugando Medical Centre (BMC) and the Sekou Toure regional hospital with acute watery diarrhea were enrolled. Two sets of slides were prepared: the first set was stained with 1% carbol fuchsin for 30 seconds, and the second set was stained with Gram's stain using 0.3% carbol fuchsin as counter stain for five minutes. Concurrently, stool samples were inoculated on selective Preston agar. Results: Of 300 stool specimens, 14 (4.7%) showed positive culture after 48 hours of incubation and 28 (9.3%) showed typical morphology of Campylobacter species by both Gram stain and direct stain. The sensitivity of the Gram stain using 0.3% carbol fuchsin as counter stain and the 1% carbol fuchsin simple stain versus culture as the gold standard was 64.3%, with a specificity of 93.4%. The positive and negative predictive values were 32.1% and 98.2%, respectively. Conclusion: The detection of Campylobacter by 1% carbol fuchsin is simple, inexpensive, and fast, with both a high sensitivity and specificity. Laboratories in settings with a high prevalence of campylobacteriosis and/or limited resources can employ the 1% carbol fuchsin direct stain to detect Campylobacter infections. PMID:25995788
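For reference, a small sketch of how the reported diagnostic indices relate to a 2x2 table against the culture gold standard. The counts below are an illustrative reconstruction consistent with the figures in the abstract (14 culture-positive, 9 of them stain-positive; 286 culture-negative, 19 of them stain-positive), not data taken from the paper.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard 2x2 diagnostic test indices versus a gold standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Illustrative counts consistent with the abstract's reported percentages.
print([round(100 * v, 1) for v in diagnostic_indices(tp=9, fp=19, fn=5, tn=267)])
# -> [64.3, 93.4, 32.1, 98.2]
```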
McCaffrey, Daniel; Ramchand, Rajeev; Hunter, Sarah B.; Suttorp, Marika
2012-01-01
We develop a new tool for assessing the sensitivity of findings on treatment effectiveness to differential follow-up rates in the two treatment conditions being compared. The method censors the group with the higher response rate to create a synthetic respondent group that is then compared with the observed cases in the other condition to estimate a treatment effect. Censoring is done under various assumptions about the strength of the relationship between follow-up and outcomes to determine whether informative differential dropout could alter inferences relative to estimates from models that assume the data are missing at random. The method provides an intuitive measure for understanding the strength of the association between outcomes and dropout that would be required to alter inferences about treatment effects. Our approach is motivated by translational research in which treatments found to be effective under experimental conditions are tested in standard treatment conditions. In such applications, follow-up rates in the experimental setting are likely to be substantially higher than in the standard setting, especially when observational data are used in the evaluation. We test the method on a case study evaluating the effectiveness of an evidence-supported adolescent substance abuse treatment program (Motivational Enhancement Therapy/Cognitive Behavioral Therapy-5 [MET/CBT-5]) delivered by community-based treatment providers relative to its performance in a controlled research trial. In this case study, follow-up rates in the community-based settings were extremely low (54%) compared to the experimental setting (95%), giving rise to concerns about non-ignorable dropout. PMID:22956890
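A toy sketch of the censoring idea described above: the arm with the higher follow-up rate is trimmed down to the other arm's rate, preferentially dropping cases according to an assumed association between dropout and outcome, and the treatment effect is recomputed. The simulated outcomes, variable names, and the simple rank-based censoring rule are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def censored_effect(y_exp, y_std, target_rate, rho):
    """Trim the high-follow-up arm to target_rate, dropping cases via a latent score
    that mixes outcome and noise (rho controls how informative dropout is), then
    compare group means."""
    n_keep = int(round(target_rate * len(y_exp)))
    score = rho * (y_exp - y_exp.mean()) / y_exp.std() \
            + (1 - abs(rho)) * rng.normal(size=len(y_exp))
    keep = np.argsort(score)[-n_keep:]          # retain the cases most likely to respond
    return y_exp[keep].mean() - y_std.mean()

# Illustrative outcomes: experimental arm observed at 95%, standard arm at 54%.
y_exp = rng.normal(loc=0.5, scale=1.0, size=950)
y_std = rng.normal(loc=0.0, scale=1.0, size=540)

for rho in (0.0, 0.3, 0.6):
    print(rho, round(censored_effect(y_exp, y_std, target_rate=0.54, rho=rho), 3))
```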
X-ray Fluorescence Spectroscopy Study of Coating Thickness and Base Metal Composition
NASA Technical Reports Server (NTRS)
Rolin, T. D.; Leszczuk, Y.
2008-01-01
For electrical, electronic, and electromechanical (EEE) parts to be approved for space use, they must be able to meet safety standards approved by NASA. A fast, reliable, and precise method is needed to make sure these standards are met. Many EEE parts are coated in gold (Au) and nickel (Ni), and the coating thickness is crucial to a part's performance. A nondestructive method that is efficient in measuring coating thickness is x-ray fluorescence (XRF) spectroscopy. The XRF spectrometer is a machine designed to measure layer thickness and composition of single or multilayered samples. By understanding the limitations in the collection of the data by this method, accurate composition and thickness measurements can be obtained for samples with Au and Ni coatings. To understand the limitations of the data, measurements were taken with the XRF spectrometer and compared to true values of standard reference materials (SRM) that were National Institute of Standards and Technology (NIST) traceable. For every sample, six different parameters were varied to understand measurement error: coating/substrate combination, number of layers, counting interval, collimator size, coating thickness, and test area location. Each measurement was taken in accordance with standards set by the American Society for Testing and Materials (ASTM) International Standard B 568.
Dolan, Anthony; Burgess, Catherine M; Barry, Thomas B; Fanning, Seamus; Duffy, Geraldine
2009-04-01
A sensitive quantitative reverse-transcription PCR (qRT-PCR) method was developed for enumeration of total bacteria, using two sets of primers to separately target the ribonuclease P (RNase P) RNA transcripts of gram-positive and gram-negative bacteria. Standard curves were generated using SYBR Green I kits for the LightCycler 2.0 instrument (Roche Diagnostics) to allow quantification of mixed microflora in liquid media. RNA standards were extracted from known cell equivalents and subsequently converted to cDNA for the construction of standard curves. The number of mixed bacteria in culture was determined by qRT-PCR, and the results correlated (r2=0.88, RSD=0.466) with the total viable count over the range from approximately log10 3 to approximately log10 7 CFU ml(-1). The rapid nature of this assay (8 h) and its potential as an alternative to the standard plate count method for predicting total viable counts and shelf life are discussed.
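A sketch of the standard-curve step implied above: a linear fit of the quantification signal against log10 cell equivalents, then inversion of the curve to estimate an unknown sample. The variable names and the synthetic crossing-point values are assumptions for illustration, not data from the study.

```python
import numpy as np

# Illustrative calibration data: log10 cell equivalents vs. qRT-PCR crossing-point values.
log10_cells = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
cp_values = np.array([30.1, 26.8, 23.4, 20.2, 16.9])   # synthetic, roughly linear

# Fit the standard curve Cp = a * log10(cells) + b.
a, b = np.polyfit(log10_cells, cp_values, deg=1)
r2 = np.corrcoef(log10_cells, cp_values)[0, 1] ** 2
print(f"slope={a:.2f}, intercept={b:.2f}, r^2={r2:.3f}")

# Invert the curve to estimate an unknown sample from its measured crossing point.
cp_unknown = 22.0
log10_estimate = (cp_unknown - b) / a
print(f"estimated load ~ 10^{log10_estimate:.2f} cell equivalents per ml")
```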
A framework for automatic creation of gold-standard rigid 3D-2D registration datasets.
Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2017-02-01
Advanced image-guided medical procedures incorporate 2D intra-interventional information into the pre-interventional 3D image and plan of the procedure through 3D/2D image registration (32R). To enter clinical use, and even for publication purposes, novel and existing 32R methods have to be rigorously validated. The performance of a 32R method can be estimated by comparing it to an accurate reference or gold standard method (usually based on fiducial markers) on the same set of images (a gold standard dataset). Objective validation and comparison of methods are possible only if the evaluation methodology is standardized and the gold standard dataset is made publicly available. Currently, very few such datasets exist and only one contains images of multiple patients acquired during a procedure. To encourage the creation of gold standard 32R datasets, we propose an automatic framework. The framework is based on rigid registration of fiducial markers. The main novelty is spatial grouping of fiducial markers on the carrier device, which enables automatic marker localization and identification across the 3D and 2D images. The proposed framework was demonstrated on clinical angiograms of 20 patients. Rigid 32R computed by the framework was more accurate than that obtained manually, with the respective target registration errors below 0.027 mm compared to 0.040 mm. The framework is applicable for gold standard setup on any rigid anatomy, provided that the acquired images contain spatially grouped fiducial markers. The gold standard datasets and software will be made publicly available.
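A sketch of the rigid-registration building block such a framework rests on: a least-squares rigid transform between matched fiducial marker coordinates (the Kabsch/Procrustes solution) and a target registration error evaluated at separate target points. The marker coordinates are synthetic; the actual framework additionally localizes and identifies markers automatically and operates across 3D and 2D images.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(2)
markers = rng.uniform(-50, 50, size=(6, 3))              # synthetic fiducial markers (mm)
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
t_true = np.array([5.0, -3.0, 12.0])
moved = markers @ R_true.T + t_true + rng.normal(scale=0.05, size=markers.shape)

R, t = rigid_fit(markers, moved)
targets = rng.uniform(-50, 50, size=(10, 3))              # anatomical target points
tre = np.linalg.norm((targets @ R.T + t) - (targets @ R_true.T + t_true), axis=1)
print(f"mean target registration error: {tre.mean():.3f} mm")
```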
Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories
NASA Astrophysics Data System (ADS)
AlMomani, Abd AlRahman R.; Bollt, Erik
2018-06-01
Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model the underlying flow fields, related to the image processing concept of motion tracking. In contrast to standard spectral methods for image processing, which are generally based on a symmetric affinity matrix and lead to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter and a lake-effect snow event local to Potsdam, NY, on Earth, as well as the benchmark double-gyre test system.
NASA Technical Reports Server (NTRS)
Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.
1987-01-01
The user support environment (USE) which is a set of software tools for a flexible standard interactive user interface to the Space Station systems, platforms, and payloads is described in detail. Included in the USE concept are a user interface language, a run time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.
Syndrome Diagnosis: Human Intuition or Machine Intelligence?
Braaten, Øivind; Friestad, Johannes
2008-01-01
The aim of this study was to investigate whether artificial intelligence methods can represent objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis is dependent on the prior probability of the syndrome diagnosis. Clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency, and take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients - a ‘vector method’ and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes’ calculation on the same patient series. The overall diagnostic error rate for the vector algorithm was 0.93%, and for the ID3 algorithm 0.97%. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods. PMID:19415142
An XML-based method for astronomy software designing
NASA Astrophysics Data System (ADS)
Liao, Mingxue; Aili, Yusupu; Zhang, Jin
An XML-based method for standardizing software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for extracting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, the need to compute how many centuries have passed since a given day using sophisticated formulas is eliminated, and it is no longer necessary to set the correct UT time on the antenna control computer, because the year, month, and day are all deduced from the Julian day held in FT206 rather than from the computer clock. With an XML-based method and standard for software design, various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The development trend of the XML-based design method is predicted.
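A sketch of the kind of calendar arithmetic alluded to above: recovering year, month, and day from a Julian Day Number using the well-known Fliegel-Van Flandern integer algorithm. This illustrates the general principle only; the exact representation inside FT206 is not specified here and is not assumed.

```python
def jdn_to_gregorian(jdn: int):
    """Convert a Julian Day Number to (year, month, day) in the Gregorian calendar
    using the Fliegel & Van Flandern (1968) integer algorithm."""
    l = jdn + 68569
    n = 4 * l // 146097
    l = l - (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l = l - 1461 * i // 4 + 31
    j = 80 * l // 2447
    day = l - 2447 * j // 80
    l = j // 11
    month = j + 2 - 12 * l
    year = 100 * (n - 49) + i + l
    return year, month, day

print(jdn_to_gregorian(2451545))  # -> (2000, 1, 1), the J2000.0 reference day
```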
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-01-01
This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, based on the fact that the incidence course has many uncertain characteristics. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method uses standardized self-correlation (autocorrelation) coefficients as weights, based on the special characteristic that infectious disease incidence is a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summation, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632
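A compact sketch of the weighted Markov chain idea outlined above: incidence values are classified into states, lag-k transition matrices are estimated from the state sequence, and predictions from several lags are combined with weights derived from normalized absolute autocorrelation coefficients. The synthetic series, the number of states, and the lag range are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np

def transition_matrix(states, n_states, lag):
    """Empirical lag-k transition matrix P[i, j] = Pr(state_{t+lag} = j | state_t = i)."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-lag], states[lag:]):
        P[a, b] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.full_like(P, 1.0 / n_states), where=row_sums > 0)

rng = np.random.default_rng(3)
incidence = np.abs(rng.normal(10, 3, size=120)).cumsum() % 30    # synthetic incidence series
n_states, max_lag = 4, 5
edges = np.quantile(incidence, [0.25, 0.5, 0.75])                # classification standard
states = np.digitize(incidence, edges)

# Weights from normalized absolute autocorrelation coefficients of the incidence series.
x = incidence - incidence.mean()
acf = np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, max_lag + 1)])
weights = np.abs(acf) / np.abs(acf).sum()

# Combine lag-k predictions of the next state's probability distribution.
probs = np.zeros(n_states)
for k, w in enumerate(weights, start=1):
    probs += w * transition_matrix(states, n_states, k)[states[-k]]
print("predicted next-state probabilities:", np.round(probs, 3))
```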
NASA Technical Reports Server (NTRS)
Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.
1978-01-01
Steady state solutions to two time-dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward-time central-space explicit method, and (2) the stream function - vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
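A sketch of the Method of Lines applied to the viscous Burgers' equation u_t + u u_x = nu u_xx: spatial derivatives are discretized with central differences and the resulting ODE system is handed to a standard integrator. The grid size, viscosity, and boundary/initial conditions are illustrative choices, not those used in the report.

```python
import numpy as np
from scipy.integrate import solve_ivp

nu, nx = 0.05, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)                      # initial condition, u = 0 at both ends

def rhs(t, u):
    """Semi-discrete Burgers' equation: central differences in space (Dirichlet BCs)."""
    dudt = np.zeros_like(u)
    ux = (u[2:] - u[:-2]) / (2 * dx)
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    dudt[1:-1] = -u[1:-1] * ux + nu * uxx
    return dudt                             # endpoints stay fixed at zero

sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF", t_eval=[0.5])
print("max |u| at t=0.5:", float(np.abs(sol.y[:, -1]).max()))
```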
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
Application of micromechanics to the characterization of mortar by ultrasound.
Hernández, M G; Anaya, J J; Izquierdo, M A G; Ullate, L G
2002-05-01
Mechanical properties of concrete and mortar structures can be estimated by ultrasonic non-destructive testing. When the ultrasonic velocity is known, there are standardized methods based on treating the concrete as a homogeneous material. Cement composites, however, are heterogeneous and porous, and porosity has a negative effect on the mechanical properties of structures. This work studies the impact of porosity on mechanical properties by considering concrete a multiphase material. A micromechanical model is applied in which the material is considered to consist of two phases: a solid matrix and pores. From this method, a set of expressions is obtained that relates the acoustic velocity and Young's modulus of mortar. Experimental work is based on non-destructive and destructive procedures on mortar samples whose porosity is varied. A comparison is drawn between the micromechanical and standard methods, showing positive results for the method proposed here.
Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential
NASA Technical Reports Server (NTRS)
Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.
2015-01-01
An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. Utilizing this new method in leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation between data sets.
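A sketch of the "mass point" style calculation behind pressure-decay leak testing: the upstream pressure history is converted to gas mass via the ideal gas law, and the slope of mass versus time gives the leak rate. The volume, temperature, gas constant, and synthetic pressure trace are illustrative assumptions; the method described above additionally maintains a controlled pressure differential and computes measurement uncertainty in real time.

```python
import numpy as np

R_AIR = 287.05        # specific gas constant for air, J/(kg K)  (assumed working gas)
V = 0.002             # internal volume of the pressurized side, m^3 (assumed)
T = 293.15            # gas temperature, K (assumed constant)

# Synthetic pressure-decay record: one sample every 10 s over ~1 hour.
t = np.arange(0.0, 3600.0, 10.0)
p = 200e3 - 0.5 * t + np.random.default_rng(4).normal(scale=20.0, size=t.size)  # Pa

mass = p * V / (R_AIR * T)                 # ideal gas law: m = pV/(RT), the "mass points"
slope, intercept = np.polyfit(t, mass, deg=1)
print(f"estimated leak rate: {-slope:.3e} kg/s")
```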
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sidhu, K.S.
1991-06-01
The primary objective of a standard setting process is to arrive at a drinking water concentration at which exposure to a contaminant would result in no known or potential adverse effect on human health. The drinking water standards also serve as guidelines to prevent pollution of water sources and may be applicable in some cases as regulatory remediation levels. Risk assessment methods along with various decision making parameters are used to establish drinking water standards. For carcinogens classified in Groups A and B by the United States Environmental Protection Agency (USEPA), the standards are set by using nonthreshold cancer risk models. The linearized multistage model is commonly used for computation of potency factors for carcinogenic contaminants. The acceptable excess risk level may vary from 10(-6) to 10(-4). For noncarcinogens, a threshold model approach based on application of an uncertainty factor is used to arrive at a reference dose (RfD). The RfD approach may also be used for carcinogens classified in Group C by the USEPA. The RfD approach with an additional uncertainty factor of 10 for carcinogenicity has been applied in the formulation of risk assessment for Group C carcinogens. The assumptions commonly used in arriving at drinking water standards are human life expectancy, 70 years; average human body weight, 70 kg; human daily drinking water consumption, 2 liters; and contribution of exposure to the contaminant from drinking water (expressed as a part of the total environmental exposure), 20%. Currently, there are over 80 USEPA existing or proposed primary standards for organic and inorganic contaminants in drinking water. Some of the state versus federal needs and viewpoints are discussed.
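A sketch of the standard arithmetic described above, using the stated default assumptions (70 kg body weight, 2 L/day of drinking water, 20% relative contribution from drinking water). The RfD, slope factor, and acceptable risk values in the example are placeholders for illustration, not regulatory numbers.

```python
def noncarcinogen_standard(rfd_mg_per_kg_day, body_weight_kg=70.0,
                           water_l_per_day=2.0, source_contribution=0.20):
    """Threshold (RfD-based) drinking water level in mg/L."""
    return rfd_mg_per_kg_day * body_weight_kg * source_contribution / water_l_per_day

def carcinogen_standard(slope_factor_per_mg_kg_day, risk=1e-6,
                        body_weight_kg=70.0, water_l_per_day=2.0):
    """Nonthreshold (linear low-dose) drinking water level in mg/L at a target excess risk."""
    return risk * body_weight_kg / (slope_factor_per_mg_kg_day * water_l_per_day)

print(f"{noncarcinogen_standard(rfd_mg_per_kg_day=0.004):.4f} mg/L")      # placeholder RfD
print(f"{carcinogen_standard(slope_factor_per_mg_kg_day=0.1):.2e} mg/L")  # placeholder slope factor
```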
2014-01-01
Background The incidence of oropharyngeal cancer is increasing in the developed world. This has led to a large rise in research activity and clinical trials in this area, yet there is no consensus on which outcomes should be measured. As a result, the outcomes measured often differ between trials of comparable interventions, making the combination or comparison of results between trials impossible. Outcomes may also be ‘cherry-picked’, such that favourable results are reported, and less favourable results withheld. The development of a minimum outcome reporting standard, known as a core outcome set, goes some way to addressing these problems. Core outcome sets are ideally developed using a patient-centred approach so that the outcomes measured are relevant to patients and clinical practice. Core outcome sets drive up the quality and relevance of research by ensuring that the right outcomes are consistently measured and reported in trials in specific areas of health or healthcare. Methods/Design This is a mixed methods study involving three phases to develop a core outcome set for oropharyngeal cancer clinical trials. Firstly, a systematic review will establish which outcomes are measured in published oropharyngeal cancer randomised controlled trials (RCTs). Secondly, qualitative interviews with patients and carers in the UK and the USA will aim to establish which outcomes are important to these stakeholders. Data from these first two stages will be used to develop a comprehensive list of outcomes to be considered for inclusion in the core outcome set. In the third stage, patients and clinicians will participate in an iterative consensus exercise known as a Delphi study to refine the contents of the core outcome set. This protocol lays out the methodology to be implemented in the CONSENSUS study. Discussion A core outcome set defines a minimum outcome reporting standard for clinical trials in a particular area of health or healthcare. Its consistent implementation in oropharyngeal cancer clinical trials will improve the quality and relevance of research. Trials and registration This study is registered at the National Institute for Health Research (NIHR) Clinical Research Network (CRN) portfolio, ID 13823 (17 January 2013). PMID:24885068
Chen, Henry W; Du, Jingcheng; Song, Hsing-Yi; Liu, Xiangyu; Jiang, Guoqian
2018-01-01
Background Today, there is an increasing need to centralize and standardize electronic health data within clinical research as the volume of data continues to balloon. Domain-specific common data elements (CDEs) are emerging as a standard approach to clinical research data capturing and reporting. Recent efforts to standardize clinical study CDEs have been of great benefit in facilitating data integration and data sharing. The importance of the temporal dimension of clinical research studies has been well recognized; however, very few studies have focused on the formal representation of temporal constraints and temporal relationships within clinical research data in the biomedical research community. In particular, temporal information can be extremely powerful to enable high-quality cancer research. Objective The objective of the study was to develop and evaluate an ontological approach to represent the temporal aspects of cancer study CDEs. Methods We used CDEs recorded in the National Cancer Institute (NCI) Cancer Data Standards Repository (caDSR) and created a CDE parser to extract time-relevant CDEs from the caDSR. Using the Web Ontology Language (OWL)–based Time Event Ontology (TEO), we manually derived representative patterns to semantically model the temporal components of the CDEs using an observing set of randomly selected time-related CDEs (n=600) to create a set of TEO ontological representation patterns. In evaluating TEO’s ability to represent the temporal components of the CDEs, this set of representation patterns was tested against two test sets of randomly selected time-related CDEs (n=425). Results It was found that 94.2% (801/850) of the CDEs in the test sets could be represented by the TEO representation patterns. Conclusions In conclusion, TEO is a good ontological model for representing the temporal components of the CDEs recorded in caDSR. Our representative model can harness the Semantic Web reasoning and inferencing functionalities and present a means for temporal CDEs to be machine-readable, streamlining meaningful searches. PMID:29472179
Using electrical resistance probes for moisture determination in switchgrass windrows
USDA-ARS?s Scientific Manuscript database
Determining moisture levels in windrowed biomass is important for both forage producers and researchers. Energy crops such as switchgrass have been troublesome when using the standard methods set for electrical resistance meters. The objectives of this study were to i) develop the methodologies need...
The Logic of Summative Confidence
ERIC Educational Resources Information Center
Gugiu, P. Cristian
2007-01-01
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Screening-level assays for potentially human-infectious environmental Legionella spp.
In spite of the fact that Legionella species can be isolated from nonclinical settings, there is no standard method to determine whether environmental legionellae may be infectious to humans. In this study, an in vivo murine model of pneumonia and three in vitro proliferation as...
Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.
Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K
2011-01-01
Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.
Simpson, Deborah M; Beynon, Robert J
2012-09-01
Systems biology requires knowledge of the absolute amounts of proteins in order to model biological processes and simulate the effects of changes in specific model parameters. Quantification concatamers (QconCATs) are established as a method to provide multiplexed absolute peptide standards for a set of target proteins in isotope dilution standard experiments. Two or more quantotypic peptides representing each of the target proteins are concatenated into a designer gene that is metabolically labelled with stable isotopes in Escherichia coli or other cellular or cell-free systems. Co-digestion of a known amount of QconCAT with the target proteins generates a set of labelled reference peptide standards for the unlabelled analyte counterparts, and by using an appropriate mass spectrometry platform, comparison of the intensities of the peptide ratios delivers absolute quantification of the encoded peptides and in turn the target proteins for which they are surrogates. In this review, we discuss the criteria and difficulties associated with surrogate peptide selection and provide examples in the design of QconCATs for quantification of the proteins of the nuclear factor κB pathway.
Standards and Students with Disabilities: Reality or Virtual Reality? Brief Report 8.
ERIC Educational Resources Information Center
Saint Cloud State Univ., MN.
This Brief Report highlights current activities focused on setting standards in education, and examines whether students with disabilities are considered when standards are set. Types of standards are distinguished, including performance standards, delivery standards, and content standards. Information on organizations developing standards in…
Validation of Proposed Metrics for Two-Body Abrasion Scratch Test Analysis Standards
NASA Technical Reports Server (NTRS)
Kobrick, Ryan L.; Klaus, David M.; Street, Kenneth W., Jr.
2011-01-01
The objective of this work was to evaluate a set of standardized metrics proposed for characterizing a surface that has been scratched in a two-body abrasion test. This is achieved by defining a new abrasion region termed the Zone of Interaction (ZOI). The ZOI describes the full surface profile of all peaks and valleys, rather than just measuring a scratch width as currently defined by the ASTM G 171 Standard. The ZOI has been found to be at least twice the size of a standard width measurement, in some cases considerably greater, indicating that at least half of the disturbed surface area would be neglected without this insight. The ZOI is used to calculate a more robust data set of volume measurements that can be used to computationally reconstruct a resultant profile for detailed analysis. Documenting additional changes to various surface roughness parameters also allows key material attributes of importance to ultimate design applications to be quantified, such as depth of penetration and final abraded surface roughness. Data are presented to show that different combinations of scratch tips and abraded materials can yield the same scratch width but result in different volume displacement or removal measurements; the ZOI method is therefore more discriminating than the ASTM scratch-width method. Furthermore, by investigating the use of custom scratch tips for our specific needs, the usefulness of having an abrasion metric that can measure the displaced volume in this standardized manner, and not just the scratch width alone, is reinforced. This benefit is made apparent when a tip creates an intricate contour having multiple peaks and valleys within a single scratch. This work lays the foundation for updating scratch measurement standards to improve modeling and characterization of three-body abrasion test results.
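A sketch of the volume bookkeeping such a Zone of Interaction analysis implies: given a post-scratch cross-sectional profile (height versus lateral position, with the undisturbed surface at zero), material displaced above the surface and removed below it can be integrated separately. The profile is synthetic, and the ZOI is taken simply as the region where the profile departs from the baseline; the standard's exact definitions are not reproduced here.

```python
import numpy as np

# Synthetic cross-sectional profile of one scratch (micrometres): a groove with pile-up ridges.
x = np.linspace(-100, 100, 2001)                         # lateral position, um
profile = -5.0 * np.exp(-(x / 20.0) ** 2) \
          + 1.5 * np.exp(-((np.abs(x) - 35.0) / 10.0) ** 2)

threshold = 0.05                                         # um, assumed noise floor
zoi = np.abs(profile) > threshold                        # Zone of Interaction mask
zoi_width = x[zoi].max() - x[zoi].min()

removed = np.trapz(np.clip(-profile, 0, None), x)        # cross-sectional area below surface
displaced = np.trapz(np.clip(profile, 0, None), x)       # pile-up area above surface
print(f"ZOI width ~ {zoi_width:.1f} um, removed ~ {removed:.1f} um^2, "
      f"displaced ~ {displaced:.1f} um^2")
# Multiplying these areas by scratch length would give volume removed/displaced per scratch.
```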
Mahmoudvand, Zahra; Kamkar, Mehran; Shahmoradi, Leila; Nejad, Ahmadreza Farzaneh
2016-04-01
Determination of a minimum data set (MDS) for echocardiography reports is necessary for documenting and presenting information in a standard way; it supports echocardiographic studies by providing access to precise and complete reports and enables the development of a standard database for echocardiographic reports. The objective was to determine the minimum data set of an echocardiography reporting system to exchange with Iran's electronic health record (EHR) system. First, a list of minimum data elements was prepared after reviewing the literature and studying cardiac patients' records. Then, to determine the content validity of the prepared MDS, the expert views of 10 cardiologists and 10 health information management (HIM) specialists were obtained; to estimate the reliability of the set, the test-retest method was employed. Finally, the data were analyzed using SPSS software. The highest degree of consensus was found for the following data elements: patient's name and family name (5), admitting doctor's name and family name, family history of death due to cardiac disorders, the image identification code, mitral valve, aortic valve, tricuspid valve, pulmonary valve, left ventricle, hole, atrium valve, Doppler examination of ventricular and atrial movement models, and diagnoses, with an average of … . To prepare a model of an echocardiography reporting system to exchange with the EHR system, creating a standard data set is the vital point. Therefore, based on the research findings, the minimum reporting system data to exchange with Iran's electronic health record system include information on entity, management, medical record, performed procedures, and the main content of the echocardiography report, which the planners of the reporting system should consider.
An International Standard Set of Patient-Centered Outcome Measures After Stroke.
Salinas, Joel; Sprinkhuizen, Sara M; Ackerson, Teri; Bernhardt, Julie; Davie, Charlie; George, Mary G; Gething, Stephanie; Kelly, Adam G; Lindsay, Patrice; Liu, Liping; Martins, Sheila C O; Morgan, Louise; Norrving, Bo; Ribbers, Gerard M; Silver, Frank L; Smith, Eric E; Williams, Linda S; Schwamm, Lee H
2016-01-01
Value-based health care aims to bring together patients and health systems to maximize the ratio of quality over cost. To enable assessment of healthcare value in stroke management, an international standard set of patient-centered stroke outcome measures was defined for use in a variety of healthcare settings. A modified Delphi process was implemented with an international expert panel representing patients, advocates, and clinical specialists in stroke outcomes, stroke registers, global health, epidemiology, and rehabilitation to reach consensus on the preferred outcome measures, included populations, and baseline risk adjustment variables. Patients presenting to a hospital with ischemic stroke or intracerebral hemorrhage were selected as the target population for these recommendations, with the inclusion of transient ischemic attacks optional. Outcome categories recommended for assessment were survival and disease control, acute complications, and patient-reported outcomes. Patient-reported outcomes proposed for assessment at 90 days were pain, mood, feeding, selfcare, mobility, communication, cognitive functioning, social participation, ability to return to usual activities, and health-related quality of life, with mobility, feeding, selfcare, and communication also collected at discharge. One instrument was able to collect most patient-reported subdomains (9/16, 56%). Minimum data collection for risk adjustment included patient demographics, premorbid functioning, stroke type and severity, vascular and systemic risk factors, and specific treatment/care-related factors. A consensus stroke measure Standard Set was developed as a simple, pragmatic method to increase the value of stroke care. The set should be validated in practice when used for monitoring and comparisons across different care settings. © 2015 The Authors.
Coherent Image Layout using an Adaptive Visual Vocabulary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.
When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees, so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
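A sketch of the layout step described above: items (here, random feature vectors standing in for bag-of-features histograms) are hierarchically clustered, the dendrogram's leaf order is taken as a one-dimensional sequence, and that sequence is mapped onto a 2-D grid via a Hilbert space-filling curve so neighbors in feature space tend to remain neighbors in the layout. The feature vectors, linkage choice, and grid size are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve covering a 2^order x 2^order grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

rng = np.random.default_rng(5)
features = rng.random((64, 32))          # 64 "images", 32-bin feature histograms
leaf_order = leaves_list(linkage(features, method="average"))   # in-order leaf traversal

layout = {}
for position, image_index in enumerate(leaf_order):
    layout[int(image_index)] = hilbert_d2xy(order=3, d=position)  # 8x8 grid holds 64 images
print("image 0 goes to grid cell", layout[0])
```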
Schulze, H Georg; Turner, Robin F B
2013-04-01
Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogenous spectra. Although some inhomogeneous spectral data sets can be accommodated--it requires replacing excessively modified spectra with the originals and removing their spikes with a median filter instead--caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
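A sketch of the procedure described above, for a stack of nominally similar spectra: at each wavenumber a polynomial is fitted across the spectrum index, and intensities more than three noise standard deviations above the fit are clipped back to that threshold. The synthetic data and the way the noise level is estimated are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_spectra, n_points = 50, 400
base = np.exp(-((np.arange(n_points) - 200) / 40.0) ** 2)          # common Raman-like band
spectra = base + 0.02 * rng.normal(size=(n_spectra, n_points))     # homogeneous spectral set
spectra[7, 123] += 1.5                                             # inject a cosmic ray spike

def despike(spectra, degree=3, k=3.0):
    """Clip positive outliers above a polynomial fit across spectra at each wavenumber."""
    idx = np.arange(spectra.shape[0])
    cleaned = spectra.copy()
    for j in range(spectra.shape[1]):
        coeffs = np.polyfit(idx, spectra[:, j], deg=degree)
        fit = np.polyval(coeffs, idx)
        sigma = np.std(spectra[:, j] - fit)                        # crude noise estimate
        cleaned[:, j] = np.minimum(spectra[:, j], fit + k * sigma)
    return cleaned

cleaned = despike(spectra)
print("spike before/after:", round(spectra[7, 123], 3), round(cleaned[7, 123], 3))
```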
Estimating Spectra from Photometry
NASA Astrophysics Data System (ADS)
Kalmbach, J. Bryce; Connolly, Andrew J.
2017-12-01
Measuring the physical properties of galaxies, such as redshift, frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique before demonstrating our ability to reconstruct spectra based upon colors, and then compare our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier-rejected bias by over 86.2% compared to the original set for z ≤ 3.
Double-tick realization of binary control program
NASA Astrophysics Data System (ADS)
Kobylecki, Michał; Kania, Dariusz
2016-12-01
This paper presents a procedure for the hardware implementation of bit-level control algorithms compatible with the IEC 61131-3 standard. The described transformation, based on set calculus and graphs, translates the original control program into a form fully compliant with the original while yielding an architecture executed in two ticks. The proposed method enables efficient implementation of bit-level control in an FPGA using the standardized LD programming language.
1990-09-01
…community's search for a workable set of standards for school mathematics. In 1989 the National Council of Teachers of Mathematics (NCTM) established the… made by the Commission on Standards for School Mathematics to the NCTM. Of the 40 students who… Abstract: This study evaluated students' responses to a teaching method designed to involve students and teachers of mathematics in a meaningful learning…
Transformation-cost time-series method for analyzing irregularly sampled data
NASA Astrophysics Data System (ADS)
Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen
2015-06-01
Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
Standards for reporting qualitative research: a synthesis of recommendations.
O'Brien, Bridget C; Harris, Ilene B; Beckman, Thomas J; Reed, Darcy A; Cook, David A
2014-09-01
Standards for reporting exist for many types of quantitative research, but currently none exist for the broad spectrum of qualitative research. The purpose of the present study was to formulate and define standards for reporting qualitative research while preserving the requisite flexibility to accommodate various paradigms, approaches, and methods. The authors identified guidelines, reporting standards, and critical appraisal criteria for qualitative research by searching PubMed, Web of Science, and Google through July 2013; reviewing the reference lists of retrieved sources; and contacting experts. Specifically, two authors reviewed a sample of sources to generate an initial set of items that were potentially important in reporting qualitative research. Through an iterative process of reviewing sources, modifying the set of items, and coding all sources for items, the authors prepared a near-final list of items and descriptions and sent this list to five external reviewers for feedback. The final items and descriptions included in the reporting standards reflect this feedback. The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.
Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.
2012-01-01
Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization-simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation-and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.
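A sketch of area-weighted areal interpolation, one of the three characterization methods named above: an attribute reported on source polygons is re-aggregated to a drainage basin by weighting each source value by its area of overlap with the basin. The sketch uses the shapely geometry library; the rectangles and attribute values are illustrative, not NAWQA data.

```python
from shapely.geometry import box

# Source polygons (e.g., units carrying an attribute such as population density) and a target "basin".
sources = [
    (box(0, 0, 2, 2), 10.0),   # (geometry, attribute value)
    (box(2, 0, 4, 2), 30.0),
    (box(0, 2, 4, 4), 50.0),
]
basin = box(1, 1, 3, 3)

# Area-weighted average of the attribute over the basin.
weighted_sum = 0.0
for geom, value in sources:
    overlap = geom.intersection(basin).area
    weighted_sum += value * overlap
estimate = weighted_sum / basin.area
print(f"area-weighted basin value: {estimate:.1f}")
```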
Crapanzano, John P; Heymann, Jonas J; Monaco, Sara; Nassar, Aziza; Saqi, Anjali
2014-01-01
In the recent past, algorithms and recommendations to standardize the morphological, immunohistochemical and molecular classification of lung cancers on cytology specimens have been proposed, and several organizations have recommended cell blocks (CBs) as the preferred modality for molecular testing. Based on the literature, there are several different techniques available for CB preparation-suggesting that there is no standard. The aim of this study was to conduct a survey of CB preparation techniques utilized in various practice settings and analyze current issues, if any. A single E-mail with a link to an electronic survey was distributed to members of the American Society of Cytopathology and other pathologists. Questions pertaining to the participants' practice setting and CBs-volume, method, quality and satisfaction-were included. Of 95 respondents, 90/95 (94%) completed the survey and comprise the study group. Most participants practice in a community hospital/private practice (44%) or academic center (41%). On average, 14 CBs (range 0-50; median 10) are prepared by a laboratory daily. Over 10 methods are utilized: Plasma thrombin (33%), HistoGel (27%), Cellient automated cell block system (8%) and others (31%) respectively. Forty of 90 (44%) respondents are either unsatisfied or sometimes satisfied with their CB quality, with low-cellular yield being the leading cause of dissatisfaction. There was no statistical significance between the three most common CB preparation methods and satisfaction with quality. Many are dissatisfied with their current method of CB preparation, and there is no consistent method to prepare CBs. In today's era of personalized medicine with an increasing array of molecular tests being applied to cytological specimens, there is a need for a standardized protocol for CB optimization to enhance cellularity.
Use of the azimuthal resistivity technique for determination of regional azimuth of transmissivity
Carlson, D.
2010-01-01
Many bedrock units contain joint sets that commonly act as preferred paths for the movement of water, electrical charge, and possible contaminants associated with production or transit of crude oil or refined products. To facilitate the development of remediation programs, a need exists to reliably determine regional-scale properties of these joint sets: azimuth of the transmissivity ellipse, dominant set, and trend(s). The surface azimuthal electrical resistivity survey method used for local in situ studies can be a noninvasive, reliable, efficient, and relatively cost-effective method for regional studies. The azimuthal resistivity survey method combines the use of standard resistivity equipment with a Wenner array rotated about a fixed center point, at selected degree intervals, which yields an apparent resistivity ellipse from which joint-set orientation can be determined. Regional application of the azimuthal survey method was tested at 17 sites in an approximately 500 km2 (193 mi2) area around Milwaukee, Wisconsin, with less than 15 m (50 ft) of overburden above the dolomite. Results of 26 azimuthal surveys were compared and determined to be consistent with the results of two other methods: direct observation of joint-set orientation and transmissivity ellipses from multiple-well aquifer tests. The average joint-set trend determined by azimuthal surveys is within 2.5° of the average joint-set trend determined by direct observation of major joint sets at 24 sites. The average trend of maximum transmissivity determined by azimuthal surveys is within 5.7° of the average trend of maximum transmissivity determined for 14 multiple-well aquifer tests. Copyright © 2010 The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.
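A sketch of the core calculation in an azimuthal resistivity survey: apparent resistivity for a Wenner array, rho_a = 2*pi*a*(V/I), computed at a series of azimuths, with the azimuth of maximum apparent resistivity taken as a simple indicator of anisotropy orientation. The voltage readings below are synthetic; relating the full apparent-resistivity ellipse to joint-set trends involves more than this.

```python
import math

def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a."""
    return 2.0 * math.pi * a_m * voltage_v / current_a

# Synthetic survey: one reading every 15 degrees about a fixed center point.
spacing, current = 10.0, 0.5                     # m, A (assumed)
azimuths = list(range(0, 180, 15))
voltages = [0.80 + 0.25 * math.cos(math.radians(2 * (az - 40))) for az in azimuths]

rho = [wenner_apparent_resistivity(spacing, v, current) for v in voltages]
best = max(range(len(rho)), key=lambda i: rho[i])
print(f"max apparent resistivity {rho[best]:.1f} ohm-m at azimuth {azimuths[best]} deg")
```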
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring-level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
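As a rough illustration of the log-probability regression idea described above, the sketch below fits a least squares line between the logarithms of the uncensored concentrations and their normal scores, then uses the fitted lognormal to fill in the below-detection portion of the distribution before computing the four summary statistics. The single censoring level and the Hazen plotting positions are assumptions made for illustration, not details taken from the study.

```python
import numpy as np
from scipy import stats

def log_probability_regression(uncensored, n_censored):
    """Summary statistics for a singly censored sample (illustrative sketch).

    uncensored: concentrations actually quantified (all above the detection
    limit); n_censored: number of observations reported only as below the
    detection limit. The logs of the uncensored values are regressed on their
    normal scores, and the fitted lognormal fills in the censored portion of
    the distribution before the statistics are computed.
    """
    uncensored = np.sort(np.asarray(uncensored, dtype=float))
    n = len(uncensored) + n_censored

    # Normal scores from Hazen plotting positions (i - 0.5)/n for the full
    # ranked sample; the plotting-position choice is an assumption.
    z = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)

    # Least squares fit of log(concentration) on z over the uncensored ranks,
    # which occupy the top of the ranking since censoring is from below.
    slope, intercept, *_ = stats.linregress(z[n_censored:], np.log(uncensored))

    # Impute the censored ranks from the fitted lognormal; keep observed values.
    filled = np.concatenate([np.exp(intercept + slope * z[:n_censored]), uncensored])

    return {"mean": filled.mean(),
            "std": filled.std(ddof=1),
            "median": np.median(filled),
            "iqr": np.subtract(*np.percentile(filled, [75, 25]))}

# Hypothetical sample: 12 quantified values plus 8 below the detection limit.
detected = [0.6, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 4.0, 5.5, 8.2]
print(log_probability_regression(detected, n_censored=8))
```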
Liu, Xi; Yu, Jingjing; Li, Shen; Wang, Hong; Liu, Jiaxin
2013-08-01
We used blood as the leaching medium, simulating clinical operation under maximum-use conditions, to develop a liquid-phase extraction-high performance liquid chromatography (HPLC) method for determination of the plasticizer di-(2-ethylhexyl)phthalate (DEHP) released from a disposable extracorporeal circulation tube, in order to lay the foundation for risk analysis of this product. The characteristic wavelength of DEHP in methanol was detected. Acetonitrile was added to the leaching blood in proportion and extracted DEHP from the blood. The HPLC methodology to quantify DEHP was established and the amount of DEHP released from this disposable extracorporeal circulation tube was measured. The experiments showed good results as follows. The characteristic wavelength of DEHP was 272 nm. The concentration of DEHP (5-250 microg/mL) showed a good linear relationship with peak area (r=0.9999). Method sensitivity was 1 microg/mL. Precisions showed RSD<5%. The standard-addition extraction recovery rates of 25, 100 and 250 microg DEHP standard were (61.91 +/- 3.32)%, (69.38 +/- 0.55)% and (68.47 +/- 1.15)%. The maximum amounts of DEHP released from 3 sets of this disposable extracorporeal circulation tube were 204.14, 106.30 and 165.34 mg/set. Our liquid-phase extraction-HPLC method showed high accuracy and precision, and a relatively stable recovery rate. Its operation was also convenient.
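For orientation, the snippet below shows the calibration arithmetic such a method relies on: a linear fit of peak area against standard concentration over the 5-250 microg/mL range, back-calculation of an unknown from its peak area, and a spike-recovery percentage. All numbers, including the peak areas and the spiked amount, are hypothetical.

```python
import numpy as np

# Hypothetical DEHP calibration standards (ug/mL) and their measured peak areas.
conc = np.array([5, 25, 50, 100, 150, 250], dtype=float)
area = np.array([51.0, 252.0, 505.0, 1010.0, 1512.0, 2521.0])

slope, intercept = np.polyfit(conc, area, 1)   # area = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]              # linearity check (cf. r = 0.9999)

def back_calculate(peak_area):
    """Concentration of an unknown from its peak area via the calibration line."""
    return (peak_area - intercept) / slope

# Spike recovery: concentration found in a spiked extract relative to nominal.
nominal_ug_per_ml = 100.0                      # hypothetical spiked level
found_ug_per_ml = back_calculate(695.0)        # hypothetical measured peak area
recovery_pct = 100.0 * found_ug_per_ml / nominal_ug_per_ml
print(f"r = {r:.4f}, recovery = {recovery_pct:.1f}%")
```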
Assessment of an undergraduate psychiatry course in an African setting.
Baig, Benjamin J; Beaglehole, Anna; Stewart, Robert C; Boeing, Leonie; Blackwood, Douglas H; Leuvennink, Johan; Kauye, Felix
2008-04-22
International reports recommend improvement in the amount and quality of training for mental health workers in low and middle income countries. The Scotland-Malawi Mental Health Education Project (SMMHEP) has been established to support the teaching of psychiatry to medical students in the University of Malawi. While supportive medical educational initiatives appear anecdotally to be of value, little quantitative evidence exists to demonstrate whether such initiatives can deliver comparable educational standards. This study aimed to assess the effectiveness of an undergraduate psychiatry course given by UK psychiatrists in Malawi by studying University of Malawi and Edinburgh University medical students' performance on an MCQ examination paper. An undergraduate psychiatry course followed by an MCQ exam was delivered by the SMMHEP to 57 Malawi medical students. This same MCQ exam was given to 71 Edinburgh University medical students who subsequently sat their own Edinburgh University examination. There were no significant differences between Edinburgh students' performance on the Malawi exam and their own Edinburgh University exam (p = 0.65). This would suggest that the Malawi exam is of a comparable standard to the Edinburgh exam. Malawi students' marks ranged from 52.4%-84.6%. Importantly, 84.4% of Malawi students scored above 60% on their exam, which would equate to a hypothetical pass by UK university standards. The support of an undergraduate course in an African setting by high income country specialists can attain a high percentage pass rate by UK standards. Although didactic teaching has been surpassed by more novel educational methods, in resource poor countries it remains an effective and cost-effective method of attaining an important educational standard.
Animal behavior and well-being symposium: Farm animal welfare assurance: science and application.
Rushen, J; Butterworth, A; Swanson, J C
2011-04-01
Public and consumer pressure for assurances that farm animals are raised humanely has led to a range of private and public animal welfare standards, and for methods to assess compliance with these standards. The standards usually claim to be science based, but even though researchers have developed measures of animal welfare and have tested the effects of housing and management variables on welfare within controlled laboratory settings, there are challenges in extending this research to develop on-site animal welfare standards. The standards need to be validated against a definition of welfare that has broad support and which is amenable to scientific investigation. Ensuring that such standards acknowledge scientific uncertainty is also challenging, and balanced input from all scientific disciplines dealing with animal welfare is needed. Agencies providing animal welfare audit services need to integrate these scientific standards and legal requirements into successful programs that effectively measure and objectively report compliance. On-farm assessment of animal welfare requires a combination of animal-based measures to assess the actual state of welfare and resource-based measures to identify risk factors. We illustrate this by referring to a method of assessing welfare in broiler flocks. Compliance with animal welfare standards requires buy-in from all stakeholders, and this will be best achieved by a process of inclusion in the development of pragmatic assessment methods and the development of audit programs verifying the conditions and continuous improvement of farm animal welfare.
Peng, Jun; Chen, Yi-Ting; Chen, Chien-Lun; Li, Liang
2014-07-01
Large-scale metabolomics study requires a quantitative method to generate metabolome data over an extended period with high technical reproducibility. We report a universal metabolome-standard (UMS) method, in conjunction with chemical isotope labeling liquid chromatography-mass spectrometry (LC-MS), to provide long-term analytical reproducibility and facilitate metabolome comparison among different data sets. In this method, UMS of a specific type of sample labeled by an isotope reagent is prepared a priori. The UMS is spiked into any individual samples labeled by another form of the isotope reagent in a metabolomics study. The resultant mixture is analyzed by LC-MS to provide relative quantification of the individual sample metabolome to UMS. UMS is independent of a study undertaking as well as the time of analysis and useful for profiling the same type of samples in multiple studies. In this work, the UMS method was developed and applied for a urine metabolomics study of bladder cancer. UMS of human urine was prepared by (13)C2-dansyl labeling of a pooled sample from 20 healthy individuals. This method was first used to profile the discovery samples to generate a list of putative biomarkers potentially useful for bladder cancer detection and then used to analyze the verification samples about one year later. Within the discovery sample set, three-month technical reproducibility was examined using a quality control sample and found a mean CV of 13.9% and median CV of 9.4% for all the quantified metabolites. Statistical analysis of the urine metabolome data showed a clear separation between the bladder cancer group and the control group from the discovery samples, which was confirmed by the verification samples. Receiver operating characteristic (ROC) test showed that the area under the curve (AUC) was 0.956 in the discovery data set and 0.935 in the verification data set. These results demonstrated the utility of the UMS method for long-term metabolomics and discovering potential metabolite biomarkers for diagnosis of bladder cancer.
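To make the two reported quality metrics concrete, the snippet below computes per-metabolite coefficients of variation across repeated injections of a pooled QC sample and a ROC AUC for a single candidate biomarker. The data, group sizes, and the use of scikit-learn for the AUC are illustrative assumptions, not details of the published workflow.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# qc: rows = repeated injections of a pooled QC sample, columns = metabolites
# (sample-to-universal-standard ratios); all values are hypothetical.
rng = np.random.default_rng(0)
qc = rng.lognormal(mean=0.0, sigma=0.1, size=(20, 500))

cv = qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100.0   # per-metabolite CV (%)
print(f"mean CV {cv.mean():.1f}%, median CV {np.median(cv):.1f}%")

# ROC AUC for one candidate biomarker separating cancer from control samples
# (labels and intensities hypothetical).
labels = np.array([0] * 30 + [1] * 30)
scores = np.concatenate([rng.normal(1.0, 0.3, 30), rng.normal(1.6, 0.3, 30)])
print("AUC:", roc_auc_score(labels, scores))
```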
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To our best knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operator characteristic curve analysis. Quick reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
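The test itself is standard enough to sketch: for a 2 × 2 table of diary counts, the two-sided Fisher's exact p value can be computed from the hypergeometric distribution, and the mid-p correction gives half weight to tables exactly as probable as the observed one. The table layout and the tie-handling convention below are assumptions; the published framework may define the two-sided region differently.

```python
import numpy as np
from scipy.stats import hypergeom

def fisher_midp(table):
    """Two-sided Fisher's exact test with mid-p correction for a 2x2 table.

    table = [[a, b], [c, d]], e.g. rows = perimenstrual window yes/no,
    columns = migraine attack yes/no (a hedged sketch of the diary table,
    not necessarily the layout used in the paper).
    """
    (a, b), (c, d) = table
    n = a + b + c + d          # total diary days
    K = a + b                  # row-1 margin
    N = a + c                  # column-1 margin

    support = np.arange(max(0, K + N - n), min(K, N) + 1)
    probs = hypergeom.pmf(support, n, K, N)
    p_obs = hypergeom.pmf(a, n, K, N)

    eps = 1e-12
    more_extreme = probs[probs < p_obs - eps].sum()
    as_extreme = probs[np.abs(probs - p_obs) <= eps].sum()
    return more_extreme + 0.5 * as_extreme   # mid-p: half weight on ties

# Example: hypothetical counts from one patient's headache diary.
print(fisher_midp([[12, 18], [5, 55]]))
```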
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Francy, D.S.; Hart, T.L.; Virosteck, C.M.
1996-01-01
Bacterial injury, survival, and regrowth were investigated by use of replicate flow-through incubation chambers placed in the Cuyahoga River or Lake Erie in the greater Cleveland metropolitan area during seven 4-day field studies. The chambers contained wastewater or combined-sewer-overflow (CSO) effluents treated three ways-unchlorinated, chlorinated, and dechlorinated. At timestep intervals, the chamber contents were analyzed for concentrations of injured and healthy fecal coliforms by use of standard selective and enhanced-recovery membrane-filtration methods. Mean percent injuries and survivals were calculated from the fecal-coliform concentration data for each field study. The results of analysis of variance (ANOVA) indicated that treatment affected mean percent injury and survival, whereas site did not. In the warm-weather Lake Erie field study, but not in the warm-weather Cuyahoga River studies, the results of ANOVA indicated that dechlorination enhanced the repair of injuries and regrowth of chlorine-injured fecal coliforms on culture media over chlorination alone. The results of ANOVA on the percent injury from CSO effluent field studies indicated that dechlorination reduced the ability of organisms to recover and regrow on culture media over chlorination alone. However, because of atypical patterns of concentration increases and decreases in some CSO effluent samples, more work needs to be done before the effect of dechlorination and chlorination on reducing fecal-coliform concentrations in CSO effluents can be confirmed. The results of ANOVA on percent survivals found statistically significant differences among the three treatment methods for all but one study. Dechlorination was found to be less effective than chlorination alone in reducing the survival of fecal coliforms in wastewater effluent, but not in CSO effluent. If the concentration of fecal coliforms determined by use of the enhanced-recovery method can be predicted accurately from the concentration found by use of the standard method, then increased monitoring and expense to detect chlorine-injured organisms would be unnecessary. The results of linear regression analysis, however, indicated that the relation between enhanced-recovery and standard-method concentrations was best represented when the data were grouped by treatment. The model generated from linear regression of the unchlorinated data set provided an accurate estimate of enhanced-recovery concentrations from standard-method concentrations, whereas the models generated from the chlorinated and dechlorinated data sets did not. In addition, evaluation of fecal-coliform concentrations found in field studies in terms of Ohio recreational water-quality standards showed that concentrations obtained by standard and enhanced-recovery methods were not comparable. Sample treatment and analysis methods were found to affect the percentage of samples meeting and exceeding Ohio's bathing-water, primary-contact, and secondary-contact standards. Therefore, determining the health risk of swimming in receiving waters was often difficult without information on enhanced-recovery method concentrations and was especially difficult in waters receiving high proportions of chlorinated or dechlorinated effluents.
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
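As an illustration of why the choice of filter statistic matters, the sketch below contrasts a plain standard-deviation filter with one applied after a variance-stabilizing transform for proportions. The arcsine square-root transform is used here only as a generic choice for Beta-like data; it is not necessarily the statistic developed in the paper.

```python
import numpy as np

def top_k_features(beta, k, stabilize=True):
    """Non-specific filter: return indices of the k most variable features.

    beta: (samples x features) matrix of methylation proportions in [0, 1].
    If stabilize is True, variability is measured after an arcsine square-root
    transform, a generic variance-stabilizing transform for proportions
    (an illustrative choice, not necessarily the paper's statistic).
    """
    x = np.arcsin(np.sqrt(beta)) if stabilize else beta
    sd = x.std(axis=0, ddof=1)
    return np.argsort(sd)[::-1][:k]

# The plain SD filter tends to favour probes with means near 0.5, because the
# variance of a proportion is largest there; the transformed filter does not.
rng = np.random.default_rng(1)
beta = rng.beta(a=0.5, b=0.5, size=(100, 10000))
keep_sd = top_k_features(beta, 1000, stabilize=False)
keep_vst = top_k_features(beta, 1000, stabilize=True)
print(len(set(keep_sd) & set(keep_vst)), "features selected by both filters")
```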
Time dependence of breakdown in a global fiber-bundle model with continuous damage.
Moral, L; Moreno, Y; Gómez, J B; Pacheco, A F
2001-06-01
A time-dependent global fiber-bundle model of fracture with continuous damage is formulated in terms of a set of coupled nonlinear differential equations. A first integral of this set is analytically obtained. The time evolution of the system is studied by applying a discrete probabilistic method. Several results are discussed emphasizing their differences with the standard time-dependent model. The results obtained show that with this simple model a variety of experimental observations can be qualitatively reproduced.
2012-01-01
waste management tools at locations where more sophisticated methods of solid waste disposal (incinerators, reuse/recycling, containerized removal by...an incinerator or other equipment specifically designed…for burning of solid waste, designated for the purpose of disposing of solid waste by...regularly exceeded the 24-hour standards set by the US Environmental Protection Agency. Exhaust and Industrial Byproducts: The operational setting in
1998-02-24
step, the earthquake data set must be re-checked in order for the change to take effect. Once changed the new symbol stays changed until the session is...standard methods for discriminating between earthquakes and ripple fired explosions to a new geologic setting (northwest Morocco) in an effort to examine the...Tectonophysics, 217: 217-226. Shapira, A., Avni, R. & Nur, A., 1993. Note: A New Estimate For The Epicenter Of The Jericho Earthquake Of 11 July 1927. Israel
Setting technical standards for visual assessment procedures
Kenneth H. Craik; Nickolaus R. Feimer
1979-01-01
Under the impetus of recent legislative and administrative mandates concerning analysis and management of the landscape, governmental agencies are being called upon to adopt or develop visual resource and impact assessment (VRIA) systems. A variety of techniques that combine methods of psychological assessment and landscape analysis to serve these purposes is being...
On the Estimation of Standard Errors in Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim
2018-01-01
Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…
CTEPP STANDARD OPERATING PROCEDURE FOR SETTING UP A HOUSEHOLD SAMPLING SCHEDULE (SOP-2.10)
This SOP describes the method for scheduling study subjects for field sampling activities in North Carolina (NC) and Ohio (OH). There are three field sampling teams with two staff members on each team. Two field sampling teams collect the field data simultaneously. A third fiel...
Empirical Performance of Covariates in Education Observational Studies
ERIC Educational Resources Information Center
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
The Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints consists of equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
USDA-ARS?s Scientific Manuscript database
Characterization of complex microbial communities by DNA sequencing has become a standard technique in microbial ecology. Yet, particular features of this approach render traditional methods of community comparison problematic. In particular, a very low proportion of community members are typically ...
Problem-Solving Therapy for Depression in Adults: A Systematic Review
ERIC Educational Resources Information Center
Gellis, Zvi D.; Kenaley, Bonnie
2008-01-01
Objectives: This article presents a systematic review of the evidence on problem-solving therapy (PST) for depressive disorders in noninstitutionalized adults. Method: Intervention studies using randomized controlled designs are included and methodological quality is assessed using a standard set of criteria from the Cochrane Collaborative Review…
Microarray Genomic Systems Development
2008-06-01
11 species), Escherichia coli TOP10 (7 strains), and Geobacillus stearothermophilus. Using standard molecular biology methods, we isolated genomic...comparisons. Results: Different species of bacteria, including Escherichia coli, Bacillus bacteria, and Geobacillus stearothermophilus, produce qualitatively...oligonucleotides to labelled genomic DNA from a set of test samples, including eleven Bacillus species, Geobacillus stearothermophilus, and seven Escherichia
42 CFR 37.52 - Method of obtaining definitive interpretations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... other diseases must be demonstrated by those physicians who desire to be B Readers by taking and passing... specified by NIOSH. Each physician who desires to take the digital version of the examination will be provided a complete set of the current NIOSH-approved standard reference digital radiographs. Physicians...
75 FR 12793 - Petitions for Modification
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-17
... number'' on the subject line, by any of the following methods: 1. Electronic Mail: Standards-Petitions... cables will be no smaller than 10 American Wire Gauge (AWG); (4) all circuit breakers used to protect... unit calibrated to trip at 70% of phase to phase short circuit current. The trip setting of these...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spickett, Jeffery, E-mail: J.Spickett@curtin.edu.au; Faculty of Health Sciences, School of Public Health, Curtin University, Perth, Western Australia; Katscherian, Dianne
The approaches used for setting or reviewing air quality standards vary from country to country. The purpose of this research was to consider the potential to improve decision-making through integration of HIA into the processes to review and set air quality standards used in Australia. To assess the value of HIA in this policy process, its strengths and weaknesses were evaluated alongside a review of international processes for setting air quality standards. Air quality standard setting programmes elsewhere have either used HIA or have amalgamated and incorporated factors normally found within HIA frameworks. They clearly demonstrate the value of a formalised HIA process for setting air quality standards in Australia. The following elements should be taken into consideration when using HIA in standard setting. (a) The adequacy of a mainly technical approach in current standard setting procedures to consider social determinants of health. (b) The importance of risk assessment criteria and information within the HIA process. The assessment of risk should consider equity, the distribution of variations in air quality in different locations and the potential impacts on health. (c) The uncertainties in extrapolating evidence from one population to another or to subpopulations, especially the more vulnerable, due to differing environmental factors and population variables. (d) The significance of communication with all potential stakeholders on issues associated with the management of air quality. In Australia there is also an opportunity for HIA to be used in conjunction with the NEPM to develop local air quality standard measures. The outcomes of this research indicated that the use of HIA for air quality standard setting at the national and local levels would prove advantageous. Highlights: • A Health Impact Assessment framework has been applied to a policy development process. • The HIA process was evaluated for application in air quality standard setting. • The advantages of HIA in the air quality standard setting process are demonstrated.
Schmitt, Jochen; Apfelbacher, Christian; Spuls, Phyllis I; Thomas, Kim S; Simpson, Eric L; Furue, Masutaka; Chalmers, Joanne; Williams, Hywel C
2015-01-01
Core outcome sets (COSs) are consensus-derived minimum sets of outcomes to be assessed in a specific situation. COSs are being increasingly developed to limit outcome-reporting bias, allow comparisons across trials, and strengthen clinical decision making. Despite the increasing interest in outcomes research, methods to develop COSs have not yet been standardized. The aim of this paper is to present the Harmonizing Outcomes Measures for Eczema (HOME) roadmap for the development and implementation of COSs, which was developed on the basis of our experience in the standardization of outcome measurements for atopic eczema. Following the establishment of a panel representing all relevant stakeholders and a research team experienced in outcomes research, the scope and setting of the core set should be defined. The next steps are the definition of a core set of outcome domains such as symptoms or quality of life, followed by the identification or development and validation of appropriate outcome measurement instruments to measure these core domains. Finally, the consented COS needs to be disseminated, implemented, and reviewed. We believe that the HOME roadmap is a useful methodological framework to develop COSs in dermatology, with the ultimate goal of better decision making and promoting patient-centered health care.
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
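For readers unfamiliar with landmark-based standardization, the sketch below shows the basic histogram-landmark idea that the generalized-scale methods refine: map a set of intensity percentiles of each acquired image onto a fixed standard scale by piecewise-linear interpolation. The percentile choice and the standard scale are assumptions made for illustration; the g-scale and gB-scale methods instead derive their landmarks from principal tissue regions.

```python
import numpy as np

def landmark_standardize(image, landmarks_std,
                         pct=(1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99)):
    """Piecewise-linear intensity standardization using histogram landmarks.

    Maps the image's intensity percentiles (pct) onto a fixed set of standard
    landmark intensities (landmarks_std); both the percentiles and the
    standard scale are illustrative choices.
    """
    landmarks_img = np.percentile(image, pct)
    flat = np.interp(image.ravel(), landmarks_img, landmarks_std)
    return flat.reshape(image.shape)

# Hypothetical standard scale: map the 1st-99th percentiles onto [0, 4095].
standard_scale = np.linspace(0, 4095, 11)
mri = np.random.default_rng(2).gamma(shape=2.0, scale=300.0, size=(64, 64, 32))
mri_std = landmark_standardize(mri, standard_scale)
print(mri_std.min(), mri_std.max())
```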
Quantum chemical approach to estimating the thermodynamics of metabolic reactions.
Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán
2014-11-12
Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.
Temperature calibration of cryoscopic solutions used in the milk industry by adiabatic calorimetry
NASA Astrophysics Data System (ADS)
Méndez-Lango, E.; Lira-Cortes, L.; Quiñones-Ibarra, R.
2013-09-01
One method to detect extraneous water in milk is through cryoscopy. This method is used to measure the freezing point of milk. For calibration of a cryoscope there is a set of standardized solutions with known freezing point values. These values are related to the solute concentration and are based on data that is almost a century old; no more recent results were found. It was also found that the reference solutions are not certified in temperature: they do not have traceability to the temperature unit or standards. We prepared four solutions and measured them on a cryoscope and on an adiabatic calorimeter. It was found that the results obtained with one technique do not coincide with those obtained with the other.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
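As a point of reference for the standard approach that the authors relax, the snippet below solves a small constrained problem with SciPy's SLSQP routine, a conventional SQP implementation that fully solves each QP subproblem. It is not the flexible algorithm analyzed in the paper, and the objective and constraint are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a quadratic subject to one nonlinear equality and simple bounds;
# SLSQP is a conventional SQP code, shown only as the "standard approach"
# baseline that the paper's flexible variant deviates from.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [{"type": "eq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 4.0}]

result = minimize(
    objective,
    x0=np.array([0.5, 0.5]),
    method="SLSQP",
    constraints=constraints,
    bounds=[(0, None), (0, None)],
)
print(result.x, result.fun)
```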
Lacasse, Anaïs; Roy, Jean-Sébastien; Parent, Alexandre J.; Noushi, Nioushah; Odenigbo, Chúk; Pagé, Gabrielle; Beaudet, Nicolas; Choinière, Manon; Stone, Laura S.; Ware, Mark A.
2017-01-01
Background: To better standardize clinical and epidemiological studies about the prevalence, risk factors, prognosis, impact and treatment of chronic low back pain, a minimum data set was developed by the National Institutes of Health (NIH) Task Force on Research Standards for Chronic Low Back Pain. The aim of the present study was to develop a culturally adapted questionnaire that could be used for chronic low back pain research among French-speaking populations in Canada. Methods: The adaptation of the French Canadian version of the minimum data set was achieved according to guidelines for the cross-cultural adaptation of self-reported measures (double forward-backward translation, expert committee, pretest among 35 patients with pain in the low back region). Minor cultural adaptations were also incorporated into the English version by the expert committee (e.g., items about race/ethnicity, education level). Results: This cross-cultural adaptation provides an equivalent French-Canadian version of the minimal data set questionnaire and a culturally adapted English-Canadian version. Modifications made to the original NIH minimum data set were minimized to facilitate comparison between the Canadian and American versions. Interpretation: The present study is a first step toward the use of a culturally adapted instrument for phenotyping French- and English-speaking low back pain patients in Canada. Clinicians and researchers will recognize the importance of this standardized tool and are encouraged to incorporate it into future research studies on chronic low back pain. PMID:28401140
Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A
2005-01-01
Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. The standard CUSUM (cumulative sum) charts are used for this purpose. However, they are difficult to use in terms of identifying the point at which outcomes begin to be outside recommended limits. Objective: To assess the Bernoulli CUSUM chart, which permits not only a 100% inspection rate, but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one-year mortality was set at 24%, the European Liver Transplant Registry average. A standard CUSUM was compared with the Bernoulli CUSUM: the control value mortality was therefore 24%, the maximum accepted mortality 30%, and the average number of observations to signal was 500—that is, the likelihood of a false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215) then progressively ascended, indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarm signals after patient 143, indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It only requires three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
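A minimal sketch of such a Bernoulli CUSUM is shown below: each binary outcome adds its log-likelihood-ratio increment for the in-control mortality of 24% against the maximum accepted 30%, and the chart signals when the statistic crosses a threshold. The threshold value used here is only a placeholder; in practice it would be chosen (by tables or simulation) so that the average number of observations to a false alarm is about 500.

```python
import numpy as np

def bernoulli_cusum(outcomes, p0=0.24, p1=0.30, h=4.5):
    """One-sided Bernoulli CUSUM for binary outcomes (1 = death within a year).

    Each outcome adds its log-likelihood-ratio increment; the statistic is
    held at 0 from below and signals when it exceeds the threshold h.
    The value h = 4.5 is only a placeholder; the real threshold would be
    calibrated so false alarms occur about once per 500 observations.
    """
    w_up = np.log(p1 / p0)                # increment for a death
    w_down = np.log((1 - p1) / (1 - p0))  # (negative) increment for a survivor
    s, path, alarms = 0.0, [], []
    for t, y in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_up if y else w_down))
        path.append(s)
        if s > h:
            alarms.append(t)
            s = 0.0                       # restart the chart after a signal
    return np.array(path), alarms

# Hypothetical sequence of 674 transplant outcomes with 24% mortality.
rng = np.random.default_rng(3)
path, alarms = bernoulli_cusum(rng.random(674) < 0.24)
print("alarms at observations:", alarms)
```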
Construct Maps as a Foundation for Standard Setting
ERIC Educational Resources Information Center
Wyse, Adam E.
2013-01-01
Construct maps are tools that display how the underlying achievement construct upon which one is trying to set cut-scores is related to other information used in the process of standard setting. This article reviews what construct maps are, uses construct maps to provide a conceptual framework to view commonly used standard-setting procedures (the…
Sewell, Fiona; Doe, John; Gellatly, Nichola; Ragan, Ian; Burden, Natalie
2017-10-01
The current animal-based paradigm for safety assessment must change. In September 2016, the UK National Centre for Replacement, Refinement and Reduction of Animals in Research (NC3Rs) brought together scientists from regulatory authorities, academia and industry to review progress in bringing new methodology into regulatory use, and to identify ways to expedite progress. Progress has been slow. Science is advancing to make this possible but changes are necessary. The new paradigm should allow new methodology to be adopted once it is developed rather than being based on a fixed set of studies. Regulatory authorities can help by developing Performance-Based Standards. The most pressing need is in repeat dose toxicology, although setting standards will be more complex than in areas such as sensitization. Performance standards should be aimed directly at human safety, not at reproducing the results of animal studies. Regulatory authorities can also aid progress towards the acceptance of non-animal based methodology by promoting "safe-haven" trials where traditional and new methodology data can be submitted in parallel to build up experience in the new methods. Industry can play its part in the acceptance of new methodology, by contributing to the setting of performance standards and by actively contributing to "safe-haven" trials. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles
NASA Astrophysics Data System (ADS)
Panday, Aishwarya; Bansal, Hari Om
2017-02-01
This article aims to find the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. The typical parameters of a road profile can be reduced to a useful smaller set using principal component analysis and independent component analysis. The resultant data set obtained after size reduction may yield a more appropriate and important parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are considered for the simulation, and improved results compared with the default are obtained.
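To illustrate the ranking step, the sketch below implements a generic TOPSIS procedure: vector-normalize the decision matrix, weight it, and rank alternatives by their relative closeness to the ideal solution. The criteria, weights, and benefit/cost directions are hypothetical and not taken from the article.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by TOPSIS closeness.

    benefit[j] is True when larger values of criterion j are better
    (e.g. fuel economy), False when smaller is better (e.g. roughness).
    """
    m = np.asarray(matrix, dtype=float)
    r = m / np.linalg.norm(m, axis=0)            # vector-normalized matrix
    v = r * np.asarray(weights, dtype=float)     # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(closeness)[::-1]           # best alternative first

# Hypothetical driving cycles scored on three reduced road-profile criteria.
rank = topsis(
    matrix=[[42.1, 0.31, 12.0], [38.7, 0.28, 10.5], [45.3, 0.35, 14.2]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, False, False],
)
print("cycle ranking (best first):", rank)
```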
Sassen, J
2000-08-01
The livestock health care service is also very much involved and interested in the surveillance of drinking water. However, in order to examine the water immediately "on the fly", test kits have to be provided which offer results comparable to those obtained in laboratories according to official prescription. The German Army was confronted with a similar situation during the recently performed missions in crisis regions. At the early stage of a mission, laboratory equipment is usually not yet established. Therefore a set of test kits was compiled suitable for mobile microbiological examination of drinking water. This set was extensively examined in comparison with reference methods. In conclusion, it is shown that the mobile set gives equal or even better results than those obtained according to legally prescribed standard procedures.
Compliance with the guide for commissioning oral surgery: an audit and discussion.
Modgill, O; Shah, A
2017-10-13
Introduction The Guide for commissioning oral surgery and oral medicine published by NHS England (2015) prescribes the level of complexity of oral surgery and oral medicine investigations and procedures to be carried out within NHS services. These are categorised as Level 1, Level 2, Level 3A and Level 3B. An audit was designed to ascertain the level of oral surgery procedures performed by clinicians of varying experience and qualification working in a large oral surgery department within a major teaching hospital.Materials and methods Two audit cycles were conducted on retrospective case notes and radiographic review of 100 patient records undergoing dental extractions within the Department of Oral Surgery at King's College Dental Hospital. The set gold standard was: '100% of Level 1 procedures should be performed by dental undergraduates or discharged back to the referring general dental practitioner'. Data were collected and analysed on a Microsoft Excel spreadsheet. The results of the first audit cycle were presented to all clinicians within the department in a formal meeting, recommendations were made and an action plan implemented prior to undertaking a second cycle.Results The first cycle revealed that 25% of Level 1 procedures met the set gold standard, with Level 2 practitioners performing the majority of Level 1 and Level 2 procedures. The second cycle showed a marked improvement, with 66% of Level 1 procedures meeting the set gold standard.Conclusion Our audit demonstrates that whilst we were able to achieve an improvement with the set gold standard, several barriers still remain to ensure that patients are treated by the appropriate level of clinician in a secondary care setting. We have used this audit as a foundation upon which to discuss the challenges faced in implementation of the commissioning framework within both primary and secondary dental care and strategies to overcome these challenges, which are likely to be encountered in any NHS care setting in which oral surgery procedures are performed.
Leveraging transcript quantification for fast computation of alternative splicing profiles.
Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo
2015-09-01
Alternative splicing plays an essential role in many cellular processes and bears major relevance in the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA accuracy is comparable and sometimes superior to standard methods using simulated as well as real RNA-sequencing data compared with experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
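The core quantity such a tool reports can be sketched in a few lines: the inclusion level of an event is the summed abundance of the transcripts containing one form of the event divided by the abundance of all transcripts relevant to that event. The transcript identifiers and abundances below are hypothetical; they only show the arithmetic, not SUPPA's event definitions or file formats.

```python
def psi(tpm, inclusion_ids, total_ids):
    """Fraction spliced-in of one event from transcript abundances.

    tpm: dict mapping transcript ID -> abundance (e.g. TPM from a fast
    transcript quantifier).
    inclusion_ids: transcripts that include the alternative form.
    total_ids: all transcripts of the gene relevant to the event.
    Returns None when none of the relevant transcripts are expressed.
    """
    total = sum(tpm.get(t, 0.0) for t in total_ids)
    if total == 0.0:
        return None
    return sum(tpm.get(t, 0.0) for t in inclusion_ids) / total

# Hypothetical exon-skipping event with two inclusion isoforms out of three.
abundances = {"ENST_A": 12.0, "ENST_B": 3.0, "ENST_C": 5.0}
print(psi(abundances, ["ENST_A", "ENST_B"], ["ENST_A", "ENST_B", "ENST_C"]))
```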
Morris, Heather C; Monaco, Lisa A; Steele, Andrew; Wainwright, Norm
2010-10-01
Historically, colony-forming units as determined by plate cultures have been the standard unit for microbiological analysis of environmental samples, medical diagnostics, and products for human use. However, the time and materials required make plate cultures expensive and potentially hazardous in the closed environments of future NASA missions aboard the International Space Station and missions to other Solar System targets. The Limulus Amebocyte Lysate (LAL) assay is an established method for ensuring the sterility and cleanliness of samples in the meat-packing and pharmaceutical industries. Each of these industries has verified numerical requirements for the correct interpretation of results from this assay. The LAL assay is a rapid, point-of-use, verified assay that has already been approved by NASA Planetary Protection as an alternate, molecular method for the examination of outbound spacecraft. We hypothesize that standards for molecular techniques, similar to those used by the pharmaceutical and meat-packing industries, need to be set by space agencies to ensure accurate data interpretation and subsequent decision making. In support of this idea, we present research that has been conducted to relate the LAL assay to plate cultures, and we recommend values obtained from these investigations that could assist in interpretation and analysis of data obtained from the LAL assay.
PIXE and XRF Analysis of Roman Denarii
NASA Astrophysics Data System (ADS)
Fasano, Cecilia; Raddell, Mark; Manukyan, Khachatur; Stech, Edward; Wiescher, Michael
2017-09-01
A set of Roman Denarii from the republican to the imperial period (140 BC-240 AD) has been studied using X-ray fluorescence (XRF) scanning and proton induced X-ray emission (PIXE) techniques. XRF and PIXE are commonly used in the study of cultural heritage objects because they are nondestructive. The combination of these two methods is also unique because of the ability to penetrate the sample with a broader spectrum of depths and energies than either could achieve on its own. The coins span a large part of Roman history, and their analysis serves to follow the economic and political change of the era using the relative silver and copper contents of each sample. In addition to analyzing the samples, the study sought to compare these two common analysis techniques and to explore the use of a standard to examine any shortcomings in either of the methods. Data sets were compared and then adjusted to a calibration curve which was created from the analysis of a number of standard solutions. The concentrations of the standard solutions were confirmed using inductively coupled plasma spectroscopy. Through this we were able to assemble results which will advance the understanding of PIXE and XRF techniques as well as add to the wealth of knowledge of ancient Roman currency.
Clustering of Variables for Mixed Data
NASA Astrophysics Data System (ADS)
Saracco, J.; Chavent, M.
2016-05-01
This chapter presents clustering of variables, the aim of which is to lump together strongly related variables. The proposed approach works on a mixed data set, i.e. on a data set which contains both numerical and categorical variables. Two algorithms for clustering of variables are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
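For orientation, one of the known simplex-projection algorithms that the paper relates its dual approach to is the classic sort-and-threshold method sketched below. It is shown only as background; it is not the projective-dual Newton algorithm described in the abstract.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto the standard simplex {x >= 0, sum x = 1}.

    Classic sort-and-threshold algorithm: find the largest k such that the
    shifted top-k entries stay positive, then clip everything at that shift.
    This is one of the known simplex-projection algorithms mentioned in the
    abstract, not the projective-dual method itself.
    """
    y = np.asarray(y, dtype=float)
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    tau = (css[k] - 1.0) / (k + 1)
    return np.maximum(y - tau, 0.0)

x = project_to_simplex([0.7, 1.4, -0.3, 0.2])
print(x, x.sum())   # nonnegative entries summing to 1
```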
Measuring housing quality in the absence of a monetized real estate market.
Rindfuss, Ronald R; Piotrowski, Martin; Thongthai, Varachai; Prasartkul, Pramote
2007-03-01
Measuring housing quality or value or both has been a weak component of demographic and development research in less developed countries that lack an active real estate (housing) market. We describe a new method based on a standardized subjective rating process. It is designed to be used in settings that do not have an active, monetized housing market. The method is applied in an ongoing longitudinal study in north-east Thailand and could be straightforwardly used in many other settings. We develop a conceptual model of the process whereby households come to reside in high-quality or low-quality housing units. We use this theoretical model in conjunction with longitudinal data to show that the new method of measuring housing quality behaves as theoretically expected, thus providing evidence of face validity.
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method, based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method allows to achieve a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.
Setting and validating the pass/fail score for the NBDHE.
Tsai, Tsung-Hsun; Dixon, Barbara Leatherman
2013-04-01
This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery, and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of the scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and the score scale was then established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
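A sketch of the likelihood evaluation is given below, using the per-cluster expression commonly quoted for this model, in which each cluster contributes a term depending only on its size and its summed internal correlation. The exact expression is treated here as an assumption to be checked against the paper, and the toy correlation matrix is hypothetical.

```python
import numpy as np

def cluster_log_likelihood(C, labels):
    """Log-likelihood of a cluster structure from the correlation matrix alone.

    C: Pearson correlation matrix of the data; labels: cluster index per object.
    Each cluster s with n_s > 1 members and internal correlation
    c_s = sum_{i,j in s} C[i, j] contributes
        0.5 * (log(n_s / c_s) + (n_s - 1) * log((n_s**2 - n_s) / (n_s**2 - c_s))),
    the form usually quoted for this maximum-likelihood clustering model
    (treat this formula as an assumption, not a quotation from the paper).
    """
    C = np.asarray(C, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for s in np.unique(labels):
        idx = np.flatnonzero(labels == s)
        n_s = len(idx)
        if n_s < 2:
            continue  # singletons carry no information in this model
        c_s = C[np.ix_(idx, idx)].sum()
        total += 0.5 * (np.log(n_s / c_s)
                        + (n_s - 1) * np.log((n_s**2 - n_s) / (n_s**2 - c_s)))
    return total

# Compare two candidate partitions of four objects (toy correlation matrix).
C = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])
print(cluster_log_likelihood(C, [0, 0, 1, 1]))   # higher: the better structure
print(cluster_log_likelihood(C, [0, 1, 0, 1]))
```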
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect immunofluorescence (IIF) is a standard method for the antinuclear autoantibody (ANA) test using Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying the fluorescence intensity. This paper presents the use of fuzzy logic to classify the fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The fuzzy algorithm involves image pre-processing by filtering noise and smoothing the image, converting the red, green and blue (RGB) color space of the images to the lightness and chromaticity "a" and "b" (LAB) color space, extracting the mean values of the lightness and chromaticity "a" layers, and classifying them with a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test performance, the fuzzy logic classifier obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
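A minimal sketch of the fuzzy classification step is given below: triangular membership functions map a mean LAB-lightness value to negative/intermediate/positive grades. The breakpoints and the single-feature rule are illustrative placeholders, not the score ranges used in the study.

```python
import numpy as np

def trimf(x, a, b, c):
    # Triangular membership function with feet at a, c and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def classify_intensity(mean_lightness):
    # Fuzzy positivity classification of a Hep-2 image from its mean lightness
    # (L channel of LAB). The breakpoints below are hypothetical placeholders.
    memberships = {
        "negative":     trimf(mean_lightness, 0,  20,  45),
        "intermediate": trimf(mean_lightness, 35, 55,  75),
        "positive":     trimf(mean_lightness, 65, 85, 100),
    }
    return max(memberships, key=memberships.get), memberships

label, mu = classify_intensity(80.0)
print(label, mu)
```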
Establishment of Class e1 Mass Standard of 50 kg
NASA Astrophysics Data System (ADS)
Yao, Hong; Wang, Jian; Ding, Jingan; Zhong, Ruilin; Ren, Xiaoping
Because of equipment limitations, the dissemination of large masses in China has, since the 1950s, been realized with large numbers of higher-class 20 kg weights. With improvements in technique and growing customer requirements, it has become necessary to establish a mass standard for the 50 kg weight. In the 1990s, the mass standard laboratory set up Class E1 weight sets from 20 kg to 1 mg. Extending the Class E1 mass capacity up to 50 kg requires not only producing Class E1 50 kg weights and importing a mass comparator, but also lifting the heavy weight safely from the weight box onto the balance receptor. The mass comparator has now been installed at the Hepingli campus of NIM. Two Class E1 50 kg weights are determined by the combination weighing method. A lifting device has been mounted close to the mass comparator so that the 50 kg weight can be moved easily.
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
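The greedy maximization of a facility-location objective shown below is a standard example of the submodular selection idea described above; it is not the authors' mixture objective, and the similarity matrix here is random placeholder data standing in for pairwise sequence similarities.

```python
import numpy as np

def greedy_facility_location(S, k):
    # Greedy maximization of f(A) = sum_i max_{j in A} S[i, j], where S is a
    # pairwise similarity matrix. For monotone submodular objectives the greedy
    # algorithm carries the classic (1 - 1/e) approximation guarantee.
    n = S.shape[0]
    selected, coverage = [], np.zeros(n)
    for _ in range(k):
        # marginal gain of adding each remaining candidate
        gains = np.maximum(S, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        coverage = np.maximum(coverage, S[:, j])
    return selected

rng = np.random.default_rng(1)
S = rng.random((50, 50))
S = (S + S.T) / 2                  # symmetric similarities in [0, 1]
np.fill_diagonal(S, 1.0)
print(greedy_facility_location(S, 5))
```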
Quantitative LIBS analysis of vanadium in samples of hexagonal mesoporous silica catalysts.
Pouzar, Miloslav; Kratochvíl, Tomás; Capek, Libor; Smoláková, Lucie; Cernohorský, Tomás; Krejcová, Anna; Hromádko, Ludek
2011-02-15
A method for the analysis of vanadium in hexagonal mesoporous silica (V-HMS) catalysts using Laser Induced Breakdown Spectrometry (LIBS) is proposed. A commercially available LIBS spectrometer was calibrated with the aid of authentic V-HMS samples previously analyzed by ICP OES after microwave digestion. Deposition of the sample on the surface of adhesive tape was adopted as the sample preparation method. A strong matrix effect connected with the catalyst preparation technique (vanadium either added during HMS synthesis, or impregnated into an already synthesised silica matrix) was observed. The concentration range of V in the set of nine calibration standards was 1.3-4.5% (w/w). The limit of detection was 0.13% (w/w), calculated as three times the standard deviation of five replicate determinations of vanadium in a real sample with a very low vanadium concentration. Comparable results from LIBS and ED XRF were obtained when the same set of standards was used to calibrate both methods and vanadium was measured in the same type of real samples. A LIBS calibration constructed using V-HMS-impregnated samples failed for V-HMS-synthesized samples. LIBS measurements appear to be strongly influenced by the different chemical forms of vanadium in impregnated and synthesised samples. The combination of LIBS and ED XRF can provide new information about the measured samples (in our case, for example, about the catalyst preparation procedure). Copyright © 2010 Elsevier B.V. All rights reserved.
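The calibration arithmetic implied by the abstract (a linear calibration against ICP-OES reference values and a 3-sigma detection limit from five replicates) can be sketched as follows. All numbers are hypothetical, and the conversion of the replicate signal scatter to concentration units via the calibration slope is an assumption about how the reported LOD was obtained.

```python
import numpy as np

# Hypothetical LIBS line intensities against reference V concentrations (ICP-OES).
conc = np.array([1.3, 1.9, 2.4, 2.9, 3.4, 3.8, 4.1, 4.3, 4.5])          # % w/w
signal = np.array([0.21, 0.30, 0.39, 0.47, 0.55, 0.61, 0.66, 0.70, 0.73])

slope, intercept = np.polyfit(conc, signal, 1)   # least-squares calibration line

# Five replicate signals on a low-vanadium sample, converted to a 3*SD detection limit.
replicates = np.array([0.030, 0.034, 0.028, 0.031, 0.033])
lod = 3 * np.std(replicates, ddof=1) / slope

print(f"slope={slope:.3f}, intercept={intercept:.3f}, LOD={lod:.3f} % w/w")
```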
Liebow, Edward B; Derzon, James H; Fontanesi, John; Favoretto, Alessandra M; Baetz, Rich Ann; Shaw, Colleen; Thompson, Pamela; Mass, Diana; Christenson, Robert; Epner, Paul; Snyder, Susan R
2012-09-01
To conduct a systematic review of the evidence available in support of automated notification methods and call centers and to acknowledge other considerations in making evidence-based recommendations for best practices in improving the timeliness and accuracy of critical value reporting. This review followed the Laboratory Medicine Best Practices (LMBP) review methods (Christenson, et al. 2011). A broad literature search and call for unpublished submissions returned 196 bibliographic records which were screened for eligibility. 41 studies were retrieved. Of these, 4 contained credible evidence for the timeliness and accuracy of automatic notification systems and 5 provided credible evidence for call centers for communicating critical value information in in-patient care settings. Studies reporting improvement from implementing automated notification findings report mean differences and were standardized using the standard difference in means (d=0.42; 95% CI=0.2-0.62) while studies reporting improvement from implementing call centers generally reported criterion referenced findings and were standardized using odds ratios (OR=22.1; 95% CI=17.1-28.6). The evidence, although suggestive, is not sufficient to make an LMBP recommendation for or against using automated notification systems as a best practice to improve the timeliness of critical value reporting in an in-patient care setting. Call centers, however, are effective in improving the timeliness of critical value reporting in an in-patient care setting, and meet LMBP criteria to be recommended as an "evidence-based best practice." Copyright © 2012 The Canadian Society of Clinical Chemists. All rights reserved.
Setting, Evaluating, and Maintaining Certification Standards with the Rasch Model.
ERIC Educational Resources Information Center
Grosse, Martin E.; Wright, Benjamin D.
1986-01-01
Based on the standard setting procedures of the American Board of Preventive Medicine for their Core Test, this article describes how Rasch measurement can facilitate using test content judgments in setting a standard. Rasch measurement can then be used to evaluate and improve the precision of the standard and to hold it constant across time.…
A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.
Hart, Emma; Sim, Kevin
2016-01-01
We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
The robust design for improving crude palm oil quality in Indonesian Mill
NASA Astrophysics Data System (ADS)
Maretia Benu, Siti; Sinulingga, Sukaria; Matondang, Nazaruddin; Budiman, Irwan
2018-04-01
This research was conducted in a palm oil mill in Sumatra Utara Province, Indonesia. The main product of this mill is Crude Palm Oil (CPO), which currently does not meet the expected quality standard. CPO is the raw material for many fat-derivative products. The generally stipulated quality criteria are the dirt count, free fatty acid content, and moisture of the CPO. The aim of this study is to obtain the optimal settings of the factors affecting CPO quality; the optimal settings will result in an improvement of product quality. In this research, experimental design with the Taguchi Method is used. The steps of this method are identifying the influencing factors, selecting the orthogonal array, processing the data using the ANOVA test and the signal-to-noise ratio, and confirming the results using the Quality Loss Function. Applying the Taguchi Method, this study suggests setting fruit maturity at 75.4-86.9%, digester temperature at 95°C and the press at 21 Ampere to reduce quality deviation by up to 42.42%.
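For reference, the Taguchi signal-to-noise ratios used to rank factor settings can be computed as below; the "smaller is better" form applies to criteria such as dirt count, free fatty acid and moisture. The measurement values are made up for illustration.

```python
import numpy as np

def sn_smaller_is_better(y):
    # Taguchi S/N ratio for responses to be minimized: -10*log10(mean(y^2)).
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):
    # Taguchi S/N ratio for responses to be maximized: -10*log10(mean(1/y^2)).
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical free-fatty-acid measurements (%) for two factor settings;
# the setting with the larger S/N is preferred.
print(sn_smaller_is_better([3.2, 3.5, 3.1]))
print(sn_smaller_is_better([2.6, 2.7, 2.8]))
```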
Topology optimization in acoustics and elasto-acoustics via a level-set method
NASA Astrophysics Data System (ADS)
Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.
2018-04-01
Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
NASA Technical Reports Server (NTRS)
Ruiz, Ian B.; Burke, Gary R.; Lung, Gerald; Whitaker, William D.; Nowicki, Robert M.
2004-01-01
The Jet Propulsion Laboratory (JPL) has developed a command interface chip-set that primarily consists of two mixed-signal ASICs: the Command Interface ASIC (CIA) and the Analog Interface ASIC (AIA). The open-systems architecture employed during the design of this chip-set enables its use both as an intelligent gateway between the system's flight computer and the control, actuation, and activation of the spacecraft's loads, valves, and pyrotechnics, respectively, and as the regulator of the spacecraft power bus. Furthermore, the architecture is highly adaptable and employs fault-tolerant design methods, enabling a host of other mission uses including reliable remote data collection. The objective of this design is both to provide a needed flight component that meets the stringent environmental requirements of current deep space missions and to add a new element to a growing library that can be used as a standard building block for future missions to the outer planets.
High-level intuitive features (HLIFs) for intuitive skin lesion description.
Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A
2015-03-01
A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in melanoma decision support systems. Existing feature sets for analyzing standard camera images are comprised of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.
Song, Tao; Zhang, Feng-ping; Liu, Yao-min; Wu, Zong-wen; Suo, You-rui
2012-08-01
In the present research, a novel method was established for the determination of five fatty acids in soybean oil by transmission reflection-near infrared spectroscopy. The optimum conditions for the mathematical models of the five components (C16:0, C18:0, C18:1, C18:2 and C18:3) were studied, including sample set selection, chemical value analysis, and the detection methods and conditions. Chemical values were determined by gas chromatography. One hundred fifty-eight samples were selected: 138 for the modeling set, 10 for the testing set and 10 for the unknown sample set. All samples were placed in sample pools and scanned by transmission reflection-near infrared spectroscopy after sonic cleaning for 10 minutes. The 1100-2500 nm spectral region was analyzed with an acquisition interval of 2 nm. The modified partial least squares method was chosen for calibration model creation. The results demonstrated that the 1-VR values of the five fatty acids between the reference values of the modeling sample set and the near infrared predictive values were 0.8839, 0.5830, 0.9001, 0.9776 and 0.9596, respectively, and the corresponding SECV values were 0.42, 0.29, 0.83, 0.46 and 0.21. The standard errors of calibration of the five fatty acids between the reference values of the testing sample set and the near infrared predictive values were 0.891, 0.790, 0.900, 0.976 and 0.942, respectively. It was shown that the near infrared predictive values were linearly related to the chemical values and that the mathematical models established for the fatty acids of soybean oil were feasible. For validation, 10 unknown samples were analyzed by near infrared spectroscopy; the relative standard deviation between the predicted and chemical values was less than 5.50%. Transmission reflection-near infrared spectroscopy therefore provides good accuracy in the analysis of the fatty acids of soybean oil.
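A hedged sketch of the chemometric step follows, using standard PLS regression with cross-validation in place of the modified PLS algorithm named in the abstract; spectra and reference values are random placeholders with the same shapes as the paper's modeling set.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for NIR spectra (1100-2500 nm at 2 nm steps -> 701 points)
# and a fatty-acid reference value from GC.
rng = np.random.default_rng(0)
X = rng.normal(size=(138, 701))                          # modeling-set spectra
coefs = rng.normal(size=701)
y = X @ coefs * 0.01 + rng.normal(scale=0.3, size=138)   # reference values

pls = PLSRegression(n_components=8)                      # latent-variable count is a choice
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

secv = np.sqrt(np.mean((y - y_cv) ** 2))                 # RMSE of cross-validation (SECV-like)
one_minus_vr = 1 - np.var(y - y_cv) / np.var(y)          # 1-VR statistic
print(f"SECV={secv:.3f}, 1-VR={one_minus_vr:.3f}")
```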
Hancewicz, Thomas M; Xiao, Chunhong; Zhang, Shuliang; Misra, Manoj
2013-12-01
In vivo confocal Raman spectroscopy has become the measurement technique of choice for skin health and skin care related communities as a way of measuring functional chemistry aspects of skin that are key indicators for care and treatment of various skin conditions. Chief among these measurements are stratum corneum water content, a critical health indicator for severe skin conditions related to dryness, and natural moisturizing factor components that are associated with skin protection and barrier health. In addition, in vivo Raman spectroscopy has proven to be a rapid and effective method for quantifying component penetration in skin for topically applied skin care formulations. The benefit of such a capability is that noninvasive analytical chemistry can be performed in vivo in a clinical setting, significantly simplifying studies aimed at evaluating product performance. This presumes, however, that the data and analysis methods used are compatible and appropriate for the intended purpose. The standard analysis method used by most researchers for in vivo Raman data is ordinary least squares (OLS) regression. The focus of the work described in this paper is the applicability of OLS for in vivo Raman analysis, with particular attention given to non-ideal data that often violate the assumptions underlying proper application of OLS. We then describe a newly developed in vivo Raman spectroscopic analysis methodology called multivariate curve resolution-augmented ordinary least squares (MCR-OLS), a relatively simple route to addressing many of the issues with OLS. The method is compared with the standard OLS method using the same in vivo Raman data set and using both qualitative and quantitative comparisons based on model fit error, adherence to known data constraints, and performance against calibration samples. A clear improvement is shown in each comparison for MCR-OLS over standard OLS, thus supporting the premise that the MCR-OLS method is better suited for general-purpose multicomponent analysis of in vivo Raman spectral data. This suggests that the methodology is more readily adaptable to a wide range of component systems and is thus more generally applicable than standard OLS.
International bowel function basic spinal cord injury data set.
Krogh, K; Perkash, I; Stiens, S A; Biering-Sørensen, F
2009-03-01
International expert working group. To develop an International Bowel Function Basic Spinal Cord Injury (SCI) Data Set presenting a standardized format for the collection and reporting of a minimal amount of information on bowel function in daily practice or in research. Working group consisting of members appointed by the American Spinal Injury Association (ASIA) and the International Spinal Cord Society (ISCoS). A draft prepared by the working group was reviewed by Executive Committee of the International SCI Standards and Data Sets, and later by ISCoS Scientific Committee and the ASIA Board. Relevant and interested scientific and professional (international) organizations and societies (approximately 40) were also invited to review the data set and it was posted on the ISCoS and ASIA websites for 3 months to allow comments and suggestions. The ISCoS Scientific Committee, Council and ASIA Board received the data set for final review and approval. The International Bowel Function Basic SCI Data Set includes the following 12 items: date of data collection, gastrointestinal or anal sphincter dysfunction unrelated to SCI, surgical procedures on the gastrointestinal tract, awareness of the need to defecate, defecation method and bowel care procedures, average time required for defecation, frequency of defecation, frequency of fecal incontinence, need to wear pad or plug, medication affecting bowel function/constipating agents, oral laxatives and perianal problems. An International Bowel Function Basic SCI Data Set has been developed.
A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.
Robitaille, Line; Hoffer, L John
2016-04-21
In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy set up, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.
Xue, Gang; Song, Wen-qi; Li, Shu-chao
2015-01-01
In order to achieve rapid identification of circulating fire resistive coatings for steel structures of different brands, a new method for the fast discrimination of varieties of fire resistive coating for steel structures by means of near infrared spectroscopy was proposed. A raster-scanning near infrared spectroscopy instrument and near infrared diffuse reflectance spectroscopy were used to collect the spectral curves of different brands of fire resistive coating for steel structures, and the spectral data were preprocessed with standard normal variate transformation (SNV) and the Norris second derivative. Principal component analysis (PCA) was applied to the near infrared spectra for cluster analysis. The analysis showed that the cumulative reliability of PC1 to PC5 was 99.791%. A 3-dimensional plot was drawn with the scores of PC1, PC2 and PC3 × 10, which appeared to provide the best clustering of the varieties of fire resistive coating for steel structures. A total of 150 fire resistive coating samples were divided randomly into a calibration set and a validation set; the calibration set had 125 samples with 25 samples of each variety, and the validation set had 25 samples with 5 samples of each variety. Based on the principal component scores of unknown samples, Mahalanobis distances between each variety and the unknown samples were calculated to discriminate between the different varieties. The qualitative analysis model for external verification of unknown samples achieved a 10% recognition ratio. The results demonstrated that this identification method can be used as a rapid, accurate method to identify the classification of fire resistive coatings for steel structures and provide a technical reference for market regulation.
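The PCA-plus-Mahalanobis-distance discrimination step can be sketched as below. The SNV and Norris second-derivative preprocessing described in the abstract are omitted, and the spectra are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

def mahalanobis_classify(train_scores, train_labels, test_scores):
    # Assign each test sample to the class with the smallest Mahalanobis
    # distance in principal-component space.
    stats = {}
    for c in np.unique(train_labels):
        Xc = train_scores[train_labels == c]
        stats[c] = (Xc.mean(axis=0), np.linalg.pinv(np.cov(Xc, rowvar=False)))
    preds = []
    for x in test_scores:
        d = {c: float((x - mu) @ cov_inv @ (x - mu)) for c, (mu, cov_inv) in stats.items()}
        preds.append(min(d, key=d.get))
    return np.array(preds)

# Placeholder spectra: 125 calibration and 25 validation samples, 5 varieties.
rng = np.random.default_rng(2)
X_cal = rng.normal(size=(125, 300)) + np.repeat(np.arange(5), 25)[:, None] * 0.5
y_cal = np.repeat(np.arange(5), 25)
X_val = rng.normal(size=(25, 300)) + np.repeat(np.arange(5), 5)[:, None] * 0.5
y_val = np.repeat(np.arange(5), 5)

pca = PCA(n_components=3).fit(X_cal)          # scores on PC1-PC3, as in the paper
pred = mahalanobis_classify(pca.transform(X_cal), y_cal, pca.transform(X_val))
print("recognition ratio:", np.mean(pred == y_val))
```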
Predicting who will drop out of nursing courses: a machine learning exercise.
Moseley, Laurence G; Mead, Donna M
2008-05-01
The concepts of causation and prediction are different, and have different implications for practice. This distinction is applied here to studies of the problem of student attrition (although it is more widely applicable). Studies of attrition from nursing courses have tended to concentrate on causation, trying, largely unsuccessfully, to elicit what causes drop out. However, the problem may more fruitfully be cast in terms of predicting who is likely to drop out. One powerful method for attempting to make predictions is rule induction. This paper reports the use of the Answer Tree package from SPSS for that purpose. The main data set consisted of 3978 records on 528 nursing students, split into a training set and a test set. The source was standard university student records. The method obtained 84% sensitivity, 70% specificity, and 94% accuracy on previously unseen cases. The method requires large amounts of high quality data. When such data are available, rule induction offers a way to reduce attrition. It would be desirable to compare its results with those of predictions made by tutors using more informal conventional methods.
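A rough stand-in for the rule-induction exercise is shown below using a CART decision tree (the study used the SPSS AnswerTree package, so the algorithm differs); the student-record features are simulated, and sensitivity, specificity and accuracy are computed on a held-out test set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder student-record features (attendance, entry grades, age, ...)
# and a drop-out flag; real work would use the university's records.
rng = np.random.default_rng(3)
X = rng.normal(size=(528, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=528) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("accuracy", (tp + tn) / (tp + tn + fp + fn))
```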
Evolutionary game theory using agent-based methods.
Adami, Christoph; Schossau, Jory; Hintze, Arend
2016-12-01
Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions, require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
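A minimal agent-based sketch in the spirit of the review is given below: a Moran-like birth-death process for a two-strategy game with mutation. The payoff matrix, selection strength and mutation rate are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal agent-based Moran process for a two-strategy donation game
# (0 = cooperate, 1 = defect). One agent reproduces proportionally to fitness
# and its offspring replaces a random agent, with a small mutation rate.
rng = np.random.default_rng(4)
b, c = 3.0, 1.0                          # benefit and cost of cooperation
payoff = np.array([[b - c, -c],          # row: focal strategy, column: opponent
                   [b,      0.0]])
N, mu, generations = 100, 0.01, 2000
pop = rng.integers(0, 2, size=N)         # each agent carries one strategy "gene"

for _ in range(generations):
    counts = np.bincount(pop, minlength=2)
    # average payoff of each agent against the rest of the well-mixed population
    fitness = np.array([payoff[s] @ (counts - np.eye(2, dtype=int)[s]) / (N - 1)
                        for s in pop])
    fitness = np.exp(0.5 * fitness)      # exponential payoff-to-fitness map
    parent = rng.choice(N, p=fitness / fitness.sum())
    child = pop[parent] if rng.random() > mu else rng.integers(0, 2)
    pop[rng.integers(N)] = child         # offspring replaces a random individual

print("final fraction of cooperators:", np.mean(pop == 0))
```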
Ashton, Carol M; Wray, Nelda P; Jarman, Anna F; Kolman, Jacob M; Wenner, Danielle M; Brody, Baruch A
2013-01-01
Background If trials of therapeutic interventions are to serve society’s interests, they must be of high methodological quality and must satisfy moral commitments to human subjects. The authors set out to develop a clinical-trials compendium in which standards for the ethical treatment of human subjects are integrated with standards for research methods. Methods The authors rank-ordered the world’s nations and chose the 31 with >700 active trials as of 24 July 2008. Governmental and other authoritative entities of the 31 countries were searched, and 1004 English-language documents containing ethical and/or methodological standards for clinical trials were identified. The authors extracted standards from 144 of those: 50 designated as ‘core’, 39 addressing trials of invasive procedures and a 5% sample (N=55) of the remainder. As the integrating framework for the standards we developed a coherent taxonomy encompassing all elements of a trial’s stages. Findings Review of the 144 documents yielded nearly 15 000 discrete standards. After duplicates were removed, 5903 substantive standards remained, distributed in the taxonomy as follows: initiation, 1401 standards, 8 divisions; design, 1869 standards, 16 divisions; conduct, 1473 standards, 8 divisions; analysing and reporting results, 997 standards, four divisions; and post-trial standards, 168 standards, 5 divisions. Conclusions The overwhelming number of source documents and standards uncovered in this study was not anticipated beforehand and confirms the extraordinary complexity of the clinical trials enterprise. This taxonomy of multinational ethical and methodological standards may help trialists and overseers improve the quality of clinical trials, particularly given the globalisation of clinical research. PMID:21429960
Study unique artistic lopburi province for design brass tea set of bantahkrayang community
NASA Astrophysics Data System (ADS)
Pliansiri, V.; Seviset, S.
2017-07-01
The objectives of this study were as follows: 1) to study the production process of the handcrafted Brass Tea Set; and 2) to design and develop the handcrafted Brass Tea Set. The design process started from a combined analytical process and conceptual framework for product design, including Quality Function Deployment, the Theory of Inventive Problem Solving, Principles of Craft Design, and the Principle of Reverse Engineering. Experts in the fields of Industrial Product Design and Brass Handicraft Products evaluated the Brass Tea Set design, and a prototype of the Brass Tea Set was assessed by a sample of consumers who had previously bought Brass Tea Sets from the Bantahkrayang Community. The statistical methods used were percentage, mean (X̄) and standard deviation (S.D.). Consumer satisfaction toward the handcrafted Brass Tea Set was found to be at a high level.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain the set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such settings, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many scientific function libraries, but they are usually focused on specific classes of functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: a Levenberg-Marquardt algorithm obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and a Nelder-Mead simplex method implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases, focusing on applications to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
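As an illustration of the two families of optimizers mentioned (a local Levenberg-Marquardt-type method and a global differential-evolution search), the sketch below uses SciPy's implementations on a toy fit; it is not Sherpa code, and the "absorbed power law" model form is simplified.

```python
import numpy as np
from scipy.optimize import least_squares, differential_evolution

# Toy forward-fitting problem: amp * E**-gamma * exp(-nH * E) plus noise.
rng = np.random.default_rng(5)
E = np.linspace(0.5, 8.0, 200)                       # hypothetical energy grid (keV)
true = (10.0, 1.7, 0.3)
model = lambda p, E: p[0] * E ** (-p[1]) * np.exp(-p[2] * E)
data = model(true, E) + rng.normal(scale=0.2, size=E.size)
sigma = np.full_like(E, 0.2)

residuals = lambda p: (model(p, E) - data) / sigma   # weighted chi residuals

local = least_squares(residuals, x0=[5.0, 1.0, 0.1], method="lm")   # local LM fit
global_ = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                                 bounds=[(0.1, 50), (0.1, 5), (0.0, 2)], seed=0)
print("LM:", local.x, " DE:", global_.x)
```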
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data set reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
NASA Technical Reports Server (NTRS)
Marley, Mike
2008-01-01
The focus of this paper will be on the thermal balance testing for the Operationally Responsive Space Standard Bus Battery. The Standard Bus thermal design required that the battery be isolated from the bus itself. This required the battery to have its own thermal control, including heaters and a radiator surface. Since the battery was not ready for testing during the overall bus thermal balance testing, a separate test was conducted to verify the thermal design for the battery. This paper will discuss in detail the test setup, test procedure, and results from this test. Additionally, this paper will consider the methods used to determine the heat dissipation of the battery during charge and discharge. The heat dissipation of lithium-ion batteries is relatively poorly characterized and hard to quantify. The methods used during the test and the post-test analysis to estimate the heat dissipation of the battery will be discussed.
MALDI-TOF mass spectrometry as a potential tool for Trichomonas vaginalis identification.
Calderaro, Adriana; Piergianni, Maddalena; Montecchini, Sara; Buttrini, Mirko; Piccolo, Giovanna; Rossi, Sabina; Arcangeletti, Maria Cristina; Medici, Maria Cristina; Chezzi, Carlo; De Conto, Flora
2016-06-10
Trichomonas vaginalis is a flagellated protozoan causing trichomoniasis, a sexually transmitted human infection, with around 276.4 million new cases estimated by the World Health Organization. Culture is the gold standard method for the diagnosis of T. vaginalis infection. Recently, immunochromatographic assays as well as PCR assays for the detection of T. vaginalis antigen or DNA, respectively, have also become available. Although the well-known genome sequence of T. vaginalis has made possible the application of proteomic studies, few data are available about the overall proteomic expression profile of T. vaginalis. The aim of this study was to investigate the potential application of MALDI-TOF MS as a new tool for the identification of T. vaginalis. Twenty-one isolates were analysed by MALDI-TOF MS after the creation of a Main Spectrum Profile (MSP) from a T. vaginalis reference strain (G3) and its subsequent addition to the Bruker Daltonics database, which does not include any protozoan profiles. This was achieved after the development of a new identification method created by modifying the range setting (6-10 kDa) for the MALDI-TOF MS analysis in order to exclude the overlapping of peaks derived from the culture media used in this study. Two MSP reference spectra were created in 2 different ranges: 3-15 kDa (standard range setting) and 6-10 kDa (new range setting). Both MSP spectra were deposited in the MALDI BioTyper database for further identification of additional T. vaginalis strains. All 21 strains analysed in this study were correctly identified by using the new identification method. This study demonstrated that changes to the standard MALDI-TOF MS parameters usually used to identify bacteria and fungi allowed the identification of the protozoan T. vaginalis. This study shows the usefulness of MALDI-TOF MS in the reliable identification of microorganisms grown on complex liquid media, such as the protozoan T. vaginalis, on the basis of the protein profile rather than single markers, by using a "new range setting" different from that developed for bacteria and fungi.
NASA Astrophysics Data System (ADS)
Lucido, J. M.
2013-12-01
Scientists in the fields of hydrology, geophysics, and climatology are increasingly using the vast quantity of publicly-available data to address broadly-scoped scientific questions. For example, researchers studying contamination of nearshore waters could use a combination of radar indicated precipitation, modeled water currents, and various sources of in-situ monitoring data to predict water quality near a beach. In discovering, gathering, visualizing and analyzing potentially useful data sets, data portals have become invaluable tools. The most effective data portals often aggregate distributed data sets seamlessly and allow multiple avenues for accessing the underlying data, facilitated by the use of open standards. Additionally, adequate metadata are necessary for attribution, documentation of provenance and relating data sets to one another. Metadata also enable thematic, geospatial and temporal indexing of data sets and entities. Furthermore, effective portals make use of common vocabularies for scientific methods, units of measure, geologic features, chemical, and biological constituents as they allow investigators to correctly interpret and utilize data from external sources. One application that employs these principles is the National Ground Water Monitoring Network (NGWMN) Data Portal (http://cida.usgs.gov/ngwmn), which makes groundwater data from distributed data providers available through a single, publicly accessible web application by mediating and aggregating native data exposed via web services on-the-fly into Open Geospatial Consortium (OGC) compliant service output. That output may be accessed either through the map-based user interface or through the aforementioned OGC web services. Furthermore, the Geo Data Portal (http://cida.usgs.gov/climate/gdp/), which is a system that provides users with data access, subsetting and geospatial processing of large and complex climate and land use data, exemplifies the application of International Standards Organization (ISO) metadata records to enhance data discovery for both human and machine interpretation. Lastly, the Water Quality Portal (http://www.waterqualitydata.us/) achieves interoperable dissemination of water quality data by referencing a vocabulary service for mapping constituents and methods between the USGS and USEPA. The NGWMN Data Portal, Geo Data Portal and Water Quality Portal are three examples of best practices when implementing data portals that provide distributed scientific data in an integrated, standards-based approach.
Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers
NASA Astrophysics Data System (ADS)
Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz
2017-10-01
The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation, version 5.4.0, at frequencies in the range of 5 Hz - 2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed after changing one setting relative to the original calibration. The alterations reproduced negligence and failures with respect to the above-mentioned standard and operating guidelines, e.g. the sensor was not tightened or the appropriate coupling substance was not applied. The method of connection specified in the standard's requirements was also modified: different kinds of wax, light oil, grease and other mounting methods were used. The aim of the study was to verify the significance of the standard's requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relationship between the various connection methods was demonstrated.
Smith, Andrew G; Eckerle, Michelle; Mvalo, Tisungane; Weir, Brian; Martinson, Francis; Chalira, Alfred; Lufesi, Norman; Mofolo, Innocent; Hosseinipour, Mina
2017-01-01
Introduction Pneumonia is a leading cause of mortality among children in low-resource settings. Mortality is greatest among children with high-risk conditions including HIV infection or exposure, severe malnutrition and/or severe hypoxaemia. WHO treatment recommendations include low-flow oxygen for children with severe pneumonia. Bubble continuous positive airway pressure (bCPAP) is a non-invasive support modality that provides positive end-expiratory pressure and oxygen. bCPAP is effective in the treatment of neonates in low-resource settings; its efficacy is unknown for high-risk children with severe pneumonia in low-resource settings. Methods and analysis CPAP IMPACT is a randomised clinical trial comparing bCPAP to low-flow oxygen in the treatment of severe pneumonia among high-risk children 1–59 months of age. High-risk children are stratified into two subgroups: (1) HIV infection or exposure and/or severe malnutrition; (2) severe hypoxaemia. The trial is being conducted in a Malawi district hospital and will enrol 900 participants. The primary outcome is in-hospital mortality rate of children treated with standard care as compared with bCPAP. Ethics and dissemination CPAP IMPACT has approval from the Institutional Review Boards of all investigators. An urgent need exists to determine whether bCPAP decreases mortality among high-risk children with severe pneumonia to inform resource utilisation in low-resource settings. Trial registration number NCT02484183; Pre-results. PMID:28883928
Mushi, Martha Fidelis; Paterno, Laurent; Tappe, Dennis; Deogratius, Anna Pendo; Seni, Jeremiah; Moremi, Nyambura; Mirambo, Mariam Mwijuma; Mshana, Stephen Eliatosha
2014-01-01
Campylobacter species are recognized as a major cause of acute gastroenteritis in humans throughout the world. The diagnosis is mainly based on stool culture. This study was done to evaluate the effectiveness of staining methods (Gram stain using 0.3% carbol fuchsin as counterstain, and 1% carbol fuchsin direct stain) versus culture as the gold standard. A total of 300 children attending Bugando Medical Centre (BMC) and the Sekou Toure regional hospital with acute watery diarrhea were enrolled. Two sets of slides were prepared: the first set was stained with 1% carbol fuchsin for 30 seconds, and the second set was stained with Gram's stain using 0.3% carbol fuchsin as counterstain for five minutes. Concurrently, stool samples were inoculated on selective Preston Agar. Of 300 stool specimens, 14 (4.7%) showed positive culture after 48 hours of incubation and 28 (9.3%) showed typical morphology of Campylobacter species by both the Gram stain and the direct stain. The sensitivity of the Gram stain using 0.3% carbol fuchsin as counterstain and of the 1% carbol fuchsin simple stain versus culture as gold standard was 64.3%, with a specificity of 93.4%. The positive predictive value and negative predictive value were 32.1% and 98.2% respectively. The detection of Campylobacter by 1% carbol fuchsin is simple, inexpensive, and fast, with both a high sensitivity and specificity. Laboratories in settings with high prevalence of campylobacteriosis and/or limited resources can employ 1% carbol fuchsin direct stain in detecting campylobacter infections.
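The reported diagnostic indices can be reconstructed from the 2x2 table implied by the counts in the abstract (14 culture-positive specimens of 300, 28 stain-positive, sensitivity 64.3% implying 9 true positives), as in the short calculation below.

```python
# 2x2 diagnostic table for the stain versus culture as gold standard.
tp, fn = 9, 5            # stain-positive / stain-negative among 14 culture-positives
fp = 28 - tp             # 28 stain-positives in total
tn = 300 - tp - fn - fp  # remaining specimens

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.1%} spec={specificity:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```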
Ambient-temperature incubation for the field detection of Escherichia coli in drinking water.
Brown, J; Stauber, C; Murphy, J L; Khan, A; Mu, T; Elliott, M; Sobsey, M D
2011-04-01
Escherichia coli is the pre-eminent microbiological indicator used to assess safety of drinking water globally. The cost and equipment requirements for processing samples by standard methods may limit the scale of water quality testing in technologically less developed countries and other resource-limited settings, however. We evaluate here the use of ambient-temperature incubation in detection of E. coli in drinking water samples as a potential cost-saving and convenience measure with applications in regions with high (>25°C) mean ambient temperatures. This study includes data from three separate water quality assessments: two in Cambodia and one in the Dominican Republic. Field samples of household drinking water were processed in duplicate by membrane filtration (Cambodia), Petrifilm™ (Cambodia) or Colilert® (Dominican Republic) on selective media at both standard incubation temperature (35–37°C) and ambient temperature, using up to three dilutions and three replicates at each dilution. Matched sample sets were well correlated with 80% of samples (n = 1037) within risk-based microbial count strata (E. coli CFU 100 ml−1 counts of <1, 1–10, 11–100, 101–1000, >1000), and a pooled coefficient of variation of 17% (95% CI 15–20%) for paired sample sets across all methods. These results suggest that ambient-temperature incubation of E. coli in at least some settings may yield sufficiently robust data for water safety monitoring where laboratory or incubator access is limited.
Electronic and spectroscopic characterizations of SNP isomers
NASA Astrophysics Data System (ADS)
Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.
2018-02-01
High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ˜2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN, P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters were derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar and core-valence effects. For linear isomers, the rovibrational spectra were deduced after generation of their 3D-potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.
Random left censoring: a second look at bone lead concentration measurements
NASA Astrophysics Data System (ADS)
Popovic, M.; Nie, H.; Chettle, D. R.; McNeill, F. E.
2007-09-01
Bone lead concentrations measured in vivo by x-ray fluorescence (XRF) are subjected to left censoring due to limited precision of the technique at very low concentrations. In the analysis of bone lead measurements, inverse variance weighting (IVW) of measurements is commonly used to estimate the mean of a data set and its standard error. Student's t-test is used to compare the IVW means of two sets, testing the hypothesis that the two sets are from the same population. This analysis was undertaken to assess the adequacy of IVW in the analysis of bone lead measurements or to confirm the results of IVW using an independent approach. The rationale is provided for the use of methods of survival data analysis in the study of XRF bone lead measurements. The procedure is provided for bone lead data analysis using the Kaplan-Meier and Nelson-Aalen estimators. The methodology is also outlined for the rank tests that are used to determine whether two censored sets are from the same population. The methods are applied on six data sets acquired in epidemiological studies. The estimated parameters and test statistics were compared with the results of the IVW approach. It is concluded that the proposed methods of statistical analysis can provide valid inference about bone lead concentrations, but the computed parameters do not differ substantially from those derived by the more widely used method of IVW.
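For comparison with the survival-analysis estimators discussed, the inverse variance weighting (IVW) summary used as the baseline in this work can be computed as below; the bone lead values and their uncertainties are hypothetical.

```python
import numpy as np

def ivw_mean(x, sigma):
    # Inverse-variance-weighted mean and its standard error:
    # mean = sum(x_i / sigma_i^2) / sum(1 / sigma_i^2), SE = sqrt(1 / sum(1 / sigma_i^2)).
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mean = np.sum(w * np.asarray(x, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, se

# Hypothetical XRF bone lead results (ug Pb / g bone mineral) with individual
# measurement uncertainties; negative point estimates occur at low concentrations.
x = [4.2, -1.5, 7.8, 2.1, 0.3]
sigma = [3.0, 5.2, 2.8, 4.1, 6.0]
print(ivw_mean(x, sigma))
```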
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwok, A.G.
This paper examines the comfort criteria of ANSI/ASHRAE Standard 55-1992 for their applicability in tropical classrooms. A field study conducted in Hawaii used a variety of methods to collect the data: survey questionnaires, physical measurements, interviews, and behavioral observations. A total of 3,544 students and teachers completed questionnaires in 29 naturally ventilated and air-conditioned classrooms in six schools during two seasons. The majority of classrooms failed to meet the physical specifications of the Standard 55 comfort zone. Thermal neutrality, preference, and acceptability results are compared with other field studies and the Standard 55 criteria. Acceptability votes by occupants of both naturally ventilated and air-conditioned classrooms exceeded the standard's 80% acceptability criteria, regardless of whether physical conditions were in or out of the comfort zone. Responses from these two school populations suggest not only a basis for separate comfort standards but energy conservation opportunities through raising thermostat set points.
Determination of service standard time for liquid waste parameter in certification institution
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Kusumawaty, D.
2018-02-01
Baristand Industry Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industry Medan is the liquid waste testing service. The company set a service standard of 9 working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet the company's specified service standard. The purpose of this research is to determine the standard time for each parameter in the liquid waste testing service. The method used is the stopwatch time study. There are 45 test parameters in the liquid waste laboratory. Times were measured for 4 samples per test parameter using a stopwatch. From the measurement results, the minimum standard service time obtained for liquid waste testing is 13 working days when E. coli testing is included.
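The stopwatch-time-study arithmetic (average observed time, performance rating to normal time, allowance factor to standard time) can be sketched as below; the observed times, rating and allowance factors are illustrative assumptions, not the values used in the study.

```python
# Minimal stopwatch time-study calculation for one test parameter.
observed = [42.0, 45.5, 40.8, 44.2]      # minutes per sample (hypothetical)
rating = 1.05                            # performance rating factor (assumed)
allowance = 0.15                         # personal/fatigue/delay allowance (assumed)

average_time = sum(observed) / len(observed)
normal_time = average_time * rating                 # rated to a normal operator pace
standard_time = normal_time * (1 + allowance)       # inflated by the allowance
print(f"standard time = {standard_time:.1f} minutes per sample")
```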
Results from the (U-Th)/He dating systems in Japan Atomic Energy Agency
NASA Astrophysics Data System (ADS)
Yamada, K.; Hanamuro, T.; Tagami, T.; Yamada, R.; Umeda, K.
2007-12-01
The Japan Atomic Energy Agency (JAEA) has jointly set up a (U-Th)/He dating laboratory in cooperation with Kyoto University and the National Research Institute for Earth Science and Disaster Prevention. We use the MM5400 rare gas mass spectrometer and the SPQ9000 ICP quadrupole mass spectrometer belonging to JAEA, and built a new vacuum heater that uses an infrared laser to extract helium. For preparation of the ICP solution, zircon is decomposed with HF after alkali fusion with LiBO3 using an XRF bead sampler. Helium is quantified using the sensitivity method; uranium and thorium are quantified using the standard addition method. Only uranium-238 and thorium-232 need to be quantified as parent isotopes to date the samples, because secular equilibrium is expected to be established and samarium is not a constituent of the samples. At the present stage, we calibrate our systems by dating standards such as zircon from the Fish Canyon Tuff and apatite from Durango, which are international age standards, and apatite and zircon from the Tanzawa Tonalite Complex, dated in Yamada's PhD thesis, as working standards. We report the results and detailed views of the dating systems.
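The (U-Th)/He age itself comes from the standard helium-ingrowth equation, which must be solved numerically for t. The sketch below does this with textbook decay constants and hypothetical parent/daughter amounts; it is not the JAEA data-reduction code.

```python
import numpy as np
from scipy.optimize import brentq

# Decay constants (1/yr) and the standard (U-Th)/He ingrowth equation:
# He = 8*U238*(exp(L238*t)-1) + 7*U235*(exp(L235*t)-1) + 6*Th232*(exp(L232*t)-1).
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11
U238, Th232, He = 1.0e12, 5.0e11, 2.0e9          # hypothetical atom counts
U235 = U238 / 137.88                             # natural 238U/235U abundance ratio

def helium(t):
    return (8 * U238 * (np.exp(L238 * t) - 1)
            + 7 * U235 * (np.exp(L235 * t) - 1)
            + 6 * Th232 * (np.exp(L232 * t) - 1))

age = brentq(lambda t: helium(t) - He, 1.0, 4.5e9)   # root-find the age in years
print(f"(U-Th)/He age ~ {age / 1e6:.1f} Ma")
```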
A Catalog of Molecular Clouds in the Milky Way Galaxy
NASA Astrophysics Data System (ADS)
Wahl, Matthew; Koda, J.
2010-01-01
We have created a complete catalog of molecular clouds in the Milky Way Galaxy. This is an extension of our previous study (Koda et al. 2006), which used a preliminary data set from the Boston University Five College Radio Astronomy Observatory Galactic Ring Survey (BUFCRAO GRS). This work uses the complete data set from the GRS. The data cover the inner part of the northern Galactic disk between Galactic longitudes 15 to 56 degrees and Galactic latitudes -1.1 to 1.1 degrees, over the full range of Galactic velocities. We used the standard cloud identification method: the data cube is searched for a temperature peak above a specified value, and the region around that peak is then searched in all directions until the extent of the cloud is found. This procedure is iterated until all clouds are identified. We prefer this method over other methods because of its simplicity. The properties of our molecular clouds are very similar to those derived with a more evolved method (Rathborne et al. 2009).
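A simplified stand-in for the cloud-identification step is sketched below: threshold the (l, b, v) brightness-temperature cube and label connected regions, reporting each region's peak and extent. The catalog's algorithm grows clouds outward from temperature peaks, so this connected-component version is only an approximation, run here on synthetic data.

```python
import numpy as np
from scipy import ndimage

# Synthetic (l, b, v) cube: noise plus two injected "clouds".
rng = np.random.default_rng(6)
cube = rng.normal(scale=0.2, size=(60, 40, 200))
cube[20:26, 10:15, 50:70] += 2.0
cube[40:44, 30:33, 120:130] += 1.5

threshold = 1.0                                   # K, illustrative cutoff
labels, n_clouds = ndimage.label(cube > threshold)  # connected regions above cutoff
for cloud_id in range(1, n_clouds + 1):
    voxels = np.argwhere(labels == cloud_id)
    peak = voxels[np.argmax(cube[tuple(voxels.T)])]
    print(f"cloud {cloud_id}: {len(voxels)} voxels, peak at (l,b,v) index {tuple(peak)}")
```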
A novel method for purification of the endogenously expressed fission yeast Set2 complex.
Suzuki, Shota; Nagao, Koji; Obuse, Chikashi; Murakami, Yota; Takahata, Shinya
2014-05-01
Chromatin-associated proteins are heterogeneously and dynamically composed. To gain a complete understanding of DNA packaging and basic nuclear functions, it is important to generate a comprehensive inventory of these proteins. However, biochemical purification of chromatin-associated proteins is difficult and is accompanied by concerns over complex stability, protein solubility and yield. Here, we describe a new method for optimized purification of the endogenously expressed fission yeast Set2 complex, histone H3K36 methyltransferase. Using the standard centrifugation procedure for purification, approximately half of the Set2 protein separated into the insoluble chromatin pellet fraction, making it impossible to recover the large amounts of soluble Set2. To overcome this poor recovery, we developed a novel protein purification technique termed the filtration/immunoaffinity purification/mass spectrometry (FIM) method, which eliminates the need for centrifugation. Using the FIM method, in which whole cell lysates were filtered consecutively through eight different pore sizes (53-0.8μm), a high yield of soluble FLAG-tagged Set2 was obtained from fission yeast. The technique was suitable for affinity purification and produced a low background. A mass spectrometry analysis of anti-FLAG immunoprecipitated proteins revealed that Rpb1, Rpb2 and Rpb3, which have all been reported previously as components of the budding yeast Set2 complex, were isolated from fission yeast using the FIM method. In addition, other subunits of RNA polymerase II and its phosphatase were also identified. In conclusion, the FIM method is valid for the efficient purification of protein complexes that separate into the insoluble chromatin pellet fraction during centrifugation. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng
2015-05-12
Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although the CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely, CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation (MUD(20) = 4.7 kcal/mol when one correlates only valence, 3p, and 3s electrons of transition metals and only valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except for 1s shells of transition metals, S, and Cl); and that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.