20 CFR 404.1643 - Performance accuracy standard.
Code of Federal Regulations, 2011 CFR
2011-04-01
... DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643 Performance... well as the correctness of the decision. For example, if a particular item of medical evidence should... case, that is a performance error. Performance accuracy, therefore, is a higher standard than...
Robinson, Charlotte S; Sharp, Patrick
2012-05-01
Blood glucose monitoring systems (BGMS) are used in the hospital environment to manage blood glucose levels in patients at the bedside. The International Organization for Standardization (ISO) 15197:2003 standard is currently used by regulatory bodies as a minimum requirement for the performance of BGMS, specific to self-testing. There are calls for the tightening of accuracy requirements and implementation of a standard specifically for point-of-care (POC) BGMS. The accuracy of six commonly used BGMS was assessed in a clinical setting, using finger-stick capillary samples from 108 patients. Using the accuracy criteria from the existing standard and a range of tightened accuracy criteria, system performance was compared. Other contributors to system performance were also measured, including hematocrit sensitivity and meter error rates encountered in the clinical setting. Five of the six BGMS evaluated met the current accuracy criteria within the ISO 15197 standard. Only the Optium Xceed system had >95% of all readings within tightened criteria of ±12.5% from the reference at glucose levels ≥72 mg/dl (4 mmol/liter) and ±9 mg/dl (0.5 mmol/liter) at glucose levels <72 mg/dl (4 mmol/liter). The Nova StatStrip Xpress had the greatest number of error messages observed; the Optium Xceed, the fewest. OneTouch Ultra2, Nova StatStrip Xpress, Accu-Chek Performa, and Contour TS products were all significantly influenced by blood hematocrit levels. From evidence obtained during this clinical evaluation, the Optium Xceed system is the most likely to meet anticipated future accuracy standards for POC BGMS. In this clinical study, the results demonstrated the Optium Xceed product to have the highest level of accuracy, to have the lowest occurrence of error messages, and to be the least influenced by blood hematocrit levels. © 2012 Diabetes Technology Society.
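The tightened accuracy criteria quoted in this abstract reduce to a simple per-reading check. A minimal Python sketch, assuming mg/dl units and the >95% pass requirement stated above (function names and structure are illustrative, not from the paper):

```python
def within_tight_criterion(meter_mgdl: float, ref_mgdl: float) -> bool:
    """Tightened criteria quoted in the abstract: +/-12.5% of the
    reference at glucose >= 72 mg/dl, +/-9 mg/dl below 72 mg/dl."""
    if ref_mgdl >= 72:
        return abs(meter_mgdl - ref_mgdl) <= 0.125 * ref_mgdl
    return abs(meter_mgdl - ref_mgdl) <= 9.0

def system_passes(pairs) -> bool:
    """A system passes if >95% of (meter, reference) pairs meet the criterion."""
    hits = sum(within_tight_criterion(m, r) for m, r in pairs)
    return hits / len(pairs) > 0.95
```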
Accuracy investigation of phthalate metabolite standards.
Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude
2012-05-01
Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
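The estimator described here is analogous to the area under the ROC curve but avoids dichotomizing the gold standard. A rough sketch of that idea as a pairwise concordance proportion (this conveys the general concept only; it is not Obuchowski's exact estimator and omits the paper's confidence-interval and testing machinery):

```python
from itertools import combinations

def concordance_accuracy(test, gold):
    """Proportion of patient pairs that the diagnostic test ranks in the
    same order as the continuous gold standard; tied test values get
    half credit, and pairs the gold standard cannot order are skipped."""
    score = 0.0
    n_pairs = 0
    for (t1, g1), (t2, g2) in combinations(zip(test, gold), 2):
        if g1 == g2:
            continue
        n_pairs += 1
        if (t1 - t2) * (g1 - g2) > 0:
            score += 1.0      # concordant pair
        elif t1 == t2:
            score += 0.5      # test cannot order the pair
    return score / n_pairs
```

A perfectly ordered test yields 1.0 and a perfectly reversed one yields 0.0, mirroring the interpretation of the area under an ROC curve.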
Why do temporal generalization gradients change when people make decisions as quickly as possible?
Klapproth, Florian; Wearden, John H
2011-08-01
Three experiments investigated temporal generalization performance under conditions in which participants were instructed to make their decisions as quickly as possible (speed), or were allowed to take their time (accuracy). A previous study (Klapproth & Müller, 2008) had shown that under speeded conditions people were more likely to confuse durations shorter than the standard with the standard than in the accuracy conditions, and a possible explanation of this result is that longer stimulus durations are "truncated" (i.e., people make a judgement about them before they have terminated, thereby shortening their effective duration) and that these truncated durations affect the standard used for the task. Experiment 1 investigated performance under speed and accuracy conditions when comparison durations were close to the standard or further away. No performance difference was found as a function of stimulus spacing, even though responses occurred on average before the longest durations had terminated, but this lack of effect was attributed to "task difficulty" effects changing decision thresholds. In Experiment 2, the standard duration was either the longest or the shortest duration in the comparison set, and differences between speed and accuracy groups occurred only when the comparisons were longer than the standard, supporting the "truncation" hypothesis. A third experiment showed that differences between speed and accuracy groups only occurred if some memory of the standard that was valid for more than one trial was used. In general, the results suggest that the generalization gradient shifts in speeded conditions occur because of truncation of longer comparison durations, which influences the effective standard used for the task.
Tsao, Mei-Fen; Chang, Hui-Wen; Chang, Chien-Hsi; Cheng, Chi-Hsuan; Lin, Hsiu-Chen
2017-05-01
Neonatal hypoglycemia may cause severe neurological damage; therefore, tight glycemic control is crucial to identify neonates at risk. Previous blood glucose monitoring systems (BGMS) have failed to perform well in neonates, and there are calls for the tightening of accuracy requirements. There remains a need for an accurate BGMS for effective bedside diabetes management in neonatal care within a hospital population. A total of 300 neonates were recruited from local hospitals. The accuracy performance of a commercially available BGMS in screening for neonatal hypoglycemia was evaluated against a reference instrument, and assessment was made based on ISO 15197:2013 and a tighter standard. At blood glucose levels < 47 mg/dl, the BGMS assessed met the minimal accuracy requirements of ISO 15197:2013 and the tighter standard at 100% and 97.2%, respectively.
Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B
2011-11-01
To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional focus in training on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.
A reference standard-based quality assurance program for radiology.
Liu, Patrick T; Johnson, C Daniel; Miranda, Rafael; Patel, Maitray D; Phillips, Carrie J
2010-01-01
The authors have developed a comprehensive radiology quality assurance (QA) program that evaluates radiology interpretations and procedures by comparing them with reference standards. Performance metrics are calculated and then compared with benchmarks or goals on the basis of published multicenter data and meta-analyses. Additional workload for physicians is kept to a minimum by having trained allied health staff members perform the comparisons of radiology reports with the reference standards. The performance metrics tracked by the QA program include the accuracy of CT colonography for detecting polyps, the false-negative rate for mammographic detection of breast cancer, the accuracy of CT angiography detection of coronary artery stenosis, the accuracy of meniscal tear detection on MRI, the accuracy of carotid artery stenosis detection on MR angiography, the accuracy of parathyroid adenoma detection by parathyroid scintigraphy, the success rate for obtaining cortical tissue on ultrasound-guided core biopsies of pelvic renal transplants, and the technical success rate for peripheral arterial angioplasty procedures. In contrast with peer-review programs, this reference standard-based QA program minimizes the possibilities of reviewer bias and erroneous second reviewer interpretations. The more objective assessment of performance afforded by the QA program will provide data that can easily be used for education and management conferences, research projects, and multicenter evaluations. Additionally, such performance data could be used by radiology departments to demonstrate their value over nonradiology competitors to referring clinicians, hospitals, patients, and third-party payers. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
20 CFR 404.1643 - Performance accuracy standard.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643 Performance... have been in the file but was not included, even though its inclusion does not change the result in the...
20 CFR 416.1043 - Performance accuracy standard.
Code of Federal Regulations, 2010 CFR
2010-04-01
... AGED, BLIND, AND DISABLED Determinations of Disability Performance Standards § 416.1043 Performance... have been in the file but was not included, even though its inclusion does not change the result in the...
Clinical accuracy of point-of-care urine culture in general practice.
Holm, Anne; Cordoba, Gloria; Sørensen, Tina Møller; Jessen, Lisbeth Rem; Frimodt-Møller, Niels; Siersma, Volkert; Bjerrum, Lars
2017-06-01
To assess the clinical accuracy (sensitivity (SEN), specificity (SPE), positive predictive value and negative predictive value) of two point-of-care (POC) urine culture tests for the identification of urinary tract infection (UTI) in general practice. Prospective diagnostic accuracy study comparing two index tests (Flexicult™ SSI-Urinary Kit or ID Flexicult™) with a reference standard (urine culture performed in the microbiological department). General practices in the Copenhagen area. Adult female patients consulting their general practitioner with suspected uncomplicated, symptomatic UTI. (1) Overall accuracy of POC urine culture in general practice. (2) Individual accuracy of each of the two POC tests in this study. (3) Accuracy of POC urine culture in general practice with enterococci excluded, since enterococci are known to multiply in the boric acid used for transportation for the reference standard. (4) Accuracy based on expert reading of photographs of POC urine cultures performed in general practice. Standard culture performed in the microbiological department was used as the reference standard for all four measures. Twenty general practices recruited 341 patients with suspected uncomplicated UTI. The overall agreement between index test and reference was 0.76 (CI: 0.71-0.80), SEN 0.88 (CI: 0.83-0.92) and SPE 0.55 (CI: 0.46-0.64). The two POC tests produced similar results individually. Overall agreement with enterococci excluded was 0.82 (CI: 0.77-0.86) and agreement between expert readings of photographs and reference results was 0.81 (CI: 0.76-0.85). POC culture used in general practice has high SEN but low SPE. Low SPE could be due to both misinterpretation in general practice and an imperfect reference standard. Registration number: ClinicalTrials.gov NCT02323087.
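The measures reported here (SEN, SPE, and the predictive values) follow from a standard 2x2 table against the reference culture. A small sketch with illustrative cell counts chosen to reproduce the abstract's SEN of 0.88 and SPE of 0.55 (the study's actual counts are not given above):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic accuracy measures: true/false positives
    and negatives relative to the reference-standard culture."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "agreement": (tp + tn) / (tp + fp + fn + tn),
    }
```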
NASA Technical Reports Server (NTRS)
Hellwig, H.; Stein, S. R.; Walls, F. L.; Kahan, A.
1978-01-01
The relationship between system performance and clock or oscillator performance is discussed. Tradeoffs discussed include: short-term stability versus bandwidth requirements; frequency accuracy versus signal acquisition time; flicker of frequency and drift versus resynchronization time; frequency precision versus communications traffic volume; spectral purity versus bit error rate; and frequency standard stability versus frequency selection and adjustability. The benefits and tradeoffs of using precise frequency and time signals at various levels of precision and accuracy are emphasized.
Wong, Yu-Tung; Finley, Charles C; Giallo, Joseph F; Buckmire, Robert A
2011-08-01
To introduce a novel method of combining robotics and the CO2 laser micromanipulator to provide excellent precision and performance repeatability designed for surgical applications. Pilot feasibility study. We developed a portable robotic controller that attaches to a standard CO2 laser micromanipulator. The robotic accuracy and laser beam path repeatability were compared to those of six experienced users of the industry-standard micromanipulator performing the same simulated surgical tasks. Helium-neon laser beam video tracking techniques were employed. The robotic controller demonstrated superiority over experienced human manual micromanipulator control in accuracy (laser path within 1 mm of an idealized centerline), 97.42% (standard deviation [SD] 2.65%) versus 85.11% (SD 14.51%), P = .018; and in laser beam path repeatability (area of laser path divergence on successive trials), 21.42 mm² (SD 4.35 mm²) versus 65.84 mm² (SD 11.93 mm²), P = .006. Robotic micromanipulator control enhances accuracy and repeatability for specific laser tasks. Computerized control opens opportunities for alternative user interfaces and additional safety features. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Ekins, Kylie; Morphet, Julia
2015-11-01
The Australasian Triage Scale aims to ensure that the triage category allocated, reflects the urgency with which the patient needs medical assistance. This is dependent on triage nurse accuracy in decision making. The Australasian Triage Scale also aims to facilitate triage decision consistency between individuals and organisations. Various studies have explored the accuracy and consistency of triage decisions throughout Australia, yet no studies have specifically focussed on triage decision making in rural health services. Further, no standard has been identified by which accuracy or consistency should be measured. Australian emergency departments are measured against a set of standard performance indicators, including time from triage to patient review, and patient length of stay. There are currently no performance indicators for triage consistency. An online questionnaire was developed to collect demographic data and measure triage accuracy and consistency. The questionnaire utilised previously validated triage scenarios.(1) Triage decision accuracy was measured, and consistency was compared by health site type using Fleiss' kappa. Forty-six triage nurses participated in this study. The accuracy of participants' triage decision-making decreased with each less urgent triage category. Post-graduate qualifications had no bearing on triage accuracy. There was no significant difference in the consistency of decision-making between paediatric and adult scenarios. Overall inter-rater agreement using Fleiss' kappa coefficient, was 0.4. This represents a fair-to-good level of inter-rater agreement. A standard definition of accuracy and consistency in triage nurse decision making is required. Inaccurate triage decisions can result in increased morbidity and mortality. It is recommended that emergency department performance indicator thresholds be utilised as a benchmark for national triage consistency. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
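The inter-rater agreement statistic used in this study, Fleiss' kappa, is computed from a cases-by-categories count table. A minimal sketch (the study's scenario data are not reproduced in the abstract; the example tables in the test are invented):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table of counts: rows are cases (e.g. triage
    scenarios), columns are categories, and each entry is the number of
    raters who chose that category. Every row must sum to the same
    number of ratings n."""
    N = len(counts)
    n = sum(counts[0])
    # Observed agreement for each case, then averaged.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement from the overall category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (N * n) for t in totals]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)
```

A value of 1 indicates perfect agreement; the study's overall value of 0.4 sits in the fair-to-good range.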
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...) Data recording, calculations, and reporting; (v) Accuracy audit procedures, including sampling and...
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
Portable oil bath for high-accuracy resistance transfer and maintenance
NASA Astrophysics Data System (ADS)
Shiota, Fuyuhiko
1999-10-01
A portable oil bath containing one standard resistor for high-accuracy resistance transfer and maintenance was developed and operated for seven years in the National Research Laboratory of Metrology. The aim of the bath is to save labor and apparatus for high-accuracy resistance transfer and maintenance by consistently keeping the standard resistor in an optimum environmental condition. The details of the prototype system, including its performance, are described together with some suggestions for a more practical bath design, which adopts the same concept.
Radiosonde pressure sensor performance - Evaluation using tracking radars
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Norcross, G. A.; Brooks, R. L.
1984-01-01
The standard balloon-borne radiosonde employed for synoptic meteorology provides vertical profiles of temperature, pressure, and humidity as a function of elapsed time. These parameters are used in the hypsometric equation to calculate the geopotential altitude at each sampling point during the balloon's flight. It is important that the vertical location information be accurate. The present investigation was conducted with the objective to evaluate the altitude determination accuracy of the standard radiosonde throughout the entire balloon profile. The tests included two other commercially available pressure sensors to see if they could provide improved accuracy in the stratosphere. The pressure-measuring performance of standard baroswitches, premium baroswitches, and hypsometers in balloon-borne sondes was correlated with tracking radars. It was found that the standard and premium baroswitches perform well up to about 25 km altitude, while hypsometers provide more reliable data above 25 km.
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following the test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems, with the evaluated test strip lots, complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
ERIC Educational Resources Information Center
Labuhn, Andju Sara; Zimmerman, Barry J.; Hasselhorn, Marcus
2010-01-01
The purpose of this study was to examine the effects of self-evaluative standards and graphed feedback on calibration accuracy and performance in mathematics. Specifically, we explored the influence of mastery learning standards as opposed to social comparison standards as well as of individual feedback as opposed to social comparison feedback. 90…
Setting Performance Standards for Technical and Nontechnical Competence in General Surgery.
Szasz, Peter; Bonrath, Esther M; Louridas, Marisa; Fecso, Andras B; Howe, Brett; Fehr, Adam; Ott, Michael; Mack, Lloyd A; Harris, Kenneth A; Grantcharov, Teodor P
2017-07-01
The objectives of this study were to (1) create a technical and a nontechnical performance standard for the laparoscopic cholecystectomy, (2) assess the classification accuracy and (3) credibility of these standards, (4) determine trainees' ability to meet both standards concurrently, and (5) delineate factors that predict standard acquisition. Scores on performance assessments are difficult to interpret in the absence of established standards. Trained raters observed General Surgery residents performing laparoscopic cholecystectomies using the Objective Structured Assessment of Technical Skill (OSATS) and the Objective Structured Assessment of Non-Technical Skills (OSANTS) instruments, while also providing a global competent/noncompetent decision for each performance. The global decision was used to divide the trainees into 2 contrasting groups, and the OSATS or OSANTS scores were graphed per group to determine the performance standard. Parametric statistics were used to determine classification accuracy and concurrent standard acquisition; receiver operating characteristic (ROC) curves were used to delineate predictive factors. Thirty-six trainees were observed 101 times. The technical standard was an OSATS score of 21.04/35.00 and the nontechnical standard an OSANTS score of 22.49/35.00. Applying these standards, competent/noncompetent trainees could be discriminated in 94% of technical and 95% of nontechnical performances (P < 0.001). A 21% discordance between technically and nontechnically competent trainees was identified (P < 0.001). ROC analysis demonstrated that case experience and trainee level were both able to predict achieving the standards, with an area under the curve (AUC) between 0.83 and 0.96 (P < 0.001). The present study presents defensible standards for technical and nontechnical performance. Such standards are imperative to implementing summative assessments into surgical training.
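The contrasting-groups approach described above derives a cut score from the two rater-defined groups. The paper determines its standard graphically from the two score distributions; as one hypothetical numeric variant (an assumption, not the authors' procedure), the cut score can be taken as the value that minimizes misclassification of the two groups:

```python
def contrasting_groups_cut(competent, noncompetent):
    """Contrasting-groups standard setting, sketched as the candidate
    cut score that misclassifies the fewest rater-labeled performances:
    competent scores below the cut and noncompetent scores at or above
    it both count as errors."""
    candidates = sorted(set(competent) | set(noncompetent))
    best_cut, best_err = None, float("inf")
    for c in candidates:
        errors = (sum(1 for s in competent if s < c)
                  + sum(1 for s in noncompetent if s >= c))
        if errors < best_err:
            best_cut, best_err = c, errors
    return best_cut
```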
Thematic and positional accuracy assessment of digital remotely sensed data
Russell G. Congalton
2007-01-01
Accuracy assessment or validation has become a standard component of any land cover or vegetation map derived from remotely sensed data. Knowing the accuracy of the map is vital to any decisionmaking performed using that map. The process of assessing the map accuracy is time consuming and expensive. It is very important that the procedure be well thought out and...
DOTD standards for GPS data collection accuracy : [tech summary].
DOT National Transportation Integrated Search
2015-09-01
Positional data collection efforts performed by personnel and contractors of the Louisiana Department of Transportation and Development (DOTD) require a reliable and consistent measurement framework for ensuring accuracy and precision. Global Na...
A comprehensive evaluation of strip performance in multiple blood glucose monitoring systems.
Katz, Laurence B; Macleod, Kirsty; Grady, Mike; Cameron, Hilary; Pfützner, Andreas; Setford, Steven
2015-05-01
Accurate self-monitoring of blood glucose is a key component of effective self-management of glycemic control. Accurate self-monitoring results are required for optimal insulin dosing and detection of hypoglycemia. However, blood glucose monitoring systems may be susceptible to error from test strip, user, environmental and pharmacological factors. This report evaluated 5 blood glucose monitoring systems, each using Verio glucose test strips, for precision, effect of hematocrit and interferences in laboratory testing, and lay user and system accuracy in clinical testing, according to the guidelines in ISO 15197:2013(E). Performance of OneTouch® VerioVue™ met or exceeded the standards described in ISO 15197:2013 for precision, hematocrit performance and interference testing in a laboratory setting. Performance of OneTouch® Verio IQ™, OneTouch® Verio Pro™, OneTouch® Verio™, OneTouch® VerioVue™ and Omni Pod each met or exceeded the accuracy standards for user performance and system accuracy in a clinical setting set forth in ISO 15197:2013(E).
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Systems § 493.1256 Standard: Control procedures. (a) For each test system, the laboratory is responsible... test system failure, adverse environmental conditions, and operator performance. (2) Monitor over time the accuracy and precision of test performance that may be influenced by changes in test system...
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Systems § 493.1256 Standard: Control procedures. (a) For each test system, the laboratory is responsible... test system failure, adverse environmental conditions, and operator performance. (2) Monitor over time the accuracy and precision of test performance that may be influenced by changes in test system...
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Systems § 493.1256 Standard: Control procedures. (a) For each test system, the laboratory is responsible... test system failure, adverse environmental conditions, and operator performance. (2) Monitor over time the accuracy and precision of test performance that may be influenced by changes in test system...
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Systems § 493.1256 Standard: Control procedures. (a) For each test system, the laboratory is responsible... test system failure, adverse environmental conditions, and operator performance. (2) Monitor over time the accuracy and precision of test performance that may be influenced by changes in test system...
New Criteria for Assessing the Accuracy of Blood Glucose Monitors Meeting, October 28, 2011
Walsh, John; Roberts, Ruth; Vigersky, Robert A.; Schwartz, Frank
2012-01-01
Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although widely assumed to be accurate, numerous reports of inaccuracies with resulting morbidity and mortality have been noted. Insulin dosing errors based on inaccurate GMs are most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in post-approval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs’ analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report. PMID:22538160
Kida, Adriana de S. B.; de Ávila, Clara R. B.; Capellini, Simone A.
2016-01-01
Purpose: To study reading comprehension performance profiles of children with dyslexia as well as language-based learning disability (LBLD) by means of retelling tasks. Method: One hundred and five children from 2nd to 5th grades of elementary school were gathered into six groups: Dyslexia group (D; n = 19), language-based learning disability group (LBLD; n = 16); their respective control groups paired according to different variables – age, gender, grade and school system (public or private; D-control and LBLD-control); and other control groups paired according to different reading accuracy (D-accuracy; LBLD-accuracy). All of the children read an expository text and orally retold the story as they understood it. The analysis quantified propositions (main ideas and details) and retold links. A retelling reference standard (3–0) was also established from the best to the worst performance. We compared both clinical groups (D and LBLD) with their respective control groups by means of Mann–Whitney tests. Results: D showed the same total of propositions, links and reference standards as D-control, but performed better than D-accuracy in macro structural (total of links) and super structural (retelling reference standard) measures. Results suggest that dyslexic children are able to use their linguistic competence and their own background knowledge to minimize the effects of their decoding deficit, especially at the highest text processing levels. LBLD performed worse than LBLD-control in all of the retelling measures and LBLD showed worse performance than LBLD-accuracy in the total retold links and retelling reference standard. Those results suggest that both decoding and linguistic difficulties affect reading comprehension. Moreover, the linguistic deficits presented by LBLD students do not allow these pupils to perform as competently in terms of text comprehension as the children with dyslexia do. 
Thus, failures in the macro- and superstructural information processing of the expository text were evidenced. Conclusion: Each clinical group showed a different retelling profile. Such findings support the view that there are differences between these two clinical populations in the non-phonological dimensions of language. PMID:27313551
Storey, Helen L.; Huang, Ying; Crudder, Chris; Golden, Allison; de los Santos, Tala; Hawkins, Kenneth
2015-01-01
Novel typhoid diagnostics currently under development have the potential to improve clinical care, surveillance, and the disease burden estimates that support vaccine introduction. Blood culture is most often used as the reference method to evaluate the accuracy of new typhoid tests; however, it is recognized to be an imperfect gold standard. If no single gold standard test exists, use of a composite reference standard (CRS) can improve estimation of diagnostic accuracy. Numerous studies have used a CRS to evaluate new typhoid diagnostics; however, there is no consensus on an appropriate CRS. In order to evaluate existing tests for use as a reference test or inclusion in a CRS, we performed a systematic review of the typhoid literature to include all index/reference test combinations observed. We described the landscape of comparisons performed, showed results of a meta-analysis on the accuracy of the more common combinations, and evaluated sources of variability based on study quality. This wide-ranging meta-analysis suggests that no single test has sufficiently good performance but some existing diagnostics may be useful as part of a CRS. Additionally, based on findings from the meta-analysis and a constructed numerical example demonstrating the use of CRS, we proposed necessary criteria and potential components of a typhoid CRS to guide future recommendations. Agreement and adoption by all investigators of a standardized CRS is requisite, and would improve comparison of new diagnostics across independent studies, leading to the identification of a better reference test and improved confidence in prevalence estimates. PMID:26566275
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analysis of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using predefined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76 and AUC of 0.90, 0.84, and 0.87, respectively.
In the per-territory analysis, our results show similar diagnostic accuracy for anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. Registration: CRD42016040176.
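The pooled summary measures in the abstract above reduce to simple confusion-matrix arithmetic. A minimal sketch, using hypothetical pooled counts chosen only to illustrate territory-level figures of the same order as those reported:

```python
def diagnostic_summary(tp, fp, tn, fn):
    """Sensitivity and specificity from pooled confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Hypothetical pooled counts for a per-territory analysis (illustrative only)
sens, spec = diagnostic_summary(tp=164, fp=34, tn=166, fn=36)
print(round(sens, 2), round(spec, 2))  # 0.82 0.83
```

Meta-analyses such as this one pool such counts across studies before computing the summary measures, rather than averaging per-study sensitivities directly.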
Comparison of two head-up displays in simulated standard and noise abatement night visual approaches
NASA Technical Reports Server (NTRS)
Cronn, F.; Palmer, E. A., III
1975-01-01
Situation and command head-up displays were evaluated for both standard and two segment noise abatement night visual approaches in a fixed base simulation of a DC-8 transport aircraft. The situation display provided glide slope and pitch attitude information. The command display provided glide slope information and flight path commands to capture a 3 deg glide slope. Landing approaches were flown in both zero wind and wind shear conditions. For both standard and noise abatement approaches, the situation display provided greater glidepath accuracy in the initial phase of the landing approaches, whereas the command display was more effective in the final approach phase. Glidepath accuracy was greater for the standard approaches than for the noise abatement approaches in all phases of the landing approach. Most of the pilots preferred the command display and the standard approach. Substantial agreement was found between each pilot's judgment of his performance and his actual performance.
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a basketball court of standard dimensions. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using positional-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
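The two accuracy measures used in the tracking study above are standard formulas. A minimal sketch, with hypothetical reference and measured positions (the %VAF definition shown, 100·(1 − var(error)/var(reference)), is one common form; the study may use a variant):

```python
import math

def rmse(measured, reference):
    """Root mean square positional error (same units as the inputs)."""
    n = len(reference)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)

def pct_vaf(measured, reference):
    """Percentage of variance accounted for:
    100 * (1 - var(measurement error) / var(reference))."""
    err = [m - r for m, r in zip(measured, reference)]
    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)
    return 100.0 * (1.0 - var(err) / var(reference))

# Hypothetical reference positions (m) and tracked positions for one tag
ref = [0.0, 0.5, 1.0, 1.5, 1.8]
meas = [0.02, 0.48, 1.03, 1.49, 1.82]
print(round(rmse(meas, ref), 3), round(pct_vaf(meas, ref), 1))  # 0.021 99.9
```

RMSE penalizes absolute position error directly, while %VAF expresses how much of the spread in the reference trajectory the system reproduces, which is why both are reported together.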
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy, as well as to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
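The central point of the study above, that accuracy depends on the sample's severity distribution even when the measurement relationship between the two methods is fixed, can be reproduced with a short simulation. A minimal sketch; the threshold, noise level, and sample distributions are hypothetical, not the study's actual parameters:

```python
import random

random.seed(0)
THRESHOLD = 200.0  # hypothetical diagnostic cut-off (mg/dl)

def accuracy(gold_values, noise_sd=10.0):
    """Categorical agreement between a 'rapid' and a 'gold' method whose
    numerical relationship (gold + Gaussian noise) is held fixed."""
    agree = 0
    for g in gold_values:
        rapid = g + random.gauss(0.0, noise_sd)
        agree += (rapid >= THRESHOLD) == (g >= THRESHOLD)
    return agree / len(gold_values)

# Same measurement error, two different sample distributions:
spread_out = [random.gauss(200, 60) for _ in range(10_000)]  # values spread widely
near_cutoff = [random.gauss(200, 5) for _ in range(10_000)]  # values clustered at the cut-off
print(accuracy(spread_out), accuracy(near_cutoff))  # the second is markedly lower
```

With values clustered near the cut-off, small measurement noise flips many classifications, so the identical test appears far less "accurate": accuracy characterizes the sample as much as the test.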
Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard
Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu
2011-01-01
Our research is motivated by 2 methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and an ordinal-scale symptom status. In this paper, we propose a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has multiple ordered classes. In addition, we extend the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155
Biranjia-Hurdoyal, Susheela D; Seetulsingh-Goorah, Sharmila P
2016-01-01
The aim was to determine the performance of four Helicobacter pylori serological detection kits in different target groups, using Amplified IDEIA™ Hp StAR™ as the gold standard. The kits studied were the Rapid Immunochromatographic Hexagon, Helicoblot 2.1, an EIA IgG kit and an EIA IgA kit. Stool and blood samples were collected from 162 apparently healthy participants (control) and 60 Type 2 diabetes mellitus (T2DM) patients. The performances of the four serological detection kits were found to be affected by the gender, age, health status and ethnicity of the participants. In the control group, the Helicoblot 2.1 kit had the best performance (AUC = 0.85; p<0.05, accuracy = 86.4%), followed by EIA IgG (AUC = 0.75; p<0.05, accuracy = 75.2%). The Rapid Hexagon and EIA IgA kits had relatively poor performances. In the T2DM subgroup, the Helicoblot 2.1 and EIA IgG kits had the best performances, with accuracies of 96.5% and 93.1%, respectively. The performance of EIA IgG improved with adjustment of its cut-off value. The performances of the detection kits were affected by various factors which should be taken into consideration.
Adherence to Standards for Reporting Diagnostic Accuracy in Emergency Medicine Research.
Gallo, Lucas; Hua, Nadia; Mercuri, Mathew; Silveira, Angela; Worster, Andrew
2017-08-01
Diagnostic tests are used frequently in the emergency department (ED) to guide clinical decision making and, hence, influence clinical outcomes. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were developed to ensure that diagnostic test studies are performed and reported to best inform clinical decision making in the ED. The objective was to determine the extent to which diagnostic studies published in emergency medicine journals adhered to the STARD 2003 criteria. Diagnostic studies published in eight MEDLINE-listed, peer-reviewed emergency medicine journals over a 5-year period were reviewed for compliance with the STARD criteria. A total of 12,649 articles were screened and 114 studies were included in our study. Twenty percent of these were randomly selected for assessment using the STARD 2003 criteria. Adherence to the STARD 2003 reporting standards ranged from 8.7% for one criterion (reporting adverse events from performing the index test or reference standard) to 100% (multiple criteria). Just over half of the STARD criteria are reported in more than 80% of studies. As poorly reported studies may negatively impact their clinical usefulness, it is essential that studies of diagnostic test accuracy be performed and reported adequately. Future studies should assess whether compliance has improved with the STARD 2015 criteria amendment. © 2017 by the Society for Academic Emergency Medicine.
Akinwuntan, A E; Backus, D; Grayson, J; Devos, H
2018-05-26
Some symptoms of multiple sclerosis (MS) affect driving. In a recent study, performance on five cognitive tests predicted the on-road test performance of individuals with relapsing-remitting MS with 91% accuracy, 70% sensitivity and 97% specificity. However, the accuracy with which the battery will predict the driving performance of a different cohort that includes all types of MS is unknown. Participants (n = 118; 48 ± 9 years of age; 97 females) performed a comprehensive off-road evaluation that lasted about 3 h and a standardized on-road test that lasted approximately 45 min over a 2-day period within the same week. Performance on the five cognitive tests was used to predict participants' performance on the standardized on-road test. Performance on the five tests together predicted outcome of the on-road test with 82% accuracy, 42% sensitivity and 90% specificity. The accuracy of predicting the on-road performance of a new MS cohort using performance on the battery of five cognitive tests remained very high (82%). The battery, which was administrable in <45 min and cost ~$150, was better at identifying those who actually passed the on-road test (90% specificity). The sensitivity (42%) of the battery indicated that it should not be used as the sole determinant of poor driving-related cognitive skills. A fail performance on the battery should only imply that more comprehensive testing is warranted. © 2018 EAN.
Analytical and Clinical Performance of Blood Glucose Monitors
Boren, Suzanne Austin; Clarke, William L.
2010-01-01
Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171
The application of robotics to microlaryngeal laser surgery.
Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M
2015-06-01
To evaluate the performance of human subjects using a prototype robotic micromanipulator controller in a simulated microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry-standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed better than subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superiorly to experienced laser surgeons using the industry-standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Code of Federal Regulations, 2014 CFR
2014-07-01
... accuracy that is traceable to National Institute of Standards and Technology (NIST) standards. (ii) The... section. (i) Perform a single-point calibration using an NIST-certified buffer solution that is accurate... include a redundant pH sensor, perform a single point calibration using an NIST-certified buffer solution...
Code of Federal Regulations, 2013 CFR
2013-07-01
... accuracy that is traceable to National Institute of Standards and Technology (NIST) standards. (ii) The... section. (i) Perform a single-point calibration using an NIST-certified buffer solution that is accurate... include a redundant pH sensor, perform a single point calibration using an NIST-certified buffer solution...
Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K
2018-06-01
Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.
Machado, Diogo Alcino de Abreu Ribeiro Carvalho; Esteves, Dina da Assunção Azevedo; Branca, Pedro Manuel Araújo de Sousa
The laryngoscope is a key tool in anesthetic practice. Direct laryngoscopy is a crucial moment, and an inadequate laryngoscope light can lead to catastrophic consequences. In our experience the laryngoscope's light is assessed in a subjective manner, and we believe a more precise evaluation should be used. Our objective was to compare the accuracy of a smartphone with that of a lux meter. Second, we audited our Operating Room laryngoscopes. We designed a pragmatic study, using as the primary outcome the accuracy of a smartphone compared to the lux meter. We then audited, with both the lux meter and the smartphone, all laryngoscopes and blades ready to use in our Operating Rooms, using the International Standard from the International Organization for Standardization. For the primary outcome we found no significant difference between devices. Our audit showed that only 2 of 48 laryngoscopes complied with the ISO norm. When comparing the measurements between the lux meter and the smartphone we found no significant difference. Ideally, every laryngoscope should perform as required. We believe all laryngoscopes should undergo a practical but reliable and objective test prior to utilization. Our results suggest the smartphone was accurate enough to be used as a lux meter to test the laryngoscope's light. The audit results, showing that only 4% comply with the ISO standard, are consistent with other studies. The tested smartphone has enough accuracy to perform light measurement in laryngoscopes. We believe this is a step toward an objective routine check of the laryngoscope's light. Copyright © 2016. Published by Elsevier Editora Ltda.
Teacher Perspectives of the Use of Student Performance Data in Teacher Evaluations
ERIC Educational Resources Information Center
Hopkins, Paul Thomas
2013-01-01
The purpose of this study was to determine how K-12 public school teachers perceive the use of student performance data in teacher evaluations. The proprietary, utility, feasibility, and accuracy standards created by the Joint Committee on Standards for Education Evaluation (JCSEE) served as a framework for the study. An online survey was deployed…
Laboratory Performance Evaluation Report of SEL 421 Phasor Measurement Unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Faris, Anthony J.; Martin, Kenneth E.
2007-12-01
PNNL and BPA have been in close collaboration on laboratory performance evaluation of phasor measurement units for over ten years. A series of evaluation tests are designed to confirm accuracy and determine measurement performance under a variety of conditions that may be encountered in actual use. Ultimately the testing conducted should provide parameters that can be used to adjust all measurements to a standardized basis. These tests are performed with a standard relay test set using recorded files of precisely generated test signals. The test set provides test signals at a level and in a format suitable for input to a PMU, and accurately reproduces the signals in both signal amplitude and timing. Test set outputs are checked to confirm the accuracy of the output signal. The recorded signals include both current and voltage waveforms and a digital timing track used to relate the PMU measured value with the test signal. Test signals include steady-state waveforms to test amplitude, phase, and frequency accuracy; modulated signals to determine measurement and rejection bands; and step tests to determine timing and response accuracy. Additional tests are included as necessary to fully describe the PMU operation. Testing is done with a BPA phasor data concentrator (PDC), which provides communication support and monitors data input for dropouts and data errors.
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
NASA Astrophysics Data System (ADS)
Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia
2017-09-01
Photon-counting computed tomography (PCCT) uses a photon-counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by assessing the accuracy of derived image sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and of CT numbers in virtual monoenergetic images (VMI). Vials containing iodine at five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different size patients. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on the known iodine concentration and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate CT numbers of VMIs (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of energy-integrating detector (EID)-based dual-source, dual-energy scanners.
Accuracy of semen counting chambers as determined by the use of latex beads.
Seaman, E K; Goluboff, E; BarChama, N; Fisch, H
1996-10-01
To assess the accuracy of the Hemacytometer (Hausser Scientific, Horsham, PA), Makler (Sefi-Medical Instrument, Haifa, Israel), Cell-VU (Millennium Sciences Inc., New York, NY), and Micro-Cell (Conception Technologies, San Diego, CA) counting chambers. A solution containing a known concentration of latex beads was used as the standard to perform counts on the four different counting chambers. Bead counts for the four different chambers were compared with the bead counts of the standard solution. Variability within chambers was also determined. Mean bead concentrations for both the Cell-VU and Micro-Cell chambers were consistently similar to the bead concentration of the standard solution. Both the hemacytometer and the Makler chambers overestimated the actual bead concentration of the standard solution by as much as 50% and revealed significant interchamber variability. Our data revealed marked differences in the accuracy and reliability of the different counting chambers tested and emphasized the need for standardization and quality control of laboratory procedures.
NASA Astrophysics Data System (ADS)
André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.
Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of interpretation of breast ultrasound by radiologists. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable across several operating conditions.
Neurocognitive and Behavioral Predictors of Math Performance in Children with and without ADHD
Antonini, Tanya N.; O’Brien, Kathleen M.; Narad, Megan E.; Langberg, Joshua M.; Tamm, Leanne; Epstein, Jeff N.
2014-01-01
Objective: This study examined neurocognitive and behavioral predictors of math performance in children with and without attention-deficit/hyperactivity disorder (ADHD). Method: Neurocognitive and behavioral variables were examined as predictors of 1) standardized mathematics achievement scores, 2) productivity on an analog math task, and 3) accuracy on an analog math task. Results: Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the Attentional Network Task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Conclusion: Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. PMID:24071774
Neurocognitive and Behavioral Predictors of Math Performance in Children With and Without ADHD.
Antonini, Tanya N; Kingery, Kathleen M; Narad, Megan E; Langberg, Joshua M; Tamm, Leanne; Epstein, Jeffery N
2016-02-01
This study examined neurocognitive and behavioral predictors of math performance in children with and without ADHD. Neurocognitive and behavioral variables were examined as predictors of (a) standardized mathematics achievement scores, (b) productivity on an analog math task, and (c) accuracy on an analog math task. Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the attentional network task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. © The Author(s) 2013.
Accuracy of the Lifebox pulse oximeter during hypoxia in healthy volunteers.
Dubowitz, G; Breyer, K; Lipnick, M; Sall, J W; Feiner, J; Ikeda, K; MacLeod, D B; Bickler, P E
2013-12-01
Pulse oximetry is a standard of care during anaesthesia in high-income countries. However, 70% of operating environments in low- and middle-income countries have no pulse oximeter. The 'Lifebox' oximetry project set out to bridge this gap with an inexpensive oximeter meeting CE (European Conformity) and ISO (International Organization for Standardization) standards. To date, there are no performance-specific accuracy data on this instrument. The aim of this study was to establish whether the Lifebox pulse oximeter provides clinically reliable haemoglobin oxygen saturation (SpO2) readings meeting USA Food and Drug Administration 510(k) standards. Using healthy volunteers, inspired oxygen fraction was adjusted to produce arterial haemoglobin oxygen saturation (SaO2) readings between 71% and 100%, measured with a multi-wavelength oximeter. Lifebox accuracy was expressed using bias (SpO2 - SaO2), precision (SD of the bias) and the root mean square error (Arms). Simultaneous readings of SaO2 and SpO2 in 57 subjects showed a mean (SD) bias of -0.41% (2.28%) and Arms 2.31%. The Lifebox pulse oximeter meets current USA Food and Drug Administration standards for accuracy, thus representing an inexpensive solution for patient monitoring without compromising standards. © 2013 The Association of Anaesthetists of Great Britain and Ireland.
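The three accuracy statistics used above (bias, precision as SD of the bias, and Arms) can be sketched as follows; the sample readings are invented for illustration:

```python
import math
import statistics

def oximeter_accuracy(spo2, sao2):
    """Bias (mean SpO2 - SaO2), precision (sample SD of the differences),
    and root-mean-square error (Arms) for paired saturation readings."""
    diffs = [s - a for s, a in zip(spo2, sao2)]
    bias = statistics.mean(diffs)
    precision = statistics.stdev(diffs)  # sample standard deviation
    arms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, precision, arms

# hypothetical paired readings (%)
bias, precision, arms = oximeter_accuracy([97, 98, 99, 94, 100],
                                          [98, 98, 98, 96, 98])
print(bias, round(precision, 3), round(arms, 3))
```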
Accuracy of clinical diagnosis of Parkinson disease: A systematic review and meta-analysis.
Rizzo, Giovanni; Copetti, Massimiliano; Arcuti, Simona; Martino, Davide; Fontana, Andrea; Logroscino, Giancarlo
2016-02-09
To evaluate the diagnostic accuracy of clinical diagnosis of Parkinson disease (PD) reported in the last 25 years by a systematic review and meta-analysis. We searched for articles published between 1988 and August 2014. Studies were included if they reported diagnostic parameters regarding clinical diagnosis of PD or crude data. The selected studies were subclassified based on different study setting, type of test diagnosis, and gold standard. Bayesian meta-analyses of available data were performed. We selected 20 studies, including 11 using pathologic examination as the gold standard. Considering only these 11 studies, the pooled diagnostic accuracy was 80.6% (95% credible interval [CrI] 75.2%-85.3%). Accuracy was 73.8% (95% CrI 67.8%-79.6%) for clinical diagnosis performed mainly by nonexperts. Accuracy of clinical diagnosis performed by movement disorders experts rose from 79.6% (95% CrI 46%-95.1%) at initial assessment to 83.9% (95% CrI 69.7%-92.6%) for refined diagnosis after follow-up. Using UK Parkinson's Disease Society Brain Bank Research Center criteria, the pooled diagnostic accuracy was 82.7% (95% CrI 62.6%-93%). The overall validity of clinical diagnosis of PD is not satisfactory. The accuracy did not significantly improve in the last 25 years, particularly in the early stages of disease, where response to dopaminergic treatment is less defined and hallmarks of alternative diagnoses such as atypical parkinsonism may not have emerged. The misclassification rate should be considered when calculating sample size in both observational studies and randomized controlled trials. Imaging and biomarkers are urgently needed to improve the accuracy of clinical diagnosis in vivo. © 2016 American Academy of Neurology.
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-01-01
Traffic speed meters are important legal measuring instruments specially used for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure their speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirements for its mounting distance, no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument. PMID:29621142
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-04-05
Traffic speed meters are important legal measuring instruments specially used for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure their speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirements for its mounting distance, no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument.
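The inclined-angle compensation described above can be illustrated with the standard Doppler relation f = 2·v·f0·cos(θ)/c: two antennas whose beams are separated by a known angle see two different shifts, which jointly determine both the speed and the unknown installation angle. The solve below is an illustrative reconstruction under that simple model, not the instrument's published algorithm:

```python
import math

C = 3.0e8  # speed of light, m/s

def true_speed(f1, f2, f0, delta):
    """Recover speed v and installation angle theta from Doppler shifts f1, f2
    measured by two antennas separated by a known angle delta (radians).
    Model: f_i = 2*v*f0*cos(theta_i)/C with theta2 = theta1 + delta, so
    f2/f1 = cos(delta) - tan(theta)*sin(delta)."""
    r = f2 / f1
    theta = math.atan((math.cos(delta) - r) / math.sin(delta))
    v = f1 * C / (2 * f0 * math.cos(theta))
    return v, theta

# round-trip check: synthesize shifts for v = 30 m/s, theta = 10 degrees
f0, v, theta, delta = 24.15e9, 30.0, math.radians(10), math.radians(25)
f1 = 2 * v * f0 * math.cos(theta) / C
f2 = 2 * v * f0 * math.cos(theta + delta) / C
print(true_speed(f1, f2, f0, delta))  # ≈ (30.0, 0.1745)
```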
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
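A minimal sketch of the two variance-stabilizing transformations proposed above, for x events out of n (the Freeman-Tukey form shown is one common convention; the paper's exact parameterization may differ):

```python
import math

def arcsine_sqrt(x, n):
    """Arcsine square-root transform of the proportion x/n."""
    return math.asin(math.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform of x events out of n."""
    return 0.5 * (math.asin(math.sqrt(x / (n + 1)))
                  + math.asin(math.sqrt((x + 1) / (n + 1))))

# e.g., a study reporting sensitivity 45/50
print(round(arcsine_sqrt(45, 50), 4), round(freeman_tukey(45, 50), 4))
```

Sensitivities and specificities would be transformed this way before fitting the bivariate linear mixed model, then back-transformed for reporting.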
Accuracy and speed feedback: Global and local effects on strategy use
Touron, Dayna R.; Hertzog, Christopher
2013-01-01
Background Skill acquisition often involves a shift from an effortful algorithm-based strategy to more fluent memory-based performance. Older adults’ slower strategy transitions can be ascribed to both slowed learning and metacognitive factors. Experimenters often provide feedback on response accuracy; this emphasis may either inadvertently reinforce older adults’ conservatism or might highlight that retrieval is generally quite accurate. RT feedback can lead to a more rapid shift to retrieval (Hertzog, Touron, & Hines, 2007). Methods This study parametrically varied trial-by-trial feedback to examine whether strategy shifts in the noun-pair task in younger (mean age 19) and older adults (mean age 67) were influenced by type of performance feedback: none, trial accuracy, trial RT, or both accuracy and RT. Results Older adults who received accuracy feedback retrieved more often, particularly on difficult rearranged trials, and participants who received speed feedback performed the scanning strategy more quickly. Age differences were also obtained in local (trial-level) reactivity to task performance, but these were not affected by feedback. Conclusions Accuracy and speed feedback had distinct global (general) influences on task strategies and performance. In particular, it appears that the standard practice of providing trial-by-trial accuracy feedback might facilitate older adults’ use of retrieval strategies in skill acquisition tasks. PMID:24785594
Estimating the Accuracy of Neurocognitive Effort Measures in the Absence of a "Gold Standard"
ERIC Educational Resources Information Center
Mossman, Douglas; Wygant, Dustin B.; Gervais, Roger O.
2012-01-01
Psychologists frequently use symptom validity tests (SVTs) to help determine whether evaluees' test performance or reported symptoms accurately represent their true functioning and capability. Most studies evaluating the accuracy of SVTs have used either known-group comparisons or simulation designs, but these approaches have well-known…
Effect of Accreditation on Accuracy of Diagnostic Tests in Medical Laboratories.
Jang, Mi Ae; Yoon, Young Ahn; Song, Junghan; Kim, Jeong Ho; Min, Won Ki; Lee, Ji Sung; Lee, Yong Wha; Lee, You Kyoung
2017-05-01
Medical laboratories play a central role in health care. Many laboratories are taking a more focused and stringent approach to quality system management. In Korea, laboratory standardization efforts undertaken by the Korean Laboratory Accreditation Program (KLAP) and the Korean External Quality Assessment Scheme (KEQAS) may have facilitated an improvement in laboratory performance, but there are no fundamental studies demonstrating that laboratory standardization is effective. We analyzed the results of the KEQAS to identify significant differences between laboratories with or without KLAP and to determine the impact of laboratory standardization on the accuracy of diagnostic tests. We analyzed KEQAS participant data on clinical chemistry tests such as albumin, ALT, AST, and glucose from 2010 to 2013. As a statistical parameter to assess performance bias between laboratories, we compared 4-yr variance index score (VIS) between the two groups with or without KLAP. Compared with the group without KLAP, the group with KLAP exhibited significantly lower geometric means of 4-yr VIS for all clinical chemistry tests (P<0.0001); this difference justified a high level of confidence in standardized services provided by accredited laboratories. Confidence intervals for the mean of each test in the two groups (accredited and non-accredited) did not overlap, suggesting that the means of the groups are significantly different. These results confirmed that practice standardization is strongly associated with the accuracy of test results. Our study emphasizes the necessity of establishing a system for standardization of diagnostic testing. © The Korean Society for Laboratory Medicine
Effect of Accreditation on Accuracy of Diagnostic Tests in Medical Laboratories
Jang, Mi-Ae; Yoon, Young Ahn; Song, Junghan; Kim, Jeong-Ho; Min, Won-Ki; Lee, Ji Sung
2017-01-01
Background Medical laboratories play a central role in health care. Many laboratories are taking a more focused and stringent approach to quality system management. In Korea, laboratory standardization efforts undertaken by the Korean Laboratory Accreditation Program (KLAP) and the Korean External Quality Assessment Scheme (KEQAS) may have facilitated an improvement in laboratory performance, but there are no fundamental studies demonstrating that laboratory standardization is effective. We analyzed the results of the KEQAS to identify significant differences between laboratories with or without KLAP and to determine the impact of laboratory standardization on the accuracy of diagnostic tests. Methods We analyzed KEQAS participant data on clinical chemistry tests such as albumin, ALT, AST, and glucose from 2010 to 2013. As a statistical parameter to assess performance bias between laboratories, we compared 4-yr variance index score (VIS) between the two groups with or without KLAP. Results Compared with the group without KLAP, the group with KLAP exhibited significantly lower geometric means of 4-yr VIS for all clinical chemistry tests (P<0.0001); this difference justified a high level of confidence in standardized services provided by accredited laboratories. Confidence intervals for the mean of each test in the two groups (accredited and non-accredited) did not overlap, suggesting that the means of the groups are significantly different. Conclusions These results confirmed that practice standardization is strongly associated with the accuracy of test results. Our study emphasizes the necessity of establishing a system for standardization of diagnostic testing. PMID:28224767
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most widely used nearly-global free DSMs (SRTM and ASTER) has been evaluated over the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land cover and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs' accuracy has been evaluated through computation of standard statistical parameters, both at global scale (considering the whole state/region) and as a function of terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges, for SRTM, from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows better accuracy for SRTM in flat areas, whereas the ASTER GDEM is more reliable in steep areas, where the slopes increase. These preliminary results highlight the potential of Google Earth Engine to perform DSM assessment on a global scale.
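The NMAD statistic used above alongside the standard deviation can be sketched as follows; the 1.4826 factor scales the median absolute deviation so it matches the standard deviation under normally distributed errors, while staying robust to DSM outliers:

```python
import statistics

def nmad(errors):
    """Normalized median absolute deviation of height errors:
    1.4826 * median(|e - median(e)|), a robust spread estimate."""
    med = statistics.median(errors)
    return 1.4826 * statistics.median([abs(e - med) for e in errors])

# a single 30 m blunder barely moves NMAD, unlike the standard deviation
errors = [-2, -1, 0, 1, 2, 30]
print(round(nmad(errors), 3), round(statistics.stdev(errors), 3))
```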
Coelho, Luiz Gonzaga Vaz; Silva, Arilto Eleutério da; Coelho, Maria Clara de Freitas; Penna, Francisco Guilherme Cancela e; Ferreira, Rafael Otto Antunes; Santa-Cecilia, Elisa Viana
2011-01-01
The standard dose of (13)C-urea in the (13)C-urea breath test is 75 mg. To assess the diagnostic accuracy of a (13)C-urea breath test containing 25 mg of (13)C-urea compared with the standard dose of 75 mg in the diagnosis of Helicobacter pylori infection. Two hundred seventy adult patients (96 males, 174 females, median age 41 years) performed the standard (13)C-urea breath test (75 mg (13)C-urea) and repeated the (13)C-urea breath test using only 25 mg of (13)C-urea within a 2-week interval. The test was performed using an infrared isotope analyzer. Patients were considered positive if delta over baseline was >4.0‰ at the gold standard test. One hundred sixty-one (59.6%) patients were H. pylori negative and 109 (40.4%) were positive by the gold standard test. Using receiver operating characteristic analysis, we established a cut-off value of 3.4‰ as the best value for the 25 mg (13)C-urea breath test to discriminate positive and negative patients, considering the H. pylori prevalence (95% CI: 23.9-37.3) at our setting. Therefore, we obtained for the 25 mg (13)C-urea breath test a diagnostic accuracy of 92.9% (95% CI: 88.1-97.9), sensitivity 83.5% (95% CI: 75.4-89.3), specificity 99.4% (95% CI: 96.6-99.9), positive predictive value 98.3% (95% CI: 92.4-99.4), and negative predictive value 93.0% (95% CI: 88.6-96.1). The low-dose (13)C-urea breath test (25 mg (13)C-urea) does not reach sufficient accuracy to be recommended in clinical settings where a 30% prevalence of H. pylori infection is observed. Further studies should be done to determine the diagnostic accuracy of low doses of (13)C-urea in the urea breath test.
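The accuracy figures above all derive from a standard 2x2 confusion table. A sketch, using hypothetical counts chosen to be approximately consistent with the reported percentages (109 positive and 161 negative patients; these counts are a reconstruction, not the study's raw data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, predictive values, and overall accuracy
    from a 2x2 table of true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# illustrative counts: 91 true positives, 18 false negatives,
# 160 true negatives, 1 false positive (total 270)
m = diagnostic_metrics(tp=91, fn=18, tn=160, fp=1)
print(round(m["sensitivity"] * 100, 1))  # 83.5
print(round(m["accuracy"] * 100, 1))     # 93.0
```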
Kang, Tae Wook; Rhim, Hyunchul; Lee, Min Woo; Kim, Young-sun; Choi, Dongil; Lim, Hyo Keun
2014-01-01
To perform a systematic review of compliance with standardized terminology and reporting criteria for radiofrequency (RF) tumor ablation, proposed by the International Working Group on Image-Guided Tumor Ablation in 2003, in the published reports. Literature search in the PubMed database was performed using index keywords, PubMed limit system, and eligibility criteria. The entire content of each article was reviewed to assess the terminology used for procedure terms, imaging findings, therapeutic efficacy, follow-up, and complications. Accuracy of the terminology and the use of alternative terms instead of standard terminology were analyzed. In addition, disparities in accuracy of terminology in articles according to the medical specialty and the type of radiology journal were evaluated. Among the articles (n = 308) included in this study, the accuracy of the terms 'procedure or session', 'treatment', 'index tumor', 'ablation zone', 'technical success', 'primary technique effectiveness rate', 'secondary technique effectiveness rate', 'local tumor progression', 'major complication', and 'minor complication' was 97% (298/307), 97% (291/300), 8% (25/307), 65% (103/159), 55% (52/94), 33% (42/129), 94% (17/18), 45% (88/195), 99% (79/80), and 100% (77/77), respectively. The overall accuracy of each term showed a tendency to improve over the years. The most commonly used alternative terms for 'technical success' and 'local tumor progression' were 'complete ablation' and 'local (tumor) recurrence', respectively. The accuracy of terminology in articles published in radiology journals was significantly greater than that of terminology in articles published in non-radiology journals, especially in Radiology and The Journal of Vascular and Interventional Radiology. The proposal for standardization of terminology and reporting criteria for RF tumor ablation has been gaining support according to the recently published scientific reports, especially in the field of radiology. 
However, more work is still needed for the complete standardization of terminology.
Kang, Tae Wook; Lee, Min Woo; Kim, Young-sun; Choi, Dongil; Lim, Hyo Keun
2014-01-01
Objective To perform a systematic review of compliance with standardized terminology and reporting criteria for radiofrequency (RF) tumor ablation, proposed by the International Working Group on Image-Guided Tumor Ablation in 2003, in the published reports. Materials and Methods Literature search in the PubMed database was performed using index keywords, PubMed limit system, and eligibility criteria. The entire content of each article was reviewed to assess the terminology used for procedure terms, imaging findings, therapeutic efficacy, follow-up, and complications. Accuracy of the terminology and the use of alternative terms instead of standard terminology were analyzed. In addition, disparities in accuracy of terminology in articles according to the medical specialty and the type of radiology journal were evaluated. Results Among the articles (n = 308) included in this study, the accuracy of the terms 'procedure or session', 'treatment', 'index tumor', 'ablation zone', 'technical success', 'primary technique effectiveness rate', 'secondary technique effectiveness rate', 'local tumor progression', 'major complication', and 'minor complication' was 97% (298/307), 97% (291/300), 8% (25/307), 65% (103/159), 55% (52/94), 33% (42/129), 94% (17/18), 45% (88/195), 99% (79/80), and 100% (77/77), respectively. The overall accuracy of each term showed a tendency to improve over the years. The most commonly used alternative terms for 'technical success' and 'local tumor progression' were 'complete ablation' and 'local (tumor) recurrence', respectively. The accuracy of terminology in articles published in radiology journals was significantly greater than that of terminology in articles published in non-radiology journals, especially in Radiology and The Journal of Vascular and Interventional Radiology. 
Conclusion The proposal for standardization of terminology and reporting criteria for RF tumor ablation has been gaining support according to the recently published scientific reports, especially in the field of radiology. However, more work is still needed for the complete standardization of terminology. PMID:24497798
Accuracy in planar cutting of bones: an ISO-based evaluation.
Cartiaux, Olivier; Paul, Laurent; Docquier, Pierre-Louis; Francq, Bernard G; Raucent, Benoît; Dombre, Etienne; Banse, Xavier
2009-03-01
Computer- and robot-assisted technologies are capable of improving the accuracy of planar cutting in orthopaedic surgery. This study is a first step toward formulating and validating a new evaluation methodology for planar bone cutting, based on the standards from the International Organization for Standardization. Our experimental test bed consisted of a purely geometrical model of the cutting process around a simulated bone. Cuts were performed at three levels of surgical assistance: unassisted, computer-assisted and robot-assisted. We measured three parameters of the standard ISO 1101:2004: flatness, parallelism and location of the cut plane. The location was the most relevant parameter for assessing cutting errors. The three levels of assistance were easily distinguished using the location parameter. Our ISO methodology employs the location to obtain all information about translational and rotational cutting errors. Location may be used on any osseous structure to compare the performance of existing assistance technologies.
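The location and flatness parameters evaluated above can be roughly illustrated with signed point-to-plane distances. This simplified sketch references the nominal (planned) plane rather than a best-fit plane, so its spread figure conflates flatness with parallelism; it is an illustration of the geometry, not the full ISO 1101 procedure:

```python
import math

def cut_plane_errors(points, normal, offset):
    """Signed distances of probed points on the cut surface to the planned
    plane n.x = offset. Returns (location, spread): location is the mean
    offset of the cut from the plan; spread is the peak-to-valley range of
    the distances (a crude flatness/parallelism figure)."""
    norm = math.sqrt(sum(c * c for c in normal))
    d = [sum(c * x for c, x in zip(normal, p)) / norm - offset
         for p in points]
    location = sum(d) / len(d)
    spread = max(d) - min(d)
    return location, spread

# a perfectly flat cut sitting 1 mm above the planned plane z = 0
pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
print(cut_plane_errors(pts, (0, 0, 1), 0))  # (1.0, 0.0)
```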
Building an Evaluation Scale using Item Response Theory.
Lalor, John P; Wu, Hao; Yu, Hong
2016-11-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance of a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
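The IRT machinery described above can be sketched with a two-parameter logistic (2PL) model and a grid-search maximum-likelihood ability estimate. This is an illustrative simplification; the paper's actual model and fitting procedure may differ:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that a subject of ability theta answers an item
    of discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, grid=None):
    """Maximum-likelihood ability estimate by grid search over theta.
    responses: list of 0/1; items: matching list of (a, b) pairs."""
    grid = grid or [x / 100 for x in range(-400, 401)]
    def loglik(theta):
        return sum(math.log(p_correct(theta, a, b)) if r
                   else math.log(1 - p_correct(theta, a, b))
                   for r, (a, b) in zip(responses, items))
    return max(grid, key=loglik)

# hypothetical items: (discrimination, difficulty)
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]
print(estimate_ability([1, 1, 0], items))
```

Under this model, two systems with the same accuracy can earn different ability estimates depending on which items (easy vs. hard, high vs. low discrimination) they answer correctly, which is the point made in the abstract.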
Building an Evaluation Scale using Item Response Theory
Lalor, John P.; Wu, Hao; Yu, Hong
2016-01-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance of a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern. PMID:28004039
F-16 Task Analysis Criterion-Referenced Objective and Objectives Hierarchy Report. Volume 4
1981-03-01
Initiation cues: Engine flameout. Systems presenting cues: Aircraft fuel, engine. STANDARD: Authority: TACR 60-2. Performance precision: TD in first 1/3 of...task: None. Initiation cues: On short final. Systems presenting cues: N/A. STANDARD: Authority: 60-2. Performance precision: +/- .5 AOA; TD zone 150-1000...precision: +/- .05 AOA; TD Zone 150-1000. Computational accuracy: N/A ... TASK NO.: 1.9.4. BEHAVIOR: Perform short field landing
Aithal, Venkatesh; Kei, Joseph; Driscoll, Carlie; Murakoshi, Michio; Wada, Hiroshi
2018-02-01
Diagnosing conductive conditions in newborns is challenging for both audiologists and otolaryngologists. Although high-frequency tympanometry (HFT), acoustic stapedial reflex tests, and wideband absorbance measures are useful diagnostic tools, there is performance measure variability in their detection of middle ear conditions. Additional diagnostic sensitivity and specificity measures gained through new technology such as sweep frequency impedance (SFI) measures may assist in the diagnosis of middle ear dysfunction in newborns. The purpose of this study was to determine the test performance of SFI to predict the status of the outer and middle ear in newborns against commonly used reference standards. Automated auditory brainstem response (AABR), HFT (1000 Hz), transient evoked otoacoustic emission (TEOAE), distortion product otoacoustic emission (DPOAE), and SFI tests were administered to the study sample. A total of 188 neonates (98 males and 90 females) with a mean gestational age of 39.4 weeks were included in the sample. Mean age at the time of testing was 44.4 hr. Diagnostic accuracy of SFI was assessed in terms of its ability to identify conductive conditions in neonates when compared with nine different reference standards (including four single tests [AABR, HFT, TEOAE, and DPOAE] and five test batteries [HFT + DPOAE, HFT + TEOAE, DPOAE + TEOAE, DPOAE + AABR, and TEOAE + AABR]), using receiver operating characteristic (ROC) analysis and traditional test performance measures such as sensitivity and specificity. The test performance of SFI against the test battery reference standard of HFT + DPOAE and single reference standard of HFT was high with an area under the ROC curve (AROC) of 0.87 and 0.82, respectively. Although the HFT + DPOAE test battery reference standard performed better than the HFT reference standard in predicting middle ear conductive conditions in neonates, the difference in AROC was not significant. 
Further analysis revealed that the highest sensitivity and specificity for SFI (86% and 88%, respectively) were obtained when compared with the reference standard of HFT + DPOAE. Among the four single reference standards, SFI had the highest sensitivity and specificity (76% and 88%, respectively) when compared against the HFT reference standard. The high test performance of SFI against the HFT and HFT + DPOAE reference standards indicates that the SFI measure has appropriate diagnostic accuracy in the detection of conductive conditions in newborns. Hence, the SFI test could be used as an adjunct tool to identify conductive conditions in universal newborn hearing screening programs, and can also be used in diagnostic follow-up assessments. American Academy of Audiology
When are circular lesions square? A national clinical education skin lesion audit and study.
Miranda, Benjamin H; Herman, Katie A; Malahias, Marco; Juma, Ali
2014-09-01
Skin cancer is the most prevalent cancer by organ type and referral accuracy is vital for diagnosis and management. The British Association of Dermatologists (BAD) and literature highlight the importance of accurate skin lesion examination, diagnosis and educationally-relevant studies. We undertook a review of the relevant literature, a national audit of skin lesion description standards and a study of speciality training influences on these descriptions. Questionnaires (n=200), with pictures of a circular and an oval lesion, were distributed to UK dermatology/plastic surgery consultants and speciality trainees (ST), general practitioners (GP), and medical students (MS). The following variables were analysed against a pre-defined 95% inclusion accuracy standard: site, shape, size, skin/colour, and presence of associated scars. There were 250 lesion descriptions provided by 125 consultants, STs, GPs, and MSs. Inclusion accuracy was greatest for consultants over STs (80% vs. 68%; P<0.001), GPs (57%) and MSs (46%) (P<0.0001), for STs over GPs (P<0.010) and MSs (P<0.0001) and for GPs over MSs (P<0.010), all falling below audit standard. Size description accuracy sub-analysis according to circular/oval dimensions was as follows: consultants (94%), GPs (80%), STs (73%), MSs (37%), with the most common error implying a quadrilateral shape (66%). Addressing BAD guidelines and published requirements for more empirical performance data to improve teaching methods, we performed a national audit and studied skin lesion descriptions. To improve diagnostic and referral accuracy for patients, healthcare professionals must strive towards accuracy (a circle is not a square). We provide supportive evidence that increased speciality training improves this process and propose that greater focus is placed on such training early on during medical training, and maintained throughout clinical practice.
Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage
Garaizar, Pablo; Vadillo, Miguel A.
2014-01-01
In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh on the day of testing. StablCal and AMCO Clear (for the Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab.
The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
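The report's error metric, the signed (not absolute) difference between the measured value and the standard value, divided by the standard value, can be sketched as follows. The function name and the readings are illustrative only, not values from the report:

```python
def percent_error(measured, standard):
    """Signed percent error: (measured - standard) / standard * 100.

    The sign is kept (not absolute), matching the report's definition,
    so positive values mean the sensor reads high, negative values low.
    """
    return (measured - standard) / standard * 100.0

# Illustrative standard-vs-measured pairs (invented, not from the report):
readings = {40.0: 41.5, 100.0: 98.0, 400.0: 410.0}
errors = [percent_error(m, s) for s, m in readings.items()]
average_error = sum(errors) / len(errors)
```

Because the sign is retained, high and low readings partially cancel in the average, which is why the report also considers precision separately.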
ERIC Educational Resources Information Center
Townsend, James T.; Altieri, Nicholas
2012-01-01
Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…
Schlegel, Claudia; Bonvin, Raphael; Rethans, Jan Joost; van der Vleuten, Cees
2014-10-14
Introduction: High-stakes objective structured clinical examinations (OSCEs) with standardized patients (SPs) should offer the same conditions to all candidates throughout the exam. SP performance should therefore be as close to the original role script as possible during all encounters. In this study, we examined the impact of video in SP training on SPs' role accuracy, investigating how the use of different types of video during SP training improves the accuracy of SP portrayal. Methods: In a randomized, post-test, control-group design, three groups of 12 SPs each, trained with different types of video, were compared with a control group of 12 SPs trained without video. The three intervention groups used role-modeling video, performance-feedback video, or a combination of both. Each SP in each group had four student encounters. Two blinded faculty members rated the 192 video-recorded encounters, using a case-specific rating instrument to assess SPs' role accuracy. Results: SPs trained with video showed significantly (p < 0.001) better role accuracy than SPs trained without video over the four sequential portrayals. There was no difference between the three types of video training. Discussion: Use of video during SP training enhances the accuracy of SP portrayal compared with no video, regardless of the type of video intervention used.
Wu, Chunwei; Guan, Qingxiao; Wang, Shumei; Rong, Yueying
2017-01-01
Root of Panax ginseng C. A. Mey (Renshen in Chinese) is a famous Traditional Chinese Medicine. Ginsenosides are the major bioactive components. However, the shortage and high cost of some ginsenoside reference standards make it difficult to control the quality of P. ginseng. A method, single standard for determination of multicomponents (SSDMC), was developed for the simultaneous determination of nine ginsenosides in P. ginseng (ginsenoside Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, Rd). The analytes were separated on an Inertsil ODS-3 C18 column (250 mm × 4.6 mm, 5 μm) with gradient elution of acetonitrile and water. The flow rate was 1 mL/min and the detection wavelength was set at 203 nm. The feasibility and accuracy of SSDMC were checked by the external standard method, and various high-performance liquid chromatographic (HPLC) instruments and chromatographic conditions were investigated to verify its applicability. Using ginsenoside Rg1 as the internal reference substance, the contents of the other eight ginsenosides were calculated according to conversion factors (F) by HPLC. The method was validated for linearity (r² ≥ 0.9990), precision (relative standard deviation [RSD] ≤2.9%), accuracy (97.5%-100.8%, RSD ≤1.6%), repeatability, and stability. There was no significant difference between the SSDMC method and the external standard method. The new SSDMC method can be considered an ideal means of analyzing components for which reference standards are not readily available. Highlights: A method, single standard for determination of multicomponents (SSDMC), was established by high-performance liquid chromatography for the simultaneous determination of nine ginsenosides in Panax ginseng (ginsenoside Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, Rd). Various chromatographic conditions were investigated to verify the applicability of the conversion factors (Fs). The feasibility and accuracy of SSDMC were checked by the external standard method.
Abbreviations used: DRT: Different value of retention time; F: Conversion factor; HPLC: High-performance Liquid Chromatography; LOD: Limit of detection; LOQ: Limit of quantitation; PD: Percent difference; PPD: 20(S)-protopanaxadiol; PPT: 20(S)-protopanaxatriol; RSD: Relative standard deviation; SSDMC: Single Standard for Determination of Multicomponents; TCM: Traditional Chinese Medicine.
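The conversion-factor approach this abstract describes can be illustrated with a minimal sketch. It follows one common formulation of single-standard multi-component quantitation (response factor f = A/C, conversion factor F = f_ref/f_i); the peak areas, concentrations, and F value below are invented for illustration and are not from the paper:

```python
# Sketch of single-standard, multi-component quantitation (SSDMC).
# f = A / C is a response factor (peak area per unit concentration);
# F_i = f_ref / f_i is the conversion factor relating analyte i to the
# single internal reference substance (here, ginsenoside Rg1).
# All numeric values are illustrative, not from the paper.

def response_factor(area, conc):
    """Response factor of the one standard actually on hand."""
    return area / conc

def quantify(area_i, F_i, f_ref):
    """Concentration of analyte i from its peak area, its predetermined
    conversion factor, and the reference standard's response factor."""
    f_i = f_ref / F_i
    return area_i / f_i

f_ref = response_factor(area=1500.0, conc=10.0)  # Rg1 standard: f_ref = 150
F_Re = 1.2   # hypothetical predetermined conversion factor for ginsenoside Re
c_Re = quantify(area_i=625.0, F_i=F_Re, f_ref=f_ref)  # 625 / (150/1.2) = 5.0
```

The point of the method is that only the Rg1 standard is needed at analysis time; the F values are determined once, when all standards are available.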
Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara
2018-04-06
The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92·18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
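The two segmentation measures named in the abstract, pixel accuracy and Intersection over Union, can be sketched in plain Python; the tiny label grids are invented for illustration:

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    flat = list(zip(sum(pred, []), sum(truth, [])))
    return sum(p == t for p, t in flat) / len(flat)

def iou(pred, truth, cls):
    """Intersection over Union for one class over the whole image."""
    flat = list(zip(sum(pred, []), sum(truth, [])))
    inter = sum(p == cls and t == cls for p, t in flat)
    union = sum(p == cls or t == cls for p, t in flat)
    return inter / union if union else 0.0

# Invented 2x4 label grids: class 0 = background, class 1 = a food item.
truth = [[0, 0, 1, 1],
         [0, 0, 1, 1]]
pred  = [[0, 0, 1, 0],
         [0, 1, 1, 1]]
```

Here 6 of 8 pixels match (pixel accuracy 0.75), while class 1 has 3 overlapping pixels out of 5 in the union (IoU 0.6); IoU penalizes both missed and spurious pixels, which is why both measures are reported.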
Xu, Rende; Li, Chenguang; Qian, Juying; Ge, Junbo
2015-11-01
Invasive fractional flow reserve (FFR) is the gold standard for the determination of physiologic stenosis severity and the need for revascularization. FFR computed from standard acquired coronary computed tomographic angiography datasets (FFRCT) is an emerging technology which allows calculation of FFR using resting image data from coronary computed tomographic angiography (CCTA). However, the diagnostic accuracy of FFRCT in the evaluation of lesion-specific myocardial ischemia remains to be confirmed, especially in patients with intermediate coronary stenosis. We performed an integrated analysis of data from 3 prospective, international, and multicenter trials, which assessed the diagnostic performance of FFRCT using invasive FFR as a reference standard. Three studies evaluating 609 patients and 1050 vessels were included. The total calculated sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of FFRCT were 82.8%, 77.7%, 60.8%, 91.6%, and 79.2%, respectively, for the per-vessel analysis, and 89.4%, 70.5%, 69.7%, 89.7%, and 78.7%, respectively, for the per-patient analysis. Compared with CCTA alone, FFRCT demonstrated significantly improved accuracy (P < 0.001) in detecting lesion-specific ischemia. In patients with intermediate coronary stenosis, FFRCT remained both highly sensitive and specific with respect to the diagnosis of ischemia. In conclusion, FFRCT appears to be a reliable noninvasive alternative to invasive FFR, as it demonstrates high accuracy in the determination of anatomy and lesion-specific ischemia, which justifies the performance of additional randomized controlled trials to evaluate both the clinical benefits and the cost-effectiveness of FFRCT-guided coronary revascularization.
Karch, Annika; Koch, Armin; Zapf, Antonia; Zerr, Inga; Karch, André
2016-10-01
To investigate how choice of gold standard biases estimates of sensitivity and specificity in studies reassessing the diagnostic accuracy of biomarkers that are already part of a lifetime composite gold standard (CGS). We performed a simulation study based on the real-life example of the biomarker "protein 14-3-3" used for diagnosing Creutzfeldt-Jakob disease. Three different types of gold standard were compared: perfect gold standard "autopsy" (available in a small fraction only; prone to partial verification bias), lifetime CGS (including the biomarker under investigation; prone to incorporation bias), and "best available" gold standard (autopsy if available, otherwise CGS). Sensitivity was unbiased when comparing 14-3-3 with autopsy but overestimated when using CGS or "best available" gold standard. Specificity of 14-3-3 was underestimated in scenarios comparing 14-3-3 with autopsy (up to 24%). In contrast, overestimation (up to 20%) was observed for specificity compared with CGS; this could be reduced to 0-10% when using the "best available" gold standard. Choice of gold standard affects considerably estimates of diagnostic accuracy. Using the "best available" gold standard (autopsy where available, otherwise CGS) leads to valid estimates of specificity, whereas sensitivity is estimated best when tested against autopsy alone. Copyright © 2016 Elsevier Inc. All rights reserved.
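The incorporation bias the authors describe can be demonstrated with a toy simulation: when the composite gold standard requires the biomarker itself to be positive, every test-positive case among reference-positives counts as a true positive, so estimated sensitivity is driven to 100% regardless of the true value. All parameters below are invented, not taken from the study:

```python
import random

random.seed(1)

# Toy incorporation-bias simulation. A biomarker with true sensitivity
# 0.85 / specificity 0.95 is re-evaluated against a composite gold
# standard (CGS) that requires the biomarker to be positive, mimicking
# clinical criteria that include the test under study.
N, prevalence = 100_000, 0.3
sens, spec = 0.85, 0.95
clin_sens, clin_spec = 0.90, 0.90   # an independent clinical criterion

tp = fn = 0
for _ in range(N):
    diseased = random.random() < prevalence
    marker = random.random() < (sens if diseased else 1 - spec)
    clinic = random.random() < (clin_sens if diseased else 1 - clin_spec)
    cgs = marker and clinic          # biomarker incorporated into the CGS
    if cgs:
        tp += marker                 # CGS-positive implies marker-positive
        fn += not marker             # ...so this never increments

estimated_sens = tp / (tp + fn)      # 1.0 by construction: pure bias
```

The inflation here is extreme because the biomarker is a required component of the composite; with an "or"-type composite the distortion is smaller but still present.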
42 CFR 493.1253 - Standard: Establishment and verification of performance specifications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... establish performance specifications for any test system used by the laboratory before April 24, 2003. (b)(1... approved test system must do the following before reporting patient test results: (i) Demonstrate that it... following performance characteristics: (A) Accuracy. (B) Precision. (C) Reportable range of test results for...
42 CFR 493.1253 - Standard: Establishment and verification of performance specifications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... establish performance specifications for any test system used by the laboratory before April 24, 2003. (b)(1... approved test system must do the following before reporting patient test results: (i) Demonstrate that it... following performance characteristics: (A) Accuracy. (B) Precision. (C) Reportable range of test results for...
42 CFR 493.1253 - Standard: Establishment and verification of performance specifications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... establish performance specifications for any test system used by the laboratory before April 24, 2003. (b)(1... approved test system must do the following before reporting patient test results: (i) Demonstrate that it... following performance characteristics: (A) Accuracy. (B) Precision. (C) Reportable range of test results for...
42 CFR 493.1253 - Standard: Establishment and verification of performance specifications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... establish performance specifications for any test system used by the laboratory before April 24, 2003. (b)(1... approved test system must do the following before reporting patient test results: (i) Demonstrate that it... following performance characteristics: (A) Accuracy. (B) Precision. (C) Reportable range of test results for...
Claessen, Michiel H G; van der Ham, Ineke J M; van Zandvoort, Martine J E
2015-01-01
The tablet computer initiates an important step toward computerized administration of neuropsychological tests. Because of its lack of standardization, the Corsi Block-Tapping Task could benefit from advantages inherent to computerization. This task, which requires reproduction of a sequence of movements by tapping blocks as demonstrated by an examiner, is widely used as a representative of visuospatial attention and working memory. The aim was to validate a computerized version of the Corsi Task (e-Corsi) by comparing recall accuracy to that on the standard task. Forty university students (mean age = 22.9 years, SD = 2.7 years; 20 female) performed the standard Corsi Task and the e-Corsi on an iPad 3. Results showed higher accuracy in forward reproduction on the standard Corsi compared with the e-Corsi, whereas backward performance was comparable. These divergent performance patterns on the 2 versions (small-to-medium effect sizes) are explained as a result of motor priming and interference effects. This finding implies that computerization has serious consequences for the cognitive concepts that the Corsi Task is assumed to assess. Hence, whereas the e-Corsi was shown to be useful with respect to administration and registration, these findings also stress the need for reconsideration of the underlying theoretical concepts of this task.
NASA Astrophysics Data System (ADS)
DSuryadi; Delyuzar; Soekimin
2018-03-01
Indonesia has the second-highest TB (tuberculosis) burden in the world. Early diagnosis and correct treatment can improve TB control and reduce complications. The PCR test is the gold standard; however, it is quite expensive for routine diagnosis, so an accurate and cheaper diagnostic method such as fine needle aspiration biopsy is needed. The study aims to determine the accuracy of fine needle aspiration biopsy cytology in the diagnosis of tuberculous lymphadenitis. A cross-sectional analytic study was conducted on samples from patients suspected of tuberculous lymphadenitis. The fine needle aspiration biopsy (FNAB) test was performed and confirmed by the PCR test, and the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the two methods were compared. Compared with the gold standard, the FNAB test showed sensitivity of 92.50%, specificity of 96.49%, accuracy of 94.85%, positive predictive value of 94.87%, and negative predictive value of 94.83%. We conclude that fine needle aspiration biopsy is recommended as a cheaper, accurate diagnostic test for tuberculous lymphadenitis.
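As a rough illustration, the standard 2x2 test-performance measures reported in this abstract can be computed as follows; the counts below are inferred to be consistent with the reported percentages (e.g. 40 PCR-positive and 57 PCR-negative cases) and are not taken from the paper:

```python
def diagnostics(tp, fp, fn, tn):
    """Standard 2x2 test-performance measures against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# A 2x2 table consistent with the abstract's percentages (inferred for
# illustration only): 37/40 PCR-positives detected, 55/57 negatives correct.
m = diagnostics(tp=37, fp=2, fn=3, tn=55)
```

With these counts the five measures reproduce the abstract's 92.50%, 96.49%, 94.87%, 94.83%, and 94.85% to two decimal places.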
A traceability procedure has been established which allows specialty gas producers to prepare gaseous pollutant Certified Reference Materials (CRMs). The accuracy, stability and homogeneity of the CRMs approach those of NBS Standard Reference Materials (SRMs). Part of this proced...
A traceability procedure has been established which allows specialty gas producers to prepare gaseous pollutant Certified Reference Materials (CRM's). The accuracy, stability and homogeneity of the CRM's approach those of NBS Standard Reference Materials (SRM's). As of October 19...
Bailey, Timothy S; Klaff, Leslie J; Wallace, Jane F; Greene, Carmine; Pardo, Scott; Harrison, Bern; Simmons, David A
2016-07-01
As blood glucose monitoring system (BGMS) accuracy is based on comparison of BGMS and laboratory reference glucose analyzer results, reference instrument accuracy is important to discriminate small differences between BGMS and reference glucose analyzer results. Here, we demonstrate the important role of reference glucose analyzer accuracy in BGMS accuracy evaluations. Two clinical studies assessed the performance of a new BGMS, using different reference instrument procedures. BGMS and YSI analyzer results were compared for fingertip blood that was obtained by untrained subjects' self-testing and study staff testing, respectively. YSI analyzer accuracy was monitored using traceable serum controls. In study 1 (N = 136), 94.1% of BGMS results were within International Organization for Standardization (ISO) 15197:2013 accuracy criteria; YSI analyzer serum control results showed a negative bias (-0.64% to -2.48%) at the first site and a positive bias (3.36% to 6.91%) at the other site. In study 2 (N = 329), 97.8% of BGMS results were within accuracy criteria; serum controls showed minimal bias (<0.92%) at both sites. These findings suggest that the ability to demonstrate that a BGMS meets accuracy guidelines is influenced by reference instrument accuracy. © 2016 Diabetes Technology Society.
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
NASA Astrophysics Data System (ADS)
Frye, G. E.; Hauser, C. K.; Townsend, G.; Sellers, E. W.
2011-04-01
Since the introduction of the P300 brain-computer interface (BCI) speller by Farwell and Donchin in 1988, the speed and accuracy of the system have been significantly improved. Larger electrode montages and various signal processing techniques are responsible for most of the improvement in performance. New presentation paradigms have also led to improvements in bit rate and accuracy (e.g. Townsend et al (2010 Clin. Neurophysiol. 121 1109-20)). In particular, the checkerboard paradigm for online P300 BCI-based spelling performs well, has helped document what makes for a successful paradigm, and is a good platform for further experimentation. The current paper further examines the checkerboard paradigm by suppressing items which surround the target from flashing during calibration (i.e. the suppression (SUP) condition). In the online feedback mode the standard checkerboard paradigm is used, with a stepwise linear discriminant classifier derived from the suppression condition and one derived from the standard checkerboard condition, counter-balanced. The results of this research demonstrate that using suppression during calibration produces significantly more character selections/min (6.46, time between selections included) than the standard checkerboard condition (5.55), and significantly fewer target flashes are needed per selection in the SUP condition (5.28) as compared to the standard checkerboard (RCP) condition (6.17). Moreover, accuracy in the SUP and RCP conditions remained equivalent (~90%). Mean theoretical bit rate was 53.62 bits/min in the suppression condition and 46.36 bits/min in the standard checkerboard condition (ns). Waveform morphology also showed significant differences in amplitude and latency.
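Theoretical bit rate in P300 speller studies is often computed per selection with the Wolpaw information-transfer formula; a hedged sketch follows. For a 36-target matrix at ~90% accuracy it gives about 4.19 bits per selection; the abstract's 53.62 bits/min figure presumably uses a different timing accounting, so this sketch shows only the formula's shape, not the paper's method:

```python
from math import log2

def bits_per_selection(n, p):
    """Wolpaw information-transfer rate per selection for an n-target
    speller with selection accuracy p (0 < p <= 1)."""
    if p == 1.0:
        return log2(n)
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# A 6x6 speller matrix (36 targets) at ~90% accuracy:
b = bits_per_selection(36, 0.90)   # ~4.19 bits per selection
rate = b * 6.46                    # selections/min -> bits/min
```

The formula discounts raw log2(n) by the entropy lost to selection errors, which is why accuracy and selections per minute trade off in reported bit rates.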
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which achieves high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat conduction algorithm, while diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way to perform highly efficient information filtering.
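A minimal sketch of the standard heat-conduction step that the abstract's biased variant builds on: objects the target user has collected start at temperature 1, and each step sets a node's temperature to the average of its neighbours'. The bias (lowering the temperatures assigned to small-degree objects) is noted only in a comment, and the toy user-object data are invented:

```python
# Standard heat conduction on a user-object bipartite network.
# (The biased variant in the abstract additionally lowers the initial
# temperatures of small-degree objects; that step is omitted here.)
collections = {            # toy data: user -> set of collected objects
    "u1": {"A", "B"},
    "u2": {"B", "C"},
}

def heat_conduction(target, collections):
    objects = set().union(*collections.values())
    temp = {o: 1.0 if o in collections[target] else 0.0 for o in objects}
    # object -> user step: each user averages its collected objects
    user_t = {u: sum(temp[o] for o in objs) / len(objs)
              for u, objs in collections.items()}
    # user -> object step: each object averages its connected users
    scores = {}
    for o in objects:
        users = [u for u, objs in collections.items() if o in objs]
        scores[o] = sum(user_t[u] for u in users) / len(users)
    return scores

s = heat_conduction("u1", collections)  # uncollected object C gets a score
```

For user u1 this yields scores A = 1.0, B = 0.75, C = 0.5, so C is the recommendation; the averaging (rather than mass-conserving spreading) is what gives heat conduction its diversity-favoring character.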
Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles
NASA Astrophysics Data System (ADS)
Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey
2013-09-01
Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability.
The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.
Oates, R P; Mcmanus, Michelle; Subbiah, Seenivasan; Klein, David M; Kobelski, Robert
2017-07-14
Internal standards are essential in electrospray ionization liquid chromatography-mass spectrometry (ESI-LC-MS) to correct for systematic error associated with ionization suppression and/or enhancement. A wide array of instrument setups and interfaces has created difficulty in comparing the quantitation of absolute analyte response across laboratories. This communication demonstrates the use of primary standards as operational qualification standards for LC-MS instruments and their comparison with commonly accepted internal standards. In monitoring the performance of internal standards for perfluorinated compounds, potassium hydrogen phthalate (KHP) presented lower inter-day variability in instrument response than a commonly accepted deuterated perfluorinated internal standard (d3-PFOS), with percent relative standard deviations less than or equal to 6%. The inter-day precision of KHP was greater than d3-PFOS over a 28-day monitoring of perfluorooctanesulfonic acid (PFOS), across concentrations ranging from 0 to 100μg/L. The primary standard trometamol (Trizma) performed as well as known internal standards simeton and tris (2-chloroisopropyl) phosphate (TCPP), with intra-day precision of Trizma response as low as 7% RSD on day 28. The inter-day precision of Trizma response was found to be greater than simeton and TCPP, across concentrations of neonicotinoids ranging from 1 to 100μg/L. This study explores the potential of primary standards to be incorporated into LC-MS/MS methodology to improve the quantitative accuracy in environmental contaminant analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Sousa, Thiago Oliveira; Haiter-Neto, Francisco; Nascimento, Eduarda Helena Leandro; Peroni, Leonardo Vieira; Freitas, Deborah Queiroz; Hassan, Bassam
2017-07-01
The aim of this study was to assess the diagnostic accuracy of periapical radiography (PR) and cone-beam computed tomographic (CBCT) imaging in the detection of the root canal configuration (RCC) of human premolars. PR and CBCT imaging of 114 extracted human premolars were evaluated by 2 oral radiologists. RCC was recorded according to Vertucci's classification. Micro-computed tomographic imaging served as the gold standard to determine RCC. Accuracy, sensitivity, specificity, and predictive values were calculated. The Friedman test compared both PR and CBCT imaging with the gold standard. CBCT imaging showed higher values for all diagnostic tests compared with PR. Accuracy was 0.55 and 0.89 for PR and CBCT imaging, respectively. There was no difference between CBCT imaging and the gold standard, whereas PR differed from both CBCT and micro-computed tomographic imaging (P < .0001). CBCT imaging was more accurate than PR for evaluating different types of RCC individually. Canal configuration types III, VII, and "other" were poorly identified on CBCT imaging with a detection accuracy of 50%, 0%, and 43%, respectively. With PR, all canal configurations except type I were poorly visible. PR presented low performance in the detection of RCC in premolars, whereas CBCT imaging showed no difference compared with the gold standard. Canals with complex configurations were less identifiable using both imaging methods, especially PR. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan
2016-10-01
This paper proposes two weight functions based on principal component analysis (PCA) to preserve more colorimetric information in the spectral data compression process. One weight function consists of the CIE XYZ color-matching functions, representing the characteristics of the human visual system, while the other combines the CIE XYZ color-matching functions with the relative spectral power distribution of the CIE standard illuminant D65. The improvement obtained from the two proposed methods was tested by compressing and reconstructing the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips, as well as six multispectral images. Performance was evaluated by the mean color differences under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D65 and A. The mean root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the two proposed methods significantly outperform standard PCA and two other weighted PCA variants in colorimetric reconstruction accuracy, with only a very slight degradation in spectral reconstruction accuracy. In addition, the weight function that includes the CIE standard illuminant D65 improves colorimetric reconstruction accuracy compared to the weight function without it.
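The weighting idea above can be sketched in a few lines: scale each reflectance band by a visual-importance weight before PCA, then undo the weighting after reconstruction. The color-matching functions and spectra below are synthetic placeholders, not the CIE tables or Munsell data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 31 wavelengths (400-700 nm, 10 nm steps).
wl = np.arange(400, 710, 10)
n_bands = wl.size

# Smooth synthetic reflectance spectra (placeholders for color chips).
spectra = np.clip(
    0.5 + 0.4 * np.sin(wl[None, :] / 50.0 + rng.uniform(0, 6.28, (200, 1))), 0, 1
)

# Placeholder weight: sum of Gaussian "color-matching functions" times a flat
# "illuminant" -- real use would take the CIE xyz-bar and D65 tables.
cmf = sum(np.exp(-0.5 * ((wl - c) / 40.0) ** 2) for c in (450, 550, 600))
illum = np.ones(n_bands)
w = cmf * illum

def weighted_pca_codec(X, w, k=3):
    """Compress spectra X (n x bands) to k coefficients and reconstruct."""
    Xw = X * w                      # emphasize visually important bands
    mean = Xw.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
    coeff = (Xw - mean) @ Vt[:k].T  # k-dimensional representation per spectrum
    recon_w = coeff @ Vt[:k] + mean
    return recon_w / w              # undo the weighting

recon = weighted_pca_codec(spectra, w, k=3)
rmse = np.sqrt(np.mean((spectra - recon) ** 2))
print(f"spectral RMSE with 3 weighted components: {rmse:.4f}")
```

Reconstruction error in the weighted metric trades spectral fidelity at visually unimportant bands for colorimetric fidelity, which is the effect the paper reports.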
ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.; Eley, Charles
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.
Standardized assessment of infrared thermographic fever screening system performance
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-03-01
Thermal modalities represent the only currently viable mass fever screening approach for outbreaks of infectious diseases such as Ebola and SARS. Non-contact infrared thermometers (NCITs) and infrared thermographs (IRTs) have previously been used for mass fever screening in transportation hubs such as airports to reduce the spread of disease. While NCITs remain the more popular choice for fever screening in the field and at fixed locations, there has been increasing evidence in the literature that IRTs can provide greater accuracy in estimating core body temperature if appropriate measurement practices are applied, including the use of technically suitable thermographs. Therefore, the purpose of this study was to develop a battery of evaluation test methods for standardized, objective, and quantitative assessment of thermograph performance characteristics critical to assessing suitability for clinical use. These factors include stability, drift, uniformity, minimum resolvable temperature difference, and accuracy. Two commercial IRT models were characterized. An external temperature reference source with high temperature accuracy was utilized as part of the screening thermograph. Results showed that both IRTs are relatively accurate and stable (<1% error of reading, with stability of ±0.05 °C). Overall, the results of this study may facilitate development of standardized consensus test methods to enable consistent and accurate use of IRTs for fever screening.
Longcroft-Wheaton, G; Brown, J; Cowlishaw, D; Higgins, B; Bhandari, P
2012-10-01
The resolution of endoscopes has increased in recent years. Modern Fujinon colonoscopes have a charge-coupled device (CCD) pixel density of 650,000 pixels, compared with the 410,000-pixel CCD in standard-definition scopes. Acquiring high-definition scopes represents a significant capital investment, and their clinical value remains uncertain. The aim of the current study was to investigate the impact of high-definition endoscopes on the in vivo histology prediction of colonic polyps. Colonoscopy procedures were performed using Fujinon colonoscopes and the EPX-4400 processor. Procedures were randomized to be performed using either a standard-definition EC-530 colonoscope or high-definition EC-530 and EC-590 colonoscopes. Polyps of <10 mm were assessed using both white light imaging (WLI) and flexible spectral imaging color enhancement (FICE), and the predicted diagnosis was recorded. Polyps were removed and sent for histological analysis by a pathologist who was blinded to the endoscopic diagnosis. The predicted diagnosis was compared with the histology to calculate the accuracy, sensitivity, and specificity of in vivo assessment using either standard- or high-definition scopes. A total of 293 polyps of <10 mm were examined: 150 polyps using the standard-definition colonoscope and 143 polyps using high-definition colonoscopes. There was no difference in sensitivity, specificity, or accuracy between the two scopes when WLI was used (standard vs. high: accuracy 70% [95% CI 62-77] vs. 73% [95% CI 65-80]; P=0.61). When FICE was used, high-definition colonoscopes showed a sensitivity of 93% compared with 83% for standard-definition colonoscopes (P=0.048); specificity was 81% and 82%, respectively. There was no difference between high- and standard-definition colonoscopes when white light was used, but FICE significantly improved the in vivo diagnosis of small polyps when high-definition scopes were used compared with standard definition.
The cooking task: making a meal of executive functions
Doherty, T. A.; Barker, L. A.; Denniss, R.; Jalil, A.; Beer, M. D.
2015-01-01
Current standardized neuropsychological tests may fail to accurately capture real-world executive deficits. We developed a computer-based Cooking Task (CT) assessment of executive functions and trialed the measure with a normative group before use with a head-injured population. Forty-six participants completed the computerized CT and subtests from standardized neuropsychological tasks, including the Tower and Sorting Tests of executive function from the Delis-Kaplan Executive Function System (D-KEFS) and the Cambridge prospective memory test (CAMPROMPT), in order to examine whether standardized executive function tasks predicted performance on measurement indices from the CT. Findings showed that verbal comprehension, rule detection, and prospective memory contributed to measures of prospective planning accuracy and strategy implementation on the CT. Results also showed that the functions necessary for cooking efficacy differ as an effect of task demands (difficulty levels). Performance on rule detection, strategy implementation, and flexible thinking executive function measures contributed to accuracy on the CT. These findings raise questions about the functions captured by present standardized tasks, particularly at varying levels of difficulty and during dual-task performance. Our preliminary findings also indicate that CT measures can effectively distinguish between executive function and Full Scale IQ abilities. Results of the present study indicate that the CT shows promise as an ecologically valid measure of executive function for future use with a head-injured population and indexes selective executive functions captured by standardized tests. PMID:25717294
Accuracy Performance Evaluation of Beidou Navigation Satellite System
NASA Astrophysics Data System (ADS)
Wang, W.; Hu, Y. N.
2017-03-01
Accuracy is one of the key elements of the regional Beidou Navigation Satellite System (BDS) performance standard. In this paper, we review the definition, specification, and evaluation standard of BDS accuracy. The current accuracy of the regional BDS is analyzed through ground measurements and compared with GPS in terms of dilution of precision (DOP), signal-in-space user range error (SIS URE), and positioning accuracy. The positioning DOP (PDOP) map of BDS around the Chinese mainland is compared with that of GPS. The GPS PDOP is between 1.0 and 2.0 and does not vary with user latitude and longitude, while the BDS PDOP varies between 1.5 and 5.0, increasing as the user latitude increases and as the user longitude moves away from 118°E. The accuracy of the BDS broadcast orbits is assessed by taking the precise orbits from the International GNSS Service (IGS) as the reference and by computing satellite laser ranging (SLR) residuals. The radial errors of the broadcast orbits of the BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are at the 0.5 m level, larger than the 0.2 m level of GPS satellites. The SLR residuals of the geosynchronous orbit (GEO) satellites are 65.0 cm, larger than those of the IGSO and MEO satellites, which are at the 50.0 cm level. The accuracy of the BDS broadcast clock offset parameters is computed by taking clock measurements from two-way satellite radio time-frequency transfer as the reference. Affected by the age of the broadcast clock parameters, the error of the broadcast clock offset parameters of the MEO satellites is the largest, at the 0.80 m level. Finally, measurements from multi-GNSS experiment (MGEX) receivers are used for positioning accuracy assessment of BDS and GPS. It is concluded that the positioning accuracy of the regional BDS is better than 10 m in both the horizontal and vertical components, and that the combined positioning accuracy of both systems is better than that of either system alone.
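The PDOP values compared above come from satellite-receiver geometry alone. A minimal sketch of the standard computation follows: build the geometry matrix from unit line-of-sight vectors, invert its normal matrix, and take the trace of the position block. The four-satellite constellation below is a toy example, not real BDS or GPS ephemeris.

```python
import numpy as np

def pdop(sat_positions, user_position):
    """Position dilution of precision from satellite geometry."""
    rows = []
    for s in np.asarray(sat_positions, float):
        los = s - user_position
        u = los / np.linalg.norm(los)            # unit line-of-sight vector
        rows.append([-u[0], -u[1], -u[2], 1.0])  # geometry-matrix row (x, y, z, clock)
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)                   # cofactor matrix of the fix
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

# Toy example: user at the origin, four satellites ~20,000 km away.
r = 20_000e3
sats = [(r, 0, r), (-r, 0, r), (0, r, r), (0, -r, 0.5 * r)]
print(f"PDOP = {pdop(sats, np.zeros(3)):.2f}")
```

A wider spread of line-of-sight directions shrinks the cofactor diagonal, which is why PDOP degrades at high latitudes where the regional BDS geometry thins out.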
Bailey, Timothy S.; Klaff, Leslie J.; Wallace, Jane F.; Greene, Carmine; Pardo, Scott; Harrison, Bern; Simmons, David A.
2016-01-01
Background: As blood glucose monitoring system (BGMS) accuracy is based on comparison of BGMS and laboratory reference glucose analyzer results, reference instrument accuracy is important to discriminate small differences between BGMS and reference glucose analyzer results. Here, we demonstrate the important role of reference glucose analyzer accuracy in BGMS accuracy evaluations. Methods: Two clinical studies assessed the performance of a new BGMS, using different reference instrument procedures. BGMS and YSI analyzer results were compared for fingertip blood that was obtained by untrained subjects’ self-testing and study staff testing, respectively. YSI analyzer accuracy was monitored using traceable serum controls. Results: In study 1 (N = 136), 94.1% of BGMS results were within International Organization for Standardization (ISO) 15197:2013 accuracy criteria; YSI analyzer serum control results showed a negative bias (−0.64% to −2.48%) at the first site and a positive bias (3.36% to 6.91%) at the other site. In study 2 (N = 329), 97.8% of BGMS results were within accuracy criteria; serum controls showed minimal bias (<0.92%) at both sites. Conclusions: These findings suggest that the ability to demonstrate that a BGMS meets accuracy guidelines is influenced by reference instrument accuracy. PMID:26902794
Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen
2014-06-23
We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
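The reported sensitivity, specificity, PPV, and NPV all derive from a 2x2 table against the reference standard. The sketch below uses illustrative counts chosen to be roughly consistent with the reported 0.9% prevalence (69 RA cases among 7500 patients) and ~78% sensitivity; they are not the study's actual cell counts.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts (assumed, not from the paper):
tp, fn = 54, 15          # 54/69 cases detected, ~78% sensitivity
fp = 15                  # yields PPV 54/69, ~78%
tn = 7500 - 69 - fp      # remaining non-cases correctly ruled out

m = diagnostic_metrics(tp, fp, fn, tn)
for k, v in m.items():
    print(f"{k}: {v:.3f}")
```

Note how specificity and NPV both round to 100% despite 15 false positives: with a 0.9% prevalence the denominators are dominated by true negatives, which is why PPV, not specificity, is the discriminating statistic here.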
75 FR 62401 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-08
... collection; Title of Information Collection: Clinical Laboratory Improvement Amendment (CLIA) of 1988 and... laboratories that perform testing on human beings to meet performance requirements (quality standards) in order... functions; (2) the accuracy of the estimated burden; (3) ways to enhance the quality, utility, and clarity...
Accuracy and coverage of the modernized Polish Maritime differential GPS system
NASA Astrophysics Data System (ADS)
Specht, Cezary
2011-01-01
The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange correction factors and ancillary information broadcast over selected marine reference stations. The DGPS service position and integrity information satisfy the requirements of coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized in 2009 to meet the requirements set out in the IMO resolution for a future GNSS, while preserving backward signal compatibility of user equipment. Having finalized installation of the new L1/L2 reference equipment, performance tests were carried out. The paper presents the results of coverage modeling and an accuracy measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results allowed us to verify the coverage area of the differential signal from the reference station and to calculate the repeatable and absolute accuracy of the system after the technical modernization. The obtained field-strength coverage and position statistics (215,000 fixes) were compared with past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, its signals, signal processing, and receiver techniques are comparable to those known from DGPS. Because all satellite differential GNSS systems use the same transmission standard (RTCM) and maritime DGPS radiobeacons are standardized in all radio-communication aspects (frequency, bit rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. The coverage of the reference station was calculated using dedicated software, which computes the signal strength level from transmitter parameters or from a field signal-strength measurement campaign carried out at representative points. The software works on a vector map of the Baltic Sea and accounts for ground electrical parameters and models of the atmospheric noise level in the transmission band.
Information filtering via biased heat conduction
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010), doi:10.1073/pnas.1000488107], which offers high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat conduction algorithm, while diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way of performing highly efficient information filtering.
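The two-step propagation behind heat conduction can be sketched as follows. The degree-bias exponent here is a simple stand-in for the paper's temperature modification, not its exact formulation, and the rating matrix is a toy example.

```python
import numpy as np

def biased_heat_conduction(A, target_user, lam=0.8):
    """Rank objects for one user on a user-object bipartite graph.

    A[u, o] = 1 if user u collected object o. lam = 1.0 is plain heat
    conduction; lam < 1 damps the advantage of small-degree objects
    (an assumed, simplified form of the paper's degree-dependent bias).
    """
    k_user = A.sum(axis=1)            # user degrees
    k_obj = A.sum(axis=0)             # object degrees
    f = A[target_user].astype(float)  # unit "temperature" on collected objects
    # Step 1: each user averages the temperatures of its collected objects.
    t_user = (A @ f) / np.maximum(k_user, 1)
    # Step 2: each object averages user temperatures, with degree-biased weight.
    score = (A.T @ t_user) / np.maximum(k_obj, 1) ** lam
    score[f > 0] = -np.inf            # never re-recommend collected items
    return score

# Toy data: 4 users x 5 objects.
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]])
print(np.argsort(biased_heat_conduction(A, target_user=0))[::-1][:2])
```

Because step 2 averages rather than sums, low-degree objects can reach high temperatures (the diversity effect); the `lam` exponent tempers exactly that tendency, trading some diversity back for accuracy.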
Laboratory and field based evaluation of chromatography ...
The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been primarily based on field comparison to other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA’s use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems have many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument that is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat
[Precision and accuracy of "a pocket" pulse oximeter in Mexico City].
Torre-Bouscoulet, Luis; Chávez-Plascencia, Elizabeth; Vázquez-García, Juan Carlos; Pérez-Padilla, Rogelio
2006-01-01
Pulse oximeters are frequently used in clinical practice, and we must know their precision and accuracy. The objective was to evaluate the precision and accuracy of a "pocket" pulse oximeter at an altitude of 2,240 m above sea level. We tested miniature pulse oximeters (Onyx 9500, Nonin Finger Pulse Oximeter) in 96 patients sent to the pulmonary laboratory for an arterial blood sample. Patients were tested with 5 pulse oximeters, one placed on each finger of the hand opposite to that used for the arterial puncture. The gold standard was the oxygen saturation of the arterial blood sample. Blood samples had SaO2 of 87.2 ± 11.0% (between 42.2 and 97.9%). Pulse oximeters had a mean error of 0.28 ± 3.1%. SaO2 = (1.204 × SpO2) − 17.45966 (r = 0.92, p < 0.0001). The intraclass correlation coefficient between each of the five pulse oximeters and the arterial blood standard ranged between 0.87 and 0.99. HbCO (2.4 ± 0.6) did not affect the accuracy. The miniature Nonin oximeter is precise and accurate at 2,240 m of altitude. The observed levels of HbCO did not affect the performance of the equipment. The oximeter's good performance, small size, and low cost enhance its clinical usefulness.
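The reported bias (mean error 0.28 ± 3.1%) and calibration line are straightforward to compute from paired readings. A sketch with synthetic SpO2/SaO2 pairs standing in for the 96 arterial samples (the simulated bias and spread are assumptions chosen to resemble the reported figures):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic paired readings (stand-ins for the study's arterial samples):
sao2 = rng.uniform(45, 98, 96)                  # reference arterial SaO2 (%)
spo2 = sao2 + rng.normal(0.3, 3.0, sao2.size)   # oximeter with a small bias

# Bland-Altman-style bias and precision.
err = spo2 - sao2
print(f"bias = {err.mean():.2f}%, precision (SD) = {err.std(ddof=1):.2f}%")

# Least-squares calibration line SaO2 = a * SpO2 + b.
a, b = np.polyfit(spo2, sao2, 1)
print(f"SaO2 = {a:.3f} * SpO2 + {b:.2f}")
```

The mean of the signed errors estimates systematic bias (accuracy), while their standard deviation estimates precision, matching the "0.28 ± 3.1%" reporting convention used in the abstract.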
Freckmann, Guido; Baumstark, Annette; Schmid, Christina; Pleus, Stefan; Link, Manuela; Haug, Cornelia
2014-02-01
Systems for self-monitoring of blood glucose (SMBG) have to provide accurate and reproducible blood glucose (BG) values in order to ensure adequate therapeutic decisions by people with diabetes. Twelve SMBG systems were compared in a standardized manner under controlled laboratory conditions: nine systems were available on the German market and were purchased from a local pharmacy, and three systems were obtained from the manufacturer (two were available on the U.S. market, and one had not yet been introduced to the German market). System accuracy was evaluated following DIN EN ISO (International Organization for Standardization) 15197:2003. In addition, measurement reproducibility was assessed following a modified TNO (Netherlands Organization for Applied Scientific Research) procedure. Comparison measurements were performed with either the glucose oxidase method (YSI 2300 STAT Plus™ glucose analyzer; YSI Life Sciences, Yellow Springs, OH) or the hexokinase method (cobas® c111; Roche Diagnostics GmbH, Mannheim, Germany), according to the manufacturer's measurement procedure. The 12 evaluated systems showed between 71.5% and 100% of measurement results within the required system accuracy limits. Ten systems fulfilled the minimum accuracy requirements specified by DIN EN ISO 15197:2003 with the evaluated test strip lot. In addition, the accuracy limits of the recently published revision, ISO 15197:2013, were applied, and between 54.5% and 100% of the systems' measurement results fell within the required accuracy limits. Regarding measurement reproducibility, each of the 12 tested systems met the applied performance criteria. In summary, 83% of the systems fulfilled the minimum system accuracy requirements of DIN EN ISO 15197:2003 with the evaluated test strip lot. Each of the tested systems showed acceptable measurement reproducibility. In order to ensure sufficient measurement quality of each distributed test strip lot, regular evaluations are required.
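The ISO 15197 pass rates quoted throughout these records reduce to counting meter readings inside glucose-dependent error bands. A sketch using the commonly cited limits (verify against the normative text of the standard before relying on them):

```python
def within_iso_limits(reference, measured, edition=2013):
    """Fraction of meter readings within ISO 15197 accuracy limits.

    Commonly cited limits (mg/dL): 2003 edition, ±15 mg/dL below
    75 mg/dL, otherwise ±20%; 2013 edition, ±15 mg/dL below 100 mg/dL,
    otherwise ±15%. Each edition requires ≥95% of results to comply.
    """
    cutoff, pct = (75, 0.20) if edition == 2003 else (100, 0.15)
    ok = 0
    for ref, meas in zip(reference, measured):
        limit = 15 if ref < cutoff else pct * ref
        ok += abs(meas - ref) <= limit
    return ok / len(reference)

# Toy paired readings (reference analyzer vs. meter), mg/dL:
ref = [50, 70, 90, 120, 180, 250]
meas = [60, 77, 107, 135, 210, 260]
print(f"2003: {within_iso_limits(ref, meas, 2003):.2f}")  # → 1.00
print(f"2013: {within_iso_limits(ref, meas, 2013):.2f}")  # → 0.67
```

The toy data illustrate why systems that pass the 2003 edition can fail the 2013 revision: the same readings fall inside the ±20% band but outside the tightened ±15% band.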
Suh, Young Joo; Kim, Young Jin; Kim, Jin Young; Chang, Suyon; Im, Dong Jin; Hong, Yoo Jin; Choi, Byoung Wook
2017-11-01
We aimed to determine the effect of a whole-heart motion-correction algorithm (new-generation snapshot freeze, NG SSF) on the image quality of cardiac computed tomography (CT) images in patients with mechanical valve prostheses, compared to standard images without motion correction, and to compare the diagnostic accuracy of NG SSF and standard CT image sets for the detection of prosthetic valve abnormalities. A total of 20 patients with 32 mechanical valves who underwent wide-coverage detector cardiac CT with single-heartbeat acquisition were included. The CT image quality for subvalvular (below the prosthesis) and valvular regions (valve leaflets) of mechanical valves was assessed by two observers on a four-point scale (1 = poor, 2 = fair, 3 = good, and 4 = excellent). Paired t-tests or Wilcoxon signed rank tests were used to compare image quality scores and the number of diagnostic phases (image quality score ≥ 3) between the standard image sets and NG SSF image sets. Diagnostic performance for detection of prosthetic valve abnormalities was compared between the two image sets, with the final diagnosis established by re-operation or clinical findings as the standard of reference. NG SSF image sets had better image quality scores than standard image sets for both valvular and subvalvular regions (P < 0.05 for both). The number of phases of diagnostic image quality per patient was significantly greater in the NG SSF image sets than the standard image sets for both valvular and subvalvular regions (P < 0.0001). Diagnostic performance of NG SSF image sets for the detection of prosthetic abnormalities (20 pannus and two paravalvular leaks) was greater than that of standard image sets (P < 0.05). Application of NG SSF can improve CT image quality and diagnostic accuracy in patients with mechanical valves compared to standard images. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
ACCESS: Design and Sub-System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; McCandliss, Stephan R.; Rasucher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Pelton, Russell; Mott, D. Brent; Wen, Hiting; Foltz, Roger;
2012-01-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, the "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards, with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35–1.7 μm bandpass.
Yu-Fei, Wang; Wei-Ping, Jia; Ming-Hsun, Wu; Miao-O, Chien; Ming-Chang, Hsieh; Chi-Pin, Wang; Ming-Shih, Lee
2017-09-01
System accuracy of current blood glucose monitors (BGMs) on the market has already been evaluated extensively, yet mostly for European and North American manufacturers. Data on BGMs manufactured in the Asia-Pacific region remain to be established. In this study, we sought to assess the accuracy performance of 19 BGMs manufactured in the Asia-Pacific region. The 19 BGMs were obtained from local pharmacies in China. The study was conducted at three hospitals located in the Asia-Pacific region. Measurement results of each system were compared with results of the reference instrument (YSI 2300 PLUS Glucose Analyzer), and accuracy evaluation was performed in accordance with the ISO 15197:2003 and updated 2015 guidelines. Radar plots, a new method described herein, were used to visualize the analytical performance of the 19 BGMs evaluated. The consensus error grid is a tool for evaluating the clinical significance of the results. The 19 BGMs achieved satisfaction rates between 83.5% and 100.0% within the ISO 15197:2003 error limits, and between 71.3% and 100.0% within the EN ISO 15197:2015 (ISO 15197:2013) error limits. Of the 19 BGMs evaluated, 12 met the minimal accuracy requirement of the ISO 15197:2003 standard, whereas only 4 met the tighter EN ISO 15197:2015 (ISO 15197:2013) requirements. Accuracy evaluation of BGMs should be performed regularly to maximize patient safety.
Woo, Sungmin; Suh, Chong Hyun; Kim, Sang Youn; Cho, Jeong Yeon; Kim, Seung Hyup
2018-01-01
The purpose of this study was to perform a head-to-head comparison between high-b-value (>1000 s/mm²) and standard-b-value (800-1000 s/mm²) DWI regarding diagnostic performance in the detection of prostate cancer. The MEDLINE and EMBASE databases were searched up to April 1, 2017. The analysis included diagnostic accuracy studies in which high- and standard-b-value DWI were used for prostate cancer detection with histopathologic examination as the reference standard. Methodologic quality was assessed with the revised Quality Assessment of Diagnostic Accuracy Studies tool. Sensitivity and specificity of all studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Meta-regression and multiple-subgroup analyses were performed to compare the diagnostic performances of high- and standard-b-value DWI. Eleven studies (789 patients) were included. High-b-value DWI had greater pooled sensitivity (0.80 [95% CI, 0.70-0.87]; p = 0.03) and specificity (0.92 [95% CI, 0.87-0.95]; p = 0.01) than standard-b-value DWI (sensitivity, 0.78 [95% CI, 0.66-0.86]; specificity, 0.87 [95% CI, 0.77-0.93]; p < 0.01). Multiple-subgroup analyses showed that specificity was consistently higher for high- than for standard-b-value DWI (p ≤ 0.05). Sensitivity was significantly higher for high- than for standard-b-value DWI only in the following subgroups: peripheral zone only, transition zone only, multiparametric protocol (DWI and T2-weighted imaging), visual assessment of DW images, and per-lesion analysis (p ≤ 0.04). In a head-to-head comparison, high-b-value DWI had significantly better sensitivity and specificity for detection of prostate cancer than did standard-b-value DWI. Multiple-subgroup analyses showed that specificity was consistently superior for high-b-value DWI.
Thorne, John C; Coggins, Truman E; Carmichael Olson, Heather; Astley, Susan J
2007-04-01
To evaluate classification accuracy and clinical feasibility of a narrative analysis tool for identifying children with a fetal alcohol spectrum disorder (FASD). Picture-elicited narratives generated by 16 age-matched pairs of school-aged children (FASD vs. typical development [TD]) were coded for semantic elaboration and reference strategy by judges who were unaware of age, gender, and group membership of the participants. Receiver operating characteristic (ROC) curves were used to examine the classification accuracy of the resulting set of narrative measures for making 2 classifications: (a) for the 16 children diagnosed with FASD, low performance (n = 7) versus average performance (n = 9) on a standardized expressive language task and (b) FASD (n = 16) versus TD (n = 16). Combining the rates of semantic elaboration and pragmatically inappropriate reference perfectly matched a classification based on performance on the standardized language task. More importantly, the rate of ambiguous nominal reference was highly accurate in classifying children with an FASD regardless of their performance on the standardized language task (area under the ROC curve = .863, confidence interval = .736-.991). Results support further study of the diagnostic utility of narrative analysis using discourse level measures of elaboration and children's strategic use of reference.
Truong, Quynh A; Knaapen, Paul; Pontone, Gianluca; Andreini, Daniele; Leipsic, Jonathon; Carrascosa, Patricia; Lu, Bin; Branch, Kelley; Raman, Subha; Bloom, Stephen; Min, James K
2015-10-01
Dual-energy CT (DECT) has potential to improve myocardial perfusion for physiologic assessment of coronary artery disease (CAD). Diagnostic performance of rest-stress DECT perfusion (DECTP) is unknown. DECIDE-Gold is a prospective multicenter study to evaluate the accuracy of DECT to detect hemodynamically (HD) significant CAD, as compared to fractional flow reserve (FFR) as a reference standard. Eligible participants are subjects with symptoms of CAD referred for invasive coronary angiography (ICA). Participants will undergo DECTP, which will be performed by pharmacological stress, and will subsequently proceed to ICA and FFR. HD-significant CAD will be defined as FFR ≤ 0.80. In those undergoing myocardial perfusion imaging (MPI) by positron emission tomography (PET), single photon emission computed tomography (SPECT) or cardiac magnetic resonance (CMR) imaging, ischemia will be graded by % ischemic myocardium. Blinded core laboratory interpretation will be performed for CCTA, DECTP, MPI, ICA, and FFR. The primary endpoint is the accuracy of DECTP to detect ≥1 HD-significant stenosis at the subject level when compared to FFR. Secondary and tertiary endpoints are accuracies of combinations of DECTP at the subject and vessel levels compared to FFR and MPI. DECIDE-Gold will determine the performance of DECTP for diagnosing ischemia.
NASA Astrophysics Data System (ADS)
Peterson, James Preston, II
Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
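The NSSDA statistic referenced above is computed from checkpoint RMSE with fixed 95%-confidence multipliers (1.7308 horizontal, 1.9600 vertical). A minimal sketch with invented checkpoint errors, not the study's data:

```python
# Sketch of the NSSDA accuracy computation (checkpoint errors invented).
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def nssda_horizontal(dx, dy):
    """Horizontal accuracy at 95% confidence: 1.7308 * RMSE_r."""
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)
    return 1.7308 * rmse_r

def nssda_vertical(dz):
    """Vertical accuracy at 95% confidence: 1.9600 * RMSE_z."""
    return 1.9600 * rmse(dz)

# Invented surveyed-minus-mapped differences at check targets, meters.
acc_h = nssda_horizontal([0.3, 0.4], [0.3, 0.4])
acc_v = nssda_vertical([0.1, 0.1, 0.1])
```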
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
The effects of text display, specifically within-word spacing, on children's reading at different developmental levels have barely been investigated. This study explored the influence of manipulating inter-letter spacing on reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers, and of low-achieving readers compared with age-matched high-achieving readers. A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted from the spaced condition. This effect was found only in long words but not in short words. Among third graders, only low-achieving readers gained in accuracy from the spaced condition. The theoretical and clinical implications of the findings are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Adaptation and fallibility in experts' judgments of novice performers.
Larson, Jeffrey S; Billeter, Darron M
2017-02-01
Competition judges are often selected for their expertise, under the belief that a high level of performance expertise should enable accurate judgments of the competitors. Contrary to this assumption, we find evidence that expertise can reduce judgment accuracy. Adaptation level theory proposes that discriminatory capacity decreases with greater distance from one's adaptation level. Because experts' learning has produced an adaptation level close to ideal performance standards, they may be less able to discriminate among lower-level competitors. As a result, expertise increases judgment accuracy of high-level competitions but decreases judgment accuracy of low-level competitions. Additionally, we demonstrate that, consistent with an adaptation level theory account of expert judgment, experts systematically give more critical ratings than intermediates or novices. In summary, this work demonstrates a systematic change in human perception that occurs as task learning increases. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Autonomous Relative Navigation for Formation-Flying Satellites Using GPS
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, J. Russell; Long, Anne; Kelbel, David; Lee, Taesul
2000-01-01
The Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for a formation of four eccentric, medium-altitude Earth-orbiting satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) and "GPS-like" intersatellite measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that an autonomous relative navigation position accuracy of 1 meter root-mean-square can be achieved by differencing high-accuracy filtered solutions if only measurements from common GPS space vehicles are used in the independently estimated solutions.
Accuracy of force and center of pressure measures of the Wii Balance Board.
Bartlett, Harrison L; Ting, Lena H; Bingham, Jeffrey T
2014-01-01
The Nintendo Wii Balance Board (WBB) is increasingly used as an inexpensive force plate for assessment of postural control; however, no documentation of force and COP accuracy and reliability is publicly available. Therefore, we performed a standard measurement uncertainty analysis on 3 lightly and 6 heavily used WBBs to provide future users with information about the repeatability and accuracy of the WBB force and COP measurements. Across WBBs, we found the total uncertainty of force measurements to be within ±9.1 N, and of COP location within ±4.1 mm. However, repeatability of a single measurement within a board was better (4.5 N, 1.5 mm), suggesting that the WBB is best used for relative measures using the same device, rather than absolute measurement across devices. Internally stored calibration values were comparable to those determined experimentally. Further, heavy wear did not significantly degrade performance. In combination with prior evaluation of WBB performance and published standards for measuring human balance, our study provides necessary information to evaluate the use of the WBB for analysis of human balance control. We suggest the WBB may be useful for low-resolution measurements, but should not be considered as a replacement for laboratory-grade force plates. Published by Elsevier B.V.
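As background to the uncertainty analysis above: the WBB reports four corner loads, and COP is their force-weighted centroid. The sketch below is illustrative only; the 433 mm × 238 mm sensor spacing is an assumption for this example, not a value taken from the paper.

```python
# Hedged sketch: COP from four corner loads (sensor spacing assumed).
SX, SY = 433.0, 238.0   # assumed sensor separations, mm

def wbb_cop(tl, tr, bl, br):
    """COP (x, y) in mm from board center, plus total vertical force."""
    total = tl + tr + bl + br
    x = (SX / 2) * ((tr + br) - (tl + bl)) / total   # right positive
    y = (SY / 2) * ((tl + tr) - (bl + br)) / total   # top positive
    return x, y, total

# A perfectly symmetric 400 N load should put the COP at the center.
x, y, f = wbb_cop(100.0, 100.0, 100.0, 100.0)
```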
Templar, Alexander; Woodhouse, Stefan; Keshavarz-Moore, Eli; Nesbeth, Darren N
2016-08-01
Advances in synthetic genomics are now well underway in yeasts due to the low cost of synthetic DNA. These new capabilities also bring greater need for quantitating the presence, loss and rearrangement of loci within synthetic yeast genomes. Methods for achieving this will ideally: (i) be robust to industrial settings; (ii) adhere to a global standard; and (iii) be sufficiently rapid to enable at-line monitoring during cell growth. The methylotrophic yeast Pichia pastoris (P. pastoris) is increasingly used for industrial production of biotherapeutic proteins, so we sought to answer the following questions for this particular yeast species. Is time-consuming DNA purification necessary to obtain accurate end-point polymerase chain reaction (e-pPCR) and quantitative PCR (qPCR) data? Can the novel linear regression of efficiency qPCR method (LRE qPCR), which has properties desirable in a synthetic biology standard, match the accuracy of conventional qPCR? Does cell cultivation scale influence PCR performance? To answer these questions we performed e-pPCR and qPCR in the presence and absence of cellular material disrupted by a mild 30-s sonication procedure. The e-pPCR limit of detection (LOD) for a genomic target locus was 50 pg (4.91×10³ copies) of purified genomic DNA (gDNA), but the presence of cellular material reduced this sensitivity sixfold to 300 pg gDNA (2.95×10⁴ copies). LRE qPCR matched the accuracy of a conventional standard curve qPCR method. The presence of material from bioreactor cultivation of up to OD600=80 did not significantly compromise the accuracy of LRE qPCR. We conclude that a simple and rapid cell disruption step is sufficient to render P. pastoris samples of up to OD600=80 amenable to analysis using LRE qPCR, which we propose as a synthetic biology standard. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Pardo, Scott; Simmons, David A
2016-09-01
The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%. © 2016 Diabetes Technology Society.
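The ISO-criteria/MARD relationship discussed above rests on two computations that are easy to state directly. The sketch below uses invented paired readings; the ±15 mg/dL / ±15% limits (with a 100 mg/dL threshold and a 95% pass rate) are the ISO 15197:2013 system-accuracy criteria as commonly summarized, quoted here from memory rather than from the paper.

```python
# Sketch with invented data: MARD and an ISO 15197:2013-style check.
def mard(meter, reference):
    """Mean absolute relative difference, percent."""
    diffs = [abs(m - r) / r * 100.0 for m, r in zip(meter, reference)]
    return sum(diffs) / len(diffs)

def iso_15197_2013_pass(meter, reference):
    """True if >= 95% of readings fall within +/-15 mg/dL (ref < 100)
    or +/-15% (ref >= 100) of the reference value."""
    ok = sum(1 for m, r in zip(meter, reference)
             if abs(m - r) <= (15.0 if r < 100 else 0.15 * r))
    return ok / len(meter) >= 0.95

reference = [60, 80, 100, 150, 200, 250, 300, 120, 90, 180]
meter     = [63, 78, 104, 144, 210, 242, 312, 118, 93, 186]
m = mard(meter, reference)
passed = iso_15197_2013_pass(meter, reference)
```

A low MARD does not by itself guarantee the criterion is met, which is the paper's point: MARD averages away the per-reading structure that the ISO test checks.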
Evaluation of pulse-oximetry oxygen saturation taken through skin protective covering
James, Jyotsna; Tiwari, Lokesh; Upadhyay, Pramod; Sreenivas, Vishnubhatla; Bhambhani, Vikas; Puliyel, Jacob M
2006-01-01
Background The hard edges of adult finger clip probes of the pulse oximetry oxygen saturation (POOS) monitor can cause skin damage if used for prolonged periods in a neonate. Covering the skin under the probe with Micropore surgical tape or a gauze piece might prevent such injury. The study was done to see if the protective covering would affect the accuracy of the readings. Methods POOS was studied in 50 full-term neonates in the first week of life. After obtaining consent from their parents the neonates had POOS readings taken directly (standard technique) and through the protective covering. Bland-Altman plots were used to compare the new method with the standard technique. A test of repeatability for each method was also performed. Results The Bland-Altman plots suggest that there is no significant loss of accuracy when readings are taken through the protective covering. The mean difference was 0.06 (SD of 1.39) and 0.04 (SD 1.3) with Micropore and gauze respectively compared to the standard method. The mean difference was 0.22 (SD 0.23) on testing repeatability with the standard method. Conclusion Interposing Micropore or gauze does not significantly affect the accuracy of the POOS reading. The difference between the standard method and the new method was less than the difference seen on testing repeatability of the standard method. PMID:16677394
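The Bland-Altman comparison used above reduces to the bias and 95% limits of agreement of the paired differences. A minimal sketch with invented saturation pairs (not the study's data):

```python
# Sketch: Bland-Altman bias and 95% limits of agreement (invented data).
import statistics

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)        # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

standard = [96, 97, 95, 98, 99, 94, 96, 97]   # direct readings, %SpO2
covered  = [95, 97, 96, 98, 98, 94, 97, 96]   # through covering, %SpO2
bias, lower, upper = bland_altman(standard, covered)
```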
Accuracy of user-friendly blood typing kits tested under simulated military field conditions.
Bienek, Diane R; Charlton, David G
2011-04-01
Rapid user-friendly ABO-Rh blood typing kits (Eldon Home Kit 2511, ABO-Rh Combination Blood Typing Experiment Kit) were evaluated to determine their accuracy when used under simulated military field conditions and after long-term storage at various temperatures and humidities. Rates of positive tests between control groups, experimental groups, and industry standards were measured and analyzed using Fisher's exact test to identify significant differences (p ≤ 0.05). When Eldon Home Kits 2511 were used in various operational conditions, the results were comparable to those obtained with the control group and with the industry standard. The performance of the ABO-Rh Combination Blood Typing Experiment Kit was adversely affected by prolonged storage at temperatures above 37°C. The diagnostic performance of commercial blood typing kits varies according to product and environmental storage conditions.
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
Nutrigenomics, beta-cell function and type 2 diabetes.
Nino-Fong, R; Collins, Tm; Chan, Cb
2007-03-01
The present investigation was designed to investigate the accuracy and precision of lactate measurement obtained with contemporary biosensors (Chiron Diagnostics, Nova Biomedical) and standard enzymatic photometric procedures (Sigma Diagnostics, Abbott Laboratories, Analyticon). Measurements were performed in vitro before and after the stepwise addition of 1 molar sodium lactate solution to samples of fresh frozen plasma to systematically achieve lactate concentrations of up to 20 mmol/l. Precision of the methods investigated varied between 1% and 7%; accuracy ranged between 2% and -33%, with the variability being lowest in the Sigma photometric procedure (6%) and more than 13% in both biosensor methods. Biosensors for lactate measurement provide adequate mean accuracy, with the limitation of highly variable results. A true lactate value of 6 mmol/l could be reported as anywhere between 4.4 and 7.6 mmol/l, or with an even larger difference. Biosensors and standard enzymatic photometric procedures are only comparable to a limited extent, because differences between paired determinations could amount to several mmol/l. The advantage of biosensors is the complete lack of preanalytical sample preparation, which appeared to be the major limitation of standard photometry methods.
Matsunami, Risë K; Angelides, Kimon; Engler, David A
2015-05-18
There is currently considerable discussion about the accuracy of blood glucose concentrations determined by personal blood glucose monitoring systems (BGMS). To date, the FDA has allowed new BGMS to demonstrate accuracy in reference to other glucose measurement systems that use the same or similar enzymatic-based methods to determine glucose concentration. These types of reference measurement procedures are only comparative in nature and are subject to the same potential sources of error in measurement and system perturbations as the device under evaluation. It would be ideal to have a completely orthogonal primary method that could serve as a true standard reference measurement procedure for establishing the accuracy of new BGMS. An isotope-dilution liquid chromatography/mass spectrometry (ID-UPLC-MRM) assay was developed using ¹³C₆-glucose as a stable isotope analogue to specifically measure glucose concentration in human plasma, and validated for use against NIST standard reference materials, and against fresh isolates of whole blood and plasma into which exogenous glucose had been spiked. Assay performance was quantified to NIST-traceable dry weight measures for both glucose and ¹³C₆-glucose. The newly developed assay method was shown to be rapid, highly specific, sensitive, accurate, and precise for measuring plasma glucose levels. The assay displayed sufficient dynamic range and linearity to measure across the range of both normal and diabetic blood glucose levels. Assay performance was measured to within the same uncertainty levels (<1%) as the NIST definitive method for glucose measurement in human serum. The newly developed ID-UPLC-MRM assay can serve as a validated reference measurement procedure against which new BGMS can be assessed for glucose measurement performance. © 2015 Diabetes Technology Society.
Setford, Steven; Grady, Mike; Mackintosh, Stephen; Donald, Robert; Levy, Brian
2018-05-01
MARD (mean absolute relative difference) is increasingly used to describe performance of glucose monitoring systems, providing a single-value quantitative measure of accuracy and allowing comparisons between different monitoring systems. This study reports MARDs for the OneTouch Verio® glucose meter clinical data set of 80 258 data points (671 individual batches) gathered as part of a 7.5-year self-surveillance program. Test strips were routinely sampled from randomly selected manufacturer's production batches and sent to one of 3 clinic sites for clinical accuracy assessment using fresh capillary blood from patients with diabetes, using both the meter system and a standard laboratory reference instrument. Evaluation of the distribution of strip batch MARD yielded a mean value of 5.05% (range: 3.68-6.43% at ±1.96 standard deviations from the mean). The overall MARD for all clinic data points (N = 80 258) was also 5.05%, while a mean bias of 1.28 was recorded. MARD by glucose level was found to be consistent, yielding a maximum value of 4.81% at higher glucose (≥100 mg/dL) and a mean absolute difference (MAD) of 5.60 mg/dL at low glucose (<100 mg/dL). MARD by year of manufacture varied from 4.67-5.42%, indicating consistent accuracy performance over the surveillance period. This 7.5-year surveillance program showed that this meter system exhibits consistently low MARD by batch, glucose level and year, indicating close agreement with established reference methods while exhibiting lower MARD values than continuous glucose monitoring (CGM) systems, and providing users with confidence in the performance when transitioning to each new strip batch.
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to discovery of the 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved.
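The three normalizations compared in the study are, at their core, one-line transforms; applying each per file (sample) or per gene (feature) yields the six strategies. A toy sketch on an invented vector, not TCGA data:

```python
# Toy sketch of the three normalizations: min-max scaling, z-score
# standardization, and unit-length (vector) normalization.
import math

def min_max_scale(v):
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def z_score(v):
    mean = sum(v) / len(v)
    sd = math.sqrt(sum((x - mean) ** 2 for x in v) / len(v))
    return [(x - mean) / sd for x in v]

def unit_length(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

profile = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # one invented sample
scaled = min_max_scale(profile)
standardized = z_score(profile)
unit = unit_length(profile)
```

Applied row-wise, these normalize samples; applied column-wise across a data matrix, they normalize features.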
The Mediating Relation between Symbolic and Nonsymbolic Foundations of Math Competence
Price, Gavin R.; Fuchs, Lynn S.
2016-01-01
This study investigated the relation between symbolic and nonsymbolic magnitude processing abilities with 2 standardized measures of math competence (WRAT Arithmetic and KeyMath Numeration) in 150 3rd-grade children (mean age 9.01 years). Participants compared sets of dots and pairs of Arabic digits with numerosities 1–9 for relative numerical magnitude. In line with previous studies, performance on both symbolic and nonsymbolic magnitude processing was related to math ability. Performance metrics combining reaction time and accuracy, as well as Weber fractions, were entered into mediation models with standardized math test scores. Results showed that symbolic magnitude processing ability fully mediates the relation between nonsymbolic magnitude processing and math ability, regardless of the performance metric or standardized test. PMID:26859564
Frikha, Mohamed; Chaâri, Nesrine; Derbel, Mohammad S; Elghoul, Yousri; Zinkovsky, Anatoly V; Chamari, Karim
2017-09-01
The present study addressed the lack of data on the effect of different types of stretching on selected measures of throwing accuracy. We hypothesized that the stretching procedures, within the pre-exercise warm-up, could affect the accuracy and consistency of dart throwing performances under different stress conditions. Eighteen right-handed schoolboys (13.1±0.4 years, 166±0.1 cm and 54.5±9 kg; mean±SD) completed the Darts Throwing Accuracy Test in free (FDT) and in time-pressure (TPDT) conditions, either after static (SS), dynamic (DS), ballistic (BS) or no-stretching (NS) protocols, on nonconsecutive days and in a counterbalanced randomized order. After performing 5 minutes of light standardized jogging and one of the three stretching protocols for 10 minutes, each participant completed the FDT and TPDT tests. Mean scores, missed darts and variability of scores were recorded and analyzed using a two-way ANOVA with repeated measures. Heart rate (HR), ratings of perceived exertion (RPE) and task difficulty perception (DP) were recorded during each experimental session. There was no effect of the stretching procedures on accuracy in FDT. However, in the TPDT condition, better performances were recorded after NS and SS compared to DS and BS. The accuracy performances decreased in TPDT by 9.6% after NS (P<0.01), 15.3% after DS (P<0.001) and 11.8% after BS (P<0.001), but not after SS (P>0.05). Static stretching helped reduce the adverse effects of time-pressure on darts throwing performance. Consequently, static exercises are recommended before practicing activities requiring both upper limb speed and accuracy.
Accurate and Standardized Coronary Wave Intensity Analysis.
Rivolo, Simone; Patterson, Tiffany; Asrress, Kaleab N; Marber, Michael; Redwood, Simon; Smith, Nicolas P; Lee, Jack
2017-05-01
Coronary wave intensity analysis (cWIA) has increasingly been applied in the clinical research setting to distinguish between the proximal and distal mechanical influences on coronary blood flow. Recently, a cWIA-derived clinical index demonstrated prognostic value in predicting functional recovery post-myocardial infarction. Nevertheless, the known operator dependence of the cWIA metrics currently hampers its routine application in clinical practice. Specifically, it was recently demonstrated that the cWIA metrics are highly dependent on the chosen Savitzky-Golay filter parameters used to smooth the acquired traces. Therefore, a novel method to make cWIA standardized and automatic was proposed and evaluated in vivo. The novel approach combines an adaptive Savitzky-Golay filter with high-order central finite differencing after ensemble-averaging the acquired waveforms. Its accuracy was assessed using in vivo human data. The proposed approach was then modified to automatically perform beatwise cWIA. Finally, the feasibility (accuracy and robustness) of the method was evaluated. The automatic cWIA algorithm provided satisfactory accuracy under a wide range of noise scenarios (≤10% and ≤20% error in the estimation of wave areas and peaks, respectively). These results were confirmed when beat-by-beat cWIA was performed. An accurate, standardized, and automated cWIA was developed. Moreover, the feasibility of beatwise cWIA was demonstrated for the first time. The proposed algorithm provides practitioners with a standardized technique that could broaden the application of cWIA in clinical practice by enabling multicenter trials. Furthermore, the demonstrated potential of beatwise cWIA opens the possibility of investigating coronary physiology in real time.
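To make the Savitzky-Golay dependence concrete: net wave intensity is the product of the filtered time derivatives of pressure and velocity. The sketch below uses invented in-phase sinusoidal traces and a fixed 5-point quadratic SG derivative kernel; it is not the authors' adaptive algorithm.

```python
# Hedged sketch: wave intensity dI = (dP/dt)(dU/dt) with a fixed
# 5-point quadratic Savitzky-Golay first-derivative kernel.
import math

def savgol_derivative(y, dt):
    """SG first derivative, window 5, polyorder 2: kernel (-2,-1,0,1,2)/10.
    Endpoints are left at 0.0 in this sketch."""
    c = (-2.0, -1.0, 0.0, 1.0, 2.0)
    out = [0.0] * len(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / (10.0 * dt)
    return out

dt = 0.001                                   # 1 kHz sampling, s
t = [k * dt for k in range(1000)]
pressure = [90 + 20 * math.sin(2 * math.pi * x) for x in t]    # mmHg
velocity = [0.2 + 0.1 * math.sin(2 * math.pi * x) for x in t]  # m/s

dp = savgol_derivative(pressure, dt)         # dP/dt
du = savgol_derivative(velocity, dt)         # dU/dt
wave_intensity = [a * b for a, b in zip(dp, du)]
```

With the two traces perfectly in phase, the product is non-negative throughout; changing the window length changes the derivative estimates and hence the wave areas and peaks, which is exactly the sensitivity the paper addresses.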
ERIC Educational Resources Information Center
Hopkins, Paul
2016-01-01
The purpose of this study was to determine how K-12 public school teachers perceive the use of student performance data in teacher evaluations. The propriety, utility, feasibility, and accuracy standards created by the Joint Committee on Standards for Educational Evaluation served as a framework for the study. An online survey was deployed to a…
Hehmke, Bernd; Berg, Sabine; Salzsieder, Eckhard
2017-05-01
Continuous standardized verification of the accuracy of blood glucose meter systems for self-monitoring after their introduction into the market is a clinically important tool to assure reliable performance of subsequently released lots of strips. Moreover, such published verification studies permit comparison of different blood glucose monitoring systems and are thus increasingly used in evidence-based purchase decision making.
NASA Technical Reports Server (NTRS)
Stokes, R. L.
1979-01-01
Tests performed to determine accuracy and efficiency of bus separators used in microprocessors are presented. Functional, AC parametric, and DC parametric tests were performed in a Tektronix S-3260 automated test system. All the devices passed the functional tests and yielded nominal values in the parametric test.
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; and limitations and potential future work. The goals of NREL Analysis Accuracy R&D are to: (1) provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. The BESTEST-EX goals are to: (1) test software predictions of retrofit energy savings in existing homes; (2) ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) quantify the impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard; however, the reference software has been subjected to validation testing, including comparisons with empirical data.
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-06-07
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results than for the discordant ones.
Source localization of rhythmic ictal activity using a distributed source model (LAURA) on ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
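The accuracy figures quoted in this study (sensitivity, specificity, kappa, PPV, NPV, likelihood ratio) all derive from a 2x2 agreement table. A minimal sketch of those computations follows; the cell counts in the usage example are illustrative, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy statistics from 2x2 table counts:
    tp/fp = test-positive with/without the condition,
    fn/tn = test-negative with/without the condition."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1.0 - spec)  # positive likelihood ratio
    # Cohen's kappa: observed agreement vs. chance agreement
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1.0 - pe)
    return {"sens": sens, "spec": spec, "ppv": ppv,
            "npv": npv, "lr+": lr_pos, "kappa": kappa}

# Hypothetical counts: 40 concordant positives, 10 false positives,
# 10 false negatives, 40 concordant negatives.
m = diagnostic_metrics(40, 10, 10, 40)
```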
Performance of two updated blood glucose monitoring systems: an evaluation following ISO 15197:2013.
Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Jendrike, Nina; Haug, Cornelia; Freckmann, Guido
2016-05-01
Objective For patients with diabetes, regular self-monitoring of blood glucose (SMBG) is essential to ensure adequate glycemic control. Therefore, accurate and reliable blood glucose measurements with SMBG systems are necessary. The international standard ISO 15197 describes requirements for SMBG systems, such as limits within which 95% of glucose results have to fall to reach acceptable system accuracy. The 2013 version of this standard sets higher demands, especially regarding system accuracy, than the currently still valid edition. ISO 15197 can be applied by manufacturers to receive a CE mark for their system. Research design and methods This study was an accuracy evaluation following ISO 15197:2013 section 6.3 of two recently updated SMBG systems (Contour and Contour TS; Bayer Consumer Care AG, Basel, Switzerland) with an improved algorithm to investigate whether the systems fulfill the requirements of the new standard. For this purpose, capillary blood samples of approximately 100 participants were measured with three test strip lots of both systems, and deviations from glucose values obtained with a hexokinase-based comparison method (Cobas Integra 400 plus; Roche Instrument Center, Rotkreuz, Switzerland) were determined. Percentages of values within the acceptance criteria of ISO 15197:2013 were calculated. This study was registered at clinicaltrials.gov (NCT02358408). Main outcome Both updated systems fulfilled the system accuracy requirements of ISO 15197:2013, as 98.5% to 100% of the results were within the stipulated limits. Furthermore, all results were within the clinically non-critical zones A and B of the consensus error grid for type 1 diabetes. Conclusions The technical improvement of the systems ensured compliance with ISO 15197 in the hands of healthcare professionals even in its more stringent 2013 version. Alternative presentation of system accuracy results in radar plots provides additional information with certain advantages.
In addition, the surveillance error grid offers a modern tool to assess a system's clinical performance.
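The ISO 15197:2013 system accuracy criterion applied above can be expressed directly in code: at least 95% of meter readings must fall within ±15 mg/dl of the comparison result for reference values below 100 mg/dl, and within ±15% otherwise. The function names and sample pairs below are illustrative, not part of the standard.

```python
def within_iso_15197_2013(reference, measured):
    """True if a single reading (mg/dl) meets the per-sample limit."""
    if reference < 100.0:
        return abs(measured - reference) <= 15.0
    return abs(measured - reference) <= 0.15 * reference

def passes_system_accuracy(pairs):
    """pairs: list of (reference, measured) tuples in mg/dl.
    The system passes if at least 95% of readings are within limits."""
    hits = sum(within_iso_15197_2013(r, m) for r, m in pairs)
    return hits / len(pairs) >= 0.95
```

The 98.5% to 100% figures reported above correspond to `hits / len(pairs)` per test strip lot.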
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
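The true-positive and false-negative style metrics described above amount to a pixel-by-pixel comparison of a ground-truth mask with an algorithm's mask. A minimal sketch, assuming binary masks stored as lists of 0/1 rows; the function name and test masks are illustrative, not the framework's API.

```python
def segmentation_rates(truth, predicted):
    """Per-frame true-positive rate (bone correctly segmented) and
    true-negative rate (boneless regions correctly segmented)."""
    tp = fn = fp = tn = 0
    for t_row, p_row in zip(truth, predicted):
        for t, p in zip(t_row, p_row):
            if t and p:
                tp += 1
            elif t and not p:
                fn += 1
            elif p:
                fp += 1
            else:
                tn += 1
    tpr = tp / (tp + fn) if tp + fn else 1.0
    tnr = tn / (tn + fp) if tn + fp else 1.0
    return tpr, tnr
```

Averaging these rates across frames, with their standard deviation, gives the per-slice summary statistics the framework reports.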
Small-Grid Dithering Strategy for Improved Coronagraphic Performance with JWST
NASA Astrophysics Data System (ADS)
Lajoie, Charles-Philippe; Soummer, Remi; Pueyo, Laurent; Hines, Dean C.; Nelan, Edmund P.; JWST Coronagraphs Working Group
2015-01-01
Contrast performance for most coronagraph designs typically depends rather strongly on the accuracy of target acquisition (TA). For JWST, target acquisition away from the center of the coronagraphs will allow for centroid measurement, which will in turn be used to command a small-angle maneuver (SAM) to accurately place the star behind the coronagraphic mask. With this approach, the SAM accuracy inherently limits the contrast performance of the coronagraphs, especially given that a reference star (or self-reference after telescope roll) might also be required. For such differential measurements, the reproducibility of the TA is therefore a very important factor. Here, we propose a novel coronagraphic observation concept whereby the reference PSF is first acquired using a standard TA, followed by coronagraphic observations on a small grid of dithered positions. Sub-pixel dithers (5-10 mas each) provide a small reference PSF library that samples the possible variations in PSF shape due to imperfect TAs. This small library can then be used, for example, with principal component analysis for PSF subtraction (e.g., LOCI or KLIP algorithms). Such very small dithers can be achieved with the JWST attitude control system without overhead and with higher accuracy than a SAM, since they take advantage of the fine steering mirror under closed-loop fine guidance. We discuss and evaluate the performance gains from this observation scenario compared to the standard TA for the MIRI Four-Quadrant Phase Mask coronagraphs and provide numerical simulations for some astrophysical targets of interest.
Evaluation of Relative Navigation Algorithms for Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Kelbel, David; Lee, Taesul; Long, Anne; Carpenter, J. Russell; Gramling, Cheryl
2001-01-01
Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for formations in eccentric, medium-altitude, and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS) and intersatellite range measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that the relative navigation accuracy is primarily a function of the frequency of acquisition and tracking of the GPS signals. A relative navigation position accuracy of 0.5 meters root-mean-square (RMS) can be achieved for formations in medium-altitude eccentric orbits that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 75 meters RMS can be achieved for formations in high-altitude eccentric orbits that have sparse tracking of the GPS signals. The addition of round-trip intersatellite range measurements can significantly improve relative navigation accuracy for formations with sparse tracking of the GPS signals.
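The RMS position accuracy figures quoted above reduce to a simple statistic: the root-mean-square of the relative-position error magnitudes over the evaluation span. A minimal sketch; the error vectors below are illustrative, not simulation output.

```python
def rms_position_error(errors):
    """errors: list of (dx, dy, dz) differences, estimate minus truth,
    in meters. Returns the RMS of the 3D error magnitudes."""
    sq = [dx * dx + dy * dy + dz * dz for dx, dy, dz in errors]
    return (sum(sq) / len(sq)) ** 0.5
```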
Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.
2010-10-01
Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and the requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences between the laboratories in hemispherical reflectance (0.007) and specular reflectance (0.004). These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which better accuracy is expected. Improved instruments and reference standards are needed to reach the necessary accuracy for cost and efficiency calculations.
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (e.g., sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy-to-apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
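The verification-bias mechanism above can be illustrated with simple inverse-probability weighting, which is in the spirit of (though much simpler than) the authors' weighted estimating-equation approach: gold-standard results from the undersampled screen-negatives are weighted up by the inverse of the verification fraction. All counts and names below are hypothetical.

```python
def corrected_sens_spec(tp, fp, fn_v, tn_v, f_neg_verified):
    """tp, fp: verified screen-positives with/without disease
    (all screen-positives assumed verified).
    fn_v, tn_v: verified screen-negatives with/without disease.
    f_neg_verified: fraction of screen-negatives sent to verification."""
    w = 1.0 / f_neg_verified
    fn = fn_v * w  # weight up the undersampled screen-negative cells
    tn = tn_v * w
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec
```

With, say, only half the screen-negatives verified, the naive sensitivity `tp / (tp + fn_v)` overstates the corrected value, which is exactly the overestimation the abstract warns about.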
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.; Walker, R. E.; Gokhman, B.
1985-01-01
Performance requirements regarding geometric accuracy have been defined in terms of end product goals, but until recently no precise details have been given concerning the conditions under which that accuracy is to be achieved. In order to achieve higher spatial and spectral resolutions, the Thematic Mapper (TM) sensor was designed to image in both forward and reverse mirror sweeps in two separate focal planes. Both hardware and software have been augmented and changed during the course of the Landsat TM developments to achieve improved geometric accuracy. An investigation has been conducted to determine if the TM meets the National Map Accuracy Standards for geometric accuracy at larger scales. It was found that TM imagery, in terms of geometry, has come close to, and in some cases exceeded, its stringent specifications.
NASA Astrophysics Data System (ADS)
García-Resúa, Carlos; Pena-Verdeal, Hugo; Miñones, Mercedes; Gilino, Jorge; Giraldez, Maria J.; Yebra-Pimentel, Eva
2013-11-01
High tear fluid osmolarity is a feature common to all types of dry eye. This study was designed to establish the accuracy of two osmometers, a freezing point depression osmometer (Fiske 110) and an electrical impedance osmometer (TearLab™), using standard samples. To assess the accuracy of the measurements provided by the two instruments, we used 5 solutions of known osmolarity/osmolality: 50, 290 and 850 mOsm/kg, and 292 and 338 mOsm/L. The Fiske 110 is designed for 20 μl samples, so measurements were made on 1:9, 1:4, 1:1 and 1:0 dilutions of the standards. The TearLab is intended for tear film and requires a sample of only 0.05 μl, so no dilutions were employed. Because of the smaller measurement range of the TearLab, the 50 and 850 mOsm/kg standards were not included. Twenty measurements were made per standard sample, and differences from the reference value were analysed with a one-sample t-test. For the Fiske 110, osmolarity measurements differed statistically from standard values except those recorded for the 290 mOsm/kg standard diluted 1:1 (p = 0.309), the 292 mOsm/L sample (1:1) and the 338 mOsm/L standard (1:4). The more diluted the sample, the higher the error. For the TearLab measurements, the one-sample t-test indicated that all determinations differed from the theoretical values (p = 0.001), though differences were always small. For undiluted solutions, the Fiske 110 shows performance similar to the TearLab; for diluted standards, however, its performance worsens.
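The one-sample t-test used above compares the mean of repeated readings against a standard's nominal value. A minimal sketch of the t statistic; the critical value 2.093 is the two-sided 5% point of Student's t with 19 degrees of freedom (matching n = 20 readings per standard), and the reading lists below are illustrative, not the study's data.

```python
def one_sample_t(readings, nominal):
    """t statistic for H0: mean(readings) == nominal."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)  # sample variance
    se = (var / n) ** 0.5  # standard error of the mean
    return (mean - nominal) / se

# |t| > 2.093 rejects H0 at the 5% level for n = 20 (df = 19).
```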
The density-salinity relation of standard seawater
NASA Astrophysics Data System (ADS)
Schmidt, Hannes; Seitz, Steffen; Hassel, Egon; Wolf, Henning
2018-01-01
The determination of salinity by means of electrical conductivity relies on stable salt proportions in the North Atlantic Ocean, because standard seawater, which is required for salinometer calibration, is produced from water of the North Atlantic. To verify the long-term stability of the standard seawater composition, it was proposed to perform measurements of the standard seawater density. Since the density is sensitive to all salt components, a density measurement can detect any change in the composition. A conversion of the density values to salinity can be performed by means of a density-salinity relation. To use such a relation with a target uncertainty in salinity comparable to that in salinity obtained from conductivity measurements, a density measurement with an uncertainty of 2 g m-3 is mandatory. We present a new density-salinity relation based on such accurate density measurements. The substitution measurement method used is described and density corrections for uniform isotopic and chemical compositions are reported. The comparison of densities calculated using the new relation with those calculated using the present reference equations of state TEOS-10 suggests that the density accuracy of TEOS-10 (as well as that of EOS-80) has been overestimated, as the accuracy of some of its underlying density measurements had been overestimated. The new density-salinity relation may be used to verify the stable composition of standard seawater by means of routine density measurements.
Engelken, Florian; Wassilew, Georgi I; Köhlitz, Torsten; Brockhaus, Sebastian; Hamm, Bernd; Perka, Carsten; Diederichs, Gerd
2014-01-01
The purpose of this study was to quantify the performance of the Goutallier classification for assessing fatty degeneration of the gluteus muscles from magnetic resonance (MR) images and to compare its performance to a newly proposed system. Eighty-four hips with clinical signs of gluteal insufficiency and 50 hips from asymptomatic controls were analyzed using a standard classification system (Goutallier) and a new scoring system (Quartile). Interobserver reliability and intraobserver repeatability were determined, and accuracy was assessed by comparing readers' scores with quantitative estimates of the proportion of intramuscular fat based on MR signal intensities (gold standard). The existing Goutallier classification system and the new Quartile system performed equally well in assessing fatty degeneration of the gluteus muscles, both showing excellent levels of interrater and intrarater agreement. While the Goutallier classification system has the advantage of being widely known, the benefit of the Quartile system is that it is based on more clearly defined grades of fatty degeneration. Copyright © 2014 Elsevier Inc. All rights reserved.
Experimental study of low-cost fiber optic distributed temperature sensor system performance
NASA Astrophysics Data System (ADS)
Dashkov, Michael V.; Zharkov, Alexander D.
2016-03-01
Distributed temperature monitoring is a relevant task for various applications such as oil and gas fields, high-voltage power lines, and fire alarm systems. The most promising solutions are optical fiber distributed temperature sensors (DTS). They offer advantages in accuracy, resolution, and range, but at a high cost. Nevertheless, for some applications measurement and localization accuracy are less important than cost. The results of an experimental study of a low-cost Raman-based DTS built around a standard OTDR are presented.
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO2 ingress during use. For the assay to perform effectively, extensive system equilibration and conditioning are required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (>50) injections.
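The single-point versus calibration-curve comparison above can be sketched in code: a single-point standard assumes the response is proportional to concentration, while the curve fits a straight line through several standards by ordinary least squares. When the response is linear through the origin, the two agree, which is the equivalence the abstract reports. All concentrations and responses below are hypothetical.

```python
def conc_single_point(resp, std_conc, std_resp):
    """Concentration from one standard, assuming proportional response."""
    return std_conc * resp / std_resp

def conc_from_curve(resp, standards):
    """standards: list of (concentration, response) pairs.
    Ordinary least-squares line, then invert for the unknown."""
    n = len(standards)
    sx = sum(c for c, _ in standards)
    sy = sum(r for _, r in standards)
    sxx = sum(c * c for c, _ in standards)
    sxy = sum(c * r for c, r in standards)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return (resp - intercept) / slope
```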
Results of the performance verification of the CoaguChek XS system.
Plesch, W; Wolf, T; Breitenbeck, N; Dikkeschei, L D; Cervero, A; Perez, P L; van den Besselaar, A M H P
2008-01-01
This is the first paper reporting a performance verification study of a point-of-care (POC) monitor for prothrombin time (PT) testing according to the requirements given in chapter 8 of the International Organization for Standardization (ISO) 17593:2007 standard "Clinical laboratory testing and in vitro medical devices - Requirements for in vitro monitoring systems for self-testing of oral anticoagulant therapy". The monitor under investigation was the new CoaguChek XS system, which is designed for use in patient self-testing. Its detection principle is based on the amperometric measurement of the thrombin activity generated by starting the coagulation cascade using a recombinant human thromboplastin. The system performance verification study was performed at four study centers using venous and capillary blood samples on two test strip lots. Laboratory testing was performed from corresponding frozen plasma samples with six commercial thromboplastins. Samples from 73 normal donors and 297 patients on oral anticoagulation therapy were collected. Results were assessed using a refined data set of 260 subjects according to the ISO 17593:2007 standard. Each of the two test strip lots met the acceptance criteria of ISO 17593:2007 versus all thromboplastins (bias -0.19 to 0.18 INR; >97% of data within accuracy limits). The coefficient of variation for imprecision of the PT determinations in INR ranged from 2.0% to 3.2% in venous, and from 2.9% to 4.0% in capillary blood testing. Capillary versus venous INR data showed agreement of results with regression lines equal to the line of identity. The new system demonstrated a high level of trueness and accuracy, and low imprecision in INR testing. It can be concluded that the CoaguChek XS system complies with the requirements in chapter 8 of the ISO standard 17593:2007.
Patricia L. Faulkner; Michele M. Schoeneberger; Kim H. Ludovici
1993-01-01
Foliar tissue was collected from a field study designed to test impacts of atmospheric pollutants on loblolly pine (Pinus taeda L.) seedlings. Standard enzymatic (ENZ) and high performance liquid chromatography (HPLC) methods were used to analyze the tissue for soluble sugars. A comparison of the methods revealed no significant differences in accuracy...
Braun, Tobias; Grüneberg, Christian; Thiel, Christian
2018-04-01
Routine screening for frailty could be used to identify, in a timely manner, older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into German, as well as a preliminary analysis of the diagnostic test accuracy of these instruments when used to screen for frailty. A diagnostic cross-sectional study was performed. The instrument translation into German followed a standardized process. Prefinal versions were clinically tested on older adults, who gave structured in-depth feedback on the scales in order to compile a final revision of the German language scale versions. For the analysis of diagnostic test accuracy (criterion validity), the PRISMA-7, FRAIL scale and GFI were considered the index tests. Two reference tests were applied to assess frailty, based either on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed in a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype criteria (frailty prevalence: 4%) as a reference standard, the accuracy of the instruments was excellent (area under the curve, AUC, >0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German language versions of the PRISMA-7, FRAIL scale and GFI have been established, and preliminary results indicate sufficient diagnostic test accuracy, which needs to be further established.
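The AUC values quoted above can be computed nonparametrically from the index-test scores via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen frail case scores higher than a randomly chosen non-frail control, with ties counting half. A minimal sketch; the scores below are illustrative, not study data.

```python
def auc(case_scores, control_scores):
    """Nonparametric AUC (Mann-Whitney): fraction of case/control
    pairs where the case scores higher, ties counted as 0.5."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))
```

Perfect separation gives 1.0; identical score distributions give 0.5, the chance level.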
NASA Astrophysics Data System (ADS)
Vorotnikov, A. A.; Klimov, D. D.; Romash, E. V.; Bashevskaya, O. S.; Poduraev, Yu. V.; Bazykyan, E. A.; Chunihin, A. A.
2018-03-01
Industrial robots perform technological operations, such as spot and arc welding, machining and laser cutting, along different trajectories within their performance characteristics. These characteristics are evaluated according to the criteria of the ISO 9283 standard. Those criteria are applicable in industrial manufacturing but not in the medical field, as they were not developed with medical tasks in mind; evaluation therefore requires criteria built on different principles. This article considers the comparative evaluation of trajectories from programmed movements of a robot and manual movements of a surgeon, a question that arises during the development of robotic medical complexes based on industrial robots. A comparative evaluation is required to justify automating medical operations in maxillofacial surgery. This study focuses on estimating the velocity accuracy of a medical instrument. To obtain the instrument velocity, coordinates of trajectory points from the programmed movements of a KUKA LWR4+ robot and from the manual movements of a professional surgeon were measured. The measurement was carried out using a coordinate measuring machine, the Leica LTD800 laser tracker. Accuracy was estimated by two criteria: the criterion set out in the ISO 9283 standard, and a newly developed alternative criterion described in this article. A quantitative comparative evaluation of the trajectories of the robot and the surgeon was obtained.
Abuhamad, Alfred; Zhao, Yili; Abuhamad, Sharon; Sinkovskaya, Elena; Rao, Rashmi; Kanaan, Camille; Platt, Lawrence
2016-01-01
This study aims to validate the feasibility and accuracy of a new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination and to compare it with the regular approach used in the scheduled obstetric ultrasound examination. The new standardized six-step approach, which evaluates fetal presentation, fetal cardiac activity, presence of multiple pregnancy, placental localization, amniotic fluid volume, and biometric measurements, was prospectively performed on 100 pregnant women between 18(+0) and 27(+6) weeks of gestation and another 100 pregnant women between 28(+0) and 36(+6) weeks of gestation. The agreement of findings for each of the six steps was evaluated against the regular approach. In all ultrasound examinations performed, substantial to perfect agreement (kappa values between 0.64 and 1.00) was observed between the new standardized six-step approach and the regular approach. The new standardized six-step approach to the focused basic obstetric ultrasound examination can be performed successfully and accurately between 18(+0) and 36(+6) weeks of gestation. This standardized approach can be of significant benefit in limited-resource settings and in point-of-care obstetric ultrasound applications.
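Agreement between the six-step and the regular approach was summarized with kappa values. A minimal sketch of Cohen's kappa for two equal-length lists of categorical findings (function name and data hypothetical) is:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for agreement between two approaches applied to the
    same cases; a and b are equal-length lists of categorical findings.
    Undefined when both lists contain only a single shared category."""
    n = len(a)
    categories = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p_exp = sum((a.count(c) / n) * (b.count(c) / n)          # chance agreement
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Kappa of 1.0 corresponds to perfect agreement; 0.0 means no better than chance, which is why the 0.64-1.00 range above is read as substantial to perfect.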
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron
2017-05-01
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
Oh, Hyoung-Chul; Kang, Hyun; Lee, Jae Young; Choi, Geun Joo; Choi, Jung Sik
2016-11-01
To compare the diagnostic accuracy of endoscopic ultrasound-guided core needle aspiration with that of standard fine-needle aspiration by systematic review and meta-analysis. Studies using 22/25-gauge core needles, irrespective of comparison with standard fine needles, were comprehensively reviewed. Pooled sensitivity, specificity, diagnostic odds ratio (DOR), and summary receiver operating characteristic curves for the diagnosis of malignancy were used to estimate the overall diagnostic efficiency. The pooled sensitivity, specificity, and DOR of the core needle for the diagnosis of malignancy were 0.88 (95% confidence interval [CI], 0.84 to 0.90), 0.99 (95% CI, 0.96 to 1), and 167.37 (95% CI, 65.77 to 425.91), respectively. The pooled sensitivity, specificity, and DOR of the standard needle were 0.84 (95% CI, 0.79 to 0.88), 1 (95% CI, 0.97 to 1), and 130.14 (95% CI, 34.00 to 495.35), respectively. The area under the curve of core and standard needle in the diagnosis of malignancy was 0.974 and 0.955, respectively. The core and standard needle were comparable in terms of pancreatic malignancy diagnosis. There was no significant difference in procurement of optimal histologic cores between core and standard needles (risk ratio [RR], 0.545; 95% CI, 0.187 to 1.589). The number of needle passes for diagnosis was significantly lower with the core needle (standardized mean difference, -0.72; 95% CI, -1.02 to -0.41). There were no significant differences in overall complications (RR, 1.26; 95% CI, 0.34 to 4.62) and technical failure (RR, 5.07; 95% CI, 0.68 to 37.64). Core and standard needles were comparable in terms of diagnostic accuracy, technical performance, and safety profile.
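The per-needle summary statistics in this meta-analysis derive from 2×2 tables of index-test result versus reference standard. A minimal sketch (hypothetical function name and counts; the pooling across studies is not shown):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio (DOR) from a
    2x2 table of index-test result versus reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)   # undefined when fp or fn is zero
    return sensitivity, specificity, dor
```

For example, counts of tp=88, fp=1, fn=12, tn=99 give a sensitivity of 0.88, a specificity of 0.99 and a DOR of 726.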
20 CFR 658.601 - State agency responsibility.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and accuracy of documents prepared in the course of service delivery; and (E) Effectiveness of JS... deficiencies has been effective. (7)(a) The provisions of the JS regulations which require numerical and... carry out JS regulations, including regulations on performance standards and program emphases, and any...
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a multipurpose system. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using three orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and to perform in-situ calibration of model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper discusses the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
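Pitch and roll from three orthogonally mounted accelerometers follow from the measured direction of the gravity vector. A minimal static-tilt sketch, under one common axis and sign convention (not necessarily the convention used by the LaRC AMS), is:

```python
import math

def pitch_roll(ax, ay, az):
    """Static pitch and roll (degrees) from a 3-axis accelerometer reading
    of the gravity vector, in units of g. One common axis/sign convention
    is assumed here; a real system calibrates scale and bias first."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the device level (reading 0, 0, 1 g) both angles are zero; tilting the x-axis down so that ax = -0.5 g yields a pitch of 30 degrees.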
Imaging evaluation of non-alcoholic fatty liver disease: focused on quantification.
Lee, Dong Ho
2017-12-01
Non-alcoholic fatty liver disease (NAFLD) has become a major health problem and is the most common cause of chronic liver disease in Western countries. Traditionally, liver biopsy has been the gold standard method for quantification of hepatic steatosis. However, its invasive nature, potential complications, and measurement variability are major problems. Thus, various imaging studies have been used for the evaluation of hepatic steatosis. Ultrasonography provides fairly good accuracy for detecting moderate-to-severe hepatic steatosis but limited accuracy for mild steatosis. Operator dependency and the subjective/qualitative nature of the examination are other major drawbacks of ultrasonography. Computed tomography is unsuitable for the evaluation of NAFLD because of the potential risk of radiation exposure and its limited accuracy in detecting mild steatosis. Both magnetic resonance spectroscopy and magnetic resonance imaging using the chemical shift technique provide highly accurate and reproducible diagnostic performance for evaluating NAFLD and have therefore been used in many clinical trials as a non-invasive reference standard method.
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
Relative Navigation of Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, J. Russell; Grambling, Cheryl
2002-01-01
This paper compares autonomous relative navigation performance for formations in eccentric, medium and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS), crosslink, and celestial object measurements. For close formations, the relative navigation accuracy is highly dependent on the magnitude of the uncorrelated measurement errors. A relative navigation position accuracy of better than 10 centimeters root-mean-square (RMS) can be achieved for medium-altitude formations that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 15 meters RMS can be achieved for high-altitude formations that have sparse tracking of the GPS signals. The addition of crosslink measurements can significantly improve relative navigation accuracy for formations that use sparse GPS tracking or celestial object measurements for absolute navigation.
Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M
2016-06-01
Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) concentration, or indirectly by refractometry or Brix refractometry; the latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, as well as to quantify the impact of any study characteristic on test accuracy.
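The sensitivity/specificity summaries the protocol aims for require dichotomizing both the index test and the reference standard at cut-off values. A minimal sketch follows; the function name and the cut-off values shown are illustrative, not values endorsed by the protocol.

```python
def cutoff_accuracy(brix, igg, brix_cutoff=8.4, igg_cutoff=10.0):
    """Sensitivity and specificity of a Brix refractometer cut-off against
    serum IgG (g/L) as the reference standard for failure of transfer of
    passive immunity. Cut-off values here are hypothetical examples."""
    tp = fp = fn = tn = 0
    for b, g in zip(brix, igg):
        failed = g < igg_cutoff        # reference standard: low IgG
        flagged = b < brix_cutoff      # index test: low Brix reading
        if failed and flagged:
            tp += 1
        elif failed:
            fn += 1
        elif flagged:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)
```

The meta-analysis would pool such per-study pairs; this sketch only shows the per-study 2×2 computation.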
Sastrawan, J; Jones, C; Akhalwaya, I; Uys, H; Biercuk, M J
2016-08-01
We introduce concepts from optimal estimation to the stabilization of precision frequency standards limited by noisy local oscillators. We develop a theoretical framework casting various measures for frequency standard variance in terms of frequency-domain transfer functions, capturing the effects of feedback stabilization via a time series of Ramsey measurements. Using this framework, we introduce an optimized hybrid predictive feedforward measurement protocol that employs results from multiple past measurements and transfer-function-based calculations of measurement covariance to improve the accuracy of corrections within the feedback loop. In the presence of common non-Markovian noise processes these measurements will be correlated in a calculable manner, providing a means to capture the stochastic evolution of the local oscillator frequency during the measurement cycle. We present analytic calculations and numerical simulations of oscillator performance under competing feedback schemes and demonstrate benefits in both correction accuracy and long-term oscillator stability using hybrid feedforward. Simulations verify that in the presence of uncompensated dead time and noise with significant spectral weight near the inverse cycle time predictive feedforward outperforms traditional feedback, providing a path towards developing a class of stabilization software routines for frequency standards limited by noisy local oscillators.
NASA Technical Reports Server (NTRS)
Dagalakis, N.; Wavering, A. J.; Spidaliere, P.
1991-01-01
Test procedures are proposed for the NASA DTF (Development Test Flight)-1 positioning tests of the FTS (Flight Telerobotic Servicer). The unique problems associated with the DTF-1 mission are discussed, standard robot performance tests and terminology are reviewed, and a detailed description of flight-like testing and analysis is presented. The major technical problem associated with DTF-1 is that only one position sensor can be used, fixed at one location, with a working volume that is probably smaller than some of the robot errors to be measured. Radiation heating of the arm and the sensor could also cause distortions that would interfere with the test. Two robot performance testing committees have established standard testing procedures relevant to DTF-1. Because of the technical problems associated with DTF-1, these procedures cannot be applied directly. The standard tests call for the use of several test positions at specific locations, whereas only one position, that of the position sensor, can be used by DTF-1. Off-line programming accuracy might be impossible to measure, in which case it will have to be replaced by forward kinematics accuracy.
Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo
2017-01-01
Purpose: To investigate dynamic contrast-enhanced MRI (DCE-MRI) for preoperative chemo-radiotherapy (CRT) assessment in locally advanced rectal cancer (LARC), compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods: 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring SIS, a linear combination of the percentage changes (Δ) of the maximum signal difference (MSD) and the wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum SUV (SUVmax). Tumor regression grade (TRG) was estimated after surgery. Non-parametric tests and receiver operating characteristic analysis were used. Results: 55 patients (TRG 1-2) were classified as responders and 20 as non-responders. ΔSIS reached a sensitivity of 93%, specificity of 80% and accuracy of 89% (cut-off 6%) for differentiating responders from non-responders, and a sensitivity of 93%, specificity of 69% and accuracy of 79% (cut-off 30%) for identifying pathological complete response (pCR). Assessment via ΔSUVmax reached a sensitivity of 67%, specificity of 75% and accuracy of 70% (cut-off 60%) for differentiating responders from non-responders, and a sensitivity of 80%, specificity of 31% and accuracy of 51% (cut-off 44%) for identifying pCR. Conclusions: CRT response assessment by DCE-MRI shows a higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR. PMID:28042958
An optical lattice clock with accuracy and stability at the 10⁻¹⁸ level.
Bloom, B J; Nicholson, T L; Williams, J R; Campbell, S L; Bishof, M; Zhang, X; Zhang, W; Bromley, S L; Ye, J
2014-02-06
Progress in atomic, optical and quantum science has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks, their accuracy has remained 16 times worse. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10⁻¹⁸, which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard: stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units, the search for time variation of fundamental constants, clock-based geodesy and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.
College of American Pathologists Cancer Protocols: Optimizing Format for Accuracy and Efficiency.
Strickland-Marmol, Leah B; Muro-Cacho, Carlos A; Barnett, Scott D; Banas, Matthew R; Foulis, Philip R
2016-06-01
The data in College of American Pathologists cancer protocols have to be presented effectively to health care providers. There is no consensus on the format of those protocols, resulting in various designs among pathologists. Cancer protocols are independently created by site-specific experts, so there is inconsistent wording and repetition of data. This lack of standardization can be confusing and may lead to interpretation errors. Our objective was to define a synopsis format that is effective in delivering essential pathologic information and to evaluate the aesthetic appeal and the impact of varying format styles on the speed and accuracy of data extraction. We queried individuals from several health care backgrounds using varying formats of the fallopian tube protocol of the College of American Pathologists, without content modification, to investigate aesthetic appeal, accuracy, efficiency, and readability/complexity. Descriptive statistics, an item difficulty index, and 3 tests of readability were used. Columned formats were aesthetically more appealing than justified formats (P < .001) and were associated with greater accuracy and efficiency. Incorrect assumptions were made about items not included in the protocol. Uniform wording and short sentences were associated with better performance by participants. Based on these data, we propose standardized protocol formats for cancer resections of the fallopian tube and the more familiar colon, employing headers, short phrases, and uniform terminology. This template can be easily and minimally modified for other sites, standardizing format and verbiage and increasing user accuracy and efficiency. Principles of human factors engineering should be considered in the display of patient data.
NASA Astrophysics Data System (ADS)
Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui
2018-02-01
Owing to its weakness in maintaining diversity and in reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this hybrid approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with only a small additional investment.
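A hybrid of the kind described, global exploration by particle swarm optimization followed by local refinement with the downhill simplex (Nelder-Mead) method, can be sketched generically. All names, parameter values and the test function below are illustrative and not taken from the paper.

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization; returns the best position found."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest

def simplex_polish(f, x0, step=0.1, iters=200):
    """Crude downhill-simplex (Nelder-Mead) refinement of a starting point."""
    n = len(x0)
    # Initial simplex: x0 plus one vertex perturbed along each dimension.
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        refl = [2.0 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):                      # reflection is the new best: try expanding
            exp = [3.0 * centroid[j] - 2.0 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):             # accept the reflection
            simplex[-1] = refl
        else:                                      # contract toward the centroid
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                  # shrink all vertices toward the best
                simplex = [best] + [[0.5 * (best[j] + v[j]) for j in range(n)]
                                    for v in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]
```

On a convex test function the swarm lands near the optimum and the simplex stage then polishes the remaining error, which mirrors the division of labor the paper describes: global search by the swarm, local accuracy by the simplex.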
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose: To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods: Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results: It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion: For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Computer-Assisted Classification Patterns in Autoimmune Diagnostics: The AIDA Project
Benammar Elgaaied, Amel; Cascio, Donato; Bruno, Salvatore; Ciaccio, Maria Cristina; Cipolla, Marco; Fauci, Alessandro; Morgante, Rossella; Taormina, Vincenzo; Gorgi, Yousr; Marrakchi Triki, Raja; Ben Ahmed, Melika; Louzir, Hechmi; Yalaoui, Sadok; Imene, Sfar; Issaoui, Yassine; Abidi, Ahmed; Ammar, Myriam; Bedhiafi, Walid; Ben Fraj, Oussama; Bouhaha, Rym; Hamdi, Khouloud; Soumaya, Koudhi; Neili, Bilel; Asma, Gati; Lucchese, Mariano; Catanzaro, Maria; Barbara, Vincenza; Brusca, Ignazio; Fregapane, Maria; Amato, Gaetano; Friscia, Giuseppe; Neila, Trai; Turkia, Souayeh; Youssra, Haouami; Rekik, Raja; Bouokez, Hayet; Vasile Simone, Maria; Fauci, Francesco; Raso, Giuseppe
2016-01-01
Antinuclear antibodies (ANAs) are significant biomarkers in the diagnosis of autoimmune diseases in humans, detected by means of the indirect immunofluorescence (IIF) method through analysis of staining patterns and fluorescence intensity. This paper introduces the AIDA project (autoimmunity: diagnosis assisted by computer), developed in the framework of an Italy-Tunisia cross-border cooperation, and its preliminary results. A database of interpreted IIF images is being collected through the exchange of images and double reporting, and a gold standard database containing around 1000 double-reported images has been established. The gold standard database is used for the optimization of a CAD (computer-aided detection) solution and for the assessment of its added value when applied alongside an immunologist as a second reader in the detection of autoantibodies. This CAD system is able to identify the fluorescence intensity and the fluorescence pattern in IIF images. Preliminary results show that the CAD, used as a second reader, performed better than junior immunologists and hence may significantly improve their efficacy; compared with two junior immunologists, the CAD system showed higher intensity accuracy (85.5% versus 66.0% and 66.0%), higher pattern accuracy (79.3% versus 48.0% and 66.2%), and higher mean class accuracy (79.4% versus 56.7% and 64.2%). PMID:27042658
Resch, Christine; Keulers, Esther; Martens, Rosa; van Heugten, Caroline; Hurks, Petra
2018-04-05
Providing children with organizational strategy instruction on the Rey-Osterrieth Complex Figure (ROCF) has previously been found to improve organizational and accuracy performance on this task. It was unknown whether strategy instruction on the ROCF would also transfer to improved performance on copying and recalling another complex figure. Participants were 98 typically developing children (aged 9.5-12.6 years, M = 10.6). Children completed the ROCF (copy and recall) as a pretest. Approximately a month later, they were randomized to complete the ROCF either with strategy instruction, in the form of a stepwise administration of the ROCF, or again in the standard format. All children then copied and recalled the Modified Taylor Complex Figure (MTCF). All productions were assessed in terms of organization, accuracy, and completion time. Organization scores for the MTCF did not differ between the two groups for the copy production but did differ for the recall production, indicating transfer. Accuracy and completion times did not differ between groups. Performance on all measures except copy accuracy improved between the pretest ROCF and the posttest MTCF for both groups, suggesting practice effects. The findings indicate that transfer of strategy instruction from one complex figure to another is present only for the organization of recalled information. The increase in RCF-OSS scores did not lead to higher accuracy or to faster copying or recall.
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; 
Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
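The study's central result is that applying external calibration correction factors collapses the instrument-to-instrument spread of s-values. A minimal sketch of that bookkeeping follows; the function names are hypothetical, and combining the four factors as a simple product is an assumption for illustration, not the study's exact procedure.

```python
import statistics

def correct_s_values(raw_s, time_f, velocity_f, temp_f, radial_f):
    """Apply per-instrument correction factors (elapsed time, scan velocity,
    temperature, radial magnification) to raw sedimentation coefficients.
    Combining them as a product is an illustrative assumption."""
    return [s * ft * fv * fT * fr for s, ft, fv, fT, fr
            in zip(raw_s, time_f, velocity_f, temp_f, radial_f)]

def spread(s_values):
    """Mean and standard deviation, for comparing raw vs corrected spread."""
    return statistics.mean(s_values), statistics.stdev(s_values)
```

Comparing `spread` on raw versus corrected s-values is how a reduction like the reported (4.304 ± 0.188) S to 4.325 ± 0.030 S would be quantified.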
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. 
W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164
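The combined application of multiplicative correction factors described above can be sketched as follows; the factor names and example values are illustrative stand-ins, not the study's actual calibration formulas.

```python
def corrected_s(s_measured, f_time=1.0, f_temperature=1.0, f_radial=1.0):
    """Apply multiplicative calibration corrections to an experimental
    sedimentation coefficient (in Svedberg units). A simplified sketch:
    the real study derives each factor from external calibration
    references for elapsed time, scan velocity, temperature and radial
    magnification."""
    return s_measured * f_time * f_temperature * f_radial

# Example: hypothetical 0.5% time-base and -0.2% radial-magnification corrections
s = corrected_s(4.304, f_time=1.005, f_radial=0.998)
```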
Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J
2015-08-01
This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). The multipath and non-line-of-sight errors from low-power wireless localization systems commonly result in outliers, affecting the positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is compared experimentally against that of the standard KF. The results show that the proposed algorithm can maintain high accuracy when tracking the position of the sensor in the presence of outliers.
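As a minimal sketch of the filter/smoother chain the abstract builds on, the following implements a 1-D random-walk Kalman filter followed by a Rauch-Tung-Striebel backward pass. The paper's cascaded, adaptively weighted BLE/IMU fusion is considerably richer; `q` and `r` here are assumed process and measurement noise variances.

```python
import numpy as np

def kf_rts(z, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Kalman filter for a scalar random-walk state, then a
    Rauch-Tung-Striebel smoother over the filtered trajectory."""
    n = len(z)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered state / variance
    xp = np.zeros(n); pp = np.zeros(n)   # one-step predictions
    x, p = x0, p0
    for k in range(n):
        x_pred, p_pred = x, p + q        # predict (identity dynamics)
        xp[k], pp[k] = x_pred, p_pred
        K = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + K * (z[k] - x_pred) # update with measurement
        p = (1.0 - K) * p_pred
        xf[k], pf[k] = x, p
    xs = xf.copy()                       # RTS backward pass
    for k in range(n - 2, -1, -1):
        C = pf[k] / pp[k + 1]
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    return xs

rng = np.random.default_rng(0)
z = 1.0 + 0.5 * rng.standard_normal(200)  # noisy observations of position 1.0
xs = kf_rts(z)
```

The smoothed track `xs` has much lower variance than the raw measurements, which is the effect the paper exploits after its adaptive outlier weighting.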
Reproducibility in light microscopy: Maintenance, standards and SOPs.
Deagle, Rebecca C; Wee, Tse-Luen Erika; Brown, Claire M
2017-08-01
Light microscopy has grown to be a valuable asset in both the physical and life sciences. It is a highly quantitative method available in individual research laboratories and often centralized in core facilities. However, although quantitative microscopy is becoming a customary tool in research, it is rarely standardized. To achieve accurate quantitative microscopy data and reproducible results, three levels of standardization must be considered: (1) aspects of the microscope, (2) the sample, and (3) the detector. The accuracy of the data is only as reliable as the imaging system itself, thereby imposing the need for routine standard performance testing. Depending on the task, some maintenance procedures should be performed once a month, some before each imaging session, and others annually. This text should serve as a resource for researchers to integrate with their own standard operating procedures to ensure the highest quality quantitative microscopy data. Copyright © 2017. Published by Elsevier Ltd.
Performance testing of radiobioassay laboratories: In vivo measurements, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacLellan, J.A.; Traub, R.J.; Olsen, P.C.
1990-04-01
A study of two rounds of in vivo laboratory performance testing was undertaken by Pacific Northwest Laboratory (PNL) to determine the appropriateness of the in vivo performance criteria of draft American National Standards Institute (ANSI) standard ANSI N13.3, "Performance Criteria for Bioassay." The draft standard provides guidance to in vivo counting facilities regarding the sensitivity, precision, and accuracy of measurements for certain categories of commonly assayed radionuclides and critical regions of the body. This report concludes the testing program by presenting the results of the Round Two testing. Testing involved two types of measurements: chest counting for radionuclide detection in the lung, and whole body counting for detection of uniformly distributed material. Each type of measurement was further divided into radionuclide categories as defined in the draft standard. The appropriateness of the draft standard criteria was judged, from the results of both Round One and Round Two testing, by measuring laboratories' ability to attain them. The testing determined that the performance criteria are set at attainable levels, and the majority of in vivo monitoring facilities passed the criteria when complete results were submitted. 18 refs., 18 figs., 15 tabs.
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it is required to simultaneously analyze a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
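To make the pooling step concrete, here is a deliberately simplified sketch: inverse-variance fixed-effect pooling of study proportions on the logit scale. It is a univariate stand-in for the bivariate random-effects model the review recommends, which models sensitivity and specificity jointly with a between-study correlation; the study values below are invented for illustration.

```python
import math

def pool_logit(proportions, sizes):
    """Pool study proportions (e.g. per-study sensitivities) by
    inverse-variance weighting on the logit scale, with a 0.5
    continuity correction, then back-transform."""
    wsum = lsum = 0.0
    for p, n in zip(proportions, sizes):
        k = p * n                                     # event count
        logit = math.log((k + 0.5) / (n - k + 0.5))
        var = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)   # approximate variance
        wsum += 1.0 / var
        lsum += logit / var
    pooled_logit = lsum / wsum
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Three hypothetical studies: (sensitivity, diseased subjects)
pooled_sens = pool_logit([0.80, 0.85, 0.90], [50, 100, 80])
```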
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Drzymała, Jan; Abramski, Krzysztof M.
2014-07-01
Laser-induced breakdown spectroscopy (LIBS), like many other spectroscopic techniques, is a comparative method. Typically, in qualitative analysis, a synthetic certified standard with a well-known elemental composition is used to calibrate the system. Nevertheless, in all laser-induced techniques, such calibration can affect the accuracy through differences in the overall composition of the chosen standard. There are also some intermediate factors which can cause imprecision in measurements, such as optical absorption, surface structure and thermal conductivity. In this work, the calibration performed for the LIBS technique utilizes pellets made directly from the tested materials (old, well-characterized samples). This choice produces a considerable improvement in the accuracy of the method. The technique was adopted for the determination of trace elements in industrial copper concentrates, standardized by conventional atomic absorption spectroscopy with a flame atomizer. A series of copper flotation concentrate samples was analyzed for three elements: silver, cobalt and vanadium. We also proposed a method of post-processing the measurement data to minimize matrix effects and permit reliable analysis. It has been shown that the described technique can be used in qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates. It was noted that the final validation of such a methodology is limited mainly by the accuracy of the characterization of the standards.
Performance Analysis of Low-Cost Single-Frequency GPS Receivers in Hydrographic Surveying
NASA Astrophysics Data System (ADS)
Elsobeiey, M.
2017-10-01
The International Hydrographic Organization (IHO) has issued standards that specify the minimum requirements for executing the different types of hydrographic surveys that collect data used to compile navigational charts. Such standards are updated from time to time to reflect new survey techniques and practices, and must be met to assure both surface navigation safety and marine environment protection. Hydrographic surveys are classified into four orders, namely special order, order 1a, order 1b, and order 2. The order of hydrographic survey to use should be determined in accordance with the importance to the safety of navigation in the surveyed area. Typically, geodetic-grade dual-frequency GPS receivers are utilized for position determination during data collection in hydrographic surveys. However, with the evolution of high-sensitivity low-cost single-frequency receivers, it is very important to evaluate the performance of such receivers. This paper investigates the performance of low-cost single-frequency GPS receivers in hydrographic surveying applications. The main objective is to examine whether low-cost single-frequency receivers fulfil the IHO standards for hydrographic surveys. It is shown that the low-cost single-frequency receivers meet the IHO horizontal accuracy requirements for all hydrographic survey orders at any depth. However, the single-frequency receivers meet only the order 2 vertical accuracy requirements, at depths greater than or equal to 100 m.
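For reference, the IHO vertical accuracy test has the form TVU = sqrt(a^2 + (b*d)^2), where d is depth in metres. The a/b constants below are the per-order values from IHO S-44 (5th edition), quoted from the standard as an assumption for illustration; consult the current edition before relying on them.

```python
import math

# Assumed IHO S-44 (5th ed.) coefficients: order -> (a [m], b [dimensionless])
S44_TVU = {"special": (0.25, 0.0075),
           "1a": (0.5, 0.013),
           "1b": (0.5, 0.013),
           "2": (1.0, 0.023)}

def max_tvu(order, depth_m):
    """Maximum allowable total vertical uncertainty (95% confidence)
    for a given survey order and depth."""
    a, b = S44_TVU[order]
    return math.sqrt(a ** 2 + (b * depth_m) ** 2)

# At the 100 m depth cited in the abstract, order 2 allows ~2.5 m TVU
tvu_order2 = max_tvu("2", 100.0)
```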
How reliable is apparent age at death on cadavers?
Amadasi, Alberto; Merusi, Nicolò; Cattaneo, Cristina
2015-07-01
The assessment of age at death for identification purposes is a frequent and tough challenge for forensic pathologists and anthropologists. Too frequently, visual assessment of age is performed on well-preserved corpses, a method considered subjective and full of pitfalls, but whose level of inadequacy no one has yet tested or proven. This study consisted of the visual estimation of the age of 100 cadavers by a total of 37 observers among those usually attending the dissection room. Cadavers were of Caucasian ethnicity, well preserved, and belonged to individuals who died of natural death. All the evaluations were performed prior to autopsy. Observers assessed the age with ranges of 5 and 10 years, also indicating the body part they mainly observed in each case. Globally, the 5-year range had an accuracy of 35%, increasing to 69% with the 10-year range. The highest accuracy was in the 31-60 age category (74.7% with the 10-year range), and the skin seemed to be the most reliable age parameter (71.5% accuracy when observed), while the face was considered most frequently, in 92.4% of cases. A simple formula using the general "mean of averages" within the range given by the observers and the related standard deviations was then developed; the average values, with standard deviations of 4.62, lead to age estimates with ranges of some 20 years that seem to be fairly reliable and suitable, sometimes in alignment with classic anthropological methods, for the age estimation of well-preserved corpses.
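The interval construction implied above can be sketched as mean of averages plus/minus k standard deviations: with the reported SD of 4.62 years, a 2-SD band spans about 18.5 years, matching the ~20-year range in the abstract. The mean age used below is hypothetical.

```python
def age_interval(mean_of_averages, sd=4.62, k=2.0):
    """Age-at-death interval from the observers' 'mean of averages'
    plus/minus k standard deviations (sketch of the abstract's simple
    formula; the exact formula is not given in the abstract)."""
    half_width = k * sd
    return (mean_of_averages - half_width, mean_of_averages + half_width)

lo, hi = age_interval(45.0)   # hypothetical estimated mean of 45 years
```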
Prentice, Boone M; Chumbley, Chad W; Hachey, Brian C; Norris, Jeremy L; Caprioli, Richard M
2016-10-04
Quantitative matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) approaches have historically suffered from poor accuracy and precision mainly due to the nonuniform distribution of matrix and analyte across the target surface, matrix interferences, and ionization suppression. Tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity as well as improve signal-to-noise ratios by eliminating interferences from chemical noise, alleviating some concerns about dynamic range. However, conventional MALDI TOF/TOF modalities typically only scan for a single MS/MS event per laser shot, and multiplex assays require sequential analyses. We describe here new methodology that allows for multiple TOF/TOF fragmentation events to be performed in a single laser shot. This technology allows the reference of analyte intensity to that of the internal standard in each laser shot, even when the analyte and internal standard are quite disparate in m/z, thereby improving quantification while maintaining chemical specificity and duty cycle. In the quantitative analysis of the drug enalapril in pooled human plasma with ramipril as an internal standard, a greater than 4-fold improvement in relative standard deviation (<10%) was observed as well as improved coefficients of determination (R 2 ) and accuracy (>85% quality controls). Using this approach we have also performed simultaneous quantitative analysis of three drugs (promethazine, enalapril, and verapamil) using deuterated analogues of these drugs as internal standards.
Tu, S; Rosenthal, M; Wang, D; Huang, J; Chen, Y
2016-09-01
Controversies about the performance of conventional prenatal screening using maternal serum and ultrasound markers (PSMSUM) in detecting Down syndrome (DS) have been raised as a result of a recently available noninvasive prenatal test based on cell-free fetal DNA sequencing. To evaluate the screening performance of PSMSUM in detecting DS in Chinese women. An exhaustive literature search of MEDLINE, Embase, the Cochrane Library, ISI Web of Science and China BioMedical Disc. Primary studies, published from January 2004 to November 2014, which examined the screening accuracy of PSMSUM in pregnant Chinese women, compared with a reference standard, either chromosomal verification or inspection of the newborn. Data were extracted as screening positive/negative results for Down and non-Down syndrome pregnancies, allowing estimation of sensitivities and specificities. Risks of bias within and across studies were assessed. Screening accuracy measures were pooled using a bivariate random effects regression model. Seventy-eight studies, involving six categories of PSMSUM, were included. Second-trimester double serum [pooled sensitivity (SEN) = 0.80, pooled specificity (SPE) = 0.95] and triple-serum (pooled SEN = 0.79, pooled SPE = 0.96) screening were the predominant PSMSUM methods. The screening performances of these methods achieved the national standard but varied enormously across studies. First-trimester combined screening (pooled SEN = 0.92, pooled SPE = 0.93) and second-trimester quadruple serum screening (median SEN = 0.86, median SPE = 0.96) performed better, but were rarely used. Second-trimester maternal serum screening has the potential to achieve satisfactory screening performance in middle- and low-income countries. The reported enormous range in screening performance of second-trimester PSMSUM calls for urgent implementation of methods for performance optimization. 
Meta-analysis results show good accuracy of maternal serum and ultrasound screening for trisomy 21 in Chinese women. © 2016 Royal College of Obstetricians and Gynaecologists.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows separating systematic and random prediction errors from errors related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
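A minimal sketch of the regression-based assessment, using synthetic data: regressing measured force on model predictions, a slope near 1 and intercept near 0 indicate an unbiased model, while the residual scatter estimates the random error. The force values and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
predicted = np.linspace(800.0, 1500.0, 40)   # model predictions, kN (synthetic)
# Synthetic "measurements": 2% systematic over-prediction, -5 kN offset, noise
measured = 1.02 * predicted - 5.0 + 8.0 * rng.standard_normal(40)

# Fit measured = slope * predicted + intercept
slope, intercept = np.polyfit(predicted, measured, 1)
residual_sd = np.std(measured - (slope * predicted + intercept))
```

Deviation of `slope` from 1 (here ~2%) quantifies the systematic error; `residual_sd` bounds the combined random prediction and measurement error.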
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
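The order-of-accuracy analysis mentioned above is commonly verified numerically: halving the grid spacing should cut the error of a second-order scheme by roughly a factor of four. The sketch below does this check for a standard synchronous central-difference stencil; the asynchrony-tolerant stencils of the paper are derived so the same check holds when neighbour data is delayed.

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Verify second-order convergence on f = sin, whose derivative is cos
x0 = 0.7
err_h = abs(central_diff(np.sin, x0, 0.10) - np.cos(x0))
err_h2 = abs(central_diff(np.sin, x0, 0.05) - np.cos(x0))
ratio = err_h / err_h2   # ~4 for a second-order scheme
```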
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
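The Hellinger distance used above to compare statistical behaviour can be computed directly from two histograms of simulation output; this is a generic sketch, not the paper's exact binning or pipeline.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions
    (histograms normalised to sum to 1):
    H = sqrt(0.5 * sum((sqrt(p) - sqrt(q))**2)).
    Lies in [0, 1]; 0 means identical statistics."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

d_same = hellinger([0.5, 0.5], [0.5, 0.5])   # → 0.0
d_diff = hellinger([0.7, 0.3], [0.3, 0.7])
```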
Georgakis, D. Christine; Trace, David A.; Naeymi-Rad, Frank; Evens, Martha
1990-01-01
Medical expert systems require comprehensive evaluation of their diagnostic accuracy. The usefulness of these systems is limited without established evaluation methods. We propose a new methodology for evaluating the diagnostic accuracy and the predictive capacity of a medical expert system. We have adapted to the medical domain measures that have been used in the social sciences to examine the performance of human experts in the decision making process. Thus, in addition to the standard summary measures, we use measures of agreement and disagreement, and Goodman and Kruskal's λ and τ measures of predictive association. This methodology is illustrated by a detailed retrospective evaluation of the diagnostic accuracy of the MEDAS system. In a study using 270 patients admitted to the North Chicago Veterans Administration Hospital, diagnoses produced by MEDAS are compared with the discharge diagnoses of the attending physicians. The results of the analysis confirm the high diagnostic accuracy and predictive capacity of the MEDAS system. Overall, the agreement of the MEDAS system with the “gold standard” diagnosis of the attending physician has reached a 90% level.
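Goodman and Kruskal's λ, mentioned above as a measure of predictive association, can be computed from a cross-tabulation of system versus physician diagnoses. A sketch, with an invented 2x2 agreement table (the direction of prediction, rows predicting columns, is a choice made here for illustration):

```python
import numpy as np

def goodman_kruskal_lambda(table):
    """Goodman-Kruskal lambda: proportional reduction in error when
    predicting the column variable (e.g. physician diagnosis) from the
    row variable (e.g. system diagnosis). table[i, j] = joint counts."""
    t = np.asarray(table, float)
    n = t.sum()
    best_marginal = t.sum(axis=0).max()   # errors avoided by modal column alone
    best_per_row = t.max(axis=1).sum()    # errors avoided knowing the row
    return (best_per_row - best_marginal) / (n - best_marginal)

# Hypothetical counts: rows = system diagnosis, columns = physician diagnosis
lam = goodman_kruskal_lambda([[30, 5], [4, 41]])
```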
Evaluation of the semi-continuous Monitor for Aerosols and Gases in Ambient Air (MARGA, Metrohm Applikon B.V.) was conducted with an emphasis on examination of accuracy and precision associated with processing of chromatograms. Using laboratory standards and atmospheric measureme...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
.... Descriptive information using standardized descriptors for skills, abilities, interests, knowledge, work... which: Evaluate whether the proposed collection of information is necessary for the proper performance...; Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information...
On the Power Dependence of Extraneous Microwave Fields in Atomic Frequency Standards
2005-01-01
uncertainty”, Metrologia 35 (1998) pp. 829-845. [6] K. Dorenwendt and A. Bauch, “Spurious Microwave Fields in Caesium Atomic Beam Standards...Cesium Beam Clocks Induced by Microwave Leakages”, IEEE Trans. UFFC 45 (1998)728-738. [8] M. Abgrall, “Evaluation des Performances de la Fontaine...Proc of the EFTF 2005 – in press. [12] A. DeMarchi, “The Optically Pumped Caesium Fountain: 10-15 Frequency Accuracy?”, Metrologia 18 (1982) pp
Fluid-flow-rate metrology: laboratory uncertainties and traceabilities
NASA Astrophysics Data System (ADS)
Mattingly, G. E.
1991-03-01
Increased concerns for improved fluid flowrate measurement are driving the fluid metering community, meter manufacturers and users alike, to search for better verification and documentation for their fluid measurements. These concerns affect both our domestic and international marketplaces; they permeate our technologies: aerospace, chemical processing, automotive, bioengineering, etc. They involve public health and safety, and they impact our national defense. These concerns are based upon the rising value of fluid resources and products and the importance of critical material accountability. These values directly impact the accuracy needs of fluid buyers and sellers in custody transfers. These concerns impact the designers and operators of chemical process systems, where control and productivity optimization depend critically upon measurement precision. Public health and safety depend upon the quality of numerous pollutant measurements, both liquid and gaseous. The performance testing of engines, both automotive and aircraft, is critically based upon accurate fuel measurements, in both liquid and oxidizer streams. Fluid flowrate measurements are established differently from their counterparts in length and mass measurement systems, because the latter have the benefit of "identity" standards. For rate measurement systems, the metrology is based upon "derived standards". These use facilities and transfer standards which are designed, built, characterized and used to constitute basic measurement capabilities and quantify performance, i.e., accuracy and precision. Because "identity standards" do not exist for flow measurements, facsimiles or equivalents must
Research: Comparison of the Accuracy of a Pocket versus Standard Pulse Oximeter.
da Costa, João Cordeiro; Faustino, Paula; Lima, Ricardo; Ladeira, Inês; Guimarães, Miguel
2016-01-01
Pulse oximetry has become an essential tool in clinical practice. With patient self-management becoming more prevalent, pulse oximetry self-monitoring has the potential to become common practice in the near future. This study sought to compare the accuracy of two pulse oximeters, a high-quality standard pulse oximeter and an inexpensive pocket pulse oximeter, and to compare both devices with arterial blood co-oximetry oxygen saturation. A total of 95 patients (35.8% women; mean [±SD] age 63.1 ± 13.9 years; mean arterial pressure was 92 ± 12.0 mmHg; mean axillar temperature 36.3 ± 0.4°C) presenting to our hospital for blood gas analysis was evaluated. The Bland-Altman technique was performed to calculate bias and precision, as well as agreement limits. Student's t test was performed. Standard oximeter presented 1.84% bias and a precision error of 1.80%. Pocket oximeter presented a bias of 1.85% and a precision error of 2.21%. Agreement limits were -1.69% to 5.37% (standard oximeter) and -2.48% to 6.18% (pocket oximeter). Both oximeters presented bias, which was expected given previous research. The pocket oximeter was less precise but had agreement limits that were comparable with current evidence. Pocket oximeters can be powerful allies in clinical monitoring of patients based on a self-monitoring/efficacy strategy.
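The Bland-Altman quantities reported above (bias, precision, limits of agreement) are straightforward to compute; the SpO2 pairs below are hypothetical, not the study's data.

```python
import numpy as np

def bland_altman(device, reference):
    """Bland-Altman statistics for paired measurements: bias (mean
    difference), precision (SD of differences) and the 95% limits of
    agreement (bias +/- 1.96 SD)."""
    d = np.asarray(device, float) - np.asarray(reference, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical SpO2 pairs: pocket oximeter vs. co-oximetry (%)
bias, sd, (loa_low, loa_high) = bland_altman([97, 95, 92, 98, 96],
                                             [95, 94, 90, 96, 95])
```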
ACCESS: integration and pre-flight performance
NASA Astrophysics Data System (ADS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren N.; Pelton, Russell; Kurucz, Robert; Peacock, Grant O.; Hansen, Jason; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Wright, Edward L.; Orndorff, Joseph D.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Gardner, Jonathan P.; Bohlin, Ralph; Deustua, Susana E.; Dixon, W. V.; Sahnow, David J.; Perlmutter, Saul
2017-09-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35 - 1.7μm bandpass. This paper describes the sub-system testing, payload integration, avionics operations, and data transfer for the ACCESS instrument.
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs, along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4, is also demonstrated.
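The key building block, a rotation realised as dyadic lifting steps, can be sketched as below: rounding inside each step keeps the integer map exactly invertible, which is what eliminates drift. The angle and the 8-bit dyadic coefficients are illustrative choices, not the paper's parameters.

```python
def lift_rotation(x, y, p, u):
    """Approximate plane rotation via three lifting steps, with
    p ~ (cos(t) - 1)/sin(t) and u ~ sin(t). Rounding after each
    step keeps the map exactly invertible on integer inputs."""
    x = x + round(p * y)
    y = y + round(u * x)
    x = x + round(p * y)
    return x, y

def lift_rotation_inv(x, y, p, u):
    """Exact inverse: undo the lifting steps in reverse order."""
    x = x - round(p * y)
    y = y - round(u * x)
    x = x - round(p * y)
    return x, y

# Rotation by ~pi/8 using 8-bit dyadic coefficients (illustrative values):
# p ~ (cos(pi/8)-1)/sin(pi/8) ~ -51/256, u ~ sin(pi/8) ~ 98/256
p, u = -51 / 256, 98 / 256
xr, yr = lift_rotation(100, 50, p, u)
assert lift_rotation_inv(xr, yr, p, u) == (100, 50)   # lossless round trip
```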
Estimating local scaling properties for the classification of interstitial lung disease patterns
NASA Astrophysics Data System (ADS)
Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel
2011-03-01
Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images were acquired from HRCT chest exams. 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions including the Bonferroni correction. The best classification results were obtained by the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers with the highest accuracy (94.1%, 93.7%; for the k-NN and RBFN classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
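A minimal sketch of one of the standard GLCM features compared above: a co-occurrence matrix for the right-neighbour offset and its "homogeneity" statistic, implemented in plain NumPy (the study's feature set and offsets may differ).

```python
import numpy as np

def glcm_homogeneity(img, levels=4):
    """Gray-level co-occurrence matrix for the (0, 1) offset (each pixel
    paired with its right neighbour), normalised to probabilities, and
    the homogeneity statistic sum(P[i,j] / (1 + |i - j|))."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), float)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1.0
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm / (1.0 + np.abs(i - j))))

h_uniform = glcm_homogeneity(np.zeros((8, 8), int))   # → 1.0
```

A perfectly uniform patch scores 1.0; rapidly alternating texture (as in honeycombing-like patterns) scores lower.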
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies about fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published papers/articles. Other sources were checked to identify non-published literature. STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND DIAGNOSTIC METHODS: The eligibility criteria were studies that: (1) have assessed the accuracy of fluorescence-based methods of detecting caries lesions on occlusal, approximal or smooth surfaces, in both primary or permanent human teeth, in the laboratory or clinical setting; (2) have used a reference standard; and (3) have reported sufficient data relating to the sample size and the accuracy of methods. A diagnostic 2×2 table was extracted from included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (Diagnostic Odds Ratio and Summary Receiver-Operating curve). The analyses were performed separately for each method and different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five studies met the inclusion criteria from the 434 articles initially identified. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and found evidence of publication bias.
Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
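The Diagnostic Odds Ratio pooled in this kind of meta-analysis is computed per study from the 2×2 table; a sketch with invented counts follows.

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio from a 2x2 accuracy table:
    DOR = (TP * TN) / (FP * FN). A 0.5 continuity correction is applied
    to every cell when any cell is zero (one common convention)."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (v + 0.5 for v in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)

# Hypothetical study: 45 true positives, 10 false positives,
# 5 false negatives, 90 true negatives
dor = diagnostic_odds_ratio(45, 10, 5, 90)   # → 81.0
```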
Bayesian Estimation of Combined Accuracy for Tests with Verification Bias
Broemeling, Lyle D.
2011-01-01
This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either the "believe the positive" or the "believe the negative" rule; the true and false positive fractions for each rule are then computed for the two tests. In order to perform the analysis, the missing-at-random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
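The "believe the positive" rule has a simple closed form if the two tests are assumed conditionally independent given disease status, an assumption made here only for the sketch (the Bayesian treatment above does not require it, and handles the verification bias that this formula ignores).

```python
def believe_the_positive(se1, sp1, se2, sp2):
    """Combined sensitivity and specificity of two binary tests under
    the 'believe the positive' rule (declare positive if either test is
    positive), assuming conditional independence given disease status."""
    se = 1.0 - (1.0 - se1) * (1.0 - se2)   # miss only if both tests miss
    sp = sp1 * sp2                         # negative only if both are negative
    return se, sp

# Hypothetical operating points for two tests (e.g. CT and MRI)
se, sp = believe_the_positive(0.80, 0.90, 0.70, 0.95)
```

Combining this way raises sensitivity at the cost of specificity; "believe the negative" does the opposite.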
Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F
2015-11-01
We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has potential to complement clinical ROP diagnosis by ophthalmologists.
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
NASA Astrophysics Data System (ADS)
Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor
2014-03-01
In this study, spatial calibration of tracked ultrasound performed with a calibration phantom made of LEGO® bricks was compared against two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed with varied probes, varied tracking devices and the three aforementioned phantoms. The accuracy and variance of the spatial calibrations, found through the standard deviation and error of the 3-D image reprojection, were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest-performing printed phantom and that from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
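The reprojection-based comparison metrics above reduce to the mean and sample standard deviation of point-to-reference distances. A minimal sketch with hypothetical point data (not the study's implementation):

```python
import math

def reprojection_stats(points, reference):
    """Mean Euclidean reprojection error and its sample standard deviation
    for reconstructed 3-D points against a known reference position (mm)."""
    errors = [math.dist(p, reference) for p in points]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / (len(errors) - 1)
    return mean, math.sqrt(var)

# Two hypothetical reprojected points, both 2 mm from the reference.
print(reprojection_stats([(2.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
                         (0.0, 0.0, 0.0)))  # (2.0, 0.0)
```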
Solnica, Bogdan
2009-01-01
In this issue of Journal of Diabetes Science and Technology, Chang and colleagues present the analytical performance evaluation of the OneTouch® UltraVue™ blood glucose meter. This device is an advanced construction with a color display, used-strip ejector, no-button interface, and short assay time. Accuracy studies were performed using a YSI 2300 analyzer, considered the reference. Altogether, 349 pairs of results covering a wide range of blood glucose concentrations were analyzed. Patients with diabetes performed a significant part of the tests. Obtained results indicate good accuracy of OneTouch UltraVue blood glucose monitoring system, satisfying the International Organization for Standardization recommendations and thereby locating >95% of tests within zone A of the error grid. Results of the precision studies indicate good reproducibility of measurements. In conclusion, the evaluation of the OneTouch UltraVue meter revealed good analytical performance together with convenient handling useful for self-monitoring of blood glucose performed by elderly diabetes patients. PMID:20144432
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
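The idea of scoring subjects with a combined criterion and measuring its AUC can be illustrated with the nonparametric (Mann-Whitney) AUC estimate. The fixed-weight combination below is purely illustrative; the paper instead allocates subjects by the posterior predictive probability of disease from a Bayesian multivariate random-effects model:

```python
def empirical_auc(diseased_scores, healthy_scores):
    """Nonparametric (Mann-Whitney) AUC estimate: the probability that a
    randomly chosen diseased subject scores higher than a randomly chosen
    healthy one, counting ties as one half."""
    wins = 0.0
    for d in diseased_scores:
        for h in healthy_scores:
            wins += 1.0 if d > h else 0.5 if d == h else 0.0
    return wins / (len(diseased_scores) * len(healthy_scores))

def combine(x1, x2):
    """Illustrative fixed-weight combination of two biomarker readings
    into a single diagnostic score (weights are assumptions, not the
    paper's estimated combination)."""
    return 0.6 * x1 + 0.4 * x2

print(empirical_auc([2, 3], [1, 2]))  # 0.875
```

Comparing the AUC of the combined score to the marginal AUC of each biomarker alone mirrors the cAUC-versus-AUC comparison made in the paper.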
Hussein, Mohamed
2017-07-01
Accurate delivery of an injection into the intra-articular space of the knee is achieved in only two-thirds of knees when the standard anterolateral portal is used. The use of a modified full-flexion anterolateral portal provides a highly accurate, less painful, and more effective method for reproducible intra-articular injection without the need for ultrasonographic or fluoroscopic guidance in patients with dry osteoarthritis of the knee. The accuracy of needle placement was assessed in a prospective series of 140 consecutive injections in patients with symptomatic degenerative knee arthritis without clinical knee effusion. Procedural pain was determined using the Numerical Rating Scale. The accuracy of needle placement was confirmed with fluoroscopic imaging to document the dispersion pattern of injected contrast material. Using the standard anterolateral portal, 52 of 70 injections were confirmed to have been placed in the intra-articular space on the first attempt (accuracy rate, 74.2%). Using the modified full-flexion anterolateral portal, 68 of 70 injections were placed in the intra-articular space on the first attempt (accuracy rate, 97.1%; P < 0.001). This study revealed that using the modified full-flexion anterolateral portal for injections into the knee joint resulted in more accurate and less painful injections than those performed by the same orthopaedic surgeon using the standard anterolateral portal. In addition, the technique offered therapeutic delivery into the joint without the need for fluoroscopic confirmation. Therapeutic Level II.
NASA Astrophysics Data System (ADS)
Chen, Xi; Walker, John T.; Geron, Chris
2017-10-01
Evaluation of the semi-continuous Monitor for AeRosols and GAses in ambient air (MARGA, Metrohm Applikon B.V.) was conducted with an emphasis on examination of accuracy and precision associated with processing of chromatograms. Using laboratory standards and atmospheric measurements, analytical accuracy, precision and method detection limits derived using the commercial MARGA software were compared to an alternative chromatography procedure consisting of a custom Java script to reformat raw MARGA conductivity data and Chromeleon (Thermo Scientific Dionex) software for peak integration. Our analysis revealed issues with accuracy and precision resulting from misidentification and misintegration of chromatograph peaks by the MARGA automated software as well as a systematic bias at low concentrations for anions. Reprocessing and calibration of raw MARGA data using the alternative chromatography method lowered method detection limits and reduced variability (precision) between parallel sampler boxes. Instrument performance was further evaluated during a 1-month intensive field campaign in the fall of 2014, including analysis of diurnal patterns of gaseous and particulate water-soluble species (NH3, SO2, HNO3, NH4+, SO42- and NO3-), gas-to-particle partitioning and particle neutralization state. At ambient concentrations below ˜ 1 µg m-3, concentrations determined using the MARGA software are biased +30 and +10 % for NO3- and SO42-, respectively, compared to concentrations determined using the alternative chromatography procedure. Differences between the two methods increase at lower concentrations. We demonstrate that positively biased NO3- and SO42- measurements result in overestimation of aerosol acidity and introduce nontrivial errors to ion balances of inorganic aerosol. Though the source of the bias is uncertain, it is not corrected by the MARGA online single-point internal LiBr standard. 
Our results show that calibration and verification of instrument accuracy by multilevel external standards is required to adequately control analytical accuracy. During the field intensive, the MARGA was able to capture rapid compositional changes in PM2.5 due to changes in meteorology and air mass history relative to known source regions of PM precursors, including a fine NO3- aerosol event associated with intrusion of Arctic air into the southeastern US.
NASA Astrophysics Data System (ADS)
Iwaki, Y.
2010-07-01
The quality assurance (QA) of measurands has been discussed for many years within quality engineering (QE), and the relevant ISO standards merit further discussion. The aim is to identify the root fault elements that degrade measurement accuracy and to remove them. Accuracy assurance requires both reference materials (RMs) for calibration and improved accuracy in data processing; this research pursues the latter. In many cases, more than one fault element affecting measurement accuracy lies hidden in the data. QE estimates the frequency of fault states and ranks the fault factors, starting from the most significant, using Failure Mode and Effects Analysis (FMEA); it then traces the root causes of the fault elements with Root Cause Analysis (RCA) and Fault Tree Analysis (FTA), ordering the elements assumed to generate a specific fault. Assurance of measurement results has now become obligatory in proficiency testing (PT). The ISO Guide to the Expression of Uncertainty in Measurement (ISO-GUM) was issued in 1993 [1] as guidance for accuracy assurance in QA. Its analysis method has shifted from analysis of variance (ANOVA) to exploratory data analysis (EDA), which evaluates uncertainties step by step according to the law of propagation of uncertainty until the required assurance performance is obtained. When the true value is unknown, ISO-GUM substitutes a reference value, which is established by EDA and checked with the key comparison (KC) method, a test of the null hypothesis against the alternative. Assurance then proceeds in the ISO-GUM order: the standard uncertainties, the combined uncertainty over the many fault elements, and the expanded uncertainty. The assured value is obtained by multiplying the final expanded uncertainty [2] by the coverage factor k.
The value of k is calculated from the effective degrees of freedom, for which the number of samples is important; the degrees of freedom are based on a maximum-likelihood method with an improved information criterion (AIC) for quality control (QC). The assurance performance of ISO-GUM is decided by setting a confidence interval [3]. The same operation also yielded results for the decision level and minimum detectable concentration (DL/MDC). QE was developed for industrial QC, where statistics are treated by regression analysis under the assumption that measured values follow a normal distribution. However, the occurrence probability of fault elements accompanying natural phenomena is often non-normal, and such distributions require an assurance value obtained by methods other than the Type B statistical evaluation of ISO-GUM. Combining these approaches with improved worker training through QE has become important for securing the reliability and safety of measurement accuracy. This research applied these methods to blood chemical analysis (BCA) results in the field of clinical testing.
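The ISO-GUM chain of standard, combined and expanded uncertainty can be sketched for the simplest case of independent components with unit sensitivity coefficients (an assumption of this sketch, not of the guide, which handles correlated components and arbitrary sensitivities):

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainty components (law of propagation of uncertainty,
    all sensitivity coefficients taken as 1)."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k ~ 2 corresponds to roughly
    95 % coverage for a normal distribution."""
    return k * u_c

# Hypothetical components: calibration 3.0, repeatability 4.0 (same units).
u_c = combined_standard_uncertainty([3.0, 4.0])
print(u_c, expanded_uncertainty(u_c))  # 5.0 10.0
```

In practice k is derived from the effective degrees of freedom (e.g. via the Welch-Satterthwaite formula) rather than fixed at 2.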
McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H
2018-01-23
Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. 
The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing, as the near-field counterpart of remote sensing, offers a broad variety of applications. Imaging spectroscopy in general, and translational laboratory imaging spectroscopy in particular, can be utilized for a variety of research topics. Geoscientific applications require precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral-feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain for at-surface reflectance retrieval, called GeoMAP-Trans, is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module, each comprising several processing steps that are described in detail. The processing chain was adapted to the widely used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides high-quality results, offers broad applicability through its generic design, and might be the first of its kind to be published. A high radiometric accuracy is achieved by incorporating the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is better than 1 μpixel. The spectral accuracy was estimated in relative terms by comparing spectra from standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features.
It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using nitrate standard solutions of 0 to 50 mg/L, and the correlation coefficient of the standard calibration linear regression was 0.9981. The method detection limit (MDL) was determined to be 0.1294 mg/L and the limit of quantitation (LOQ) 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, with the response linear for nitrate concentrations of 10 to 50 mg/L at a 99% confidence level. Accuracy, determined through the recovery value, was 109.1907%. Precision, expressed as the percent relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the method was verified under the laboratory conditions.
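Two of the verification parameters, recovery and repeatability %RSD, reduce to short calculations. A sketch with illustrative numbers, not the study's data:

```python
def recovery_percent(measured, spiked):
    """Recovery (%) of a known spiked concentration, used here as the
    accuracy measure."""
    return 100.0 * measured / spiked

def relative_std_dev_percent(values):
    """Percent relative standard deviation (%RSD) of repeatability
    replicates, using the sample standard deviation."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * var ** 0.5 / mean

# Hypothetical replicates and spike level (mg/L).
print(recovery_percent(10.9, 10.0))
print(relative_std_dev_percent([9.0, 10.0, 11.0]))
```

A recovery near 100% and a small %RSD indicate acceptable accuracy and precision, respectively, which is how the 109.19% and 1.0886% figures above are interpreted.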
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the accuracies they provide. We use full-scale simulations in a realistic environment to investigate whether standard processing techniques suffice to fully exploit the new sensor standards. We achieve this by performing full numerical closed-loop simulations based on the integral equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to gravity field solutions.
As a next step, we analyze the contribution of individual error sources to the system's error budget: sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, and temporal and spatial aliasing errors. We pay special attention to the error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and to their consistent stochastic modeling within the adjustment process.
Gardner, Ian A; Whittington, Richard J; Caraguel, Charles G B; Hick, Paul; Moody, Nicholas J G; Corbeil, Serge; Garver, Kyle A.; Warg, Janet V.; Arzul, Isabelle; Purcell, Maureen; St. J. Crane, Mark; Waltzek, Thomas B.; Olesen, Niels J; Lagno, Alicia Gallardo
2016-01-01
Complete and transparent reporting of key elements of diagnostic accuracy studies for infectious diseases in cultured and wild aquatic animals benefits end-users of these tests, enabling the rational design of surveillance programs, the assessment of test results from clinical cases and comparisons of diagnostic test performance. Based on deficiencies in the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines identified in a prior finfish study (Gardner et al. 2014), we adapted the Standards for Reporting of Animal Diagnostic Accuracy Studies—paratuberculosis (STRADAS-paraTB) checklist of 25 reporting items to increase their relevance to finfish, amphibians, molluscs, and crustaceans and provided examples and explanations for each item. The checklist, known as STRADAS-aquatic, was developed and refined by an expert group of 14 transdisciplinary scientists with experience in test evaluation studies using field and experimental samples, in operation of reference laboratories for aquatic animal pathogens, and in development of international aquatic animal health policy. The main changes to the STRADAS-paraTB checklist were to nomenclature related to the species, the addition of guidelines for experimental challenge studies, and the designation of some items as relevant only to experimental studies and ante-mortem tests. We believe that adoption of these guidelines will improve reporting of primary studies of test accuracy for aquatic animal diseases and facilitate assessment of their fitness-for-purpose. Given the importance of diagnostic tests to underpin the Sanitary and Phytosanitary agreement of the World Trade Organization, the principles outlined in this paper should be applied to other World Organisation for Animal Health (OIE)-relevant species.
Age-Related Differences in Listening Effort During Degraded Speech Recognition.
Ward, Kristina M; Shen, Jing; Souza, Pamela E; Grieco-Calub, Tina M
The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. 
These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control.
Christiansen, Mark; Greene, Carmine; Pardo, Scott; Warchal-Windham, Mary Ellen; Harrison, Bern; Morin, Robert; Bailey, Timothy S
2017-05-01
These studies investigated the accuracy of the new Contour® Next ONE blood glucose monitoring system (BGMS) that is designed to sync with the Contour™ Diabetes app on a smartphone or tablet. A laboratory study tested fingertip capillary blood samples from 100 subjects in duplicate using 3 test strip lots, based on ISO 15197:2013 Section 6.3 analytical accuracy standards. A clinical study assessed accuracy per ISO 15197:2013 Section 8 criteria. Subjects with (n = 333) or without (n = 43) diabetes and who had not used the BGMS previously were enrolled. Each subject performed a self-test using the BGMS, which was repeated by a site staff member. Alternate site tests and venipunctures were also performed for analysis. A questionnaire was provided to assess user feedback on ease of use. In the laboratory study, 100% (600/600) of combined results for all 3 test strip lots met ISO 15197:2013 Section 6.3 accuracy criteria. In the clinical study, among subjects with diabetes, 99.4% (327/329) of subject self-test results, 99.7% (331/332) of results obtained by study staff, 97.2% (309/318) of subject palm results, and 100% (330/330) of venous results met ISO 15197:2013 Section 8 accuracy criteria. Moreover, 97.6% (321/329) of subject self-test results were within ±10 mg/dl (±0.6 mmol/L) or ±10% of the YSI reference result. Questionnaire results indicated that most subjects considered the system easy to use. The BGMS exceeded ISO 15197:2013 accuracy criteria in the laboratory and in a clinical setting.
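The ISO 15197:2013 per-reading accuracy bands referenced above are commonly summarized as ±15 mg/dl at reference values below 100 mg/dl and ±15% at or above, with at least 95% of results required to comply. A sketch of that check, using hypothetical reading pairs:

```python
def within_iso_15197_2013(meter, reference):
    """True if a single meter reading (mg/dL) falls inside the
    ISO 15197:2013 per-reading accuracy band."""
    if reference < 100:
        return abs(meter - reference) <= 15            # +/- 15 mg/dL
    return abs(meter - reference) <= 0.15 * reference  # +/- 15 %

def percent_meeting(pairs):
    """Share of (meter, reference) pairs inside the band; the standard
    requires at least 95 % of results to comply."""
    hits = sum(1 for m, r in pairs if within_iso_15197_2013(m, r))
    return 100.0 * hits / len(pairs)

# Hypothetical pairs: one compliant, one 30 mg/dL off at 100 mg/dL.
print(percent_meeting([(90.0, 80.0), (130.0, 100.0)]))  # 50.0
```

Tighter proposed criteria, such as the ±10 mg/dl / ±10% evaluation reported above, can be tested by swapping in narrower limits.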
Performance analysis of multiple Indoor Positioning Systems in a healthcare environment.
Van Haute, Tom; De Poorter, Eli; Crombez, Pieter; Lemic, Filip; Handziski, Vlado; Wirström, Niklas; Wolisz, Adam; Voigt, Thiemo; Moerman, Ingrid
2016-02-03
The combination of an aging population and nursing staff shortages implies the need for more advanced systems in the healthcare industry. Many key enablers for the optimization of healthcare systems require provisioning of location awareness for patients (e.g. with dementia), nurses, doctors, assets, etc. Indoor Positioning Systems (IPSs) will therefore be indispensable in healthcare systems. However, although many IPSs have been proposed in the literature, most have been evaluated in non-representative environments such as office buildings rather than in a hospital. To remedy this, the paper evaluates the performance of existing IPSs in an operational modern healthcare environment: the "Sint-Jozefs kliniek Izegem" hospital in Belgium. The evaluation (data collection and data processing) follows a standardized methodology and assesses the point accuracy, room accuracy and latency of multiple IPSs. To evaluate the solutions, the position of a stationary device was requested at 73 evaluation locations. Using the same evaluation locations for all IPSs allowed the performance of all systems to be compared objectively. Several trends can be identified: Wi-Fi-based fingerprinting solutions achieve the best accuracy (point accuracy of 1.21 m and room accuracy of 98%), but they require calibration before use and need 5.43 s to estimate a location. On the other hand, proximity-based solutions (based on sensor nodes) are significantly cheaper to install, do not require calibration and still obtain acceptable room accuracy. In conclusion, Wi-Fi-based solutions have the most potential for an indoor positioning service when accuracy is the most important metric. Applying the fingerprinting approach with an anchor installed in every two rooms is the preferred solution for a hospital environment.
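The two headline metrics of this evaluation can be sketched as follows; the coordinates and room labels below are hypothetical, and point accuracy is assumed to be the mean Euclidean error over the evaluation locations:

```python
import math

def point_accuracy(estimates, truths):
    """Mean Euclidean error (m) over all evaluation points."""
    errs = [math.dist(e, t) for e, t in zip(estimates, truths)]
    return sum(errs) / len(errs)

def room_accuracy(estimated_rooms, true_rooms):
    """Fraction of position requests resolved to the correct room."""
    hits = sum(e == t for e, t in zip(estimated_rooms, true_rooms))
    return hits / len(true_rooms)

# Hypothetical evaluation points (x, y) in metres, and room labels
est = [(1.0, 2.0), (4.5, 1.0), (7.2, 3.1)]
tru = [(1.0, 3.0), (4.5, 1.0), (7.2, 3.1)]
print(point_accuracy(est, tru))   # mean of [1.0, 0.0, 0.0] m
print(room_accuracy(["A", "B", "C"], ["A", "B", "D"]))
```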
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: ease of procurement, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority, as the second choice. The determination results verified the evaluation ability of this model. The AHP allowed comprehensive consideration of the benefits and risks of the alternatives and proved an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
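The core AHP computation is deriving priority weights from a reciprocal pairwise-comparison matrix. A minimal sketch using the row geometric-mean approximation to the principal eigenvector (a standard AHP shortcut); the 3-criterion matrix is a hypothetical example on Saaty's 1-9 scale, not the paper's actual judgments:

```python
import math

def ahp_priorities(matrix):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    via the row geometric-mean approximation to the principal
    eigenvector."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: criterion 1 is 3x as important as 2,
# 5x as important as 3; criterion 2 is 2x as important as 3.
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_priorities(A)
print([round(x, 3) for x in w])
```

The resulting weights sum to 1 and preserve the stated ordering of the criteria.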
Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.
Bregni, Stefano
2016-04-01
The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
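The MAVAR and its TVAR rebranding can be computed directly from the textbook definition. A straightforward O(N·m) sketch from phase data (the optimized recursive form used in production tools is omitted); the formula follows the standard definition with averaging factor m and observation interval τ = m·τ0:

```python
def mod_allan_var(x, tau0, m):
    """Modified Allan variance from phase data x (seconds), sample
    interval tau0 (s), averaging factor m:
    MVAR = 1/(2 m^2 tau^2 (N-3m+1)) * sum_j [sum of 2nd differences]^2."""
    N = len(x)
    if N < 3 * m + 1:
        raise ValueError("need at least 3*m+1 phase samples")
    tau = m * tau0
    acc = 0.0
    for j in range(N - 3 * m + 1):
        inner = sum(x[i + 2 * m] - 2 * x[i + m] + x[i]
                    for i in range(j, j + m))
        acc += inner * inner
    return acc / (2.0 * m * m * tau * tau * (N - 3 * m + 1))

def time_var(x, tau0, m):
    """Time variance: TVAR = (tau^2 / 3) * MVAR."""
    tau = m * tau0
    return tau * tau / 3.0 * mod_allan_var(x, tau0, m)

# Linear phase drift (a constant frequency offset) has zero second
# difference, so its MVAR is exactly 0.
drift = [float(3 * i) for i in range(64)]
print(mod_allan_var(drift, 1.0, 4))  # → 0.0
```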
Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J
2015-01-01
Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart with a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the ‘Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with accuracy of optotype size ranging from 4.4% to 39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limits of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion At the time of the study, we did not identify a Snellen visual acuity app that could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
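The "mean difference with 95% limits of agreement" reported here is the standard Bland-Altman analysis for paired measurements. A minimal sketch on hypothetical paired logMAR values (the study's raw data are not reproduced):

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement:
    mean difference +/- 1.96 * SD of the paired differences."""
    d = [x - y for x, y in zip(a, b)]
    m, s = mean(d), stdev(d)
    return m, m - 1.96 * s, m + 1.96 * s

# Hypothetical paired logMAR acuities: smartphone app vs 6-m chart
app   = [0.00, 0.18, 0.30, 0.48, 1.00]
chart = [0.00, 0.20, 0.30, 0.40, 1.10]
bias, lo, hi = limits_of_agreement(app, chart)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```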
Competency-based assessment in surgeon-performed head and neck ultrasonography: A validity study.
Todsen, Tobias; Melchiors, Jacob; Charabi, Birgitte; Henriksen, Birthe; Ringsted, Charlotte; Konge, Lars; von Buchwald, Christian
2018-06-01
Head and neck ultrasonography (HNUS) is increasingly used as a point-of-care diagnostic tool by otolaryngologists. However, ultrasonography (US) is a very operator-dependent imaging modality. Hence, this study aimed to explore the diagnostic accuracy of surgeon-performed HNUS and to establish validity evidence for an objective structured assessment of ultrasound skills (OSAUS) used for competency-based assessment. A prospective experimental study. Six otolaryngologists and 11 US novices were included in a standardized test setup in which they performed focused HNUS of eight patients suspected of different head and neck lesions. Their diagnostic accuracy was calculated based on the US reports, and two blinded raters assessed the video-recorded US performance using the OSAUS scale. The otolaryngologists obtained a high diagnostic accuracy of 88% (range, 63%-100%), compared to 38% (range, 0%-63%) for the US novices; P < 0.001. The OSAUS score demonstrated good inter-case reliability (0.85) and inter-rater reliability (0.76), and significant discrimination between otolaryngologists and US novices; P < 0.001. A strong correlation between the OSAUS score and diagnostic accuracy was found (Spearman's ρ, 0.85; P < 0.001), and a pass/fail score was established at 2.8. Strong validity evidence supported the use of the OSAUS scale to assess HNUS competence, with good reliability, significant discrimination between US competence levels, and a strong correlation of assessment score to diagnostic accuracy. An OSAUS pass/fail score was established and could be used for competency-based assessment in surgeon-performed HNUS. NA. Laryngoscope, 128:1346-1352, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
The accuracy of an electromagnetic navigation system in lateral skull base approaches.
Komune, Noritaka; Matsushima, Ken; Matsuo, Satoshi; Safavi-Abbasi, Sam; Matsumoto, Nozomu; Rhoton, Albert L
2017-02-01
Image-guided optical tracking systems are being used with increased frequency in lateral skull base surgery. Recently, electromagnetic tracking systems have become available for use in this region. However, the clinical accuracy of the electromagnetic tracking system has not been examined in lateral skull base surgery. This study evaluates the accuracy of electromagnetic navigation in lateral skull base surgery. Cadaveric and radiographic study. Twenty cadaveric temporal bones were dissected in a surgical setting under a commercially available electromagnetic surgical navigation system. The target registration error (TRE) was measured at 28 surgical landmarks during and after performing the standard translabyrinthine and middle cranial fossa surgical approaches to the internal acoustic canal. In addition, three demonstrative procedures that necessitate navigation with high accuracy were performed: canalostomy of the superior semicircular canal from the middle cranial fossa, cochleostomy from the middle cranial fossa, and the infralabyrinthine approach to the petrous apex. Eleven of 17 (65%) of the targets in the translabyrinthine approach and five of 11 (45%) of the targets in the middle fossa approach could be identified in the navigation system with a TRE of less than 0.5 mm. The three accuracy-dependent procedures were completed without injury to important anatomical structures. The electromagnetic navigation system had sufficient accuracy to be used in the surgical setting. It was possible to perform complex procedures in the lateral skull base under the guidance of the electromagnetically tracked navigation system. N/A. Laryngoscope, 127:450-459, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
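Target registration error is simply the Euclidean distance between where the navigation system reports a landmark and where it actually is. A minimal sketch with hypothetical landmark coordinates, summarizing how many landmarks fall under the 0.5-mm threshold used in the study:

```python
import math

def target_registration_error(navigated, actual):
    """TRE (mm): Euclidean distance between the navigated and the
    actual position of a surgical landmark."""
    return math.dist(navigated, actual)

def fraction_submillimeter(pairs, threshold=0.5):
    """Fraction of landmarks localized within `threshold` mm."""
    hits = sum(target_registration_error(n, a) <= threshold
               for n, a in pairs)
    return hits / len(pairs)

# Hypothetical landmark coordinates in mm: (navigated, actual)
pairs = [
    ((10.2, 4.1, 7.7), (10.0, 4.0, 7.5)),   # TRE = 0.3 mm
    ((22.0, 9.0, 3.0), (22.0, 9.6, 3.8)),   # TRE = 1.0 mm
]
print(fraction_submillimeter(pairs))  # → 0.5
```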
Accuracy of ultrasound-guided nerve blocks of the cervical zygapophysial joints.
Siegenthaler, Andreas; Mlekusch, Sabine; Trelle, Sven; Schliessbach, Juerg; Curatolo, Michele; Eichenberger, Urs
2012-08-01
Cervical zygapophysial joint nerve blocks are typically performed with fluoroscopic needle guidance. Descriptions of ultrasound-guided block of these nerves are available, but only one small study compared ultrasound with fluoroscopy, and only for the third occipital nerve. To evaluate the potential usefulness of ultrasound guidance in clinical practice, studies that determine the accuracy of this technique using a validated control are essential. The aim of this study was to determine the accuracy of ultrasound-guided nerve blocks of the cervical zygapophysial joints using fluoroscopy as control. Sixty volunteers were studied. Ultrasound imaging was used to place the needle at the bony target of cervical zygapophysial joint nerve blocks. The levels of needle placement were determined randomly (three levels per volunteer). After ultrasound-guided needle placement and application of 0.2 ml contrast dye, fluoroscopic imaging was performed for later evaluation by a blinded pain physician and considered the gold standard. Raw agreement, chance-corrected agreement κ, and chance-independent agreement Φ between the ultrasound-guided placement and the fluoroscopic assessment were calculated to quantify accuracy. One hundred eighty needles were placed in 60 volunteers. Raw agreement was 87% (95% CI 81-91%), κ was 0.74 (0.64-0.83), and Φ 0.99 (0.99-0.99). Accuracy varied significantly between the different cervical nerves: it was low for the C7 medial branch, whereas all other levels showed very good accuracy. Ultrasound imaging is an accurate technique for performing cervical zygapophysial joint nerve blocks in volunteers, except for the medial branch blocks of C7.
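Raw agreement and chance-corrected agreement κ come from a simple 2x2 table of concordant and discordant judgments. A sketch with hypothetical counts (not the study's actual table), using the standard Cohen's kappa formula:

```python
def agreement_stats(a, b, c, d):
    """Raw agreement and Cohen's chance-corrected kappa from a 2x2
    table: a = both methods agree 'correct', d = both agree
    'incorrect', b and c = discordant counts."""
    n = a + b + c + d
    po = (a + d) / n                                     # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Hypothetical counts for 180 needle placements
po, kappa = agreement_stats(a=140, b=12, c=11, d=17)
print(round(po, 2), round(kappa, 2))  # → 0.87 0.52
```

Note how two raters can agree on 87% of cases yet have a much lower κ when the marginal distributions make chance agreement likely; that gap is exactly what κ corrects for.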
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
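TUNA-RP solves the 2-D nonlinear shallow water equations with a wet-dry moving-boundary algorithm. A deliberately minimal 1-D illustration of the underlying idea, using a first-order Lax-Friedrichs step, a simple wet-dry threshold, and periodic boundaries on a dam-break initial condition; this is a sketch of the equation class only, not TUNA-RP's actual scheme:

```python
import numpy as np

G = 9.81          # gravity (m/s^2)
DRY = 1e-6        # wet-dry threshold (m): cells below it are "dry"

def swe_step(h, hu, dx, dt):
    """One Lax-Friedrichs step of the 1-D nonlinear shallow water
    equations U_t + F(U)_x = 0, U = [h, hu], with periodic boundaries
    and a crude wet-dry treatment."""
    u = np.where(h > DRY, hu / np.maximum(h, DRY), 0.0)
    f1 = h * u                        # mass flux
    f2 = h * u * u + 0.5 * G * h * h  # momentum flux
    def lf(q, f):
        return 0.5 * (np.roll(q, 1) + np.roll(q, -1)) \
             - dt / (2 * dx) * (np.roll(f, -1) - np.roll(f, 1))
    h_new, hu_new = lf(h, f1), lf(hu, f2)
    h_new = np.maximum(h_new, 0.0)                # no negative depths
    hu_new = np.where(h_new > DRY, hu_new, 0.0)   # dry cells carry no momentum
    return h_new, hu_new

# Dam break over a partly dry bed: 1 m of water on the left, dry right.
n, dx = 200, 1.0
h = np.where(np.arange(n) < n // 2, 1.0, 0.0)
hu = np.zeros(n)
dt = 0.1 * dx / np.sqrt(G * 1.0)     # comfortably inside the CFL limit
for _ in range(50):
    h, hu = swe_step(h, hu, dx, dt)
print(float(h.sum() * dx))           # total mass stays ~100 m^2
```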
NASA Technical Reports Server (NTRS)
Carpenter, Paul
2003-01-01
Electron-probe microanalysis standards and issues related to measurement and accuracy of microanalysis will be discussed. Critical evaluation of standards based on homogeneity and comparison with wet-chemical analysis will be made. Measurement problems such as spectrometer dead-time will be discussed. Analytical accuracy issues will be evaluated for systems by alpha-factor analysis and comparison with experimental k-ratio databases.
Performance of Cleared Blood Glucose Monitors.
Klonoff, David C; Prahalad, Priya
2015-07-01
Cleared blood glucose monitor (BGM) systems do not always perform as accurately for users as they did to become cleared. We performed a literature review of recent publications between 2010 and 2014 that present data about the frequency of inaccurate performance using ISO 15197 2003 and ISO 15197 2013 as target standards. We performed an additional literature review of publications that present data about the clinical and economic risks of inaccurate BGMs for making treatment decisions or calibrating continuous glucose monitors (CGMs). We found 11 publications describing performance of 98 unique BGM systems. 53 of these 98 (54%) systems met ISO 15197 2003 and 31 of the 98 (32%) tested systems met ISO 15197 2013 analytical accuracy standards in all studies in which they were evaluated. Of the tested systems, 33 were identified by us as FDA-cleared. Among these FDA-cleared BGM systems, 24 out of 32 (75%) met ISO 15197 2003 and 15 out of 31 (48.3%) met ISO 15197 2013 in all studies in which they were evaluated. Among the non-FDA-cleared BGM systems, 29 of 65 (45%) met ISO 15197 2003 and 15 out of 65 (23%) met ISO 15197 2013 in all studies in which they were evaluated. It is more likely that an FDA-cleared BGM system, compared to a non-FDA-cleared BGM system, will perform according to ISO 15197 2003 (χ(2) = 6.2, df = 3, P = 0.04) and ISO 15197 2013 (χ(2) = 11.4, df = 3, P = 0.003). We identified 7 articles about clinical risks and 3 articles about economic risks of inaccurate BGMs. We conclude that a significant proportion of cleared BGMs do not perform at the level for which they were cleared or according to international standards of accuracy. Such poor performance leads to adverse clinical and economic consequences. © 2015 Diabetes Technology Society.
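The FDA-cleared versus non-cleared comparison is a chi-square test on counts of systems meeting the standard. A sketch of the plain 2x2 Pearson statistic on the ISO 15197 2013 counts quoted above (15 of 31 versus 15 of 65); note the paper itself reports a 3-df test over more categories, so the statistic below is a simplified illustration, not a reproduction of the published χ² values:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Met / did-not-meet ISO 15197 2013, FDA-cleared vs non-FDA-cleared:
# 15 of 31 cleared systems met it; 15 of 65 non-cleared systems did.
stat = chi2_2x2(15, 16, 15, 50)
print(round(stat, 2))  # → 6.26
```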
Use of the color trails test as an embedded measure of performance validity.
Henry, George K; Algina, James
2013-01-01
One hundred personal injury litigants and disability claimants referred for a forensic neuropsychological evaluation were administered both portions of the Color Trails Test (CTT) as part of a more comprehensive battery of standardized tests. Subjects who failed two or more free-standing tests of cognitive performance validity formed the Failed Performance Validity (FPV) group, while subjects who passed all free-standing performance validity measures were assigned to the Passed Performance Validity (PPV) group. A cutscore of ≥45 seconds to complete Color Trails 1 (CT1) was associated with a classification accuracy of 78%, good sensitivity (66%) and high specificity (90%), while a cutscore of ≥84 seconds to complete Color Trails 2 (CT2) was associated with a classification accuracy of 82%, good sensitivity (74%) and high specificity (90%). A CT1 cutscore of ≥58 seconds, and a CT2 cutscore ≥100 seconds was associated with 100% positive predictive power at base rates from 20 to 50%.
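Positive predictive power depends on the base rate as well as on sensitivity and specificity, which is why the abstract quotes it "at base rates from 20 to 50%". A sketch of that Bayes-rule calculation using the CT1 cutscore's reported operating characteristics:

```python
def ppv(sensitivity, specificity, base_rate):
    """Positive predictive power of a cutscore via Bayes' rule, as a
    function of the base rate of invalid performance."""
    tp = sensitivity * base_rate
    fp = (1 - specificity) * (1 - base_rate)
    return tp / (tp + fp)

# CT1 cutscore >= 45 s: sensitivity .66, specificity .90 (from the study)
for p in (0.20, 0.35, 0.50):
    print(p, round(ppv(0.66, 0.90, p), 2))

# A cutscore with perfect specificity yields PPV = 1.0 at any base rate,
# matching the 100% positive predictive power reported for the stricter
# CT1/CT2 cutscores:
print(ppv(0.74, 1.00, 0.20))  # → 1.0
```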
76 FR 55819 - Track Safety Standards; Concrete Crossties
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-09
... geometry car that had operated over BNSF's Seadrift subdivision on December 14, 2010. According to AAR... is surprised that AAR asserts at this stage in the rulemaking process that the technology to perform... accuracy specified by Sec. 213.234, or \\1/8\\ of an inch, without mandating which technology should be used...
Nakamura, Masakazu; Iso, Hiroyasu; Kitamura, Akihiko; Imano, Hironori; Noda, Hiroyuki; Kiyama, Masahiko; Sato, Shinichi; Yamagishi, Kazumasa; Nishimura, Kunihiro; Nakai, Michikazu; Vesper, Hubert W; Teramoto, Tamio; Miyamoto, Yoshihiro
2016-11-01
Background The US Centers for Disease Control and Prevention ensured adequate performance of the routine triglycerides methods used in Japan, with a chromotropic acid reference measurement procedure used by the Centers for Disease Control and Prevention lipid standardization programme as a reference point. We examined standardized data to clarify the performance of routine triglycerides methods. Methods The two routine triglycerides methods were the fluorometric method of Kessler and Lederer and the enzymatic method. The methods were standardized using 495 Centers for Disease Control and Prevention reference pools with 98 different concentrations ranging between 0.37 and 5.15 mmol/L in 141 survey runs. The triglycerides criteria for laboratories performing triglycerides analyses are: accuracy, as bias ≤5% from the Centers for Disease Control and Prevention reference value, and precision, as measured by CV, ≤5%. Results The correlation of the bias of both methods to the Centers for Disease Control and Prevention reference method was: y (%bias) = 0.516 × (Centers for Disease Control and Prevention reference value) − 1.292 (n = 495, R² = 0.018). Triglycerides bias at the medical decision points of 1.13, 1.69 and 2.26 mmol/L was −0.71%, −0.42% and −0.13%, respectively. For the combined precision, the equation y (CV) = −0.398 × (triglycerides value) + 1.797 (n = 495, R² = 0.081) was used. Precision was 1.35%, 1.12% and 0.90%, respectively. Triglycerides measurements at Osaka were shown to be stable for 36 years. Conclusions The epidemiologic laboratory in Japan met acceptable accuracy goals for 88.7% of all samples and acceptable precision goals for 97.8% of all samples measured through the Centers for Disease Control and Prevention lipid standardization programme, and demonstrated stable results for an extended period of time.
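The bias and precision figures at the three medical decision points follow directly from the two fitted equations. Evaluating them confirms the quoted values:

```python
def pct_bias(tg_mmol_l):
    """Fitted bias vs. the CDC reference (from the paper):
    %bias = 0.516 * TG - 1.292, TG in mmol/L."""
    return 0.516 * tg_mmol_l - 1.292

def pct_cv(tg_mmol_l):
    """Fitted combined precision: CV% = -0.398 * TG + 1.797."""
    return -0.398 * tg_mmol_l + 1.797

for tg in (1.13, 1.69, 2.26):   # medical decision points, mmol/L
    print(tg, round(pct_bias(tg), 2), round(pct_cv(tg), 2))
# reproduces the reported -0.71/-0.42/-0.13 %bias
# and 1.35/1.12/≈0.90 CV% values
```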
Automation of the anthrone assay for carbohydrate concentration determinations.
Turula, Vincent E; Gore, Thomas; Singh, Suddham; Arumugham, Rasappa G
2010-03-01
Reported is the adaptation of a manual polysaccharide assay applicable to glycoconjugate vaccines such as Prevenar to an automated liquid handling system (LHS) for improved performance. The anthrone assay is used for carbohydrate concentration determinations and was scaled to the microtiter plate format with appropriate mixing, dispensing, and measuring operations. Adaptation and development of the LHS platform was performed with both dextran polysaccharides of various sizes and pneumococcal serotype 6A polysaccharide (PnPs 6A). A standard plate configuration was programmed such that the LHS diluted both calibration standards and a test sample multiple times with six replicate preparations per dilution. This extent of replication minimized the effect of any single deviation or delivery error that might have occurred. Analysis of the dextran polymers ranging in size from 214 kDa to 3.755 MDa showed that regardless of polymer chain length the hydrolysis was complete, as evidenced by uniform concentration measurements. No plate positional absorbance bias was observed; of 12 plates analyzed to examine positional bias, the largest deviation observed was a 0.02% relative standard deviation (%RSD). The high-purity dextran also afforded the opportunity to assess LHS accuracy; nine replicate analyses of dextran yielded a mean accuracy of 101% recovery. As for precision, a total of 22 unique analyses were performed on a single lot of PnPs 6A, and the resulting variability was 2.5% RSD. This work demonstrated the capability of a LHS to perform the anthrone assay consistently, with a reduced assay cycle time for greater laboratory capacity.
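The two figures of merit quoted here, %RSD for precision and % recovery for accuracy, are quick to compute. A sketch on hypothetical replicate readings (the paper's raw replicates are not given):

```python
from statistics import mean, stdev

def percent_rsd(values):
    """Relative standard deviation as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

def percent_recovery(measured_mean, nominal):
    """Accuracy expressed as % recovery of the nominal concentration."""
    return 100.0 * measured_mean / nominal

# Hypothetical replicate dextran readings (ug/mL) vs a 50 ug/mL target
reps = [50.1, 50.9, 49.8, 50.6, 50.3]
print(round(percent_rsd(reps), 2))                    # → 0.85
print(round(percent_recovery(mean(reps), 50.0), 1))   # → 100.7
```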
NASA Astrophysics Data System (ADS)
Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung
2018-06-01
SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate because reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed using quantitatively accurate images. Hence, it is important to evaluate the performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured the recovery coefficient, uniformity, spatial resolution, system sensitivity and calibration factor for the mouse whole-body standard aperture. We also performed the experiments using Tc-99m to compare the results with those of Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolution of Lu-177, with and without a uniform background added, was comparable to that of Tc-99m in the axial, radial and tangential directions. The system sensitivity measured for Lu-177 was almost three times less than that of Tc-99m.
Albert, Mark V; Azeze, Yohannes; Courtois, Michael; Jayaraman, Arun
2017-02-06
Although commercially available activity trackers can aid in tracking therapy and recovery of patients, most devices perform poorly for patients with irregular movement patterns. Standard machine learning techniques can be applied to recorded accelerometer signals in order to classify the activities of ambulatory subjects with incomplete spinal cord injury in a way that is specific to this population and to the location of the recording, at home or in the clinic. Subjects were instructed to perform a standardized set of movements while wearing a waist-worn accelerometer in the clinic and at home. Activities included lying, sitting, standing, walking, wheeling, and stair climbing. Multiple classifiers and validation methods were used to quantify the ability of the machine learning techniques to distinguish the activities recorded in the lab or at home. In the lab, classifiers trained and tested using within-subject cross-validation provided an accuracy of 91.6%. When the classifier was trained on data collected in the lab but tested on at-home data, the accuracy fell to 54.6%, indicating distinct movement patterns between locations. However, the accuracy of the at-home classifications, when training the classifier with at-home data, improved to 85.9%. Individuals with unique movement patterns can benefit from tailored activity recognition algorithms, easily implemented using modern machine learning methods on collected movement data.
Zhang, He; Hou, Chang; Zhou, Zhi; Zhang, Hao; Zhou, Gen; Zhang, Gui
2014-01-01
The diagnostic performance of 64-detector computed tomographic angiography (CTA) for detection of small intracranial aneurysms (SIAs) was evaluated. In this prospective study, 112 consecutive patients underwent 64-detector CTA before volume-rendering rotation digital subtraction angiography (VR-RDSA) or surgery. VR-RDSA or intraoperative findings or both were used as the gold standards. The accuracy, sensitivity, specificity, and positive predictive values (PPV) and negative predictive values (NPV), as measures to detect or rule out SIAs, were determined by patient-based and aneurysm size-based evaluations. The reference standard methods revealed 84 small aneurysms in 71 patients. The results of patient-based 64-detector CTA evaluation for SIAs were: accuracy, 98.2%; sensitivity, 98.6%; specificity, 97.6%; PPV, 98.6%; and NPV, 97.6%. The aneurysm-based evaluation results were: accuracy, 96.8%; sensitivity, 97.6%; specificity, 95.1%; PPV, 97.6%; and NPV, 95.1%. Two false-positive and two false-negative findings for aneurysms <3 mm in size occurred in the 64-detector CTA analysis. The diagnostic performance of 64-detector CTA did not improve much compared with 16-detector CTA for detecting SIAs, especially for very small aneurysms. VR-RDSA is still necessary for patients with a history of subarachnoid hemorrhage if the CTA findings are negative. Copyright © 2012 by the American Society of Neuroimaging.
Yoon, Jong Lull; Cho, Jung Jin; Park, Kyung Mi; Noh, Hye Mi; Park, Yong Soon
2015-02-01
Associations between body mass index (BMI), body fat percentage (BF%), and health risks differ between Asian and European populations. BMI is commonly used to diagnose obesity; however, its accuracy in detecting adiposity in Koreans is unknown. The present cross-sectional study aimed at assessing the accuracy of BMI in determining BF%-defined obesity in 6,017 subjects (age 20-69 yr, 43.6% men) from the 2009 Korean National Health and Nutrition Examination Survey. We assessed the diagnostic performance of BMI using the Western Pacific Regional Office of World Health Organization reference standard for BF%-defined obesity by sex and age and identified the optimal BMI cut-off for BF%-defined obesity using receiver operating characteristic curve analysis. BMI-defined obesity (≥25 kg/m²) was observed in 38.7% of men and 28.1% of women, with a high specificity (89%, men; 84%, women) but poor sensitivity (56%, men; 72%, women) for BF%-defined obesity (25.2%, men; 31.1%, women). The optimal BMI cut-off (24.2 kg/m²) had 78% sensitivity and 71% specificity. BMI demonstrated limited diagnostic accuracy for adiposity in Korea. There was a −1.3 kg/m² difference in optimal BMI cut-offs between Korea and America, smaller than the 5-unit difference between the Western Pacific Regional Office and global World Health Organization obesity criteria.
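The ROC-based "optimal cut-off" is conventionally the threshold maximizing Youden's J = sensitivity + specificity − 1. A sketch on a tiny synthetic sample, deliberately constructed so that 24.2 kg/m² wins, to illustrate the mechanics rather than reproduce the survey data:

```python
def sens_spec(bmi_values, obese_flags, cutoff):
    """Sensitivity/specificity of the rule 'BMI >= cutoff' against
    body-fat-defined obesity (obese_flags: True when BF% exceeds the
    reference standard)."""
    tp = sum(b >= cutoff and o for b, o in zip(bmi_values, obese_flags))
    fn = sum(b < cutoff and o for b, o in zip(bmi_values, obese_flags))
    tn = sum(b < cutoff and not o for b, o in zip(bmi_values, obese_flags))
    fp = sum(b >= cutoff and not o for b, o in zip(bmi_values, obese_flags))
    return tp / (tp + fn), tn / (tn + fp)

def youden_optimal(bmi_values, obese_flags, cutoffs):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    return max(cutoffs,
               key=lambda c: sum(sens_spec(bmi_values, obese_flags, c)))

# Synthetic mini-sample: (BMI, BF%-defined obese?)
bmi   = [21.0, 23.5, 24.5, 25.5, 27.0, 29.0, 22.5, 26.0]
obese = [False, False, True, True, True, True, False, False]
best = youden_optimal(bmi, obese, [23.0, 24.2, 25.0, 26.0])
print(best)  # → 24.2
```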
Design of Malaria Diagnostic Criteria for the Sysmex XE-2100 Hematology Analyzer
Campuzano-Zuluaga, Germán; Álvarez-Sánchez, Gonzalo; Escobar-Gallo, Gloria Elcy; Valencia-Zuluaga, Luz Marina; Ríos-Orrego, Alexandra Marcela; Pabón-Vidal, Adriana; Miranda-Arboleda, Andrés Felipe; Blair-Trujillo, Silvia; Campuzano-Maya, Germán
2010-01-01
Thick film, the standard diagnostic procedure for malaria, is not always ordered promptly. A failsafe diagnostic strategy using an XE-2100 analyzer is proposed, and for this strategy, malaria diagnostic models for the XE-2100 were developed and tested for accuracy. Two hundred eighty-one samples were distributed into Plasmodium vivax, P. falciparum, and acute febrile syndrome groups for model construction. Model validation was performed using 60% of malaria cases and a composite control group of samples from AFS and healthy participants from endemic and non-endemic regions. For P. vivax, two observer-dependent models (accuracy = 95.3–96.9%), one non–observer-dependent model using built-in variables (accuracy = 94.7%), and one non–observer-dependent model using new and built-in variables (accuracy = 96.8%) were developed. For P. falciparum, two non–observer-dependent models (accuracies = 85% and 89%) were developed. These models could be used by health personnel or be integrated as a malaria alarm for the XE-2100 to prompt early malaria microscopic diagnosis. PMID:20207864
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics including measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants, comprising the Halstead Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. The comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
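The accuracy statistics listed here all derive from one confusion matrix. A sketch with hypothetical counts (chosen to total n = 117; the paper's actual cell counts are not reproduced):

```python
def accuracy_stats(tp, fp, fn, tn):
    """Standard diagnostic accuracy statistics from a confusion matrix
    for predicting impaired IADL capacity."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    lr_pos = sens / (1 - spec)          # positive likelihood ratio
    return {"sens": sens, "spec": spec, "ppv": ppv,
            "npv": npv, "LR+": lr_pos}

# Hypothetical counts for a battery predicting IADL impairment
stats = accuracy_stats(tp=30, fp=8, fn=10, tn=69)
print({k: round(v, 2) for k, v in stats.items()})
```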
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors, so the centroiding algorithm must achieve high accuracy for any observable. Following the approach used for Gaia, we adopt an LSF-fitting method as the centroiding algorithm and investigate its systematic error for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs used in a Principal Component Analysis. We show that the centroiding algorithm error decreases after this method is adopted.
Interpretation of bedside chest X-rays in the ICU: is the radiologist still needed?
Martini, Katharina; Ganter, Christoph; Maggiorini, Marco; Winklehner, Anna; Leupi-Skibinski, Katarzyna E; Frauenfelder, Thomas; Nguyen-Kim, Thi Dan Linh
2015-01-01
To compare the diagnostic accuracy of intensivists with that of radiologists in reading bedside chest X-rays. In a retrospective trial, 33 bedside chest X-rays were evaluated by five radiologists and five intensivists with different levels of experience. Images were evaluated for devices and lung pathologies. Interobserver agreement and diagnostic accuracy were calculated; computed tomography served as reference standard. Seniors had higher diagnostic accuracy than residents (mean-ExpB(Senior)=1.456; mean-ExpB(Resident)=1.635). Interobserver agreement for devices was more homogeneously distributed among radiologists than among intensivists (ExpB(Rad)=1.204-1.672; ExpB(Int)=1.005-2.368). Seniors of both disciplines had comparable diagnostic accuracy: no significant difference in diagnostic performance was seen between them, whereas resident intensivists might still benefit from an interdisciplinary dialogue. Copyright © 2015 Elsevier Inc. All rights reserved.
Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J
2012-01-01
Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated the manner in which test conditions are specified in FNAC DTA studies to determine which parameters are most commonly specified and the frequency with which they are specified and to see whether there is significant variability in reporting practice. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. A more complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
2013-07-01
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers have a higher failure rate than traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the transmission line be de-energized, which results in complicated operation and loss of supply. This paper proposes an online calibration system that can calibrate electronic current transformers without a power outage. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the accuracy of the combined clamp-shape coil can be verified, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves a high accuracy of up to class 0.05.
A test of the reward-value hypothesis.
Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D
2017-03-01
Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.
Hao, Pengyu; Wang, Li; Niu, Zheng
2015-01-01
A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of the outputs of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, and two representative counties in northern Xinjiang were selected as the study area. The single classifiers employed in this research were Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (the mean overall accuracy increased by 5%-10%, and the standard deviation of overall accuracy fell by around 1%). When the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with a mean overall accuracy higher by 1%-2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performance. Additionally, although object-based classification did not improve accuracy, it produced greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
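The two hybrid strategies can be illustrated schematically: hard-label voting (standing in for M-voting) versus fusing the per-class probability outputs (standing in for P-fusion; averaging is used here, though the abstract notes that product rules are also traditional). The classifier outputs and crop labels below are invented for illustration:

```python
from collections import Counter

def majority_vote(labels):
    """Hard-label voting: each single classifier casts one label."""
    return Counter(labels).most_common(1)[0][0]

def probabilistic_fusion(prob_dicts):
    """Probability fusion: average class-probability outputs, then argmax."""
    classes = set().union(*prob_dicts)
    fused = {c: sum(p.get(c, 0.0) for p in prob_dicts) / len(prob_dicts)
             for c in classes}
    return max(fused, key=fused.get)

# Hypothetical outputs of three single classifiers (e.g. RF, SVM, C5.0):
votes = ["cotton", "cotton", "maize"]
probs = [{"cotton": 0.55, "maize": 0.45},
         {"cotton": 0.51, "maize": 0.49},
         {"cotton": 0.10, "maize": 0.90}]
```

The two rules can disagree: here voting picks "cotton" (2 of 3 weak votes), while fusion picks "maize" (one classifier is far more confident), which is why fusing probabilities tends to help when single classifiers are poorly trained on small samples.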
Inattentional blindness increased with augmented reality surgical navigation.
Dixon, Benjamin J; Daly, Michael J; Chan, Harley H L; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C
2014-01-01
Augmented reality (AR) surgical navigation systems, designed to increase accuracy and efficiency, have been shown to negatively impact on attention. We wished to assess the effect "head-up" AR displays have on attention, efficiency, and accuracy, while performing a surgical task, compared with the same information being presented on a submonitor (SM). Fifty experienced otolaryngology surgeons (n = 42) and senior otolaryngology trainees (n = 8) performed an endoscopic surgical navigation exercise on a predissected cadaveric model. Computed tomography-generated anatomic contours were fused with the endoscopic image to provide an AR view. Subjects were randomized to perform the task with a standard endoscopic monitor with the AR navigation displayed on an SM or with AR as a single display. Accuracy, task completion time, and the recognition of unexpected findings (a foreign body and a critical complication) were recorded. Recognition of the foreign body was significantly better in the SM group (15/25 [60%]) compared with the AR alone group (8/25 [32%]; p = 0.02). There was no significant difference in task completion time (p = 0.83) or accuracy (p = 0.78) between the two groups. Providing identical surgical navigation on a SM, rather than on a single head-up display, reduced the level of inattentional blindness as measured by detection of unexpected findings. These gains were achieved without any measurable impact on efficiency or accuracy. AR displays may distract the user and we caution injudicious adoption of this technology for medical procedures.
Schnakers, Caroline; Vanhaudenhuyse, Audrey; Giacino, Joseph; Ventura, Manfredi; Boly, Melanie; Majerus, Steve; Moonen, Gustave; Laureys, Steven
2009-07-21
Previously published studies have reported that up to 43% of patients with disorders of consciousness are erroneously assigned a diagnosis of vegetative state (VS). However, no recent studies have investigated the accuracy of this grave clinical diagnosis. In this study, we compared consensus-based diagnoses of VS and minimally conscious state (MCS) to those based on a well-established standardized neurobehavioral rating scale, the JFK Coma Recovery Scale-Revised (CRS-R). We prospectively followed 103 patients (55 +/- 19 years) with mixed etiologies and compared the clinical consensus diagnosis provided by the physician on the basis of the medical staff's daily observations to diagnoses derived from CRS-R assessments performed by research staff. All patients were assigned a diagnosis of 'VS', 'MCS', or 'uncertain diagnosis.' Of the 44 patients diagnosed with VS based on the clinical consensus of the medical team, 18 (41%) were found to be in MCS following standardized assessment with the CRS-R. Of the 41 patients with a consensus diagnosis of MCS, 4 (10%) had emerged from MCS according to the CRS-R. We also found that the majority of patients assigned an uncertain diagnosis by clinical consensus (89%) were in MCS based on CRS-R findings. Despite the importance of diagnostic accuracy, the rate of misdiagnosis of VS has not substantially changed in the past 15 years. Standardized neurobehavioral assessment is a more sensitive means of establishing differential diagnosis in patients with disorders of consciousness than diagnosis by clinical consensus.
NASA Astrophysics Data System (ADS)
Kazantseva, L.
2011-09-01
The collection of photographic images at Kiev University Observatory covers a period of almost a hundred years and is of interest from both a scientific and a historical point of view. The observing techniques of the time, the processing of negatives and the making of copies, the photometric standards used with various photographic emulsions and materials, together with the preserved photographic equipment and astronomical instruments (from telescopes and a unique home-made photometer to plate cassettes), reflect the long history of photographic astronomy. On the one hand, celestial objects, astronomical events, and star fields recorded over such a long time interval carry valuable information. On the other hand, fully recovering that information presents many difficulties: even where the emulsion is well preserved after a hundred years, the standards for describing photographs changed repeatedly; not all observation logbooks survive; and sometimes it is not possible to establish which instrument was used. The systematization and cataloguing phase of the collection is therefore very important and quite difficult. Observations made under expedition conditions with various instruments require a comparative assessment of their accuracy. This work was performed on a series of collections: photographs were identified, standards were selected, and the scanned images of each series were compared with catalogue information by a standard method. In the future, such work will enable quick search and use of the images not only by object coordinates, date, and method of observation, but also by astrometric and photometric accuracy.
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle-mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs.
Wang, Siying; Pohl, Antje; Jaeschke, Timo; Czaplik, Michael; Köny, Marcus; Leonhardt, Steffen; Pohl, Nils
2015-01-01
In this paper an ultra-wideband 80 GHz FMCW-radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO(2) measurements as reference. The novel FMCW-radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate frequency and phase evaluation. This concept is validated with a radar system simulation and experimental measurements are performed with different radar sensor positions and orientations.
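The claim that a 10 GHz sweep combined with phase evaluation yields submillimeter accuracy can be made concrete with the standard FMCW relations. This is a sketch of the generic radar formulas, not the authors' signal-processing chain:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Coarse FMCW range resolution, set by the sweep bandwidth: c / (2B)."""
    return C / (2 * bandwidth_hz)

def phase_displacement(delta_phase_rad, carrier_hz):
    """Fine displacement inferred from the phase change of the reflected
    carrier; a motion of d changes the round-trip phase by 4*pi*d/lambda."""
    wavelength = C / carrier_hz
    return wavelength * delta_phase_rad / (4 * math.pi)

# A 10 GHz sweep gives ~1.5 cm resolution from the IF evaluation alone...
dr = range_resolution(10e9)
# ...while at an 80 GHz carrier, even 1 degree of phase shift corresponds
# to a displacement of only a few micrometres:
dd = phase_displacement(math.radians(1.0), 80e9)
```

This is why the combination of intermediate-frequency and phase evaluation mentioned in the abstract reaches the submillimeter skin displacements of respiration and heartbeat that the coarse range bins cannot resolve.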
Chernyshev, Oleg Y; Garami, Zsolt; Calleja, Sergio; Song, Joon; Campbell, Morgan S; Noser, Elizabeth A; Shaltoni, Hashem; Chen, Chin-I; Iguchi, Yasuyuki; Grotta, James C; Alexandrov, Andrei V
2005-01-01
We routinely perform an urgent bedside neurovascular ultrasound examination (NVUE) with carotid/vertebral duplex and transcranial Doppler (TCD) in patients with acute cerebral ischemia. We aimed to determine the yield and accuracy of NVUE to identify lesions amenable for interventional treatment (LAITs). NVUE was performed with portable carotid duplex and TCD using standardized fast-track (<15 minutes) insonation protocols. Digital subtraction angiography (DSA) was the gold standard for identifying LAIT. These lesions were defined as proximal intra- or extracranial occlusions, near-occlusions, > or =50% stenoses or thrombus in the symptomatic artery. One hundred and fifty patients (70 women, mean age 66+/-15 years) underwent NVUE at median 128 minutes after symptom onset. Fifty-four patients (36%) received intravenous or intra-arterial thrombolysis (median National Institutes of Health Stroke Scale (NIHSS) score 14, range 4 to 29; 81% had NIHSS > or =10 points). NVUE demonstrated LAITs in 98% of patients eligible for thrombolysis, 76% of acute stroke patients ineligible for thrombolysis (n=63), and 42% in patients with transient ischemic attack (n=33), P<0.001. Urgent DSA was performed in 30 patients on average 230 minutes after NVUE. Compared with DSA, NVUE predicted LAIT presence with 100% sensitivity and 100% specificity, although individual accuracy parameters for TCD and carotid duplex specific to occlusion location ranged 75% to 96% because of the presence of tandem lesions and 10% rate of no temporal windows. Bedside neurovascular ultrasound examination, combining carotid/vertebral duplex with TCD yields a substantial proportion of LAITs in excellent agreement with urgent DSA.
A Study on Performance and Safety Tests of Defibrillator Equipment.
Tavakoli Golpaygani, A; Movahedi, M M; Reza, M
2017-12-01
Nowadays, more than 10,000 different types of medical devices can be found in hospitals, and medical electrical equipment is employed in a wide variety of fields in the medical sciences, with different physiological effects and measurements. Hospitals and medical centers must ensure that their critical medical devices are safe, accurate, reliable, and operational at the required level of performance. Defibrillators are critical resuscitation devices, and the use of reliable defibrillators has led to more effective treatment and improved patient safety through better control and management of complications during cardiopulmonary resuscitation (CPR). The metrological reliability of twenty frequently used manual defibrillators in use in ten hospitals (4 private and 6 public) in one of the provinces of Iran was evaluated according to international and national standards. Quantitative analysis of control and instrument accuracy showed that the results for many units were critical, falling outside the standard limits, especially in devices with poor batteries. In the analysis of delivered-energy accuracy, only twelve units delivered acceptable output values, and the precision of the output energy measurements, especially in weak-battery conditions after activation of the discharge alarm, was low. These results indicate a need for new and strict regulations on periodic performance verification and medical equipment quality control programs, especially for high-risk instruments. It is also necessary to provide training courses for medical staff on the fundamentals of operation, performance parameters, and metrology in medicine, and on how to obtain accurate results, especially with high-risk medical devices.
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, to compare cross validation with validation using genetic material of the subsequent breeding cycle, and to investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could be observed only for sugar content; for standard ML the correlation dropped to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator of the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
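The core of the approach above is estimating shrunken marker effects in one cycle and applying them to genotypes of the next. A minimal sketch of generic ridge regression on toy -1/0/1 SNP codes (this is not the authors' heritability-based ridge-BLUP approximation, and all data here are invented):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ridge_effects(X, y, lam):
    """Shrunken marker effects: (X'X + lam*I)^-1 X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    return solve(XtX, Xty)

# Toy example: effects estimated on "cycle 1" lines (rows = lines,
# columns = SNP markers coded -1/0/1; phenotypes are invented)...
X1 = [[1, 0], [0, 1], [1, 1], [-1, 1]]
y1 = [2.0, -1.0, 1.0, -3.0]
beta = ridge_effects(X1, y1, lam=0.5)
# ...then applied to the genotype of a "cycle 2" candidate line:
pred = sum(g * b for g, b in zip([1, -1], beta))
```

The ridge penalty `lam` shrinks the effects toward zero; with hundreds of markers and a few hundred lines, as in the study, this shrinkage is what makes the effect estimates usable for prediction at all.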
Lucentini, Luca; Ferretti, Emanuele; Veschetti, Enrico; Achene, Laura; Turrio-Baldassarri, Luigi; Ottaviani, Massimo; Bogialli, Sara
2009-01-01
A simple and sensitive liquid chromatographic-tandem mass spectrometric (LC/MS/MS) method has been developed and validated to confirm and quantify acrylamide monomer (AA) in drinking water using [13C3]acrylamide as internal standard (IS). After preconcentration by solid-phase extraction with spherical activated carbon, analytes were chromatographed on an IonPac ICE-AS1 column (9 x 250 mm) under isocratic conditions using acetonitrile-water-0.1 M formic acid (43 + 52 + 5, v/v/v) as the mobile phase. Analysis was achieved using a triple-quadrupole mass analyzer equipped with a turbo ion spray interface. For confirmation and quantification of the analytes, MS data acquisition was performed in multireaction monitoring mode, selecting 2 precursor-to-product ion transitions for both AA and IS. The method was validated for linearity, sensitivity, accuracy, precision, extraction efficiency, and matrix effect. Linearity in tap water was observed over the concentration range 0.1-2.0 microg/L. Limits of detection and quantification were 0.02 and 0.1 microg/L, respectively. Interday and intraday assays were performed across 3 validation levels (0.1, 0.5, and 1.5 microg/L). Accuracy (as mean recovery) ranged from 89.3 to 96.2%, with relative standard deviation <7.98%. The performance characteristics of this LC/MS/MS method make it suitable for regulatory confirmatory analysis of AA in drinking water in compliance with European Union and U.S. Environmental Protection Agency standards.
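The validation figures quoted (linearity, LOD/LOQ, mean recovery) follow standard formulas. A minimal sketch using the common ICH-style 3.3*sigma/slope and 10*sigma/slope limits (an assumption; the paper may have derived its limits differently), with invented numbers:

```python
from statistics import mean

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod_loq(sigma_blank, slope):
    """ICH-style detection/quantification limits: 3.3*s/m and 10*s/m."""
    return 3.3 * sigma_blank / slope, 10 * sigma_blank / slope

def recovery_pct(measured, spiked):
    """Accuracy expressed as mean recovery across spiked replicates."""
    return 100 * mean(m / s for m, s in zip(measured, spiked))

# Invented calibration and spike-recovery data for illustration:
slope, intercept = linear_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
lod, loq = lod_loq(0.006, 2.0)
pct = recovery_pct([0.09, 0.11], [0.1, 0.1])
```

A quoted accuracy of "89.3 to 96.2% mean recovery" corresponds exactly to `recovery_pct` evaluated level by level across the three validation concentrations.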
Ferreira, Ana Paula A; Póvoa, Luciana C; Zanier, José F C; Ferreira, Arthur S
2017-02-01
The aim of this study was to assess the thorax-rib static method (TRSM), a palpation method for locating the seventh cervical spinous process (C7SP), and to report clinical data on the accuracy of this method and that of the neck flexion-extension method (FEM), using radiography as the gold standard. A single-blinded, cross-sectional diagnostic accuracy study was conducted. One hundred and one participants from a primary-to-tertiary health care center (63 men, 56 ± 17 years of age) had their neck palpated using the FEM and the TRSM. A single examiner performed both the FEM and TRSM in a random sequence. Radiopaque markers were placed at each location with the aid of an ultraviolet lamp. Participants underwent chest radiography for assessment of the superimposed inner body structure, which was located by using either the FEM or the TRSM. Accuracy in identifying the C7SP was 18% and 33% (P = .013) with use of the FEM and the TRSM, respectively. The cumulative accuracy considering both caudal and cephalic directions (C7SP ± 1SP) increased to 58% and 81% (P = .001) with use of the FEM and the TRSM, respectively. Age had a significant effect on the accuracy of FEM (P = .027) but not on the accuracy of TRSM (P = .939). Sex, body mass, body height, and body mass index had no significant effects on the accuracy of either the FEM (P = .209 or higher) or the TRSM (P = .265 or higher). The TRSM located the C7SP more accurately than the FEM at any given level of anatomic detail, although both methods still fell short of acceptable accuracy for a clinical setting. Copyright © 2016. Published by Elsevier Inc.
Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S
2018-05-01
We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
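Accuracy-as-concordance and its confidence interval, as reported above, are straightforward to compute. A sketch (the Wilson interval is my choice here; the paper does not state which interval it used, and the scores below are invented):

```python
import math

def concordance(assigned, reference):
    """Triage accuracy: fraction of scores matching the reference key."""
    hits = sum(a == r for a, r in zip(assigned, reference))
    return hits / len(reference)

def wilson_ci(p_hat, n, z=1.96):
    """Wilson 95% interval for a proportion; better behaved than the
    normal approximation at small per-scenario sample sizes."""
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical nurse-assigned ESI scores vs. an answer key for 5 scenarios:
acc = concordance([3, 2, 1, 4, 2], [3, 2, 2, 4, 1])  # 3 of 5 correct
ci = wilson_ci(acc, 5)
```

Subgroup accuracies (e.g. high- vs. medium-acuity scenarios) are obtained by filtering the paired lists before calling `concordance`.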
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated, and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was the determination of oligosaccharide losses within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD, and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
What Can the Diffusion Model Tell Us About Prospective Memory?
Horn, Sebastian S.; Bayen, Ute J.; Smith, Rebekah E.
2011-01-01
Cognitive process models, such as Ratcliff’s (1978) diffusion model, are useful tools for examining cost- or interference effects in event-based prospective memory (PM). The diffusion model includes several parameters that provide insight into how and why ongoing-task performance may be affected by a PM task and is ideally suited to analyze performance because both reaction time and accuracy are taken into account. Separate analyses of these measures can easily yield misleading interpretations in cases of speed-accuracy tradeoffs. The diffusion model allows us to measure possible criterion shifts and is thus an important methodological improvement over standard analyses. Performance in an ongoing lexical decision task (Smith, 2003) was analyzed with the diffusion model. The results suggest that criterion shifts play an important role when a PM task is added, but do not fully explain the cost effect on RT. PMID:21443332
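The diffusion model's core idea, noisy evidence accumulating toward one of two boundaries so that boundary separation trades speed for accuracy, can be sketched as a simulation. Parameter names and values here are illustrative, not fitted to Smith (2003):

```python
import random

def diffusion_trial(drift, boundary, noise=1.0, dt=0.001, t0=0.3,
                    rng=random):
    """One diffusion-model trial: evidence starts midway between 0 and
    `boundary` and drifts (plus Gaussian noise) until it hits a threshold.
    Returns (correct_response, reaction_time); t0 is non-decision time."""
    x, t = boundary / 2, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x >= boundary, t + t0

def simulate(n, drift, boundary, seed=1):
    """Mean accuracy and RT over n simulated trials."""
    rng = random.Random(seed)
    trials = [diffusion_trial(drift, boundary, rng=rng) for _ in range(n)]
    return (sum(r for r, _ in trials) / n,
            sum(t for _, t in trials) / n)

# Widening the boundary (a more cautious response criterion) slows RT
# but raises accuracy -- the speed-accuracy tradeoff the model separates:
acc_narrow, rt_narrow = simulate(200, drift=1.5, boundary=1.0, seed=1)
acc_wide, rt_wide = simulate(200, drift=1.5, boundary=2.0, seed=2)
```

Because a PM task could change drift rate (processing efficiency), boundary separation (criterion), or non-decision time, fitting these parameters jointly to RT and accuracy is what lets the model disentangle cost sources that separate RT and accuracy analyses would confound.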
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuerweger, C; European Cyberknife Center Munich, Munich, DE; Prins, P
Purpose: To assess characteristics and performance of the “Incise™” MLC (41 leaf pairs, 2.5mm width, FFF linac) mounted on the robotic SRS/SBRT platform “CyberKnife M6™” in a pre-clinical 5 months (11/2014–03/2015) test period. Methods: Beam properties were measured with unshielded diodes and EBT3 film. The CyberKnife workspace for MLC was analyzed by transforming robot node coordinates (cranial / body paths) into Euler geometry. Bayouth tests for leaf / bank position accuracy were performed in standard (A/P) and clinically relevant non-standard positions, before and after exercising the MLC for 10+ minutes. Total system and delivery accuracy were assessed in End-to-End testsmore » and dosimetric verification of exemplary plans. Stability over time was evaluated in Picket-Fence-and adapted Winston-Lutz-tests (AQA) for different collimator angles. Results: Penumbrae (80–20%, with 100%=2*dose at inflection point; SAD 80cm; 10cm depth) parallel / perpendicular to leaf motion were 2.87/2.64mm for the smallest (0×76×0.75cm{sup 2}) and 5.34/4.94mm for the largest (9.76×9.75cm{sup 2}) square field. MLC circular field penumbrae exceeded fixed cones by 10–20% (e.g. 60mm: 4.0 vs. 3.6mm; 20mm: 3.6 vs. 2.9mm). Interleaf leakage was <0.5%. Clinically accessible workspace with MLC covered (non-coplanar) gantry angles of [-113°;+112°] (cranial) and [-108°;+102°] (body), and collimator angles of [-100°;+107°] (cranial) and [-91°;+100°] (body). Average leaf position offsets were ≤0.2mm in 14 standard A/P Bayouth tests and ≤0.6mm in 8 non-standard direction tests. Pre-test MLC exercise increased jaggedness (range ±0.3mm vs. ±0.5mm) and allowed to identify one malfunctioning leaf motor. Total system accuracy with MLC was 0.39±0.06mm in 6 End-to-End tests. Picket-Fence and AQA showed no adverse trends during the test period. Conclusion: The Incise™ MLC for CyberKnife M6™ displayed high accuracy and mechanical stability over the test period. 
The specific CyberKnife geometry and performance after exercise demand dedicated QA measures. This work is in part funded by a research grant from Accuray Inc, Sunnyvale, USA. Erasmus MC Cancer Institute also has research collaborations with Elekta AB, Stockholm, Sweden. C Fuerweger has previously received speaker honoraria from Accuray Inc, Sunnyvale, USA.
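The 80–20% penumbra measurement described above can be sketched numerically. The profile below is a synthetic logistic edge, not measured film data; following the abstract's convention, 100% is taken as twice the dose at the inflection point:

```python
import numpy as np

def penumbra_width(x, dose):
    """80-20% penumbra width of a single, monotonically falling field edge.

    Normalization follows the convention quoted above: 100% equals twice
    the dose at the inflection point (approximated by the steepest-gradient
    position of the profile)."""
    x = np.asarray(x, dtype=float)
    dose = np.asarray(dose, dtype=float)
    infl = np.argmax(np.abs(np.gradient(dose, x)))   # inflection ~ steepest slope
    norm = dose / (2.0 * dose[infl])
    # np.interp needs increasing xp, so flip the falling edge before lookup
    x80 = np.interp(0.8, norm[::-1], x[::-1])
    x20 = np.interp(0.2, norm[::-1], x[::-1])
    return abs(x20 - x80)

# Synthetic logistic fall-off (illustrative only, not Incise MLC data)
x = np.linspace(-10, 10, 2001)          # position in mm
dose = 1.0 / (1.0 + np.exp(x / 1.5))
print(round(penumbra_width(x, dose), 2))
```

For this analytic edge the 80% and 20% levels sit at ±1.5·ln(4) mm, so the sketch should report about 4.16 mm.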
Setford, Steven; Smith, Antony; McColl, David; Grady, Mike; Koria, Krisna; Cameron, Hilary
2015-01-01
To assess laboratory and in-clinic performance of the OneTouch Select® Plus test system against the ISO 15197:2013 standard for measurement of blood glucose. System performance was assessed in the laboratory against key patient, environmental and pharmacologic factors. User performance was assessed in clinic by system-naïve lay-users. Healthcare professionals assessed system accuracy on subjects with diabetes in clinic. The system demonstrated high levels of performance, meeting ISO 15197:2013 requirements in laboratory testing (precision, linearity, hematocrit, temperature, humidity and altitude). System performance was tested against 28 interferents, with an adverse interfering effect only being recorded for pralidoxime iodide. Clinic user performance results fulfilled ISO 15197:2013 accuracy criteria. Subjects agreed that the color range indicator clearly showed if they were low, in-range or high and helped them better understand glucose results. The system evaluated is accurate and meets all ISO 15197:2013 requirements as per the tests described. The color range indicator helped subjects understand glucose results and supports patients in following healthcare professional recommendations on glucose targets.
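The ISO 15197:2013 system-accuracy criterion referenced above (at least 95% of meter readings within ±15 mg/dl of the reference below 100 mg/dl, or within ±15% at or above it) reduces to a simple check; the reading pairs below are invented for illustration:

```python
def meets_iso_15197_2013(pairs, threshold=100.0):
    """Check the ISO 15197:2013 system-accuracy criterion (sketch):
    >= 95% of meter readings must fall within +/-15 mg/dl of the reference
    below `threshold` mg/dl, or within +/-15% at or above it.
    `pairs` is a sequence of (reference, meter) glucose values in mg/dl."""
    within = 0
    for ref, meter in pairs:
        limit = 15.0 if ref < threshold else 0.15 * ref
        if abs(meter - ref) <= limit:
            within += 1
    fraction = within / len(pairs)
    return fraction >= 0.95, fraction

# Hypothetical data set: one reading out of twenty exceeds its limit -> exactly 95%
readings = [(80, 90)] + [(150, 150)] * 18 + [(200, 240)]
ok, frac = meets_iso_15197_2013(readings)
print(ok, frac)
```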
Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja
2017-05-01
Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T2-weighted short tau triple-inversion recovery (T2W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T2W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T2W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits of agreement: -12.68% to 13.68% and r = 0.87; P = .001; mean error = -1.5%; limits: -8.0% to 5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits: -7.55% to 9.37%). T2W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
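The mean error and limits of agreement reported above are Bland-Altman statistics; a minimal sketch, using invented AAR values rather than the study's data, is:

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman bias (mean error) and 95% limits of agreement between
    two measurement methods: bias +/- 1.96 * SD of the paired differences."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical AAR measurements (% of left ventricle) from two methods
aar_stir = [30.0, 25.0, 40.0, 35.0]
aar_ref  = [28.0, 27.0, 38.0, 34.0]
bias, lo, hi = limits_of_agreement(aar_stir, aar_ref)
print(round(bias, 2))
```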
Leardini, Alberto; Lullini, Giada; Giannini, Sandro; Berti, Lisa; Ortolani, Maurizio; Caravaggi, Paolo
2014-09-11
Several rehabilitation systems based on inertial measurement units (IMUs) are entering the market for the control of exercises and for measuring performance progression, particularly for recovery after lower-limb orthopaedic treatments. IMUs are easy to wear, even by the patient alone, but the extent to which IMU malpositioning in routine use affects measurement accuracy is not known. A new such system (Riablo™, CoRehab, Trento, Italy), using audio-visual biofeedback based on videogames, was assessed against state-of-the-art gait analysis as the gold standard. The sensitivity of the system to errors in IMU position and orientation was measured in 5 healthy subjects performing two hip joint motion exercises. Root mean square deviation was used to assess differences in the system's kinematic output between the erroneous and correct IMU position and orientation. In order to estimate the system's accuracy, thorax and knee joint motion of 17 healthy subjects were tracked during the execution of standard rehabilitation tasks and compared with the corresponding measurements obtained with an established gait protocol using stereophotogrammetry. A maximum mean error of 3.1 ± 1.8 deg and 1.9 ± 0.8 deg from the angle trajectory with correct IMU position was recorded in the medio-lateral malposition and frontal-plane misalignment tests, respectively. Across the standard rehabilitation tasks, the mean distance between the IMU and gait-analysis measurements was on average smaller than 5°. These findings show that the tested IMU-based system has the necessary accuracy to be safely utilized in rehabilitation programs after orthopaedic treatments of the lower limb.
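The root mean square deviation used above to compare erroneous and correct IMU output can be sketched as follows, with hypothetical hip-flexion traces standing in for real recordings:

```python
import numpy as np

def rms_deviation(angles_test, angles_ref):
    """Root mean square deviation (deg) between two joint-angle trajectories,
    the metric used to quantify sensitivity to IMU malpositioning."""
    d = np.asarray(angles_test, float) - np.asarray(angles_ref, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical hip-flexion traces over one exercise repetition
t = np.linspace(0, 1, 101)
ref = 40 * np.sin(np.pi * t)      # reference trace (e.g. stereophotogrammetry)
imu = ref + 2.0                   # IMU trace with a constant 2 deg offset
print(round(rms_deviation(imu, ref), 1))
```

A constant 2-degree offset yields an RMSD of exactly 2.0 deg, which makes the metric easy to sanity-check.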
Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.
2016-01-01
Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
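The paper's itemwise method is IRT-based; as a rough illustration of the general idea of an efficiency score that trades off accuracy against speed with an adjustable weight, one might write:

```python
import math
import statistics

def efficiency_scores(correct, rt_ms, speed_weight=0.5):
    """Illustrative (not the paper's) efficiency score: a weighted mean of
    standardized accuracy and standardized, negated log response time.
    `speed_weight` shifts the emphasis between accuracy (0.0) and speed (1.0),
    mirroring the adjustable accuracy/speed emphasis described above."""
    def z(xs):
        m, s = statistics.mean(xs), statistics.stdev(xs)
        return [(x - m) / s for x in xs]
    z_acc = z([float(c) for c in correct])
    z_spd = [-v for v in z([math.log(r) for r in rt_ms])]  # faster = higher
    w = speed_weight
    return [(1 - w) * a + w * s for a, s in zip(z_acc, z_spd)]

# Four items: correctness flags and response times in milliseconds
scores = efficiency_scores([1, 1, 0, 1], [500, 700, 900, 600])
print(scores[0] > scores[2])   # fast correct response outscores slow error
```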
Classification of interstitial lung disease patterns with topological texture features
NASA Astrophysics Data System (ADS)
Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel
2010-03-01
Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest, covering both healthy and pathological (89) lung tissue, was identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions, and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction. The best classification results were obtained by the MF features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for MF.euler (97.5% and 96.6% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced topological texture features can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
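A GLCM feature such as 'homogeneity', one of the standard texture features the Minkowski Functionals were compared against, can be sketched for a single pixel offset (a full implementation would average several offsets and distances):

```python
import numpy as np

def glcm_homogeneity(img, levels=4, dx=1, dy=0):
    """GLCM 'homogeneity' for one pixel offset (dx, dy): sum of the joint
    co-occurrence probabilities weighted by 1 / (1 + |i - j|).
    Minimal sketch; `img` must already be quantized to `levels` gray levels."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                        # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    return float((glcm / (1.0 + np.abs(i - j))).sum())

flat = np.zeros((8, 8), dtype=int)            # perfectly uniform texture
noisy = np.arange(64).reshape(8, 8) % 4       # rapidly varying texture
print(glcm_homogeneity(flat) > glcm_homogeneity(noisy))
```

A uniform patch scores the maximum of 1.0, while rapidly varying texture scores lower, which is why homogeneity separates smooth from honeycombed tissue.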
NASA Astrophysics Data System (ADS)
Goh, Shu Ting
Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The implementation of the extended Kalman filter to the spacecraft formation navigation problem results in high estimation errors and instabilities in state estimation at times. This is due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aiming at increasing the estimation stability and improving the estimation accuracy. A differential geometric filter is implemented for spacecraft positions estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear system. A linear estimator is designed in the linear domain, and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation positions estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute positions estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee). At the extremum, the rate of change of a spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. The application of the constrained Kalman filter at only two points in the orbit causes filter instability. Two variables are introduced into the constrained Kalman filter to maintain the stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. 
Simulation results show that the constrained Kalman filter provides better estimation accuracy as compared with the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The obtained estimation performance is compared to the standard Kalman filter in two scenarios. The first scenario assumes using a wireless local positioning system (WLPS) in a GPS-denied environment. The second scenario assumes the availability of both the wireless local positioning system and GPS measurements. The simulation results show that the WMFKF has similar accuracy performance to the standard Kalman Filter (KF) in the GPS-denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability performance when GPS is available. Also, the computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF, and the WMFKF has a higher ellipsoid error probable percentage than the standard Measurement Fusion method. A method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
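The distance-dependent weighting at the heart of the WMFKF idea can be illustrated with a toy scalar filter; the noise model (`sigma0`, `k`) and all numbers below are assumptions for illustration, not the dissertation's parameters:

```python
import numpy as np

def weighted_update(x, P, z_list, d_list, H, sigma0=1.0, k=0.01):
    """Sequential measurement updates of a Kalman filter in which each
    wireless measurement's noise variance is inflated with its measured
    signal travel distance d (hypothetical model: sigma = sigma0 + k*d)."""
    for z, d in zip(z_list, d_list):
        R = (sigma0 + k * d) ** 2            # noise grows with travel distance
        H_row = np.atleast_2d(H)
        S = H_row @ P @ H_row.T + R          # innovation covariance
        K = P @ H_row.T / S                  # Kalman gain
        x = x + (K * (z - float(H_row @ x))).ravel()
        P = (np.eye(len(x)) - K @ H_row) @ P
    return x, P

x0 = np.array([0.0])                         # scalar position state
P0 = np.eye(1) * 100.0                       # large initial uncertainty
x1, P1 = weighted_update(x0, P0, z_list=[10.2, 9.8], d_list=[50.0, 500.0],
                         H=np.array([[1.0]]))
print(P1[0, 0] < P0[0, 0])                   # uncertainty shrinks after updates
```

The nearby measurement (d = 50) dominates the estimate because its inflated noise variance is much smaller than that of the distant one (d = 500).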
Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael
2015-01-01
MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators - attending (AR) and resident radiologists (RR) as well as medical students (MS) - performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times and 13 Likert scores on system performance were determined (strong agreement: 5.0). Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, biopsy times differed significantly between groups (AR: 4:15, RR: 4:40, MS: 5:06 min:sec; p<0.01). Mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). Lowest agreement was reported for the robustness against external perturbations (2.8). The described combination of optical tracking technology with an automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability was demonstrated on a relatively large number of procedures and operators.
Between groups with different expertise there were significant differences in experimental procedure times but not in the number of successful biopsies.
Accuracy of CNV Detection from GWAS Data.
Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S; Liu, Chunyu
2011-01-13
Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites (Birdsuite, Partek, HelixTree, and PennCNV-Affy) in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two "gold standards": the sequence data of Kidd et al. and the aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a "gold standard" for detection of CNVs remains to be established.
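The accuracy metrics used above (positive predictive value for rare CNVs, false positive/negative rates for common ones) reduce to simple count ratios; the counts below are invented for illustration, not the BiGS results:

```python
def ppv(tp, fp):
    """Positive predictive value: fraction of called CNVs confirmed
    by qPCR validation."""
    return tp / (tp + fp)

def fp_fn_rates(tp, fp, tn, fn):
    """False positive rate and false negative rate from a qPCR
    validation table of called vs. confirmed CNVs."""
    return fp / (fp + tn), fn / (fn + tp)

# Hypothetical validation counts
print(round(ppv(90, 10), 2))
fpr, fnr = fp_fn_rates(tp=90, fp=10, tn=180, fn=20)
print(round(fpr, 3), round(fnr, 3))
```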
NASA Technical Reports Server (NTRS)
Wilson, C.; Dye, R.; Reed, L.
1982-01-01
The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.
Voleti, Pramod B; Hamula, Mathew J; Baldwin, Keith D; Lee, Gwo-Chin
2014-09-01
The purpose of this systematic review and meta-analysis is to compare patient-specific instrumentation (PSI) versus standard instrumentation for total knee arthroplasty (TKA) with regard to coronal and sagittal alignment, operative time, intraoperative blood loss, and cost. A systematic query in search of relevant studies was performed, and the data published in these studies were extracted and aggregated. In regard to coronal alignment, PSI demonstrated improved accuracy in femorotibial angle (FTA) (P=0.0003), while standard instrumentation demonstrated improved accuracy in hip-knee-ankle angle (HKA) (P=0.02). Importantly, there were no differences between treatment groups in the percentages of FTA or HKA outliers (>3 degrees from target alignment) (P=0.7). Sagittal alignment, operative time, intraoperative blood loss, and cost were also similar between groups (P>0.1 for all comparisons). Copyright © 2014 Elsevier Inc. All rights reserved.
On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.
Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael
2015-01-01
Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of 'near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
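A pre-ICA high-pass step like the one evaluated above can be sketched with a minimal FFT-based filter. Real pipelines would typically use a zero-phase FIR/IIR filter; this sketch simply zeroes frequency bins below the cutoff, and all signal parameters are invented:

```python
import numpy as np

def fft_highpass(x, fs, cutoff_hz=1.0):
    """Minimal FFT-based high-pass: zero all frequency bins below the
    cutoff and transform back. Illustrative stand-in for the pre-ICA
    filtering step (here at 1 Hz, within the 1-2 Hz range the study
    found to work best)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 250.0                                   # hypothetical sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
drift = np.sin(2 * np.pi * 0.1 * t)          # slow drift (to be removed)
alpha = 0.2 * np.sin(2 * np.pi * 10 * t)     # 10 Hz activity (to be kept)
filtered = fft_highpass(drift + alpha, fs)
print(np.std(filtered) < np.std(drift + alpha))   # drift strongly attenuated
```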
[The verification of analytical characteristics of three models of glucometers].
Timofeev, A V; Khaibulina, E T; Mamonov, R A; Gorst, K A
2016-01-01
Individual portable systems for monitoring blood glucose levels, commonly known as glucometers, allow patients with diabetes mellitus to adjust their pharmaceutical therapy independently. The effectiveness of this adjustment depends on the accuracy of glucose measurement. The minimal admissible accuracy and the clinical accuracy of the Contour TC, Satellite Express, and One Touch Select devices were evaluated according to the standards set out in GOST 15197-2011 and the international standard ISO 15197-2013. Contour TC and One Touch Select met the accuracy requirements of these standards, whereas Satellite Express did not.
Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-08-28
Background: The International Standard ISO 15189 is recognized as a valuable guide in ensuring high quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement at Section 5.5.1.2. Methods: In order to identify the operative procedure to be used, approved documents were identified, together with the definition of performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Operative flow charts were then defined to compare the laboratory performance characteristics with those declared by manufacturers. Results: The choice of performance characteristics for verification was based on approved documents used as guidance and on the specific purpose of the tests undertaken, considering imprecision and trueness for quantitative methods, diagnostic accuracy for qualitative methods, and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions: The described approach, balancing technological possibilities, risks and costs and assuring the compliance of the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.
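A verification of imprecision (CV%) and trueness (bias%) against manufacturer claims, in the spirit of the verification flow described above, might be sketched as follows; the replicate values, target and claimed limits are all invented:

```python
import statistics

def verify_performance(replicates, target, claimed_cv_pct, claimed_bias_pct):
    """Sketch of an ISO 15189 Section 5.5.1.2-style verification: observed
    imprecision (CV%) and trueness (bias% vs. an assigned target value)
    from replicate control measurements are compared against the
    manufacturer's claimed limits."""
    mean = statistics.mean(replicates)
    cv = 100.0 * statistics.stdev(replicates) / mean      # imprecision
    bias = 100.0 * (mean - target) / target               # trueness
    passed = cv <= claimed_cv_pct and abs(bias) <= claimed_bias_pct
    return passed, cv, bias

ok, cv, bias = verify_performance(
    replicates=[98, 101, 99, 102, 100], target=100.0,
    claimed_cv_pct=3.0, claimed_bias_pct=2.0)
print(ok)
```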
ERIC Educational Resources Information Center
Benjamin, Aaron S.; Tullis, Jonathan G.; Lee, Ji Hae
2013-01-01
Rating scales are a standard measurement tool in psychological research. However, research has suggested that the cognitive burden involved in maintaining the criteria used to parcel subjective evidence into ratings introduces "decision noise" and affects estimates of performance in the underlying task. There has been debate over whether…
Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S
2014-05-01
We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.
ERIC Educational Resources Information Center
Mohan, Vidhu; Kumar, Dalip
1976-01-01
Does measurement of intelligence through a consolidated score imply that two or more subjects obtaining the same score are also undergoing the same mental process? Introverts are supposed to opt for accuracy and extraverts for speed. Attempts to investigate the qualitative differences between extraverts and introverts on an intelligence test.…
Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme
2017-06-01
To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Studies Accuracy Group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p < 0.02), while specificity was significantly higher for supine sampling compared with 24-h urine, 95 vs. 90% (p < 0.03). Partial areas under the curve were 0.942, 0.913, and 0.932 for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provides the highest accuracy at all prevalence rates.
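The linear dependence of overall accuracy on prevalence described above follows from accuracy = sensitivity × prevalence + specificity × (1 − prevalence), which can be checked directly:

```python
def diagnostic_accuracy(sensitivity, specificity, prevalence):
    """Overall accuracy of a binary diagnostic test at a given disease
    prevalence: accuracy = sens * prev + spec * (1 - prev). Accuracy is
    linear in prevalence, running from specificity (prev=0) to
    sensitivity (prev=1), and is flat when the two are equal."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

# With sens == spec == 0.95 (as reported for supine sampling),
# accuracy stays constant across all prevalence rates
print(diagnostic_accuracy(0.95, 0.95, 0.0),
      diagnostic_accuracy(0.95, 0.95, 1.0))
```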
Value of physical tests in diagnosing cervical radiculopathy: a systematic review.
Thoomes, Erik J; van Geest, Sarita; van der Windt, Danielle A; Falla, Deborah; Verhagen, Arianne P; Koes, Bart W; Thoomes-de Graaf, Marloes; Kuijper, Barbara; Scholten-Peeters, Wendy G M; Vleggeert-Lankamp, Carmen L
2018-01-01
In clinical practice, the diagnosis of cervical radiculopathy is based on information from the patient's history, physical examination, and diagnostic imaging. Various physical tests may be performed, but their diagnostic accuracy is unknown. This study aimed to summarize and update the evidence on the diagnostic performance of tests carried out during a physical examination for the diagnosis of cervical radiculopathy. A review of the accuracy of diagnostic tests was carried out. The study sample comprised diagnostic studies comparing results of tests performed during a physical examination in diagnosing cervical radiculopathy with a reference standard of imaging or surgical findings. Sensitivity, specificity, and likelihood ratios are presented, together with pooled results for sensitivity and specificity. A literature search up to March 2016 was performed in CENTRAL, PubMed (MEDLINE), Embase, CINAHL, Web of Science, and Google Scholar. The methodological quality of studies was assessed using the QUADAS-2. Five diagnostic accuracy studies were identified. Only Spurling's test was evaluated in more than one study, showing high specificity ranging from 0.89 to 1.00 (95% confidence interval [CI]: 0.59-1.00); sensitivity varied from 0.38 to 0.97 (95% CI: 0.21-0.99). No studies were found that assessed the diagnostic accuracy of widely used neurological tests such as key muscle strength, tendon reflexes, and sensory impairments. There is limited evidence for accuracy of physical examination tests for the diagnosis of cervical radiculopathy. When consistent with patient history, clinicians may use a combination of Spurling's, axial traction, and an Arm Squeeze test to increase the likelihood of a cervical radiculopathy, whereas combined results of four negative neurodynamic tests and an Arm Squeeze test could be used to rule out the disorder. Copyright © 2017 Elsevier Inc. All rights reserved.
Yang, Pan; Peng, Yulan; Zhao, Haina; Luo, Honghao; Jin, Ya; He, Yushuang
2015-01-01
Static shear wave elastography (SWE) is used to detect breast lesions, but slice and plane selections result in discrepancies. The aim was to evaluate the intraobserver reproducibility of continuous SWE and whether quantitative elasticities in orthogonal planes perform better in the differential diagnosis of breast lesions. One hundred and twenty-two breast lesions scheduled for ultrasound-guided biopsy were recruited. Continuous SWE scans were conducted in orthogonal planes separately. Quantitative elasticities and histopathology results were collected. Reproducibility in the same plane and diagnostic performance in different planes were evaluated. The maximum and mean elasticities of the hardest portion, and the standard deviation of the whole lesion, had high intraclass correlation coefficients (0.87 to 0.95) and large areas under the receiver operating characteristic curve (0.887 to 0.899). Without loss of accuracy, sensitivities increased in orthogonal planes compared with a single plane (from 73.17% up to 82.93% at most). Mean elasticity of the whole lesion and the lesion-to-parenchyma ratio were significantly less reproducible and less accurate. Continuous SWE is highly reproducible for the same observer. The maximum and mean elasticities of the hardest portion and the standard deviation of the whole lesion are most reliable. Furthermore, the sensitivities of the three parameters are improved in orthogonal planes without loss of accuracy.
Umemura, Atsushi; Oeda, Tomoko; Hayashi, Ryutaro; Tomita, Satoshi; Kohsaka, Masayuki; Yamamoto, Kenji; Sawada, Hideyuki
2013-01-01
It is often hard to differentiate Parkinson's disease (PD) and the parkinsonian variant of multiple system atrophy (MSA-P), especially in the early stages. Cardiac sympathetic denervation and putaminal rarefaction are specific findings for PD and MSA-P, respectively. We investigated the diagnostic accuracy of the putaminal apparent diffusion coefficient (ADC) test for MSA-P and the ¹²³I-metaiodobenzylguanidine (MIBG) scintigram for PD, especially in early-stage patients. The reference standard diagnoses of PD and MSA-P were based on the diagnostic criteria of the United Kingdom Parkinson's Disease Society Brain Bank Criteria and the second consensus criteria, respectively. Based on these reference standard criteria, the diagnostic accuracy [area under the receiver-operator characteristic curve (AUC), sensitivity and specificity] of the ADC and MIBG tests was estimated retrospectively. Diagnostic accuracy of these tests performed within 3 years of symptom onset was also investigated. ADC and MIBG tests were performed on 138 patients (20 MSA and 118 PD). AUC was 0.95 and 0.83 for the ADC and MIBG tests, respectively. Sensitivity and specificity were 85.0% and 89.0% for MSA-P diagnosis by the ADC test and 67.0% and 80.0% for PD diagnosis by the MIBG test. When these tests were restricted to patients with disease duration ≤ 3 years, the sensitivity and specificity were 75.0% and 91.4% for the ADC test (MSA-P diagnosis) and 47.7% and 92.3% for the MIBG test (PD diagnosis). Both tests were useful in differentiating between PD and MSA-P, even in the early stages. In early-stage patients, elevated putaminal ADC was a diagnostic marker for MSA-P. Despite the high specificity of the MIBG test, careful neurological history and examinations were required for PD diagnosis because of possible false-negative results.
Hillarp, A; Friedman, K D; Adcock-Funk, D; Tiefenbacher, S; Nichols, W L; Chen, D; Stadler, M; Schwartz, B A
2015-11-01
The ability of von Willebrand factor (VWF) to bind platelet GP Ib and promote platelet plug formation is measured in vitro using the ristocetin cofactor (VWF:RCo) assay. Automated assay systems make testing more accessible for diagnosis, but do not necessarily improve sensitivity and accuracy. We assessed the performance of a modified automated VWF:RCo assay protocol for the Behring Coagulation System (BCS®) compared to other available assay methods. Results from different VWF:RCo assays in a number of specialized commercial and research testing laboratories were compared using plasma samples with varying VWF:RCo activities (0-1.2 IU mL⁻¹). Samples were prepared by mixing VWF concentrate or plasma standard into VWF-depleted plasma. Commercially available lyophilized standard human plasma was also studied. Emphasis was put on the low measuring range. VWF:RCo accuracy was calculated based on the expected values, whereas precision was obtained from repeated measurements. In the physiological concentration range, most of the automated tests resulted in acceptable accuracy, with varying reproducibility dependent on the method. However, several assays were inaccurate in the low measuring range. Only the modified BCS protocol showed acceptable accuracy over the entire measuring range with improved reproducibility. A modified BCS® VWF:RCo method can improve sensitivity and thus enhance the measuring range. Furthermore, the modified BCS® assay displayed good precision. This study indicates that the specific modifications, namely the combination of increased ristocetin concentration, reduced platelet content, VWF-depleted plasma as on-board diluent, and a two-curve calculation mode, reduce the issues seen with current VWF:RCo activity assays. © 2015 John Wiley & Sons Ltd.
Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis
NASA Technical Reports Server (NTRS)
Putnam, J.; Somers, J.; Wells, J.; Newby, N.; Currie-Gregg, N.; Lawrence, C.
2016-01-01
Introduction: In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests was performed at Wright-Patterson Air Force Base (WPAFB), employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aids in the development of these effective analysis tools.
Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis
NASA Technical Reports Server (NTRS)
Putnam, Jacob B.; Sommers, Jeffrey T.; Wells, Jessica A.; Newby, Nathaniel J.; Currie-Gregg, Nancy J.; Lawrence, Chuck
2016-01-01
In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests was performed at Wright-Patterson Air Force Base (WPAFB), employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aids in the development of these effective analysis tools.
Mayoral, Víctor; Pérez-Hernández, Concepción; Muro, Inmaculada; Leal, Ana; Villoria, Jesús; Esquivias, Ana
2018-04-27
Based on the clear neuroanatomical delineation of many neuropathic pain (NP) symptoms, a simple tool for performing a short structured clinical encounter based on the IASP diagnostic criteria was developed to identify NP. This study evaluated its accuracy and usefulness. A case-control study was performed in 19 pain clinics within Spain. A pain clinician used the experimental screening tool (the index test, IT) to assign the descriptions of non-neuropathic (nNP), non-localized neuropathic (nLNP), and localized neuropathic (LNP) to the patients' pain conditions. The reference standard was a formal clinical diagnosis provided by another pain clinician. The accuracy of the IT was compared with that of the Douleur Neuropathique en 4 questions (DN4) and the Leeds Assessment of Neuropathic Signs and Symptoms (LANSS). Six-hundred and sixty-six patients were analyzed. There was a good agreement between the IT and the reference standard (kappa =0.722). The IT was accurate in distinguishing between LNP and nLNP (83.2% sensitivity, 88.2% specificity), between LNP and the other pain categories (nLNP + nNP) (80.0% sensitivity, 90.7% specificity), and between NP and nNP (95.5% sensitivity, 89.1% specificity). The accuracy in distinguishing between NP and nNP was comparable with that of the DN4 and the LANSS. The IT took a median of 10 min to complete. A novel instrument based on an operationalization of the IASP criteria can not only discern between LNP and nLNP, but also provide a high level of diagnostic certainty about the presence of NP after a short clinical encounter.
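The kappa statistic quoted above (kappa = 0.722) summarizes agreement between the index test and the reference-standard diagnosis beyond chance. A minimal sketch of Cohen's kappa computed from a 2x2 agreement table, using toy counts rather than the study's data:

```python
# Cohen's kappa from a 2x2 agreement table between two raters/tests.
# a, d = concordant counts; b, c = discordant counts (toy values below).
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    p_obs = (a + d) / n                    # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on 'positive'
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on 'negative'
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa(40, 10, 10, 40), 3))  # symmetric toy table -> 0.6
```

Values around 0.6-0.8, as in the study, are conventionally read as good agreement.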
76 FR 40844 - Changes to Move Update Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
... accuracy standard: a. For computerized lists, Coding Accuracy Support System (CASS)- certified address matching software and current USPS City State Product, within a mailer's computer systems or through an...
Clinical Evaluation of a Loop-Mediated Amplification Kit for Diagnosis of Imported Malaria
Polley, Spencer D.; González, Iveth J.; Mohamed, Deqa; Daly, Rosemarie; Bowers, Kathy; Watson, Julie; Mewse, Emma; Armstrong, Margaret; Gray, Christen; Perkins, Mark D.; Bell, David; Kanda, Hidetoshi; Tomita, Norihiro; Kubota, Yutaka; Mori, Yasuyoshi; Chiodini, Peter L.; Sutherland, Colin J.
2013-01-01
Background. Diagnosis of malaria relies on parasite detection by microscopy or antigen detection; both fail to detect low-density infections. New tests providing rapid, sensitive diagnosis with minimal need for training would enhance both malaria diagnosis and malaria control activities. We determined the diagnostic accuracy of a new loop-mediated amplification (LAMP) kit in febrile returned travelers. Methods. The kit was evaluated in sequential blood samples from returned travelers sent for pathogen testing to a specialist parasitology laboratory. Microscopy was performed, and then malaria LAMP was performed using Plasmodium genus and Plasmodium falciparum–specific tests in parallel. Nested polymerase chain reaction (PCR) was performed on all samples as the reference standard. Primary outcome measures for diagnostic accuracy were sensitivity and specificity of LAMP results, compared with those of nested PCR. Results. A total of 705 samples were tested in the primary analysis. Sensitivity and specificity were 98.4% and 98.1%, respectively, for the LAMP P. falciparum primers and 97.0% and 99.2%, respectively, for the Plasmodium genus primers. Post hoc repeat PCR analysis of all 15 tests with discrepant results resolved 4 results in favor of LAMP, suggesting that the primary analysis had underestimated diagnostic accuracy. Conclusions. Malaria LAMP had a diagnostic accuracy similar to that of nested PCR, with a greatly reduced time to result, and was superior to expert microscopy. PMID:23633403
Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F
2016-08-01
The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated by the technology acceptance model (TAM) and user satisfaction by the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favorable median total and subscale PSSUQ scores. The initial standard-practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved by using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research, as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend toward improvement of individual performance on repeated case evaluation.
A consensus-based gold standard for the evaluation of mass casualty triage systems.
Lerner, E Brooke; McKee, Courtney H; Cady, Charles E; Cone, David C; Colella, M Riccardo; Cooper, Arthur; Coule, Phillip L; Lairet, Julio R; Liu, J Marc; Pirrallo, Ronald G; Sasser, Scott M; Schwartz, Richard; Shepherd, Greene; Swienton, Raymond E
2015-01-01
Accuracy and effectiveness analyses of mass casualty triage systems are limited because there are no gold standard definitions for each of the triage categories. Until there is agreement on which patients should be identified by each triage category, it will be impossible to calculate sensitivity and specificity or to compare accuracy between triage systems. To develop a consensus-based, functional gold standard definition for each mass casualty triage category. National experts were recruited through the lead investigators' contacts and their suggested contacts. Key informant interviews were conducted to develop a list of potential criteria for defining each triage category. Panelists were interviewed in order of their availability until redundancy of themes was achieved. Panelists were blinded to each other's responses during the interviews. A modified Delphi survey was developed with the potential criteria identified during the interview and delivered to all recruited experts. In the early rounds, panelists could add, remove, or modify criteria. In the final rounds edits were made to the criteria until at least 80% agreement was achieved. Thirteen national and local experts were recruited to participate in the project. Six interviews were conducted. Three rounds of voting were performed, with 12 panelists participating in the first round, 12 in the second round, and 13 in the third round. After the first two rounds, the criteria were modified according to respondent suggestions. In the final round, over 90% agreement was achieved for all but one criterion. A single e-mail vote was conducted on edits to the final criterion and consensus was achieved. A consensus-based, functional gold standard definition for each mass casualty triage category was developed. These gold standard definitions can be used to evaluate the accuracy of mass casualty triage systems after an actual incident, during training, or for research.
Alves, G G; Kinoshita, A; Oliveira, H F de; Guimarães, F S; Amaral, L L; Baffa, O
2015-07-01
Radiotherapy is one of the main approaches to cure prostate cancer, and its success depends on the accuracy of dose planning. A complicating factor is the presence of a metallic prosthesis in the femur and pelvis, which is becoming more common in elderly populations. The goal of this work was to perform dose measurements to check the accuracy of radiotherapy treatment planning under these complicated conditions. To accomplish this, a scale phantom of an adult pelvic region was used with alanine dosimeters inserted in the prostate region. This phantom was irradiated according to the planned treatment under the following three conditions: with two metallic prostheses in the region of the femur head, with only one prosthesis, and without any prostheses. The combined relative standard uncertainty of dose measurement by electron spin resonance (ESR)/alanine was 5.05%, whereas the combined relative standard uncertainty of the applied dose was 3.35%, resulting in a combined relative standard uncertainty of the whole process of 6.06%. The ESR dosimetry indicated no significant difference (P>0.05, ANOVA) between the planned dose and the doses measured under the three treatment conditions. The results fall within the range of the planned dose, within the combined relative uncertainty, demonstrating that the treatment-planning system compensates for the effects caused by the presence of femur and hip metal prostheses.
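The combined relative standard uncertainty reported above follows the usual root-sum-of-squares combination of independent uncertainty components. Using the two percentages given in the abstract:

```python
import math

# Combined relative standard uncertainty by root-sum-of-squares, as used
# to combine the ESR/alanine measurement uncertainty (5.05%) with the
# applied-dose uncertainty (3.35%); values taken from the abstract.
def combined_uncertainty(*components_percent):
    return math.sqrt(sum(u ** 2 for u in components_percent))

u_total = combined_uncertainty(5.05, 3.35)  # measurement, applied dose
print(f"{u_total:.2f}%")  # -> 6.06%
```

This reproduces the 6.06% whole-process uncertainty stated in the abstract, consistent with the two components being treated as independent.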
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides solutions for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. However, these applications become worthless if the Boolean information system contains missing data due to error, security, or mishandling. Little research has focused on handling partially incomplete soft sets, and none of the existing approaches achieves a high accuracy rate when predicting missing data. The data filling approach for incomplete soft sets (DFIS) has been shown to have the best performance among previous approaches; however, its accuracy remains its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of association among parameters in the soft set. Experimental results on a small dataset, four UCI benchmark datasets, and the causality workbench lung cancer (LUCAP2) dataset show that ADFIS achieves better accuracy than DFIS.
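As a toy illustration only, not the published ADFIS algorithm, filling a missing Boolean entry from the parameter most strongly associated with it (positively or negatively) over the complete rows might look like the sketch below; the table values and column layout are hypothetical.

```python
# Toy sketch (not the published ADFIS algorithm): predict a missing
# Boolean entry in a soft-set table from the column whose values agree
# or disagree most consistently with the target column.
def fill_missing(table, row, col):
    best_strength, best_col, best_direct = -1.0, None, True
    for other in range(len(table[0])):
        if other == col:
            continue
        pairs = [(r[col], r[other]) for i, r in enumerate(table)
                 if i != row and r[col] is not None and r[other] is not None]
        if not pairs:
            continue
        agree = sum(a == b for a, b in pairs) / len(pairs)
        strength = max(agree, 1 - agree)  # positive or negative association
        if strength > best_strength:
            best_strength, best_col, best_direct = strength, other, agree >= 0.5
    value = table[row][best_col]
    return value if best_direct else 1 - value

# Hypothetical table: the second parameter is perfectly anti-associated
# with the first, so the missing entry is predicted as its complement.
rows = [[1, 0, 1], [0, 1, 0], [None, 1, 1], [1, 0, 0]]
print(fill_missing(rows, 2, 0))  # -> 0
```

The real ADFIS method quantifies the reliability of such inter-parameter associations more carefully; this sketch only conveys the general idea of association-based filling.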
Performance testing and results of the first Etec CORE-2564
NASA Astrophysics Data System (ADS)
Franks, C. Edward; Shikata, Asao; Baker, Catherine A.
1993-03-01
In order to write 64-megabit DRAM reticles, to prepare to write 256-megabit DRAM reticles, and in general to meet current and next-generation mask and reticle quality requirements, Hoya Micro Mask (HMM) installed the first CORE-2564 Laser Reticle Writer from Etec Systems, Inc. in 1991. The system was delivered as a CORE-2500XP and was subsequently upgraded to a 2564. The CORE (Custom Optical Reticle Engraver) system produces photomasks with an exposure strategy similar to that employed by an electron beam system, but it uses a laser beam to deliver the photoresist exposure energy. Since then, the 2564 has been tested by Etec's standard Acceptance Test Procedure and by several supplementary HMM techniques to ensure performance to all the Etec advertised specifications and certain additional HMM requirements that were more demanding and/or more thorough than the advertised specifications. The primary purpose of the HMM tests was to more closely duplicate mask usage. The performance aspects covered by the tests include registration accuracy and repeatability; linewidth accuracy, uniformity, and linearity; stripe butting; stripe and scan linearity; edge quality; system cleanliness; minimum geometry resolution; minimum address size; and plate loading accuracy and repeatability.
Chen, Qianting; Dai, Congling; Zhang, Qianjun; Du, Juan; Li, Wen
2016-10-01
To evaluate the prediction performance of five bioinformatics software tools (SIFT, PolyPhen2, MutationTaster, Provean, MutationAssessor). From our own database of genetic mutations collected over the past five years, the Chinese literature database, the Human Gene Mutation Database, and dbSNP, 121 missense mutations confirmed by functional studies and 121 missense mutations suspected to be pathogenic by pedigree analysis were used as the positive gold standard, while 242 missense mutations with minor allele frequency (MAF)>5% in dominant hereditary diseases were used as the negative gold standard. The selected mutations were analyzed with the five tools. Based on the results, the tools were evaluated for sensitivity, specificity, positive predictive value, false positive rate, negative predictive value, false negative rate, false discovery rate, accuracy, and the receiver operating characteristic (ROC) curve. In terms of sensitivity, negative predictive value, and false negative rate, the rank was MutationTaster, PolyPhen2, Provean, SIFT, and MutationAssessor. For specificity and false positive rate, the rank was MutationTaster, Provean, MutationAssessor, SIFT, and PolyPhen2. For positive predictive value and false discovery rate, the rank was MutationTaster, Provean, MutationAssessor, PolyPhen2, and SIFT. For area under the ROC curve (AUC) and accuracy, the rank was MutationTaster, Provean, PolyPhen2, MutationAssessor, and SIFT. The prediction performance of a given tool may differ when different parameters are used. Among the five tools, MutationTaster had the best prediction performance.
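The AUC values used above to rank the tools can be computed nonparametrically as the Mann-Whitney probability that a randomly chosen pathogenic mutation receives a higher score than a randomly chosen benign one. A sketch with hypothetical scores, not the study's data:

```python
# AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
# pairs in which the positive example scores higher, counting ties as 0.5.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pathogenic = [0.9, 0.8, 0.7, 0.4]  # hypothetical predictor scores
benign = [0.6, 0.3, 0.2, 0.1]
print(auc(pathogenic, benign))  # -> 0.9375
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why it is a natural single-number basis for comparing the five tools.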
Using Clinical Data Standards to Measure Quality: A New Approach.
D'Amore, John D; Li, Chun; McCrary, Laura; Niloff, Jonathan M; Sittig, Dean F; McCoy, Allison B; Wright, Adam
2018-04-01
Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards. This research evaluates the use of clinical document interchange standards as the basis for quality measurement. Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures. Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. Findings validate this approach to improving measure accuracy while maintaining measure certification. Organizations that report care quality should be aware of how identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic with respect to issues identified in this research. Schattauer GmbH Stuttgart.
Direct frequency comb optical frequency standard based on two-photon transitions of thermal atoms
Zhang, S. Y.; Wu, J. T.; Zhang, Y. L.; Leng, J. X.; Yang, W. P.; Zhang, Z. G.; Zhao, J. Y.
2015-01-01
Optical clocks have been a focus of science and technology research because of their capability to provide the highest frequency accuracy and stability to date. Their superior frequency performance promises significant advances in fundamental research as well as in practical applications, including satellite-based navigation and ranging. In traditional optical clocks, ultrastable optical cavities, laser cooling, and particle (atom or single-ion) trapping techniques are employed to guarantee high stability and accuracy. On the other hand, these techniques make optical clocks fill an entire optical table with equipment and prevent them from working continuously for long periods; as a result, they restrict the use of optical clocks as convenient, compact time-keeping devices. In this article, we propose, and experimentally demonstrate, a novel scheme for an optical frequency standard based on comb-directly-excited atomic two-photon transitions. By taking advantage of the natural properties of the comb and of two-photon transitions, this frequency standard achieves a simplified structure, high robustness, and decent frequency stability, which promise widespread applications in various scenarios. PMID:26459877
Spinal cord testing: auditing for quality assurance.
Marr, J A; Reid, B
1991-04-01
A quality assurance audit of spinal cord testing as documented by staff nurses was carried out. Twenty-five patient records were examined for accuracy of documented testing and compared to assessments performed by three investigators. A pilot study established interrater reliability of a tool that was designed especially for this study. Results indicated staff nurses failed to meet pre-established 100% standard in all categories of testing when compared with investigator's findings. Possible reasons for this disparity are discussed as well as indications for modifications in the spinal testing record, teaching program and preset standards.
The Role of Metadata Standards in EOSDIS Search and Retrieval Applications
NASA Technical Reports Server (NTRS)
Pfister, Robin
1999-01-01
Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved, and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end-user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.
ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document is created independently from ASHRAE and SSPC 90.1 and is not sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers, and implementers of "beyond code" energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond-code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers developing tools for automated creation of the baseline model can use this reference manual as a guideline for developing the rules for the baseline model.
Asha, Stephen Edward; Cooke, Andrew
2015-09-01
Suspected body packers may be brought to emergency departments (EDs) close to international airports for abdominal computed tomography (CT) scanning. Senior emergency clinicians may be asked to interpret these CT scans. Missing concealed drug packages has important clinical and forensic implications. The accuracy of emergency clinician interpretation of abdominal CT scans for concealed drugs is not known. Limited evidence suggests that accuracy for identification of concealed packages can be increased by viewing CT images on "lung window" settings. To determine the accuracy of senior emergency clinicians in interpreting abdominal CT scans for concealed drugs, and to determine whether this accuracy is improved by viewing scans on both abdominal and lung window settings. Emergency clinicians blinded to all patient identifiers and the radiology report interpreted CT scans of suspected body packers using standard abdominal window settings and then with the addition of lung window settings. The reference standard was the radiologist's report. Fifty-five emergency clinicians reported 235 CT scans. The sensitivity, specificity, and accuracy of interpretation using abdominal windows were 89.9% (95% confidence interval [CI] 83.0-94.7), 81.9% (95% CI 73.7-88.4), and 86.0% (95% CI 81.5-90.4), respectively, and with both window settings were 94.1% (95% CI 88.3-97.6), 76.7% (95% CI 68.0-84.1), and 85.5% (95% CI 81.0-90.0), respectively. Diagnostic accuracy was similar regardless of the clinician's experience. Interrater reliability was moderate (kappa 0.46). The accuracy of interpretation of abdominal CT scans performed for the purpose of detecting concealed drug packages by emergency clinicians is not high enough to safely discharge these patients from the ED. The use of lung windows improved sensitivity, but at the expense of specificity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
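Confidence intervals like those quoted above can be attached to a sensitivity estimate with, for example, the Wilson score method. A sketch with hypothetical counts (the abstract does not report the raw counts, and the exact bounds depend on which interval method the authors used):

```python
import math

# Wilson score 95% CI for a binomial proportion, e.g. a sensitivity
# estimate of successes/n against the reference standard.
def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical: 107 of 119 positives detected (~89.9% sensitivity).
lo, hi = wilson_ci(107, 119)
print(f"{107 / 119:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Wider intervals for the same point estimate signal smaller sample sizes, which is worth keeping in mind when comparing the abdominal-window and dual-window figures above.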
Klink, Thorsten; Geiger, Julia; Both, Marcus; Ness, Thomas; Heinzelmann, Sonja; Reinhard, Matthias; Holl-Ulrich, Konstanze; Duwendag, Dirk; Vaith, Peter; Bley, Thorsten Alexander
2014-12-01
To assess the diagnostic accuracy of contrast material-enhanced magnetic resonance (MR) imaging of superficial cranial arteries in the initial diagnosis of giant cell arteritis (GCA). Following institutional review board approval and informed consent, 185 patients suspected of having GCA were included in a prospective three-university medical center trial. GCA was diagnosed or excluded clinically in all patients (reference standard [final clinical diagnosis]). In 53.0% of patients (98 of 185), temporal artery biopsy (TAB) was performed (diagnostic standard [TAB]). Two observers independently evaluated contrast-enhanced T1-weighted MR images of superficial cranial arteries by using a four-point scale. Diagnostic accuracy, involvement pattern, and systemic corticosteroid (sCS) therapy effects were assessed in comparison with the reference standard (total study cohort) and separately in comparison with the diagnostic standard (TAB subcohort). Statistical analysis included diagnostic accuracy parameters, interobserver agreement, and receiver operating characteristic analysis. Sensitivity of MR imaging was 78.4% and specificity was 90.4% for the total study cohort, and sensitivity was 88.7% and specificity was 75.0% for the TAB subcohort (first observer). Diagnostic accuracy was comparable for both observers, with good interobserver agreement (TAB subcohort, κ = 0.718; total study cohort, κ = 0.676). MR imaging scores were significantly higher in patients with GCA-positive results than in patients with GCA-negative results (TAB subcohort and total study cohort, P < .001).
Diagnostic accuracy of MR imaging was high in patients without sCS therapy and in patients with sCS therapy for 5 days or fewer (area under the curve, ≥0.9) and was decreased in patients receiving sCS therapy for 6-14 days. In 56.5% of patients with TAB-positive results (35 of 62), MR imaging displayed symmetrical and simultaneous inflammation of arterial segments. MR imaging of superficial cranial arteries is accurate in the initial diagnosis of GCA. Sensitivity probably decreases after more than 5 days of sCS therapy; thus, imaging should not be delayed. Clinical trial registration no. DRKS00000594. © RSNA, 2014.
Expert diagnosis of plus disease in retinopathy of prematurity from computer-based image analysis
Campbell, J. Peter; Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir N.; Reynolds, James D.; Horowitz, Jason; Hutcheson, Kelly; Shapiro, Michael; Repka, Michael X.; Ferrone, Phillip; Drenser, Kimberly; Martinez-Castellanos, Maria Ana; Ostmo, Susan; Jonas, Karyn; Chan, R.V. Paul; Chiang, Michael F.
2016-01-01
Importance: Published definitions of "plus disease" in retinopathy of prematurity (ROP) reference arterial tortuosity and venous dilation within the posterior pole based on a standard published photograph. One possible explanation for limited inter-expert reliability for plus disease diagnosis is that experts deviate from the published definitions. Objective: To identify vascular features used by experts for diagnosis of plus disease through quantitative image analysis. Design: We developed a computer-based image analysis system (Imaging and Informatics in ROP, i-ROP), and trained the system to classify images compared to a reference standard diagnosis (RSD). System performance was analyzed as a function of the field of view (circular crops 1-6 disc diameters [DD] radius) and vessel subtype (arteries only, veins only, or all vessels). The RSD was compared to the majority diagnosis of experts. Setting: Routine ROP screening in neonatal intensive care units at 8 academic institutions. Participants: A set of 77 digital fundus images was used to develop the i-ROP system. A subset of 73 images was independently classified by 11 ROP experts for validation. Main Outcome Measures: The primary outcome measure was the percentage accuracy of i-ROP system classification of plus disease with the RSD as a function of field of view and vessel type. Secondary outcome measures included the accuracy of the 11 experts compared to the RSD. Results: Accuracy of plus disease diagnosis by the i-ROP computer-based system was highest (95%, confidence interval [CI] 94-95%) when it incorporated vascular tortuosity from both arteries and veins, and with the widest field of view (6 disc diameter radius). Accuracy was ≤90% when using only arterial tortuosity (P<0.001), and ≤85% using a 2-3 disc diameter view similar to the standard published photograph (P<0.001). Diagnostic accuracy of the i-ROP system (95%) was comparable to that of 11 expert clinicians (79-99%).
Conclusions and Relevance ROP experts appear to consider findings from beyond the posterior retina when diagnosing plus disease, and consider tortuosity of both arteries and veins, in contrast to published definitions. It is feasible for a computer-based image analysis system to perform comparably to ROP experts, using manually segmented images. PMID:27077667
Expert Diagnosis of Plus Disease in Retinopathy of Prematurity From Computer-Based Image Analysis.
Campbell, J Peter; Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir N; Reynolds, James D; Horowitz, Jason; Hutcheson, Kelly; Shapiro, Michael; Repka, Michael X; Ferrone, Phillip; Drenser, Kimberly; Martinez-Castellanos, Maria Ana; Ostmo, Susan; Jonas, Karyn; Chan, R V Paul; Chiang, Michael F
2016-06-01
Published definitions of plus disease in retinopathy of prematurity (ROP) reference arterial tortuosity and venous dilation within the posterior pole based on a standard published photograph. One possible explanation for limited interexpert reliability for a diagnosis of plus disease is that experts deviate from the published definitions. To identify vascular features used by experts for diagnosis of plus disease through quantitative image analysis. A computer-based image analysis system (Imaging and Informatics in ROP [i-ROP]) was developed using a set of 77 digital fundus images, and the system was designed to classify images compared with a reference standard diagnosis (RSD). System performance was analyzed as a function of the field of view (circular crops with a radius of 1-6 disc diameters) and vessel subtype (arteries only, veins only, or all vessels). Routine ROP screening was conducted from June 29, 2011, to October 14, 2014, in neonatal intensive care units at 8 academic institutions, with a subset of 73 images independently classified by 11 ROP experts for validation. The RSD was compared with the majority diagnosis of experts. The primary outcome measure was the percentage accuracy of the i-ROP system's classification of plus disease against the RSD, as a function of the field of view and vessel type. Secondary outcome measures included the accuracy of the 11 experts compared with the RSD. Accuracy of plus disease diagnosis by the i-ROP computer-based system was highest (95%; 95% CI, 94%-95%) when it incorporated vascular tortuosity from both arteries and veins and with the widest field of view (6-disc diameter radius). Accuracy was 90% or less when using only arterial tortuosity and 85% or less using a 2- to 3-disc diameter view similar to the standard published photograph. Diagnostic accuracy of the i-ROP system (95%) was comparable to that of the 11 expert physicians (mean, 87%; range, 79%-99%).
Experts in ROP appear to consider findings from beyond the posterior retina when diagnosing plus disease and consider tortuosity of both arteries and veins, in contrast with published definitions. It is feasible for a computer-based image analysis system to perform comparably with ROP experts, using manually segmented images.
Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R
2017-10-01
To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and the Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean 9.6 mm (p < 0.008) in the sacroiliac osteotomies, 6.2 mm (p < 0.008) and 5.8 mm (p < 0.032) in the biplanar supra-acetabular, 3 mm (p < 0.016) in the ischial and 2.2 mm (p < 0.032) and 2.6 mm (p < 0.008) in the parallel iliopubic osteotomies, with a mean linear deviation of 4.9 mm (p < 0.001) for all osteotomies. Of the manual osteotomies, 53% (n = 16) had a linear deviation > 5 mm and 27% (n = 8) were > 10 mm. In the PSI cases, deviations were 10% (n = 3) and 0% (n = 0), respectively. For angular deviation from pre-operative plans, we observed a mean improvement of 7.06° (p < 0.001) in pitch and 2.94° (p < 0.001) in roll, comparing PSI and the standard manual technique. In an experimental study, computer-assisted planning and PSIs improved accuracy in pelvic tumour resections, bringing osteotomy results closer to the parameters set in pre-operative planning, as compared with standard manual techniques. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583.
DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.
NASA Astrophysics Data System (ADS)
Dash, Jatindra K.; Kale, Mandar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan; Prabhakar, Nidhi; Garg, Mandeep; Kalra, Naveen
2017-03-01
In this paper, we investigate the effect of the error criterion used during the training phase of an artificial neural network (ANN) on the accuracy of the classifier for classification of lung tissues affected by interstitial lung diseases (ILDs). The mean square error (MSE) and cross-entropy (CE) criteria are chosen, as they are the most popular choices in state-of-the-art implementations. The classification experiment was performed on six ILD patterns from the MedGIFT database: consolidation, emphysema, ground glass opacity, micronodules, fibrosis, and healthy tissue. Texture features from an arbitrary region of interest (AROI) are extracted using Gabor filters. Two neural networks are trained with the scaled conjugate gradient backpropagation algorithm, using the MSE and CE error functions, respectively, for weight updates. Performance is evaluated in terms of average classifier accuracy using 4-fold cross-validation. Each network is trained five times per fold with randomly initialized weight vectors, and accuracies are computed. A significant improvement in classification accuracy is observed when the ANN is trained using CE (67.27%) as the error function compared to MSE (63.60%). Moreover, the standard deviation of the classification accuracy for the network trained with the CE criterion (6.69) is lower than that of the network trained with the MSE criterion (10.32).
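The gradient behavior behind this result can be seen in a tiny numerical comparison. The sketch below (plain NumPy; the probability values are invented for illustration) evaluates MSE and cross-entropy for a confidently wrong softmax output, showing why CE penalizes such errors far more sharply than MSE:

```python
import numpy as np

def mse_loss(p, t):
    # Mean square error between predicted probabilities p and one-hot target t
    return np.mean((p - t) ** 2)

def ce_loss(p, t):
    # Cross-entropy; only the true-class probability contributes
    return -np.sum(t * np.log(p))

# One-hot target: class 0 is the correct label
t = np.array([1.0, 0.0, 0.0])
# A confidently wrong prediction
p = np.array([0.05, 0.90, 0.05])

print(mse_loss(p, t))  # ~0.57: a modest penalty despite the gross error
print(ce_loss(p, t))   # ~3.0: -log(0.05), a much steeper penalty
```

Because CE grows without bound as the true-class probability approaches zero, its gradients keep pushing the network away from confident misclassifications, which is consistent with the accuracy gap reported above.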
Semaan, Hassan; Bazerbashi, Mohamad F; Siesel, Geoffrey; Aldinger, Paul; Obri, Tawfik
2018-03-01
To determine the accuracy and non-detection rate of cancer related findings (CRFs) on follow-up non-contrast-enhanced CT (NECT) versus contrast-enhanced CT (CECT) images of the abdomen in patients with a known cancer diagnosis. A retrospective review of 352 consecutive CTs of the abdomen performed with and without IV contrast between March 2010 and October 2014 for follow-up of cancer was included. Two radiologists independently assessed the NECT portions of the studies. The reader was provided the primary cancer diagnosis and access to the most recent prior NECT study. The accuracy and non-detection rates were determined by comparing our results to the archived reports as a gold standard. A total of 383 CRFs were found in the archived reports of the 352 abdominal CTs. The average non-detection rate for the NECTs compared to the CECTs was 3.0% (11.5/383) with an accuracy of 97.0% (371.5/383) in identifying CRFs. The most common findings missed were vascular thrombosis with a non-detection rate of 100%. The accuracy for non-vascular CRFs was 99.1%. Follow-up NECT abdomen studies are highly accurate in the detection of CRFs in patients with an established cancer diagnosis, except in cases where vascular involvement is suspected.
Accuracy of visual inspection performed by community health workers in cervical cancer screening.
Driscoll, Susan D; Tappen, Ruth M; Newman, David; Voege-Harvey, Kathi
2018-05-22
Cervical cancer remains the leading cause of cancer and mortality in low-resource areas with healthcare personnel shortages. Visual inspection is a low-resource alternative method of cervical cancer screening in areas with limited access to healthcare. To assess accuracy of visual inspection performed by community health workers (CHWs) and licensed providers, and the effect of provider training on visual inspection accuracy. Five databases and four websites were queried for studies published in English up to December 31, 2015. Derivations of "cervical cancer screening" and "visual inspection" were search terms. Visual inspection screening studies with provider definitions, colposcopy reference standards, and accuracy data were included. A priori variables were extracted by two independent reviewers. Bivariate linear mixed-effects models were used to compare visual inspection accuracy. Provider type was a significant predictor of visual inspection sensitivity (P=0.048); sensitivity was 15 percentage points higher among CHWs than physicians (P=0.014). Components of provider training were significant predictors of sensitivity and specificity. Community-based visual inspection programs using adequately trained CHWs could reduce barriers and expand access to screening, thereby decreasing cervical cancer incidence and mortality for women at highest risk and those living in remote areas with limited access to healthcare personnel. This article is protected by copyright. All rights reserved.
Schoenthaler, Martin; Avcil, Tuba; Sevcenco, Sabina; Nagele, Udo; Hermann, Thomas E W; Kuehhas, Franklin E; Shariat, Shahrokh F; Frankenschmidt, Alexander; Wetterauer, Ulrich; Miernik, Arkadiusz
2015-01-01
To evaluate the Single-Incision Transumbilical Surgery (SITUS) technique as compared to an established laparoendoscopic single-site surgery (LESS) technique (Single-Port Laparoscopic Surgery, SPLS) and conventional laparoscopy (CLS) in a surgical simulator model. Sixty-three medical students without previous laparoscopic experience were randomly assigned to one of the three groups (SITUS, SPLS and CLS). Subjects were asked to perform five standardized tasks of increasing difficulty adopted from the Fundamentals of Laparoscopic Surgery curriculum. Statistical evaluation included task completion times and accuracy. Overall performances of all tasks (except precision cutting) were significantly faster and of higher accuracy in the CLS and SITUS groups than in the SPLS group (p = 0.004 to p < 0.001). CLS and SITUS groups alone showed no significant difference in performance times and accuracy measurements for all tasks (p = 0.048 to p = 0.989). SITUS proved to be a simple, but highly effective technique to overcome restrictions of SPLS. In a surgical simulator model, novices were able to achieve task performances comparable to CLS and did significantly better than using a port-assisted LESS technique such as SPLS. The demonstrated advantages of SITUS may be attributed to a preservation of the basic principles of conventional laparoscopy, such as the use of straight instruments and an adequate degree of triangulation.
Schnakers, Caroline; Vanhaudenhuyse, Audrey; Giacino, Joseph; Ventura, Manfredi; Boly, Melanie; Majerus, Steve; Moonen, Gustave; Laureys, Steven
2009-01-01
Background Previously published studies have reported that up to 43% of patients with disorders of consciousness are erroneously assigned a diagnosis of vegetative state (VS). However, no recent studies have investigated the accuracy of this grave clinical diagnosis. In this study, we compared consensus-based diagnoses of VS and MCS to those based on a well-established standardized neurobehavioral rating scale, the JFK Coma Recovery Scale-Revised (CRS-R). Methods We prospectively followed 103 patients (55 ± 19 years) with mixed etiologies and compared the clinical consensus diagnosis provided by the physician on the basis of the medical staff's daily observations to diagnoses derived from CRS-R assessments performed by research staff. All patients were assigned a diagnosis of 'VS', 'MCS' or 'uncertain diagnosis.' Results Of the 44 patients diagnosed with VS based on the clinical consensus of the medical team, 18 (41%) were found to be in MCS following standardized assessment with the CRS-R. In the 41 patients with a consensus diagnosis of MCS, 4 (10%) had emerged from MCS, according to the CRS-R. We also found that the majority of patients assigned an uncertain diagnosis by clinical consensus (89%) were in MCS based on CRS-R findings. Conclusion Despite the importance of diagnostic accuracy, the rate of misdiagnosis of VS has not substantially changed in the past 15 years. Standardized neurobehavioral assessment is a more sensitive means of establishing differential diagnosis in patients with disorders of consciousness when compared to diagnoses determined by clinical consensus. PMID:19622138
Goh, Sherry Meow Peng; Swaminathan, Muthukaruppan; Lai, Julian U-Ming; Anwar, Azlinda; Chan, Soh Ha; Cheong, Ian
2017-01-01
High Epstein-Barr Virus (EBV) titers detected by the indirect Immunofluorescence Assay (IFA) are a reliable predictor of Nasopharyngeal Carcinoma (NPC). Despite being the gold standard for serological detection of NPC, the IFA is limited by scaling bottlenecks. Specifically, 5 serial dilutions of each patient sample must be prepared and visually matched by an evaluator to one of 5 discrete titers. Here, we describe a simple method for inferring continuous EBV titers from IFA images acquired from NPC-positive patient sera using only a single sample dilution. In the first part of our study, 2 blinded evaluators used a set of reference titer standards to perform independent re-evaluations of historical samples with known titers. Besides exhibiting high inter-evaluator agreement, both evaluators were also in high concordance with historical titers, thus validating the accuracy of the reference titer standards. In the second part of the study, the reference titer standards were IFA-processed and assigned an 'EBV Score' using image analysis. A log-linear relationship between titers and EBV Score was observed. This relationship was preserved even when images were acquired and analyzed 3 days post-IFA. We conclude that image analysis of IFA-processed samples can be used to infer a continuous EBV titer with just a single dilution of NPC-positive patient sera. This work opens new possibilities for improving the accuracy and scalability of IFA in the context of clinical screening. Copyright © 2016. Published by Elsevier B.V.
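The reported log-linear relationship suggests a simple calibration: fit log10(titer) against the image-derived EBV Score on the reference standards, then invert the fit to read off a continuous titer from a single dilution. The sketch below illustrates the idea with entirely synthetic calibration values; the scores, titers, and fitted coefficients are invented and not from the study:

```python
import numpy as np

# Synthetic calibration data: discrete reference titers and their
# hypothetical image-analysis EBV Scores (illustrative values only)
titers = np.array([40, 160, 640, 2560, 10240], dtype=float)
scores = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Log-linear fit: log10(titer) ~= a * score + b
a, b = np.polyfit(scores, np.log10(titers), 1)

def infer_titer(score):
    """Infer a continuous titer from a single-dilution EBV Score."""
    return 10 ** (a * score + b)

# A score between two calibration points maps to a titer between
# the corresponding discrete reference titers
print(round(infer_titer(3.5)))  # falls between 640 and 2560
```

The payoff of the continuous fit is that intermediate scores no longer have to be snapped to one of the 5 discrete titer bins.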
Introducing a feedback training system for guided home rehabilitation.
Kohler, Fabian; Schmitz-Rode, Thomas; Disselhorst-Klug, Catherine
2010-01-15
As the number of people requiring orthopaedic intervention grows, individualized physiotherapeutic rehabilitation and adequate postoperative care become increasingly relevant. The chances of improvement in the patient's condition are directly related to the performance and consistency of the physiotherapeutic exercises. In this paper a smart, cost-effective and easy-to-use Feedback Training System for home rehabilitation based on standard resistive elements is introduced. It ensures high accuracy of the exercises performed and offers guidance and control to the patient by providing direct feedback about the performance of the movements. Forty-six patients were recruited and performed standard physiotherapeutic training to evaluate the system. The results show a significant increase in the patients' ability to reproduce even simple physiotherapeutic exercises when supported by the Feedback Training System. Physiotherapeutic training can thus be extended into the home environment whilst ensuring a high quality of training.
Lu, Chao-Chin; Leng, Jianwei; Cannon, Grant W; Zhou, Xi; Egger, Marlene; South, Brett; Burningham, Zach; Zeng, Qing; Sauer, Brian C
2016-12-01
Medications with non-standard dosing and unstandardized units of measurement make the estimation of prescribed dose difficult from pharmacy dispensing data. A natural language processing tool named the SIG extractor was developed to identify and extract elements from narrative medication instructions to compute average weekly doses (AWDs) for disease-modifying antirheumatic drugs. The goal of this paper is to evaluate the performance of the SIG extractor. This agreement study utilized Veterans Health Affairs pharmacy data from 2008 to 2012. The SIG extractor was designed to extract key elements from narrative medication schedules (SIGs) for 17 select medications to calculate AWD, and these medications were categorized by generic name and route of administration. The SIG extractor was evaluated against an annotator-derived reference standard for accuracy, which is the fraction of AWDs accurately computed. The overall accuracy was 89% [95% confidence interval (CI) 88%, 90%]. The accuracy was ≥85% for all medications and route combinations, except for cyclophosphamide (oral) and cyclosporine (oral), which were 79% (95%CI 72%, 85%) and 66% (95%CI 58%, 73%), respectively. The SIG extractor performed well on the majority of medications, indicating that AWD calculated by the SIG extractor can be used to improve estimation of AWD when dispensed quantity or days' supply is questionable or improbable. The working model for annotating SIGs and the SIG extractor are generalized and can easily be applied to other medications. Copyright © 2016 John Wiley & Sons, Ltd.
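The abstract does not give the SIG extractor's implementation; as a rough illustration of the underlying idea, the toy sketch below pulls a quantity and a frequency phrase out of a narrative SIG and multiplies them by the tablet strength to get an average weekly dose. The regex, the frequency vocabulary, and the function name are all hypothetical simplifications, not the published tool:

```python
import re

# Hypothetical frequency vocabulary: administrations per week
FREQ_PER_WEEK = {
    "once daily": 7, "twice daily": 14, "three times daily": 21,
    "once weekly": 1, "every other day": 3.5,
}

def average_weekly_dose(sig, strength_mg):
    """Compute average weekly dose (mg) from a narrative SIG.

    Toy sketch of the idea behind the SIG extractor: extract the
    quantity per administration and the frequency, then multiply.
    """
    text = sig.lower()
    m = re.search(r"take\s+(\d+(?:\.\d+)?)\s+tablets?", text)
    if not m:
        return None
    qty = float(m.group(1))
    for phrase, per_week in FREQ_PER_WEEK.items():
        if phrase in text:
            return qty * strength_mg * per_week
    return None

# 2 x 2.5 mg tablets once weekly -> 5 mg/week
print(average_weekly_dose("Take 2 tablets once weekly", 2.5))  # 5.0
```

A production extractor must of course handle far messier SIGs (tapering schedules, ranges like "1-2 tablets", abbreviations such as "BID"), which is where the annotated reference standard becomes essential.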
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
Cosentino, Felice; Tumino, Emanuele; Passoni, Giovanni Rubis; Morandi, Elisabetta; Capria, Alfonso
2009-08-01
Currently, the best method for CRC screening is colonoscopy, which ideally (where possible) is performed under partial or deep sedation. This study aims to evaluate the efficacy of the Endotics System, a new robotic device composed of a workstation and a disposable probe, in performing accurate and well-tolerated colonoscopies. This new system could also be considered a precursor of other innovative vectors for atraumatic locomotion through natural orifices such as the bowel. The flexible probe adapts its shape to the complex contours of the colon, thereby exerting low strenuous forces during its movement. These novel characteristics allow for a painless and safe colonoscopy, thus eliminating all major associated risks such as infection, cardiopulmonary complications and colon perforation. An experimental study was devised to investigate stress pattern differences between traditional and robotic colonoscopy, in which 40 enrolled patients underwent both robotic and standard colonoscopy within the same day. The stress pattern related to robotic colonoscopy was 90% lower than that of standard colonoscopy. Additionally, the robotic colonoscopy demonstrated a higher diagnostic accuracy, since, due to the lower insufflation rate, it was able to visualize small polyps and angiodysplasias not seen during the standard colonoscopy. All patients rated the robotic colonoscopy as virtually painless compared to the standard colonoscopy, ranking pain and discomfort as 0.9 and 1.1 respectively, on a scale of 0 to 10, versus 6.9 and 6.8 respectively for the standard device. The new Endotics System demonstrates efficacy in the diagnosis of colonic pathologies using a procedure nearly completely devoid of pain. Therefore, this system can also be looked upon as the first step toward developing and implementing colonoscopy with atraumatic locomotion through the bowel while maintaining a high level of diagnostic accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require the power of the transmission line to be cut off, which results in complicated operation and power-off loss. This paper proposes an online calibration system which can calibrate electronic current transformers without a power cut. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.
A deformable particle-in-cell method for advective transport in geodynamic modeling
NASA Astrophysics Data System (ADS)
Samuel, Henri
2018-06-01
This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to the development of under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution, and may require the use of a prohibitive amount of particles in order to maintain the solution accuracy to an acceptable level. The new proposed approach relies on the use of deformable kernels that account for the strain history in the vicinity of particles. It results in a significant improvement of the spatial sampling by the particles, leading to a much higher accuracy of the numerical solution, for a reasonable computational extra cost. Various 2D tests were conducted to compare the performances of the deformable particle-in-cell method with the particle-in-cell approach. These consistently show that at comparable accuracy, the deformable particle-in-cell method was found to be four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
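For readers unfamiliar with the baseline, a standard particle-in-cell transfer uses a fixed one-dimensional kernel to move information between particles and grid nodes; the deformable-kernel extension described above modifies that kernel using the local strain history. A minimal 1D sketch of the standard linear (cloud-in-cell) particle-to-grid deposition, with invented particle data, is given below:

```python
import numpy as np

def deposit_to_grid(xp, fp, x0, dx, n):
    """Deposit particle values fp at positions xp onto a 1D grid of n
    nodes (origin x0, spacing dx) using the standard linear
    (cloud-in-cell) kernel, then normalize by the accumulated weights."""
    grid = np.zeros(n)
    weight = np.zeros(n)
    for x, f in zip(xp, fp):
        s = (x - x0) / dx          # position in cell units
        i = int(np.floor(s))       # left grid node
        w = s - i                  # fractional distance past it
        grid[i] += (1 - w) * f
        grid[i + 1] += w * f
        weight[i] += 1 - w
        weight[i + 1] += w
    nz = weight > 0
    grid[nz] /= weight[nz]         # weighted average where sampled
    return grid

# Two particles straddling node 1 of a 5-node grid with dx = 1
g = deposit_to_grid([0.75, 1.25], [2.0, 4.0], x0=0.0, dx=1.0, n=5)
print(g[1])  # 3.0: node 1 sees both particles with equal weight
```

Because this kernel is fixed and one-dimensional, strong shearing flows can leave grid nodes under- or over-sampled, which is exactly the sampling degradation the deformable-kernel method is designed to correct.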
Image-enhanced endoscopy with I-scan technology for the evaluation of duodenal villous patterns.
Cammarota, Giovanni; Ianiro, Gianluca; Sparano, Lucia; La Mura, Rossella; Ricci, Riccardo; Larocca, Luigi M; Landolfi, Raffaele; Gasbarrini, Antonio
2013-05-01
I-scan technology is a newly developed endoscopic tool that works in real time and utilizes a digital contrast method to enhance the endoscopic image. We performed a feasibility study aimed at determining the diagnostic accuracy of i-scan technology for the evaluation of duodenal villous patterns, with histology as the reference standard. In this prospective, single-center, open study, patients undergoing upper endoscopy for a histological evaluation of duodenal mucosa were enrolled. All patients underwent upper endoscopy using high-resolution view in association with i-scan technology. During endoscopy, duodenal villous patterns were evaluated and classified as normal, partial villous atrophy, or marked villous atrophy. Results were then compared with histology. One hundred fifteen subjects were recruited in this study. The endoscopist was able to find marked villous atrophy of the duodenum in 12 subjects, partial villous atrophy in 25, and normal villi in the remaining 78 individuals. The i-scan system demonstrated excellent accuracy (100%) in the detection of marked villous atrophy patterns. I-scan technology showed somewhat lower accuracy in determining partial villous atrophy or normal villous patterns (90% for both). Image-enhancing endoscopic technology allows a clear visualization of villous patterns in the duodenum. By switching from the standard to the i-scan view, it is possible to optimize the accuracy of endoscopy in recognizing villous alterations in subjects undergoing endoscopic evaluation.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
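For scalar linear advection, the Roe flux reduces to a central flux plus an upwind dissipation term, which makes the dissipation scaling described above easy to see. The sketch below (function name and setup hypothetical, not code from the paper) exposes a scaling factor eps on the dissipation term; the study found that only about 3-5 percent of the standard amount (eps of roughly 0.03-0.05) was needed for accurate LES:

```python
def roe_flux_scalar(uL, uR, a, eps=1.0):
    """Roe-type numerical flux for scalar linear advection f(u) = a*u.

    eps scales the upwind dissipation term: eps = 1 recovers the
    standard Roe (fully upwind) flux, eps = 0 the pure central flux.
    """
    central = 0.5 * a * (uL + uR)
    dissipation = 0.5 * abs(a) * (uR - uL)
    return central - eps * dissipation

uL, uR, a = 1.0, 0.0, 1.0
print(roe_flux_scalar(uL, uR, a, eps=1.0))  # 1.0: pure upwind flux a*uL
print(roe_flux_scalar(uL, uR, a, eps=0.0))  # 0.5: pure central flux
```

The trade-off the abstract describes follows directly: large eps smears resolved turbulent fluctuations, while eps = 0 leaves the scheme without the small upwind bias needed for stability.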
NASA Technical Reports Server (NTRS)
Irvine, R.; Van Alstine, R.
1979-01-01
The paper compares and describes the advantages of dry tuned gyros over floated gyros for space applications. Attention is given to describing the Teledyne SDG-5 gyro and the second-generation NASA Standard Dry Rotor Inertial Reference Unit (DRIRU II). Certain tests which were conducted to evaluate the SDG-5 and DRIRU II for specific mission requirements are outlined, and their results are compared with published test results on other gyro types. Performance advantages are highlighted.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2014-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
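The combination step, picking the emotion whose ranker scores an utterance highest, can be sketched independently of the ranking-SVM training itself. Below, simple least-squares linear scorers on synthetic 2-D "utterance features" stand in for the per-emotion rankers; the data and the scorers are illustrative only and are not the paper's method:

```python
import numpy as np

# Toy 2-D "utterance features" for three emotions (synthetic clusters)
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) + c
               for c in ([0.0, 0.0], [4.0, 0.0], [0.0, 4.0])])
y = np.repeat([0, 1, 2], 30)
Xb = np.hstack([X, np.ones((len(X), 1))])  # add a bias column

# One linear scorer per emotion; a least-squares fit to +/-1 targets
# stands in for the per-emotion ranking SVMs trained in the paper.
weights = []
for emotion in range(3):
    t = np.where(y == emotion, 1.0, -1.0)
    w, *_ = np.linalg.lstsq(Xb, t, rcond=None)
    weights.append(w)

def predict(x):
    # Multi-class prediction: the emotion whose scorer ranks x highest
    xb = np.append(x, 1.0)
    return int(np.argmax([xb @ w for w in weights]))

print(predict(np.array([4.0, 0.0])))  # 1: nearest the second cluster
```

The argmax-over-scorers rule is what lets each utterance express a mix of emotions while still yielding a single dominant label.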
The accuracy of home glucose meters in hypoglycemia.
Sonmez, Alper; Yilmaz, Zeynep; Uckaya, Gokhan; Kilic, Selim; Tapan, Serkan; Taslipinar, Abdullah; Aydogdu, Aydogan; Yazici, Mahmut; Yilmaz, Mahmut Ilker; Serdar, Muhittin; Erbil, M Kemal; Kutlu, Mustafa
2010-08-01
Home glucose meters (HGMs) may not be accurate enough to sense hypoglycemia. We evaluated the accuracy and the capillary and venous comparability of five different HGMs (Optium Xceed [Abbott Diabetes Care, Alameda, CA, USA], Contour TS [Bayer Diabetes Care, Basel, Switzerland], Accu-Chek Go [Roche Ltd., Basel, Switzerland], OneTouch Select [Lifescan, Milpitas, CA, USA], and EZ Smart [Tyson Bioresearch Inc., Chu-Nan, Taiwan]) in an adult population. The insulin hypoglycemia test was performed in 59 subjects (56 males; 23.6 +/- 3.2 years old). Glucose was measured from forearm venous blood and finger capillary samples both before and after regular insulin (0.1 U/kg) was injected. Venous samples were analyzed in the reference laboratory by the hexokinase method. In vitro tests for method comparison and precision analyses were also performed by spiking the glucose-depleted venous blood. All HGMs failed to sense hypoglycemia to some extent. EZ Smart was significantly inferior in critical error Zone D, and OneTouch Select was significantly inferior in the clinically unimportant error Zone B. Accu-Chek Go, Optium Xceed, and Contour TS had similar performances and were significantly better than the other two HGMs according to error grid analysis or International Organization for Standardization criteria. The in vitro tests were consistent with the above clinical data. The capillary and venous consistencies of Accu-Chek Go and OneTouch Select were better than those of the other HGMs. The present results show that not all HGMs are accurate enough at low blood glucose levels. Patients and caregivers should be aware of these limitations of HGMs and give more credit to the symptoms of hypoglycemia than to the values obtained by the HGMs. Finally, these results indicate a need for revision of the accuracy standards of HGMs at low blood glucose levels.
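The ISO criterion referenced here is easy to state in code: ISO 15197:2003 requires at least 95% of meter readings to fall within ±15 mg/dL of the laboratory reference when the reference is below 75 mg/dL, and within ±20% at or above 75 mg/dL. A minimal sketch (function names are hypothetical) of checking readings against those limits:

```python
def within_iso_2003(meter, reference):
    """True if a meter reading meets the ISO 15197:2003 accuracy limit
    relative to the laboratory reference (both in mg/dL)."""
    if reference < 75:
        return abs(meter - reference) <= 15            # +/-15 mg/dL band
    return abs(meter - reference) <= 0.20 * reference  # +/-20% band

def passes_iso_2003(pairs):
    """The standard requires >= 95% of readings within the limits."""
    hits = sum(within_iso_2003(m, r) for m, r in pairs)
    return hits / len(pairs) >= 0.95

# Example: a hypoglycemic reference of 50 mg/dL read as 70 mg/dL
print(within_iso_2003(70, 50))  # False: off by 20 mg/dL, limit is 15
```

Note that the fixed ±15 mg/dL band is proportionally generous at very low glucose values, which is one reason meters can satisfy the standard yet still miss hypoglycemia clinically.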
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2015-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker-specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion.
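The per-emotion ranker combination described above can be sketched as follows. This is a minimal illustration, not the authors' code: it approximates a ranking SVM with the standard pairwise transform (difference vectors built within each speaker's query), and all function names and the data layout are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(X, y, groups):
    # RankSVM via pairwise transform: within each query (speaker),
    # build difference vectors between target-emotion and other utterances.
    diffs, labels = [], []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        pos, neg = idx[y[idx] == 1], idx[y[idx] == 0]
        for i in pos:
            for j in neg:
                diffs.append(X[i] - X[j]); labels.append(1)
                diffs.append(X[j] - X[i]); labels.append(0)
    return np.array(diffs), np.array(labels)

def train_rankers(X, y_emotion, groups, emotions):
    # One ranker per emotion, trained on pairwise differences.
    rankers = {}
    for e in emotions:
        Xd, yd = pairwise_transform(X, (y_emotion == e).astype(int), groups)
        rankers[e] = LinearSVC(C=1.0, max_iter=5000).fit(Xd, yd)
    return rankers

def predict(rankers, X, emotions):
    # Combine per-emotion ranking scores; the dominant emotion wins.
    scores = np.column_stack([rankers[e].decision_function(X) for e in emotions])
    return np.array(emotions)[scores.argmax(axis=1)]
```

In use, `X` would hold acoustic features per utterance and `groups` the speaker identities; the argmax over ranker scores implements the multi-class combination the abstract describes.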
USDA-ARS?s Scientific Manuscript database
Error in rater estimates of plant disease severity occur, and standard area diagrams (SADs) help improve accuracy and reliability. The effects of diagram number in a SAD set on accuracy and reliability is unknown. The objective of this study was to compare estimates of pecan scab severity made witho...
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^(-15) for periods of 30-100 days.
Li, Feng; Engelmann, Roger; Pesce, Lorenzo L; Doi, Kunio; Metz, Charles E; Macmahon, Heber
2011-12-01
To determine whether use of bone suppression (BS) imaging, used together with a standard radiograph, could improve radiologists' performance for detection of small lung cancers compared with use of standard chest radiographs alone and whether BS imaging would provide accuracy equivalent to that of dual-energy subtraction (DES) radiography. Institutional review board approval was obtained. The requirement for informed consent was waived. The study was HIPAA compliant. Standard and DES chest radiographs of 50 patients with 55 confirmed primary nodular cancers (mean diameter, 20 mm) as well as 30 patients without cancers were included in the observer study. A new BS imaging processing system that can suppress the conspicuity of bones was applied to the standard radiographs to create corresponding BS images. Ten observers, including six experienced radiologists and four radiology residents, indicated their confidence levels regarding the presence or absence of a lung cancer for each lung, first by using a standard image, then a BS image, and finally DES soft-tissue and bone images. Receiver operating characteristic (ROC) analysis was used to evaluate observer performance. The average area under the ROC curve (AUC) for all observers was significantly improved from 0.807 to 0.867 with BS imaging and to 0.916 with DES (both P < .001). The average AUC for the six experienced radiologists was significantly improved from 0.846 with standard images to 0.894 with BS images (P < .001) and from 0.894 to 0.945 with DES images (P = .001). Use of BS imaging together with a standard radiograph can improve radiologists' accuracy for detection of small lung cancers on chest radiographs. Further improvements can be achieved by use of DES radiography but with the requirement for special equipment and a potential small increase in radiation dose. © RSNA, 2011.
Robinson, P; Hodgson, R; Grainger, A J
2015-01-01
Objective: To assess whether a single isotropic three-dimensional (3D) fast spin echo (FSE) proton density fat-saturated (PD FS) sequence reconstructed in three planes could replace the three PD (FS) sequences in our standard protocol at 1.5 T (Siemens Avanto, Erlangen, Germany). Methods: A 3D FSE PD water excitation sequence was included in the protocol for 95 consecutive patients referred for routine knee MRI. This was used to produce offline reconstructions in axial, sagittal and coronal planes. Two radiologists independently assessed each case twice, once using the standard MRI protocol and once replacing the standard PD (FS) sequences with reconstructions from the 3D data set. Following scoring, the observer reviewed the 3D data set and performed multiplanar reformats to see if this altered confidence. The menisci, ligaments and cartilage were assessed, and statistical analysis was performed using the standard sequence as the reference standard. Results: The reporting accuracy was as follows: medial meniscus (MM) = 90.9%, lateral meniscus (LM) = 93.7%, anterior cruciate ligament (ACL) = 98.9% and cartilage surfaces = 85.8%. Agreement among the readers was for the standard protocol: MM kappa = 0.91, LM = 0.89, ACL = 0.98 and cartilage = 0.84; and for the 3D protocol: MM = 0.86, LM = 0.77, ACL = 0.94 and cartilage = 0.64. Conclusion: A 3D PD FSE sequence reconstructed in three planes gives reduced accuracy and decreased concordance among readers compared with conventional sequences when evaluating the menisci and cartilage with a 1.5-T MRI scanner. Advances in knowledge: Using the existing 1.5-T MR systems, a 3D FSE sequence should not replace two-dimensional sequences. PMID:26067920
Presentation accuracy of the web revisited: animation methods in the HTML5 era.
Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego
2014-01-01
Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use preferable to more obsolete technologies.
Contribution of Visual Information about Ball Trajectory to Baseball Hitting Accuracy
Higuchi, Takatoshi; Nagami, Tomoyuki; Nakata, Hiroki; Watanabe, Masakazu; Isaka, Tadao; Kanosue, Kazuyuki
2016-01-01
The contribution of visual information about a pitched ball to the accuracy of baseball-bat contact may vary depending on the part of trajectory seen. The purpose of the present study was to examine the relationship between hitting accuracy and the segment of the trajectory of the flying ball that can be seen by the batter. Ten college baseball field players participated in the study. The systematic error and standardized variability of ball-bat contact on the bat coordinate system and pitcher-to-catcher direction when hitting a ball launched from a pitching machine were measured with or without visual occlusion and analyzed using analysis of variance. The visual occlusion timing included occlusion from 150 milliseconds (ms) after the ball release (R+150), occlusion from 150 ms before the expected arrival of the launched ball at the home plate (A-150), and a condition with no occlusion (NO). Twelve trials in each condition were performed using two ball speeds (31.9 m·s-1 and 40.3 m·s-1). Visual occlusion did not affect the mean location of ball-bat contact in the bat's long axis, short axis, and pitcher-to-catcher directions. Although the magnitude of standardized variability was significantly smaller in the bat's short axis direction than in the bat's long axis and pitcher-to-catcher directions (p < 0.001), additional visible time from the R+150 condition to the A-150 and NO conditions resulted in a further decrease in standardized variability only in the bat's short axis direction (p < 0.05). The results suggested that there is directional specificity in the magnitude of standardized variability with different visible times. The present study also confirmed that visual information from the later part of the ball trajectory contributes little to improving hitting accuracy, which is likely due to visuo-motor delay. PMID:26848742
Age-related differences in listening effort during degraded speech recognition
Ward, Kristina M.; Shen, Jing; Souza, Pamela E.; Grieco-Calub, Tina M.
2016-01-01
Objectives The purpose of the current study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design Twenty-five younger adults (18–24 years) and twenty-one older adults (56–82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants’ responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners’ performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (baseline vs. dual task); and (3) per group (younger vs. older adults). Results Speech recognition declined with increasing spectral degradation for both younger and older adults when they performed the task in isolation or concurrently with the visual monitoring task. Older adults were slower and less accurate than younger adults on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared to single-task performance, older adults experienced greater declines in secondary-task accuracy, but not reaction time, than younger adults. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. 
Conclusions Older adults experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than younger adults. These findings suggest that older listeners expended greater listening effort than younger listeners, which may be partially attributed to age-related differences in executive control. PMID:27556526
Radiometric Calibration of the NASA Advanced X-Ray Astrophysics Facility
NASA Technical Reports Server (NTRS)
Kellogg, Edwin M.
1999-01-01
We present the results of absolute calibration of the quantum efficiency of soft x-ray detectors performed at the PTB/BESSY beam lines. The accuracy goal is 1%. We discuss the implementation of that goal. These detectors were used as transfer standards to provide the radiometric calibration of the AXAF X-ray observatory, to be launched in April 1999.
Anzalone, Nicoletta; Castellano, Antonella; Cadioli, Marcello; Conte, Gian Marco; Cuccarini, Valeria; Bizzi, Alberto; Grimaldi, Marco; Costa, Antonella; Grillea, Giovanni; Vitali, Paolo; Aquino, Domenico; Terreni, Maria Rosa; Torri, Valter; Erickson, Bradley J; Caulo, Massimo
2018-06-01
Purpose To evaluate the feasibility of a standardized protocol for acquisition and analysis of dynamic contrast material-enhanced (DCE) and dynamic susceptibility contrast (DSC) magnetic resonance (MR) imaging in a multicenter clinical setting and to verify its accuracy in predicting glioma grade according to the new World Health Organization 2016 classification. Materials and Methods The local research ethics committees of all centers approved the study, and informed consent was obtained from patients. One hundred patients with glioma were prospectively examined at 3.0 T in seven centers that performed the same preoperative MR imaging protocol, including DCE and DSC sequences. Two independent readers identified the perfusion hotspots on maps of volume transfer constant (Ktrans), plasma (vp) and extravascular-extracellular space (ve) volumes, initial area under the concentration curve, and relative cerebral blood volume (rCBV). Differences in parameters between grades and molecular subtypes were assessed by using Kruskal-Wallis and Mann-Whitney U tests. Diagnostic accuracy was evaluated by using receiver operating characteristic curve analysis. Results The whole protocol was tolerated in all patients. Perfusion maps were successfully obtained in 94 patients. An excellent interreader reproducibility of DSC- and DCE-derived measures was found. Among DCE-derived parameters, vp and ve had the highest accuracy (area under the receiver operating characteristic curve [Az] = 0.847 and 0.853) for glioma grading. DSC-derived rCBV had the highest accuracy (Az = 0.894), but the difference was not statistically significant (P > .05). Among lower-grade gliomas, a moderate increase in both vp and rCBV was evident in isocitrate dehydrogenase wild-type tumors, although this was not significant (P > .05). Conclusion A standardized multicenter acquisition and analysis protocol of DCE and DSC MR imaging is feasible and highly reproducible. 
Both techniques showed a comparable, high diagnostic accuracy for grading gliomas. © RSNA, 2018 Online supplemental material is available for this article.
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. 
Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
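The 10-fold cross-validated comparison described above can be sketched with scikit-learn. The data here are a synthetic stand-in for the six-class MSI tissue database (the feature counts are assumptions), and the model list covers a subset of the algorithms named in the abstract, with the ensemble variant approximated by bagging:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the MSI training database:
# 6 tissue classes, a handful of spectral-band features per sample.
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "EN-DT": BaggingClassifier(DecisionTreeClassifier(), random_state=0),
}

def compare(models, X, y, folds=10):
    # 10-fold cross-validated mean accuracy per algorithm.
    return {name: cross_val_score(m, X, y, cv=folds).mean()
            for name, m in models.items()}
```

Repeating `compare` over resampled data, as the study repeated each algorithm 100 times, would yield the distribution of test accuracies from which the reported means were taken.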
Bias in estimating accuracy of a binary screening test with differential disease verification
Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.
2011-01-01
SUMMARY Sensitivity, specificity, positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women where it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects who receive the imperfect reference test rather than the gold standard, the prevalence of the disease, and the correlation between the results for the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059
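The mechanism of differential-verification bias can be illustrated with a small simulation. This sketch is a simplification of the designs analyzed in the paper, under stated assumptions: verification is assigned at random, and screening and reference errors are independent given disease status; all names are illustrative:

```python
import numpy as np

def apparent_accuracy(n, prev, sens_s, spec_s, p_gold, sens_r, spec_r, seed=0):
    # Simulate differential disease verification: a fraction p_gold of
    # subjects get the gold standard; the rest get an imperfect reference
    # test with sensitivity sens_r and specificity spec_r.
    rng = np.random.default_rng(seed)
    d = rng.random(n) < prev                        # true disease status
    screen = np.where(d, rng.random(n) < sens_s,    # screening test result
                         rng.random(n) >= spec_s)
    gold = rng.random(n) < p_gold                   # who gets the gold standard
    ref = np.where(d, rng.random(n) < sens_r,       # imperfect reference result
                      rng.random(n) >= spec_r)
    verified = np.where(gold, d, ref)               # "truth" used in the analysis
    app_sens = screen[verified].mean()              # apparent sensitivity
    app_spec = (~screen[~verified]).mean()          # apparent specificity
    return app_sens, app_spec
```

With a perfect reference the apparent values recover the true sensitivity and specificity; lowering the reference's specificity dilutes the apparent-diseased group with true negatives and biases apparent sensitivity downward, in line with the paper's qualitative conclusion.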
[Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].
Krimmel, M; Kluba, S; Dietz, K; Reinert, S
2005-03-01
The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.
A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.
McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S
2015-09-01
Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed. (c) 2015 APA, all rights reserved.
Robbins, Rebecca J; Leonczak, Jadwiga; Johnson, J Christopher; Li, Julia; Kwik-Uribe, Catherine; Prior, Ronald L; Gu, Liwei
2009-06-12
The quantitative parameters and method performance for a normal-phase HPLC separation of flavanols and procyanidins in chocolate and cocoa-containing food products were optimized and assessed. Single laboratory method performance was examined over three months using three separate secondary standards. RSD(r) values were 1.9%, 4.5%, and 9.0% for cocoa powder, liquor, and chocolate samples containing 74.39, 15.47, and 1.87 mg/g flavanols and procyanidins, respectively. Accuracy was determined by comparison to the NIST Standard Reference Material 2384. Inter-lab assessment indicated that variability was quite low for seven different cocoa-containing samples, with a RSD(R) of less than 10% for the range of samples analyzed.
Martens, R; Hurks, P P M; Jolles, J
2014-01-01
This study investigated psychometric properties (standardization and validity) of the Rey Complex Figure Organizational Strategy Score (RCF-OSS) in a sample of 217 healthy children aged 5-7 years. Our results showed that RCF-OSS performance changes significantly between 5 and 7 years of age. While most 5-year-olds used a local approach when copying the Rey-Osterrieth Complex Figure (ROCF), 7-year-olds increasingly adopted a global approach. RCF-OSS performance correlated significantly, but moderately with measures of ROCF accuracy, executive functioning (fluency, working memory, reasoning), and non-executive functioning (visual-motor integration, visual attention, processing speed, numeracy). These findings seem to indicate that RCF-OSS performance reflects a range of cognitive skills at 5 to 7 years of age, including aspects of executive and non-executive functioning.
Comparative analysis of autofocus functions in digital in-line phase-shifting holography.
Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António
2016-09-20
Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
High accuracy time transfer synchronization
NASA Technical Reports Server (NTRS)
Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.
1995-01-01
In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the spread spectrum modem designed by the Naval Research Laboratory (NRL) and built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for level of performance skill required. In this paper we provide a method to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
Skin prick/puncture testing in North America: a call for standards and consistency.
Fatteh, Shahnaz; Rekkerth, Donna J; Hadley, James A
2014-01-01
Skin prick/puncture testing (SPT) is widely accepted as a safe, dependable, convenient, and cost-effective procedure to detect allergen-specific IgE sensitivity. It is, however, prone to influence by a variety of factors that may significantly alter test outcomes, affect the accuracy of diagnosis, and the effectiveness of subsequent immunotherapy regimens. Proficiency in SPT administration is a key variable that can be routinely measured and documented to improve the predictive value of allergy skin testing. Literature surveys were conducted to determine the adherence to repeated calls for development and implementation of proficiency testing standards in the 1990s, the mid-2000s, and the 2008 allergy diagnostics practice parameters. Authors publishing clinical research in peer-reviewed journals and conducting workshops at annual scientific meetings have recommended proficiency testing based primarily on its potential to reduce variability, minimize confounding test results, and promote more effective immunotherapeutic treatments. Very few publications of clinical studies, however, appear to report proficiency testing data for SPT performance. Allergen immunotherapy recommendations are updated periodically by the Joint Task Force on Practice Parameters representing the American Academy of Allergy, Asthma and Immunology (AAAAI), the American College of Allergy, Asthma and Immunology (ACAAI), and the Joint Council of Allergy, Asthma and Immunology (JCAAI). Despite consensus that all staff who perform SPT should meet basic quality assurance standards that demonstrate their SPT proficiency, the gap between recommendations and daily practice persists. By embracing standards, the accuracy of SPT and allergy diagnosis can be optimized, ultimately benefiting patients with allergic disease.
Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V
2000-08-01
We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
Conrad, Claudius; Konuk, Yusuf; Werner, Paul D.; Cao, Caroline G.; Warshaw, Andrew L.; Rattner, David W.; Stangenberg, Lars; Ott, Harald C.; Jones, Daniel B.; Miller, Diane L; Gee, Denise W.
2012-01-01
OBJECTIVE To explore how the two most important components of surgical performance - speed and accuracy - are influenced by different forms of stress, and what the impact of music on these factors is. SUMMARY BACKGROUND DATA Based on a recently published pilot study on surgical experts, we designed an experiment examining the effects of auditory stress, mental stress, and music on surgical performance and learning, and then correlated the data with psychometric measures of the role of music in a novice surgeon's life. METHODS Thirty-one surgeons were recruited for a crossover study. Surgeons were randomized to four simple standardized tasks to be performed on the Surgical SIM VR laparoscopic simulator, allowing exact tracking of speed and accuracy. Tasks were performed under a variety of conditions, including silence, dichotic music (auditory stress), defined classical music (auditory relaxation), and mental loading (mental arithmetic tasks). Tasks were performed twice to test for memory consolidation and to accommodate for baseline variability. Performance was correlated to the Brief Musical Experience Questionnaire (MEQ). RESULTS Mental loading influences performance with respect to accuracy, speed, and recall more negatively than does auditory stress. Defined classical music may lead to minimally worse performance initially, but leads to significantly improved memory consolidation. Furthermore, psychologic testing of the volunteers suggests that surgeons with greater musical commitment, as measured by the MEQ, perform worse under the mental loading condition. CONCLUSION Mental distraction and auditory stress negatively affect specific components of surgical learning and performance. If used appropriately, classical music may positively affect surgical memory consolidation. It may also be possible to predict surgeons' performance and learning under stress through psychological tests on the role of music in a surgeon's life.
Further investigation is necessary to determine the cognitive processes behind these correlations. PMID:22584632
Stone, Richard T; Moeller, Brandon F; Mayer, Robert R; Rosenquist, Bryce; Van Ryswyk, Darin; Eichorn, Drew
2014-06-01
Shooter accuracy and stability were monitored while firing two bullpup and two conventional configuration rifles of the same caliber in order to determine if one style of weapon results in superior performance. Considerable debate exists among police and military professionals regarding the differences between conventional configuration weapons, where the magazine and action are located ahead of the trigger, and bullpup configuration, where they are located behind the trigger (closer to the user). To date, no published research has attempted to evaluate this question from a physical ergonomics standpoint, and the knowledge that one style might improve stability or result in superior performance is of interest to countless military, law enforcement, and industry experts. A live-fire evaluation of both weapon styles was performed using a total of 48 participants. Shooting accuracy and fluctuations in biomechanical stability (center of pressure) were monitored while subjects used the weapons to perform standard drills. The bullpup weapon designs were found to provide a significant advantage in accuracy and shooter stability, while subjects showed considerable preference toward the conventional weapons. Although many mechanical and maintenance issues must be considered before committing to a bullpup or conventional weapon system, it is clear in terms of basic human stability that the bullpup is the more advantageous configuration. Results can be used by competitive shooters, military, law enforcement, and industry experts while outfitting personnel with a weapon system that leads to superior performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, J. A.; Booth, J. T.; O’Brien, R. T.
2014-11-01
Purpose: Kilovoltage intrafraction monitoring (KIM) is a real-time 3D tumor monitoring system for cancer radiotherapy. KIM uses the commonly available gantry-mounted x-ray imager as input, making this method potentially more widely available than dedicated real-time 3D tumor monitoring systems. KIM is being piloted in a clinical trial for prostate cancer patients treated with VMAT (NCT01742403). The purpose of this work was to develop clinical process and quality assurance (QA) practices for the clinical implementation of KIM. Methods: Informed by and adapting existing guideline documents from other real-time monitoring systems, KIM-specific QA practices were developed. The following five KIM-specific QA tests were included: (1) static localization accuracy, (2) dynamic localization accuracy, (3) treatment interruption accuracy, (4) latency measurement, and (5) clinical conditions accuracy. Tests (1)–(4) were performed using KIM to measure static and representative patient-derived prostate motion trajectories using a 3D programmable motion stage supporting an anthropomorphic phantom with implanted gold markers to represent the clinical treatment scenario. The threshold for system tolerable latency is <1 s. The tolerances for all other tests are that both the mean and standard deviation of the difference between the programmed trajectory and the measured data are <1 mm. The (5) clinical conditions accuracy test compared the KIM measured positions with those measured by kV/megavoltage (MV) triangulation from five treatment fractions acquired in a previous pilot study. Results: For the (1) static localization, (2) dynamic localization, and (3) treatment interruption accuracy tests, the mean and standard deviation of the difference are <1.0 mm. (4) The measured latency is 350 ms. (5) For the tests with previously acquired patient data, the mean and standard deviation of the difference between KIM and kV/MV triangulation are <1.0 mm.
Conclusions: Clinical process and QA practices for the safe clinical implementation of KIM, a novel real-time monitoring system using commonly available equipment, have been developed and implemented for prostate cancer VMAT.
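The pass/fail criterion used in QA tests (1)-(3) above can be sketched as a simple check: the mean and standard deviation of the difference between the programmed phantom trajectory and the measured positions must each be under 1 mm. The trajectory values below are hypothetical, and the code is an illustration of the criterion only, not the authors' QA software.

```python
# Sketch of the <1 mm mean/SD tolerance check from the KIM QA tests.
# All trajectory values are hypothetical.
import math

def qa_pass(programmed_mm, measured_mm, tol_mm=1.0):
    """True if both the mean and SD of (measured - programmed) are < tol_mm."""
    diffs = [m - p for p, m in zip(programmed_mm, measured_mm)]
    mean = sum(diffs) / len(diffs)
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
    return abs(mean) < tol_mm and std < tol_mm

programmed = [0.0, 1.5, 3.0, 2.0, 0.5]     # one motion axis, mm
measured   = [0.2, 1.4, 3.3, 1.8, 0.6]     # hypothetical KIM readings, mm
print(qa_pass(programmed, measured))       # prints True
```

The same check would be run per axis against each programmed motion trajectory.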
Accuracy of clinical pallor in the diagnosis of anaemia in children: a meta-analysis.
Chalco, Juan P; Huicho, Luis; Alamo, Carlos; Carreazo, Nilton Y; Bada, Carlos A
2005-12-08
Anaemia is highly prevalent in children of developing countries. It is associated with impaired physical growth and mental development. Palmar pallor is recommended at the primary level for diagnosing it, on the basis of few studies. The objective of the study was to systematically assess the accuracy of clinical signs in the diagnosis of anaemia in children. A systematic review on the accuracy of clinical signs of anaemia in children. We performed an Internet search in various databases and additional reference tracking. Studies had to be on the performance of clinical signs in the diagnosis of anaemia, using haemoglobin as the gold standard. We calculated pooled diagnostic likelihood ratios (LRs) and odds ratios (DORs) for each clinical sign at different haemoglobin thresholds. Eleven articles met the inclusion criteria. Most studies were performed in Africa, in children under five. The chi-square test for proportions and Cochran Q for DORs and for LRs showed heterogeneity. Type of observer and haemoglobin technique influenced the results. Pooling was done using the random effects model. Pooled DOR at haemoglobin <11 g/dL was 4.3 (95% CI 2.6-7.2) for palmar pallor, 3.7 (2.3-5.9) for conjunctival pallor, and 3.4 (1.8-6.3) for nailbed pallor. DORs and LRs were slightly better for nailbed pallor at all other haemoglobin thresholds. The accuracy did not vary substantially after excluding outliers. This meta-analysis did not document a highly accurate clinical sign of anaemia. In view of the poor performance of clinical signs, universal iron supplementation may be an adequate control strategy in high prevalence areas. Further well-designed studies are needed in settings other than Africa. They should assess inter-observer variation, performance of combined clinical signs, phenotypic differences, and different degrees of anaemia.
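The per-study quantities pooled in a meta-analysis like this can be sketched from a 2x2 table. The counts below are invented for illustration and are not data from the review.

```python
# Generic sketch of diagnostic accuracy measures for a clinical sign
# judged against a haemoglobin gold standard. Counts are made up.

def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, LR+, LR-, and diagnostic odds ratio (DOR)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)      # positive likelihood ratio
    lr_neg = (1 - sens) / spec      # negative likelihood ratio
    dor = lr_pos / lr_neg           # equals (tp * tn) / (fp * fn)
    return sens, spec, lr_pos, lr_neg, dor

# e.g. 60 anaemic children with pallor, 40 anaemic without, 25 false positives
print(round(diagnostic_measures(tp=60, fp=25, fn=40, tn=75)[4], 2))   # prints 4.5
```

Pooling across studies (e.g. under a random effects model, as in the review) would then combine these per-study DORs and LRs on the log scale.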
Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli
2016-07-08
The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2017-09-07
Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall, digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
Schroeck, Florian R; Patterson, Olga V; Alba, Patrick R; Pattison, Erik A; Seigne, John D; DuVall, Scott L; Robertson, Douglas J; Sirovich, Brenda; Goodney, Philip P
2017-12-01
To take the first step toward assembling population-based cohorts of patients with bladder cancer with longitudinal pathology data, we developed and validated a natural language processing (NLP) engine that abstracts pathology data from full-text pathology reports. Using 600 bladder pathology reports randomly selected from the Department of Veterans Affairs, we developed and validated an NLP engine to abstract data on histology, invasion (presence vs absence and depth), grade, the presence of muscularis propria, and the presence of carcinoma in situ. Our gold standard was based on an independent review of reports by 2 urologists, followed by adjudication. We assessed the NLP performance by calculating the accuracy, the positive predictive value, and the sensitivity. We subsequently applied the NLP engine to pathology reports from 10,725 patients with bladder cancer. When comparing the NLP output to the gold standard, NLP achieved the highest accuracy (0.98) for the presence vs the absence of carcinoma in situ. Accuracy for histology, invasion (presence vs absence), grade, and the presence of muscularis propria ranged from 0.83 to 0.96. The most challenging variable was depth of invasion (accuracy 0.68), with an acceptable positive predictive value for lamina propria (0.82) and for muscularis propria (0.87) invasion. The validated engine was capable of abstracting pathologic characteristics for 99% of the patients with bladder cancer. NLP had high accuracy for 5 of 6 variables and abstracted data for the vast majority of the patients. This now allows for the assembly of population-based cohorts with longitudinal pathology data. Published by Elsevier Inc.
Hwang, Chang Yun; Song, Tae Jun; Moon, Sung-Hoon; Lee, Don; Park, Do Hyun; Seo, Dong Wan; Lee, Sung Koo; Kim, Myung-Hwan
2009-01-01
Background/Aims Although endoscopic ultrasound guided fine needle aspiration (EUS-FNA) has been introduced and its use has been increasing in Korea, there have not been many reports about its performance. The aim of this study was to assess the utility of EUS-FNA without an on-site cytopathologist in establishing the diagnosis of solid pancreatic and peripancreatic masses at a single institution in Korea. Methods Medical records of 139 patients who underwent EUS-FNA for a pancreatic or peripancreatic solid mass in the year 2007 were retrospectively reviewed. By comparing the cytopathologic diagnosis of FNA with the final diagnosis, sensitivity, specificity, and accuracy were determined, and factors influencing the accuracy as well as complications were analyzed. Results One hundred twenty out of 139 cases had a final diagnosis of malignancy. Sensitivity, specificity, and accuracy of EUS-FNA were 82%, 89%, and 83%, respectively, and positive and negative predictive values were 100% and 46%, respectively. As for factors influencing the accuracy of FNA, lesion size was marginally significant (p = 0.08) by multivariate analysis. Conclusions EUS-FNA performed without an on-site cytopathologist was found to be accurate and safe, and thus EUS-FNA should be part of the standard management algorithm for pancreatic and peripancreatic masses. PMID:20431733
Henschke, Nicholas; Keuerleber, Julia; Ferreira, Manuela; Maher, Christopher G; Verhagen, Arianne P
2014-04-01
To provide an overview of reporting and methodological quality in diagnostic test accuracy (DTA) studies in the musculoskeletal field and evaluate the use of the QUality Assessment of Diagnostic Accuracy Studies (QUADAS) checklist. A literature review identified all systematic reviews that evaluated the accuracy of clinical tests to diagnose musculoskeletal conditions and used the QUADAS checklist. Two authors screened all identified reviews and extracted data on the target condition, index tests, reference standard, included studies, and QUADAS items. A descriptive analysis of the QUADAS checklist was performed, along with Rasch analysis to examine the construct validity and internal reliability. A total of 19 systematic reviews were included, which provided data on individual items of the QUADAS checklist for 392 DTA studies. In the musculoskeletal field, uninterpretable or intermediate test results are commonly not reported, with 175 (45%) studies scoring "no" to this item. The proportion of studies fulfilling certain items varied from 22% (item 11) to 91% (item 3). The interrater reliability of the QUADAS checklist was good and Rasch analysis showed excellent construct validity and internal consistency. This overview identified areas where the reporting and performance of diagnostic studies within the musculoskeletal field can be improved. Copyright © 2014 Elsevier Inc. All rights reserved.
Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.
Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan
2016-09-01
Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for the diagnosis of FAI. A search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. Clinical tests included were the Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. The IROP test, impingement sign, and FABER test showed the highest sensitivity for identifying FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited owing to their heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, a 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained with the 3-D measuring instrument, and the RCSs of the measured 3-D structure and the corresponding ideal structure were calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors and avoids the correlation among plate edge-length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
Meissner, Oliver A; Verrel, Frauke; Tató, Federico; Siebert, Uwe; Ramirez, Heldin; Ruppert, Volker; Schoenberg, Stefan O; Reiser, Maximilian
2004-11-01
The danger of limb loss as a consequence of acute occlusion of infrapopliteal bypasses underscores the requirement for careful patient follow-up. The objective of this study was to determine the agreement and accuracy of contrast material-enhanced moving-table magnetic resonance (MR) angiography and duplex ultrasonography (US) in the assessment of failing bypass grafts. In cases of discrepancy, digital subtraction angiography (DSA) served as the reference standard. MR angiography was performed in 24 consecutive patients with 26 femorotibial or femoropedal bypass grafts. Each revascularized limb was divided into five segments--(i) native arteries proximal to the graft; (ii) proximal anastomosis; (iii) graft course; (iv) distal anastomosis; and (v) native arteries distal to the graft--resulting in 130 vascular segments. Three readers evaluated all MR angiograms for image quality and the presence of failing grafts. The degree of stenosis was compared to the findings of duplex US, and in case of discrepancy, to DSA findings. Two separate analyses were performed with use of DSA only and a combined diagnostic endpoint as the reference standard. Image quality was rated excellent or intermediate in 119 of 130 vascular segments (92%). Venous overlay was encountered in 26 of 130 segments (20%). In only two segments was evaluation of the outflow region not feasible. One hundred seventeen of 130 vascular segments were available for quantitative analysis. In 109 of 117 segments (93%), MR angiography and duplex US showed concordant findings. In the eight discordant segments in seven patients, duplex US overlooked four high-grade stenoses that were correctly identified by MR angiography and confirmed by DSA. Percutaneous transluminal angioplasty was performed in these cases. In no case did MR angiography miss an area of stenosis of sufficient severity to require treatment.
Total accuracy for duplex US ranged from 0.90 to 0.97 depending on the reference standard used, whereas MR angiography was completely accurate (1.00) regardless of the standard definition. Our data strongly suggest that the accuracy of MR angiography for identifying failing grafts in the infrapopliteal circulation is equal to that of duplex US and superior to that of duplex US in cases of complex revascularization. MR angiography should be included in routine follow-up of patients undergoing infrapopliteal bypass surgery.
NASA Astrophysics Data System (ADS)
Komonov, A. I.; Prinz, V. Ya.; Seleznev, V. A.; Kokh, K. A.; Shlegel, V. N.
2017-07-01
Metrology is essential for nanotechnology, especially for structures and devices with feature sizes down to the nm scale. Scanning probe microscopes (SPMs) permit measurement of nanometer- and subnanometer-scale objects. The accuracy of size measurements performed using SPMs is largely defined by the accuracy of the calibration measures used. In the present publication, we demonstrate that height standards of monolayer step (∼1 and ∼0.6 nm) can easily be prepared by cleaving Bi2Se3 and ZnWO4 layered single crystals. It was shown that the conducting surface of Bi2Se3 crystals offers a height standard appropriate for calibrating STMs and for testing conductive SPM probes. Our AFM study of the morphology of freshly cleaved (0001) Bi2Se3 surfaces proved that such surfaces remained atomically smooth for a period of at least half a year. The (010) surfaces of ZnWO4 crystals remained atomically smooth for one day, but two days later an additional nanorelief of amplitude ∼0.3 nm appeared on those surfaces. This relief, however, did not grow further in height and did not hamper the calibration. The simplicity and speed of fabrication of these step-height standards, as well as their high stability, make them available to a large and permanently growing number of users involved in 3D printing activities.
Development of a Smartphone-based reading system for lateral flow immunoassay.
Lee, Sangdae; Kim, Giyoung; Moon, Jihea
2014-11-01
This study was conducted to develop and evaluate the performance of a Smartphone-based reading system for the lateral flow immunoassay (LFIA). The Smartphone-based reading system consists of a Samsung Galaxy S2 Smartphone, a Smartphone application, and an LFIA reader. The LFIA reader is composed of a close-up lens with a focal length up to 30 mm, a white LED light, a lithium polymer battery, and the main body. The Smartphone application for image acquisition and data analysis was developed on the Android platform. The standard curve was obtained by plotting the measured P(T)/P(C) or A(T)/A(C) ratio versus the Salmonella standard concentration. The mean, standard deviation (SD), recovery, and relative standard deviation (RSD) were also calculated using additional experimental results. These data were compared with those obtained from the benchtop LFIA reader. The LOD in both systems was 10^6 CFU/mL. The results show high accuracy and good reproducibility, with an RSD of less than 10% in the range of 10^6 to 10^9 CFU/mL. Due to the simple structure, good sensitivity, and high accuracy of the Smartphone-based reading system, this system can be substituted for the benchtop LFIA reader for point-of-care medical diagnostics.
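The reproducibility statistic reported above, the relative standard deviation (RSD) of replicate test/control ratio readings, can be sketched in a few lines; the ratio values below are hypothetical, not measurements from the study.

```python
# Sketch: RSD (%) of replicate readings, the <10% reproducibility
# criterion quoted in the abstract. The values are made up.
import math

def rsd_percent(values):
    """RSD = 100 * sample standard deviation / mean."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
    return 100.0 * sd / mean

ratios = [0.52, 0.55, 0.50, 0.53, 0.54]   # hypothetical P(T)/P(C) replicates
print(round(rsd_percent(ratios), 1))      # prints 3.6, well under 10%
```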
Delgado-Gomez, D; Baca-Garcia, E; Aguado, D; Courtet, P; Lopez-Castroman, J
2016-12-01
Several Computerized Adaptive Tests (CATs) have been proposed to facilitate assessments in mental health. These tests are built in a standard way, disregarding useful and usually available information not included in the assessment scales, such as the history of suicide attempts, that could increase the precision and utility of CATs. Using the items of a previously developed scale for suicidal risk, we compared the performance of a standard CAT and a decision tree in a support decision system to identify suicidal behavior. We included the history of past suicide attempts as a class for the separation of patients in the decision tree. The decision tree needed an average of four items to achieve an accuracy similar to that of a standard CAT with nine items. The accuracy of the decision tree, obtained after 25 cross-validations, was 81.4%. A shortened test adapted for the separation of suicidal and non-suicidal patients was developed. CATs can be very useful tools for the assessment of suicidal risk. However, standard CATs do not use all the information that is available. A decision tree can improve the precision of the assessment, since it is constructed using a priori information. Copyright © 2016 Elsevier B.V. All rights reserved.
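The core idea, letting a priori information choose the branch before any item is asked, can be illustrated with a toy tree. This is a hypothetical sketch, not the published tree: the thresholds, item counts, and labels are invented.

```python
# Hypothetical sketch: a priori information (history of suicide attempts)
# selects the branch first, so fewer scale items are needed before a
# decision is reached. All thresholds and counts are invented.

def assess(history_of_attempts, item_scores):
    """Return (risk_label, items_used); items are consumed only as needed."""
    if history_of_attempts:             # class known before any item is asked
        threshold, max_items = 2, 3     # shorter, more targeted branch
    else:
        threshold, max_items = 4, 5
    total, items_used = 0, 0
    for score in item_scores[:max_items]:
        total += score
        items_used += 1
        if total >= threshold:          # early stopping, as in adaptive tests
            return "high risk", items_used
    return "low risk", items_used

print(assess(True, [1, 2, 0, 1]))       # prints ('high risk', 2)
```

In this toy version the branch chosen by the prior information reaches a decision after two items, mirroring the paper's finding that the tree needed fewer items than the standard CAT.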
Stott, Joshua; Scior, Katrina; Mandy, William; Charlesworth, Georgina
2017-01-01
Scores on cognitive screening tools for dementia are associated with premorbid IQ. It has been suggested that screening scores should be adjusted accordingly. However, no study has examined whether premorbid IQ variation affects screening accuracy. To investigate whether the screening accuracy of a widely used cognitive screening tool for dementia, the Addenbrooke's Cognitive Examination-III (ACE-III), is improved by adjusting for premorbid IQ. 171 UK-based adults (96 memory service attendees diagnosed with dementia and 75 healthy volunteers over the age of 65 without subjective memory impairments) completed the ACE-III and the Test of Premorbid Function (TOPF). The difference in screening performance between the ACE-III alone and the ACE-III adjusted for TOPF was assessed against a reference standard: the presence or absence of a diagnosis of dementia (Alzheimer's disease, vascular dementia, or others). Logistic regression and receiver operating curve analyses indicated that the ACE-III has excellent screening accuracy (93% sensitivity, 94% specificity) in distinguishing those with and without a dementia diagnosis. Although ACE-III scores were associated with TOPF scores, TOPF scores may be affected by having dementia, and screening accuracy was not improved by accounting for premorbid IQ, age, or years of education. ACE-III screening accuracy is high, and screening performance is robust to variation in premorbid IQ, age, and years of education. Adjustment of ACE-III cut-offs for premorbid IQ is not recommended in clinical practice. The analytic strategy used here may be useful to assess the impact of premorbid IQ on other screening tools.
NASA Astrophysics Data System (ADS)
Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.
2018-05-01
This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and of the type of movement on the accuracy of the vehicle position estimation were investigated. The accuracy of the method was evaluated by separating data from a semi-empirical movement model into different types of movement.
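The general technique named above can be sketched as a minimal 1-D constant-velocity Kalman filter. This is an illustration only, not the authors' implementation: the state model and the noise parameters dt, q (process noise), and r (measurement noise variance) are assumptions of the sketch.

```python
# Minimal 1-D constant-velocity Kalman filter smoothing noisy position
# fixes. Parameter values are illustrative assumptions, not the paper's.

def kalman_1d(measurements, dt=1.0, q=0.01, r=4.0):
    """Filter noisy 1-D position fixes; state is [position, velocity]."""
    x, v = measurements[0], 0.0                  # initial state estimate
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0     # state covariance P
    out = []
    for z in measurements:
        # predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F' + Q
        x = x + dt * v
        n00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
        n01 = p01 + dt * p11
        n10 = p10 + dt * p11
        n11 = p11 + q
        p00, p01, p10, p11 = n00, n01, n10, n11
        # update with scalar position measurement z (H = [1, 0])
        y = z - x                                # innovation
        s = p00 + r                              # innovation variance
        k0, k1 = p00 / s, p10 / s                # Kalman gain
        x, v = x + k0 * y, v + k1 * y
        u00, u01 = (1 - k0) * p00, (1 - k0) * p01
        u10, u11 = p10 - k1 * p00, p11 - k1 * p01
        p00, p01, p10, p11 = u00, u01, u10, u11
        out.append(x)
    return out

print(kalman_1d([5.0, 5.0, 5.0])[-1])   # stays at 5.0 for constant input
```

With real hyperbolic fixes, q and r would be tuned from the measured noise statistics of the receiver, and the state would typically be 2-D position and velocity.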
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
There are many methods that have been tried in the analysis of currency crises. However, not all methods provide accurate indications. This paper introduces an ensemble of classifiers based on the Support Vector Machine (SVM), which has not previously been applied to currency-crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using percentage accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve, and Type II error. The performance of the ensemble of SVM classifiers is compared with that of a single SVM classifier, and both are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. Our analyses show that the ensemble of SVM classifiers outperforms the single SVM classifier at indicating a currency crisis across a range of standard measures of classifier performance.
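The voting step of such an ensemble can be sketched generically. The base learners below are toy threshold rules standing in for trained SVM classifiers, and the data points are invented; the paper's macroeconomic indicators and SVM training are not shown.

```python
# Illustrative sketch of majority voting over an ensemble of classifiers.
# Base learners are toy threshold rules standing in for trained SVMs.

def majority_vote(classifiers, x):
    """Predict 1 (crisis) if more than half of the classifiers vote 1."""
    votes = [clf(x) for clf in classifiers]
    return 1 if sum(votes) > len(votes) / 2 else 0

def accuracy(classify, data):
    """Fraction of (x, label) pairs classified correctly."""
    return sum(classify(x) == y for x, y in data) / len(data)

# three weak rules on a single made-up indicator x (crisis = 1)
clfs = [lambda x: 1 if x > 0.3 else 0,
        lambda x: 1 if x > 0.5 else 0,
        lambda x: 1 if x > 0.7 else 0]
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

ensemble = lambda x: majority_vote(clfs, x)
print(accuracy(clfs[0], data), accuracy(ensemble, data))   # prints 0.75 1.0
```

Even in this toy setting the vote corrects the individual rules' mistakes, which is the effect the paper measures with accuracy, RMSE, AUC, and Type II error.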
Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.
Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C
2016-11-01
To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM) and gauge its "added value" in the symptomatic setting, a retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operating characteristic (ROC) analysis. Differences in sensitivity, specificity, and lesion size (versus MRI or histopathology) were calculated. Seventy-three percent of cases were malignant at final histology; 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with an area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation with CESM was significantly more accurate than with LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Predicting coronary artery disease using different artificial neural network models.
Colak, M Cengiz; Colak, Cemil; Kocatürk, Hasan; Sağiroğlu, Seref; Barutçu, Irfan
2008-08-01
Eight different learning algorithms for creating artificial neural network (ANN) models, and the resulting ANN models for the prediction of coronary artery disease (CAD), are introduced. This work was carried out as a retrospective case-control study. Overall, 124 consecutive patients who had been diagnosed with CAD by coronary angiography (at least one coronary stenosis >50% in a major epicardial artery) were enrolled. The 113 people with angiographically normal coronary arteries (group 2) were taken as control subjects. A multi-layered perceptron ANN architecture was applied. The ANN models trained with the different learning algorithms were evaluated on 237 records, divided into training (n=171) and testing (n=66) data sets. Prediction performance was evaluated by sensitivity, specificity, and accuracy values based on standard definitions. The results demonstrate that ANN models trained with the eight learning algorithms are promising because of their high (greater than 71%) sensitivity, specificity, and accuracy in the prediction of CAD. Accuracy, sensitivity, and specificity varied between 83.63%-100%, 86.46%-100%, and 74.67%-100% for training, respectively. For testing, the values exceeded 71% for sensitivity, 76% for specificity, and 81% for accuracy. The use of learning algorithms other than backpropagation and of larger sample sizes may further improve prediction performance. The proposed ANN models trained with these learning algorithms could be a promising approach for predicting CAD without the need for invasive diagnostic methods and could aid prognostic clinical decisions.
Tatone, Elise H; Gordon, Jessica L; Hubbs, Jessie; LeBlanc, Stephen J; DeVries, Trevor J; Duffield, Todd F
2016-08-01
Several rapid tests for on-farm use have been validated for the detection of hyperketonemia (HK) in dairy cattle; however, the reported sensitivity and specificity of each method vary, and no single study has compared them all. Meta-analysis of diagnostic test accuracy is becoming more common in the human medical literature, but there are few veterinary examples. The objective of this work was to perform a systematic review and meta-analysis to determine the point-of-care testing method with the highest combined sensitivity and specificity and the optimal threshold for each method, and to identify gaps in the literature. A comprehensive literature search resulted in 5196 references. After removing duplicates and performing relevance screening, 23 studies were included in the qualitative synthesis and 18 in the meta-analysis. The three index tests evaluated in the meta-analysis were the Precision Xtra(®) handheld device measuring beta-hydroxybutyrate (BHB) concentration in whole blood, and the Ketostix(®) and KetoTest(®) semi-quantitative strips measuring the concentration of acetoacetate in urine and of BHB in milk, respectively. The diagnostic accuracy of the three index tests relative to the reference standard, measurement of BHB in serum or whole blood at thresholds between 1.0 and 1.4 mmol/L, was compared using the hierarchical summary receiver operating characteristic (HSROC) method. Subgroup analysis was conducted for each index test to examine accuracy at different thresholds. The impact of the reference standard threshold, the reference standard method, the prevalence of HK in the population, the primary study source, and the risk of bias of the primary study was explored using meta-regression. The Precision Xtra(®) device had the highest summary sensitivity, 94.8% (CI95%: 92.6-97.0), and specificity, 97.5% (CI95%: 96.9-98.1), at a whole-blood BHB threshold of 1.2 mmol/L. The threshold employed (1.2-1.4 mmol/L) did not affect the diagnostic accuracy of the test.
The Ketostix(®) and KetoTest(®) strips had the highest summary sensitivity and specificity when the trace and weak positive thresholds were used, respectively. Controlling for the source of publication, HK prevalence, and the reference standard employed did not change the estimated sensitivity and specificity of the tests. Including only peer-reviewed studies reduced the number of primary studies evaluating the Precision Xtra(®) by 43% and Ketostix(®) by 33%. Blood, urine, and milk are all valid sample types for diagnosing HK; however, the lower diagnostic accuracy of urine and milk should be considered when making economic and treatment decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
60 seconds to survival: A pilot study of a disaster triage video game for prehospital providers.
Cicero, Mark X; Whitfill, Travis; Munjal, Kevin; Madhok, Manu; Diaz, Maria Carmen G; Scherzer, Daniel J; Walsh, Barbara M; Bowen, Angela; Redlener, Michael; Goldberg, Scott A; Symons, Nadine; Burkett, James; Santos, Joseph C; Kessler, David; Barnicle, Ryan N; Paesano, Geno; Auerbach, Marc A
2017-01-01
Disaster triage training for emergency medical service (EMS) providers is not standardized. Simulation training is costly and time-consuming. In contrast, educational video games enable low-cost and more time-efficient standardized training. We hypothesized that players of the video game "60 Seconds to Survival" (60S) would have greater improvements in disaster triage accuracy compared to control subjects who did not play 60S. Participants recorded their demographics and highest EMS training level and were randomized to play 60S (intervention) or serve as controls. At baseline, all participants completed a live school-shooting simulation in which manikins and standardized patients depicted 10 adult and pediatric victims. The intervention group then played 60S at least three times over the course of 13 weeks (time 2). Players triaged 12 patients in three scenarios (school shooting, house fire, tornado) and received in-game performance feedback. At time 2, the same live simulation was conducted for all participants. Controls had no disaster training during the study. The main outcome was improvement in triage accuracy in live simulations from baseline to time 2. Physicians and EMS providers predetermined the expected triage level (RED/YELLOW/GREEN/BLACK) via a modified Delphi method. There were 26 participants in the intervention group and 21 in the control group. There was no difference in gender, level of training, or years of EMS experience (median 5.5 years intervention, 3.5 years control, p = 0.49) between the groups. At baseline, both groups demonstrated median triage accuracy of 80 percent (IQR 70-90 percent, p = 0.457). At time 2, the intervention group had a significant improvement from baseline (median accuracy = 90 percent [IQR: 80-90 percent], p = 0.005), while the control group did not (median accuracy = 80 percent [IQR: 80-95 percent], p = 0.174). However, the difference in mean improvement from baseline between the two groups was not significant (difference = 6.5, p = 0.335).
The intervention group demonstrated a significant improvement in accuracy from baseline to time 2, while the control group did not. However, there was no significant difference in the improvement between the intervention and control groups. These results may be due to the small sample size. Future directions include assessment of the game's effect on triage accuracy in a larger, multisite cohort and iterative development to improve 60S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Darren M.
Sandia National Laboratories has tested and evaluated the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication. The test results included in this report were in response to static and tonal-dynamic input signals. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and 1241 for Analog to Digital Converters; others were designed by Sandia specifically for infrasound application evaluation and for supplementary criteria not addressed in the IEEE standards. The objective of this work was to evaluate the overall technical performance of the Geotech Smart24 digitizer with a Fortezza PCMCIA crypto card actively implementing the signing of data packets. The results of this evaluation were compared to relevant specifications provided within the manufacturer's documentation notes. The tests performed were chosen to demonstrate different performance aspects of the digitizer under test, including noise floor, least significant bit (LSB), dynamic range, cross-talk, relative channel-to-channel timing, time-tag accuracy, analog bandwidth, and calibrator performance.
A multi-standard approach for GIAO (13)C NMR calculations.
Sarotti, Ariel M; Pellegrinet, Silvina C
2009-10-02
The influence of the reference standard employed in the calculation of (13)C NMR chemical shifts was investigated over a large variety of known organic compounds, using different quantum chemistry methods and basis sets. After detailed analysis of the collected data, we found that methanol and benzene are excellent reference standards for computing NMR shifts of sp(3)- and sp-sp(2)-hybridized carbon atoms, respectively. This multi-standard approach (MSTD) performs better than TMS in terms of accuracy and precision and also displays much lower dependence on the level of theory employed. The use of mPW1PW91/6-31G(d)//mPW1PW91/6-31G(d) level is recommended for accurate (13)C NMR chemical shift prediction at low computational cost.
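In GIAO referencing, the predicted shift of a carbon is the computed isotropic shielding of the reference carbon minus that of the carbon of interest, plus the reference's experimental shift. A minimal sketch of the arithmetic (the shielding values are hypothetical, and the methanol shift used is only an approximate experimental value):

```python
def chemical_shift(sigma_x, sigma_ref, delta_ref_exp):
    """GIAO shift referencing: delta(x) = sigma(ref) - sigma(x) + delta_exp(ref)."""
    return sigma_ref - sigma_x + delta_ref_exp

# Hypothetical isotropic shieldings (ppm) from a GIAO calculation
sigma_carbon = 140.0        # sp3 carbon of interest
sigma_methanol = 145.0      # computed shielding of the methanol carbon
delta_methanol_exp = 50.4   # approximate experimental 13C shift of methanol vs TMS

delta = chemical_shift(sigma_carbon, sigma_methanol, delta_methanol_exp)
```

The multi-standard idea is simply to pick `sigma_ref`/`delta_ref_exp` from methanol for sp3 carbons and from benzene for sp-sp2 carbons, so systematic errors in the shielding calculation largely cancel between chemically similar carbons.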
NASA Astrophysics Data System (ADS)
Baker, Erik Reese
A repeated-measures, within-subjects study was conducted with 58 pilots to assess mean differences in energy management situation awareness response time and response accuracy between a conventional electronic aircraft display, a primary flight display (PFD), and an ecological interface design aircraft display, the OZ concept display. Participants were associated with a small Midwestern aviation university and included student pilots, flight instructors, and faculty with piloting experience. Testing consisted of observing 15 static screenshots of each cockpit display type and then selecting applicable responses from 27 standardized responses for each screen. A paired samples t-test was computed comparing accuracy and response time for the two displays. There was no significant difference in means between PFD Response Time and OZ Response Time. On average, mean PFD Accuracy was significantly higher than mean OZ Accuracy (MDiff = 13.17, SDDiff = 20.96), t(57) = 4.78, p < .001, d = 0.63. This finding showed operational potential for the OZ display: even without first training to proficiency on the previously unseen OZ display, participant performance differences were not operationally remarkable. There was no significant correlation between PFD Response Time and PFD Accuracy, but there was a significant correlation between OZ Response Time and OZ Accuracy, r(58) = .353, p < .01. These findings suggest that participants' familiarity with the PFD resulted in accuracy scores unrelated to response time, whereas for participants unaccustomed to the OZ display, longer response times translated into greater understanding of the display. PFD Response Time and PFD Accuracy were not correlated with pilot flight hours, which was not expected; it was thought that increased experience would translate into faster and more accurate assessment of the aircraft stimuli.
OZ Response Time and OZ Accuracy were also not correlated with pilot flight hours, but this was expected. It is consistent with previous research in which novice operators performed as well as experienced professional pilots on dynamic flight tasks with the OZ display. A demographic questionnaire and a feedback survey were included in the trial. Roughly three-quarters of participants rated the PFD as "easy" and the OZ as "confusing", yet performance accuracy and response times between the two displays were not operationally different.
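The reported effect size and t statistic can be reproduced from the summary statistics alone (n = 58 pairs, MDiff = 13.17, SDDiff = 20.96); tiny differences from the published t(57) = 4.78 come from rounding of the inputs:

```python
import math

def paired_t_from_summary(mean_diff, sd_diff, n):
    """t statistic and Cohen's d for a paired-samples t-test from summary stats."""
    se = sd_diff / math.sqrt(n)   # standard error of the mean difference
    t = mean_diff / se
    d = mean_diff / sd_diff       # Cohen's d for paired designs
    return t, d

t, d = paired_t_from_summary(13.17, 20.96, 58)
# t ≈ 4.79 with df = 57, d ≈ 0.63, matching the reported values to rounding
```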
Ahn, Ilyoung; Kim, Tae-Sung; Jung, Eun-Sun; Yi, Jung-Sun; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Jung, Mi-Sook; Jeon, Eun-Young; Yeo, Kyeong-Uk; Jo, Ji-Hoon; Park, Jung-Eun; Kim, Chang-Yul; Park, Yeong-Chul; Seong, Won-Keun; Lee, Ai-Young; Chun, Young Jin; Jeong, Tae Cheon; Jeung, Eui Bae; Lim, Kyung-Min; Bae, SeungJin; Sohn, Soojung; Heo, Yong
2016-10-01
The local lymph node assay: 5-bromo-2-deoxyuridine-flow cytometry method (LLNA: BrdU-FCM) is a modified non-radioisotopic technique with the additional advantages of accommodating multiple endpoints through the introduction of FCM, and of refining and reducing animal use via a sophisticated prescreening scheme. The reliability and accuracy of the LLNA: BrdU-FCM were determined according to OECD Test Guideline (TG) No. 429 (Skin Sensitization: Local Lymph Node Assay) performance standards (PS), with the participation of four laboratories. Transferability was demonstrated by all participating laboratories consistently producing stimulation index (SI) values greater than 3, a predetermined threshold, for 25% hexyl cinnamic aldehyde (HCA). Within- and between-laboratory reproducibility was shown using HCA and 2,4-dinitrochlorobenzene, for which EC2.7 values (the estimated concentrations eliciting an SI of 2.7, the threshold for LLNA: BrdU-FCM) fell consistently within the acceptance ranges of 0.025-0.1% and 5-20%, respectively. Predictive capacity was tested using the final protocol version 1.3 on the 18 reference chemicals listed in OECD TG 429; the results showed 84.6% sensitivity, 100% specificity, and 88.9% accuracy compared with the original LLNA. The data presented are considered to meet the performance criteria of the PS, and the method's predictive capacity was sufficiently validated. Copyright © 2016 Elsevier Inc. All rights reserved.
Assessment of electrocardiographic criteria of left atrial enlargement.
Batra, Mahesh Kumar; Khan, Atif; Farooq, Fawad; Masood, Tariq; Karim, Musa
2018-05-01
Background Left atrial enlargement is considered a robust, strong, and widely accepted indicator of cardiovascular outcomes. Echocardiography is the gold standard for measurement of left atrial size, but electrocardiography is simple, cost-effective, and noninvasive in clinical practice. This study was undertaken to assess the diagnostic accuracy of an established electrocardiographic criterion for left atrial enlargement, taking 2-dimensional echocardiography as the gold-standard technique. Methods A cross-sectional study was conducted on 146 consecutively selected patients with complaints of dyspnea and palpitation and a murmur detected on clinical examination, from September 10, 2016 to February 10, 2017. Electrocardiography and echocardiography were performed in all patients. Patients with a negative P-wave terminal force in lead V1 > 40 ms·mm on electrocardiography or a left atrial dimension > 40 mm on echocardiography were classified as having left atrial enlargement. Sensitivity and specificity were calculated to assess the diagnostic accuracy. Results Taking 2-dimensional echocardiography as the gold-standard technique, electrocardiography correctly diagnosed 68 patients as positive for left atrial enlargement and 12 as negative. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of electrocardiography for left atrial enlargement were 54.4%, 57.1%, 88.3%, 17.4%, and 54.8%, respectively. Conclusion The electrocardiogram appears to be a reasonable indicator of left atrial enlargement. Where echocardiography is unavailable, electrocardiography can be used for diagnosis of left atrial enlargement.
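The reported figures follow from the implied 2×2 table: TP = 68 and TN = 12 are stated, and FN = 57 and FP = 9 follow from the stated sensitivity, specificity, and total of 146 patients. A quick check with the standard definitions:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = diagnostic_metrics(tp=68, fp=9, fn=57, tn=12)
# sensitivity 54.4%, specificity 57.1%, PPV 88.3%, NPV 17.4%, accuracy 54.8%
```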
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Vujaklija, Ivan; Roche, Aidan D; Hasenoehrl, Timothy; Sturma, Agnes; Amsuess, Sebastian; Farina, Dario; Aszmann, Oskar C
2017-01-01
Missing an upper limb dramatically impairs daily-life activities. Efforts to overcome the issues arising from this disability have been made in both academia and industry, although their clinical outcome is still limited. Translation of prosthetic research into clinics has been challenging because of the difficulties in meeting the necessary requirements of the market. In this perspective article, we suggest that one relevant factor determining the relatively small clinical impact of myocontrol algorithms for upper limb prostheses is the limitation of commonly used laboratory performance metrics. The laboratory conditions in which the majority of the solutions are evaluated fail to sufficiently replicate real-life challenges. We qualitatively support this argument with representative data from seven transradial amputees. Their ability to control a myoelectric prosthesis was tested by measuring the accuracy of offline EMG signal classification, as a typical laboratory performance metric, as well as by clinical scores when performing standard tests of daily living. Despite all subjects reaching relatively high classification accuracy offline, their clinical scores varied greatly and were not strongly predicted by classification accuracy. We therefore support the suggestion to test myocontrol systems using clinical tests on amputees, fully fitted with sockets and prostheses highly resembling the systems they would use in daily living, as the evaluation benchmark. Agreement on this level of testing for systems developed in research laboratories would facilitate clinically relevant progress in this field.
Spices as a source of lead exposure: a market-basket survey in Sri Lanka.
Senanayake, M P; Perera, R; Liyanaarachchi, L A; Dassanayake, M P
2013-12-01
We performed a laboratory analysis of spices sold in Sri Lanka for lead content. Samples of curry powder, chili powder and turmeric powder from seven provinces, collected using the market basket survey method, underwent atomic absorption spectrometry. Blanks and standards were utilised for instrument calibration and measurement accuracy. The results were validated in two different laboratories. All samples were found to have lead levels below the US Food and Drug Administration's action level of 0.5 μg/g. Spices sold in Sri Lanka contain lead concentrations that are low and within the stipulated safety standards.
Li, S; Tang, X; Peng, L; Luo, Y; Dong, R; Liu, J
2015-05-01
To review the literature on the diagnostic accuracy of CT-derived fractional flow reserve (FFRCT) for the evaluation of myocardial ischaemia in patients with suspected or known coronary artery disease, with invasive fractional flow reserve (FFR) as the reference standard. A PubMed, EMBASE, and Cochrane cross-search was performed. The pooled diagnostic accuracy of FFRCT, with FFR as the reference standard, was primarily analysed, and then compared with that of CT angiography (CTA). The thresholds to diagnose ischaemia were FFR ≤0.80 or CTA ≥50% stenosis. Data extraction, synthesis, and statistical analysis were performed by standard meta-analysis methods. Three multicentre studies (NXT Trial, DISCOVER-FLOW study and DeFACTO study) were included, examining 609 patients and 1050 vessels. The pooled sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR) for FFRCT were 89% (85-93%), 71% (65-75%), 70% (65-75%), 90% (85-93%), 3.31 (1.79-6.14), 0.16 (0.11-0.23), and 21.21 (9.15-49.15) at the patient-level, and 83% (78-63%), 78% (75-81%), 61% (56-65%), 92% (89-90%), 4.02 (1.84-8.80), 0.22 (0.13-0.35), and 19.15 (5.73-63.93) at the vessel-level. At per-patient analysis, FFRCT has similar sensitivity but improved specificity, PPV, NPV, LR+, LR-, and DOR versus those of CTA. At per-vessel analysis, FFRCT had a slightly lower sensitivity, similar NPV, but improved specificity, PPV, LR+, LR-, and DOR compared with those of CTA. The area under the summary receiver operating characteristic curves for FFRCT was 0.8909 at patient-level and 0.8865 at vessel-level, versus 0.7402 for CTA at patient-level. FFRCT, which was associated with improved diagnostic accuracy versus CTA, is a viable alternative to FFR for detecting coronary ischaemic lesions. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
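Likelihood ratios and the diagnostic odds ratio are simple functions of sensitivity and specificity. Plugging in the pooled patient-level values (89% and 71%) gives figures close to, but not identical with, the published ones, since the meta-analysis pooled each metric with a hierarchical model rather than deriving them from the summary sensitivity and specificity:

```python
def likelihood_ratios(sens, spec):
    """LR+, LR-, and diagnostic odds ratio from sensitivity and specificity."""
    lr_pos = sens / (1.0 - spec)        # how much a positive result raises odds
    lr_neg = (1.0 - sens) / spec        # how much a negative result lowers odds
    dor = lr_pos / lr_neg               # overall discriminatory power
    return lr_pos, lr_neg, dor

lr_pos, lr_neg, dor = likelihood_ratios(0.89, 0.71)
# lr_pos ≈ 3.07, lr_neg ≈ 0.155, dor ≈ 19.8
```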
Zeller, Michelle; Cristancho, Sayra; Mangel, Joy; Goldszmidt, Mark
2015-06-01
Many believe that knowledge of anatomy is essential for performing clinical procedures; however, unlike their surgical counterparts, internal medicine (IM) programs rarely incorporate anatomy review into procedural teaching. This study tested the hypothesis that an educational intervention focused on teaching relevant surface and underlying anatomy would result in improved bone marrow procedure landmarking accuracy. This was a preintervention-postintervention prospective study on landmarking accuracy of consenting IM residents attending their mandatory academic half-day. The intervention included an interactive video and visualization exercise; the video was developed specifically to teach the relevant underlying anatomy and includes views of live volunteers, cadavers, and skeletons. Thirty-one IM residents participated. At pretest, 48% (15/31) of residents landmarked accurately. Inaccuracy of pretest landmarking varied widely (n = 16, mean 20.06 mm; standard deviation 30.03 mm). At posttest, 74% (23/31) of residents accurately performed the procedure. McNemar test revealed a nonsignificant trend toward increased performance at posttest (P = 0.076; unadjusted odds for discordant pairs 3; 95% confidence interval 0.97-9.3). The Wilcoxon signed rank test demonstrated a significant difference between pre- and posttest accuracy in the 16 residents who were inaccurate at pretest (P = 0.004). No association was detected between participant baseline characteristics and pretest accuracy. This study demonstrates that residents who were initially inaccurate were able to significantly improve their landmarking skills by interacting with an educational tool emphasizing the relation between the surface and underlying anatomy. Our results support the use of basic anatomy in teaching bone marrow procedures. Results also support the proper use of video as an effective means for incorporating anatomy teaching around procedural skills.
Angelides, Kimon; Matsunami, Risë K.; Engler, David A.
2015-01-01
Background: We evaluated the accuracy, precision, and linearity of the In Touch® blood glucose monitoring system (BGMS), a new color touch screen and cellular-enabled blood glucose meter, using a new rapid, highly precise and accurate 13C6 isotope-dilution liquid chromatography-mass spectrometry method (IDLC-MS). Methods: Blood glucose measurements from the In Touch® BGMS were referenced to a validated UPLC-MRM standard reference measurement procedure previously shown to be highly accurate and precise. Readings from the In Touch® BGMS were taken over the blood glucose range of 24-640 mg/dL using 12 concentrations of blood glucose. Ten In Touch® BGMS and 3 lots of test strips were used with 10 replicates at each concentration. A lay user study was also performed to assess ease of use. Results: At blood glucose concentrations <75 mg/dL, 100% of the measurements were within ±8 mg/dL of the true reference standard; at blood glucose levels >75 mg/dL, 100% of the measurements were within ±15% of the true reference standard. 100% of the results were within category A of the consensus grid. Within-run precision showed CV < 3.72% between 24 and 50 mg/dL and CV < 2.22% between 500 and 600 mg/dL. The results show that the In Touch® meter exceeds the minimum criteria of both the ISO 15197:2003 and ISO 15197:2013 standards. A user panel reported unanimously (100% of respondents) that the color touch screen, with its graphic user interface (GUI), is well labeled and easy to navigate. Conclusions: To our knowledge this is the first touch screen glucose meter and the first study in which the accuracy of a new BGMS has been measured against a true primary reference standard, namely IDLC-MS. PMID:26002836
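The abstract benchmarks against ISO 15197. Under the 2013 revision, at least 95% of meter readings must fall within ±15 mg/dL of the reference at reference concentrations below 100 mg/dL, and within ±15% at or above 100 mg/dL. A minimal sketch of that per-reading check (the sample pairs are hypothetical):

```python
def within_iso_2013(meter, reference):
    """ISO 15197:2013 per-reading criterion: ±15 mg/dL below 100 mg/dL, else ±15%."""
    if reference < 100.0:
        return abs(meter - reference) <= 15.0
    return abs(meter - reference) <= 0.15 * reference

def passes_iso_2013(pairs):
    """System accuracy: at least 95% of (meter, reference) pairs meet the criterion."""
    hits = sum(within_iso_2013(m, r) for m, r in pairs)
    return hits / len(pairs) >= 0.95

# Hypothetical (meter reading, reference value) pairs in mg/dL
pairs = [(92, 90), (250, 240), (48, 50), (610, 600), (130, 128)]
```

The older 2003 edition used looser bounds (±15 mg/dL below 75 mg/dL, ±20% at or above), which is why a meter can meet 2003 but fail 2013.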
Gigahertz single-electron pumping in silicon with an accuracy better than 9.2 parts in 10{sup 7}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamahata, Gento, E-mail: yamahata.gento@lab.ntt.co.jp; Karasawa, Takeshi; Fujiwara, Akira
2016-07-04
High-speed and high-accuracy pumping of a single electron is crucial for realizing an accurate current source, which is a promising candidate for a quantum current standard. Here, using a high-accuracy measurement system traceable to primary standards, we evaluate the accuracy of a Si tunable-barrier single-electron pump driven by a single sinusoidal signal. The pump operates at frequencies up to 6.5 GHz, producing a current of more than 1 nA. At 1 GHz, the current plateau with a level of about 160 pA is found to be accurate to better than 0.92 ppm (parts per million), which is a record value for 1-GHz operation. At 2 GHz, a current plateau offset from 1ef (∼320 pA) by 20 ppm is observed. The current quantization accuracy is improved by applying a magnetic field of 14 T, and we observe a current level of 1ef with an accuracy of a few ppm. The presented gigahertz single-electron pumping with high accuracy is an important step towards a metrological current standard.
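The quoted plateau levels follow from the quantized-current relation I = n·e·f (n electrons pumped per cycle of the drive signal); a quick numeric check:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact in SI since 2019)

def pump_current(frequency_hz, electrons_per_cycle=1):
    """Quantized current of a single-electron pump: I = n * e * f."""
    return electrons_per_cycle * E_CHARGE * frequency_hz

i_1ghz = pump_current(1e9)  # ~160 pA, the 1-GHz plateau level
i_2ghz = pump_current(2e9)  # ~320 pA, the 1ef level at 2 GHz
```

This is why fixing the elementary charge makes such a pump a candidate current standard: the output depends only on a frequency, which can be referenced to atomic clocks.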
Chowdhury, Shubhajit Roy
2012-04-01
The paper reports on a Field Programmable Gate Array (FPGA)-based embedded system for detection of the QRS complex in a noisy electrocardiogram (ECG) signal and the subsequent differential diagnosis of tachycardia and tachyarrhythmia. The QRS complex is detected by applying an entropy measure of fuzziness to build a detection function from the ECG signal, which has previously been filtered to remove power-line interference and baseline wander. Using the detected QRS complexes, differential diagnosis of tachycardia and tachyarrhythmia is performed. The entire algorithm has been realized in hardware on an FPGA. Tested on single-channel ECG from the standard CSE ECG database with the entropy criteria, the algorithm achieved QRS detection sensitivity (Se) of 99.74% and accuracy of 99.5%. This QRS detection performance compares favourably with most QRS detection systems reported in the literature. Using the system, 200 patients were diagnosed with an accuracy of 98.5%.
Ye, Guangming; Cai, Xuejian; Wang, Biao; Zhou, Zhongxian; Yu, Xiaohua; Wang, Weibin; Zhang, Jiandong; Wang, Yuhai; Dong, Jierong; Jiang, Yunyun
2008-11-04
A simple, accurate and rapid method for simultaneous analysis of vancomycin and ceftazidime in cerebrospinal fluid (CSF), utilizing high-performance liquid chromatography (HPLC), has been developed and thoroughly validated to satisfy strict FDA guidelines for bioanalytical methods. Protein precipitation was used as the sample pretreatment method. In order to increase the accuracy, tinidazole was chosen as the internal standard. Separation was achieved on a Diamonsil C18 column (200 mm x 4.6mm I.D., 5 microm) using a mobile phase composed of acetonitrile and acetate buffer (pH 3.5) (8:92, v/v) at room temperature (25 degrees C), and the detection wavelength was 240 nm. All the validation data, such as accuracy, precision, and inter-day repeatability, were within the required limits. The method was applied to determine vancomycin and ceftazidime concentrations in CSF in five craniotomy patients.
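The abstract does not detail the quantitation step; as a generic sketch of internal-standard calibration of the kind used with HPLC assays, the analyte-to-IS peak-area ratio is fit against known concentrations and unknowns are back-calculated (all numbers hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Calibration: known analyte concentrations (ug/mL) vs. analyte/IS area ratio
concs = [5.0, 10.0, 20.0, 40.0]
ratios = [0.26, 0.51, 1.01, 2.01]  # hypothetical peak-area ratios
slope, intercept = fit_line(concs, ratios)

def concentration(area_analyte, area_is):
    """Back-calculate an unknown's concentration from its peak-area ratio."""
    return (area_analyte / area_is - intercept) / slope
```

Using the ratio to the internal standard (tinidazole here) rather than the raw analyte area cancels variability in injection volume and sample preparation, which is the stated reason for adding an IS.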
A SVM-based method for sentiment analysis in Persian language
NASA Astrophysics Data System (ADS)
Hajmohammadi, Mohammad Sadegh; Ibrahim, Roliana
2013-03-01
Persian is the official language of Iran, Tajikistan, and Afghanistan. Local online users often express their opinions and experiences on the web in written Persian. Although the information in those reviews is valuable to potential consumers and sellers, the huge number of web reviews makes it difficult to give an unbiased evaluation of a product. In this paper, the standard machine learning techniques SVM and naive Bayes are applied to online Persian movie reviews to automatically classify user reviews as positive or negative, and the performance of the two classifiers in this language is compared. The effects of feature representations on classification performance are discussed. We find that accuracy is influenced by the interaction between the classification models and the feature options. The SVM classifier achieves accuracy as good as or better than naive Bayes on Persian movie reviews. Unigrams prove to be better features than bigrams and trigrams for capturing Persian sentiment orientation.
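The unigram-versus-bigram comparison comes down to how the feature vectors are built; a minimal n-gram feature extractor of the kind fed to SVM or naive Bayes classifiers (the tokens shown are English placeholders for the Persian text):

```python
from collections import Counter

def ngram_features(tokens, n):
    """Count n-gram features from a token list (n=1: unigrams, n=2: bigrams)."""
    grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

review = ["this", "movie", "was", "not", "good"]
unigrams = ngram_features(review, 1)  # 5 unigram features
bigrams = ngram_features(review, 2)   # 4 bigram features, e.g. "not good"
```

Bigrams capture local negation ("not good") that unigrams miss, but they also multiply the feature space and its sparsity, which is one common explanation for unigrams winning on modest-sized review corpora.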
Enhancing lineup identification accuracy: two codes are better than one.
Melara, R D; DeWitt-Rickards, T S; O'Brien, T P
1989-10-01
Ways of improving identification accuracy were explored by comparing the conventional visual lineup with an auditory/visual lineup, one that paired color photographs with voice recordings. This bimodal lineup necessitated sequential presentation of lineup members; Experiment 1 showed that performance in sequential lineups was better than performance in traditional simultaneous lineups. In Experiments 2A and 2B unimodal and bimodal lineups were compared by using a multiple-lineup paradigm: Ss viewed 3 videotaped episodes depicting standard police procedures and were tested in 4 sequential lineups. Bimodal lineups were more diagnostic than either visual or auditory lineups alone. The bimodal lineup led to a 126% improvement in number of correct identifications over the conventional visual lineup, with no concomitant increase in number of false identifications. These results imply strongly that bimodal procedures should be adopted in real-world lineups. The nature of memorial processes underlying this bimodal advantage is discussed.
Autonomous Navigation Improvements for High-Earth Orbiters Using GPS
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Garrison, James; Carpenter, J. Russell; Bauer, F. (Technical Monitor)
2000-01-01
The Goddard Space Flight Center is currently developing autonomous navigation systems for satellites in high-Earth orbits, where acquisition of the GPS signals is severely limited. This paper discusses autonomous navigation improvements for high-Earth orbiters and assesses projected navigation performance for these satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) measurements. Navigation performance is evaluated as a function of signal acquisition threshold, measurement errors, and dynamic modeling errors using realistic GPS signal strength and user antenna models. These analyses indicate that an autonomous navigation position accuracy of better than 30 meters root-mean-square (RMS) can be achieved for high-Earth orbiting satellites using a GPS receiver with a very stable oscillator. This accuracy improves to better than 15 meters RMS if the GPS receiver's signal acquisition threshold can be reduced by 5 dB-Hertz to track weaker signals.
McGovern, Aine; Pendlebury, Sarah T; Mishra, Nishant K; Fan, Yuhua; Quinn, Terence J
2016-02-01
Poststroke cognitive assessment can be performed using standardized questionnaires designed for family or care givers. We sought to describe the test accuracy of such informant-based assessments for diagnosis of dementia/multidomain cognitive impairment in stroke. We performed a systematic review using a sensitive search strategy across multidisciplinary electronic databases. We created summary test accuracy metrics and described reporting and quality using STARDdem and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tools, respectively. From 1432 titles, we included 11 studies. Ten papers used the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Four studies described IQCODE for diagnosis of poststroke dementia (n=1197); summary sensitivity: 0.81 (95% confidence interval, 0.60-0.93); summary specificity: 0.83 (95% confidence interval, 0.64-0.93). Five studies described IQCODE as a tool for predicting future dementia (n=837); summary sensitivity: 0.60 (95% confidence interval, 0.32-0.83); summary specificity: 0.97 (95% confidence interval, 0.70-1.00). All papers had issues with at least 1 aspect of study reporting or quality. There is a limited literature on informant cognitive assessments in stroke. IQCODE as a diagnostic tool has test properties similar to other screening tools; IQCODE as a prognostic tool is specific but insensitive. We found no papers describing test accuracy of informant tests for diagnosis of prestroke cognitive decline, few papers on poststroke dementia, and all included papers had issues with potential bias. © 2015 American Heart Association, Inc.
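The summary sensitivity and specificity figures above are derived from 2×2 diagnostic counts. A minimal sketch follows; the counts are hypothetical, and the Wilson score interval is a common choice for the confidence bounds, not necessarily the method the review used:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Point estimates of sensitivity and specificity from a 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical counts for a screening-test validation.
sens, spec = sens_spec(tp=81, fn=19, tn=83, fp=17)
print(round(sens, 2), round(spec, 2))  # → 0.81 0.83
lo, hi = wilson_ci(81, 100)
print(round(lo, 2), round(hi, 2))
```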
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
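FastaValidator's own API is not quoted here, but the kind of checks such a validator performs can be sketched in a few lines. The function name, the nucleotide alphabet and the error-message format below are illustrative assumptions, not the library's behavior:

```python
def validate_fasta(text, alphabet=set("ACGTNacgtn")):
    """Return a list of error strings for a FASTA-formatted string.

    Minimal checks: the input starts with a '>' header, every record has
    at least one sequence line, and sequence lines use only the alphabet.
    """
    errors = []
    lines = text.splitlines()
    if not lines or not lines[0].startswith(">"):
        errors.append("line 1: expected '>' header")
        return errors
    seq_seen = False
    for i, line in enumerate(lines, start=1):
        if line.startswith(">"):
            if i > 1 and not seq_seen:
                errors.append(f"line {i}: header without preceding sequence")
            seq_seen = False
        elif line.strip():
            bad = set(line) - alphabet
            if bad:
                errors.append(f"line {i}: invalid characters {sorted(bad)}")
            seq_seen = True
    if not seq_seen:
        errors.append("last record has no sequence")
    return errors

print(validate_fasta(">seq1\nACGT\n>seq2\nACGTXX"))
```

A streaming, line-by-line version of the same checks is what makes high-throughput validation of NGS-scale files feasible.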
Quickbird Geometry Report for Summer 2003
NASA Technical Reports Server (NTRS)
Darbha, Ravikanth; Helder, Dennis; Choi, Taeyoung
2005-01-01
Digital Globe provides for general use 2.4 m multi-spectral and 0.7 m panchromatic imagery acquired by the Quickbird satellite. This geometrically corrected imagery was obtained as standard and orthorectified products; the difference between the two products is primarily in the degree of geometric accuracy that Digital Globe claims. For both products, every image pixel contains estimated sets of Northing/Easting and lat/long coordinates accessible through an image display application such as ENVI. Ground processing was performed by Digital Globe using the ADP 2.1 version of their system. Analysis conducted at South Dakota State University attempted to verify the geometric accuracy of standard and orthorectified Quickbird imagery to determine if specifications for the NASA Science Data Purchase (SDP) were met. These specifications are in Table 1 of Appendix 1. In this analysis, we had approximately 90 Ground Control Points (the number varies with scene size on each date), uniformly distributed over the Brookings, SD, area, from 4 Quickbird scenes acquired August 23, September 15, and October 21 of 2003.
Brain collection, standardized neuropathologic assessment, and comorbidity in ADNI participants
Franklin, Erin E.; Perrin, Richard J.; Vincent, Benjamin; Baxter, Michael; Morris, John C.; Cairns, Nigel J.
2015-01-01
Introduction The Alzheimer’s Disease Neuroimaging Initiative Neuropathology Core (ADNI-NPC) facilitates brain donation, ensures standardized neuropathologic assessments, and maintains a tissue resource for research. Methods The ADNI-NPC coordinates with performance sites to promote autopsy consent, facilitate tissue collection and autopsy administration, and arrange sample delivery to the NPC, for assessment using NIA-AA neuropathologic diagnostic criteria. Results The ADNI-NPC has obtained 45 participant specimens and neuropathologic assessments have been completed in 36 to date. Challenges in obtaining consent at some sites have limited the voluntary autopsy rate to 58%. Among assessed cases, clinical diagnostic accuracy for Alzheimer disease (AD) is 97%; however, 58% show neuropathologic comorbidities. Discussion Challenges facing autopsy consent and coordination are largely resource-related. The neuropathologic assessments indicate that ADNI’s clinical diagnostic accuracy for AD is high; however, many AD cases have comorbidities that may impact the clinical presentation, course, and imaging and biomarker results. These neuropathologic data permit multimodal and genetic studies of these comorbidities to improve diagnosis and provide etiologic insights. PMID:26194314
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material, causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
Applications of wavelets in interferometry and artificial vision
NASA Astrophysics Data System (ADS)
Escalona Z., Rafael A.
2001-08-01
In this paper we present a different point of view of phase measurements performed in interferometry, image processing and intelligent vision using the wavelet transform. In standard and white-light interferometry, the phase function is retrieved by using phase-shifting, Fourier-transform, cosine-inversion and other known algorithms. The technique presented here is faster, more robust, and shows excellent accuracy in phase determination. In our second application, fringes are no longer generated by optical interference but result from the observation of adapted strip-set patterns directly printed on the target of interest. The moving target is simply observed by a conventional vision system, and the usual phase computation algorithms are adapted to image processing by wavelet transform in order to sense target position and displacements with high accuracy. In general, we have found that the wavelet transform offers robustness, relative speed of calculation and very high accuracy in phase computation.
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard Sequential- and Batch-type orbit determination procedures and the use of inertial measuring units (IMU), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithms will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment of 0.01 deg and an accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
The use of computerized image guidance in lumbar disk arthroplasty.
Smith, Harvey E; Vaccaro, Alexander R; Yuan, Philip S; Papadopoulos, Stephen; Sasso, Rick
2006-02-01
Surgical navigation systems have been increasingly studied and applied in the placement of spinal instrumentation. Successful disk arthroplasty requires accurate midline and rotational positioning for optimal function and longevity. A surgical simulation study in human cadaver specimens was done to evaluate and compare the accuracy of standard fluoroscopy, computer-assisted fluoroscopic image guidance, and Iso-C3D image guidance in the placement of lumbar intervertebral disk replacements. Lumbar intervertebral disk prostheses were placed using three different image guidance techniques in three human cadaver spine specimens at multiple levels. Postinstrumentation accuracy was assessed with thin-cut computed tomography scans. Intervertebral disk replacements placed using the StealthStation with Iso-C3D were more accurately centered than those placed using the StealthStation with FluoroNav and standard fluoroscopy. Intervertebral disk replacements placed with Iso-C3D and FluoroNav had improved rotational divergence compared with standard fluoroscopy. Iso-C3D and FluoroNav had a smaller interprocedure variance than standard fluoroscopy. These results did not approach statistical significance. Relative to both virtual and standard fluoroscopy, use of the StealthStation with Iso-C3D resulted in improved accuracy in centering the lumbar disk prosthesis in the coronal midline. The StealthStation with FluoroNav appears to be at least equivalent to standard fluoroscopy and may offer improved accuracy with rotational alignment while minimizing radiation exposure to the surgeon. Surgical guidance systems may offer improved accuracy and less interprocedure variation in the placement of intervertebral disk replacements than standard fluoroscopy. Further study regarding surgical navigation systems for intervertebral disk replacement is warranted.
76 FR 23713 - Wireless E911 Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-28
... Location Accuracy Requirements AGENCY: Federal Communications Commission. ACTION: Final rule; announcement... contained in regulations concerning wireless E911 location accuracy requirements. The information collection... standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these...
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
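The CPB matching scheme can be illustrated with a general-purpose compressor in place of JPEG. The sketch below uses zlib on byte strings rather than a JPEG codec on images, and the exact CCR formula is an assumption; only the idea that similar data compresses disproportionately well when mixed is taken from the abstract:

```python
import zlib

def ratio(data: bytes) -> float:
    """Compression ratio: original size / compressed size."""
    return len(data) / len(zlib.compress(data, 9))

def ccr(probe: bytes, gallery_item: bytes) -> float:
    """Composite compression ratio from probe, gallery and mixed data.

    Loosely mirrors the CPB idea: if probe and gallery item share
    redundancy, the mixed stream compresses better than either alone.
    """
    mixed = probe + gallery_item
    return ratio(mixed) / ((ratio(probe) + ratio(gallery_item)) / 2)

def match(probe: bytes, gallery: dict) -> str:
    """Return the gallery key whose CCR with the probe is largest."""
    return max(gallery, key=lambda k: ccr(probe, gallery[k]))

# Toy byte-string stand-ins for face images.
gallery = {"alice": b"abcabcabcabc" * 20, "bob": b"xyzxyzxyzxyz" * 20}
probe = b"abcabcabcabd" * 20          # nearly identical to "alice"
print(match(probe, gallery))  # → alice
```

The design point worth noting is that matching costs roughly one compression of the mixed data per gallery entry, which is why the paper reports the method as fast.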
Validation of Calculations in a Digital Thermometer Firmware
NASA Astrophysics Data System (ADS)
Batagelj, V.; Miklavec, A.; Bojkovski, J.
2014-04-01
State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for converting the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials and logarithms, and must be performed using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations, as well as to conform to formal requirements of ISO/IEC 17025 and other standards, a way of validating these numerical procedures in the instrument's firmware is required. A software architecture which allows a simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
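As an example of the kind of conversion polynomial and black-box check described above, the following sketch inverts the standard Callendar-Van Dusen equation for a Pt100 probe (t ≥ 0 °C) and checks the round trip against a tolerance. The specific probe type, coefficients and tolerance are illustrative assumptions, not details from the paper:

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for Pt100 (t >= 0 °C).
A, B, R0 = 3.9083e-3, -5.775e-7, 100.0

def resistance_from_t(t_c: float) -> float:
    """Forward model: platinum resistance (ohm) at temperature t_c (°C)."""
    return R0 * (1 + A * t_c + B * t_c * t_c)

def t_from_resistance(r: float) -> float:
    """Inverse conversion (t >= 0 °C), as a thermometer firmware would do."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - r / R0))) / (2 * B)

def validate(temperatures, tolerance=1e-6):
    """Black-box check: round-trip error must stay below the tolerance."""
    return all(abs(t_from_resistance(resistance_from_t(t)) - t) < tolerance
               for t in temperatures)

print(round(resistance_from_t(100.0), 4))   # → 138.5055
print(validate([0.0, 25.0, 100.0, 400.0]))  # → True
```

In the architecture the paper suggests, `t_from_resistance` would run inside the firmware and be exposed over the communication interface, while the reference results come from an independent implementation like this one.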
Berger, Moritz; Kallus, Sebastian; Nova, Igor; Ristow, Oliver; Eisenmann, Urs; Dickhaus, Hartmut; Kuhle, Reinald; Hoffmann, Jürgen; Seeberger, Robin
2015-11-01
Intraoperative guidance using electromagnetic navigation is an emerging method in maxillofacial surgery. However, due to their unwieldy structures and especially the line-of-sight problem, optical navigation devices are not used in daily orthognathic surgery. Therefore, orthognathic surgery was simulated on phantom skulls to evaluate the accuracy and handling of a new electromagnetic tracking system. Le Fort I osteotomies were performed on 10 plastic skulls. Orthognathic surgical planning was done in the conventional way using plaster models. Accuracy of the gold standard, splint-based model surgery, versus the electromagnetic tracking system was evaluated by measuring the actual maxillary deviation using bimaxillary splints and preoperative and postoperative cone beam computed tomography imaging. The distances of five anatomical marker points were compared pre- and postoperatively. The electromagnetic tracking system was significantly more accurate in all measured parameters compared with the gold standard using bimaxillary splints (p < 0.01). The data show a discrepancy between the model surgical plans and the actual correction of the upper jaw of 0.8 mm. Using electromagnetic tracking, we could reduce the discrepancy of the maxillary transposition between the planned and actual orthognathic surgery to 0.3 mm on average. The data of this preliminary study show a high level of accuracy in surgical orthognathic performance using electromagnetic navigation, which may offer greater precision than conventional plaster model surgery with bimaxillary splints. This preliminary work shows great potential for the establishment of an intraoperative electromagnetic navigation system for maxillofacial surgery. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
Application of non-coherent Doppler data types for deep space navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, Shyam
1995-01-01
Recent improvements in computational capability and Deep Space Network technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis is performed which analyzes the accuracy obtainable by combinations of one-way Doppler data, and the results are compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
The application of noncoherent Doppler data types for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, S.
1995-01-01
Recent improvements in computational capability and DSN technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis, which analyzes the accuracy obtainable by combinations of one-way Doppler data, is performed and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
New body fat prediction equations for severely obese patients.
Horie, Lilian Mika; Barbosa-Silva, Maria Cristina Gonzalez; Torrinhas, Raquel Susana; de Mello, Marco Túlio; Cecconello, Ivan; Waitzberg, Dan Linetzky
2008-06-01
Severe obesity imposes physical limitations to body composition assessment. Our aim was to compare body fat (BF) estimations of severely obese patients obtained by bioelectrical impedance (BIA) and air displacement plethysmography (ADP) for development of new equations for BF prediction. Severely obese subjects (83 female/36 male, mean age=41.6+/-11.6 years) had BF estimated by BIA and ADP. The agreement of the data was evaluated using the Bland-Altman graphic and the concordance correlation coefficient (CCC). A multivariate regression analysis was performed to develop and validate new predictive equations. BF estimations from BIA (64.8+/-15 kg) and ADP (65.6+/-16.4 kg) did not differ (p>0.05, with good accuracy, precision, and CCC), but the Bland-Altman graphic showed a wide limit of agreement (-10.4; 8.8). The standard BIA equation overestimated BF in women (-1.3 kg) and underestimated BF in men (5.6 kg; p<0.05). Two new BF predictive equations were generated after BIA measurement, which predicted BF with higher accuracy, precision, CCC, and limits of agreement than the standard BIA equation. Standard BIA equations were inadequate for estimating BF in severely obese patients. Equations developed especially for this population provide more accurate BF assessment.
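The agreement statistics used above (Bland-Altman bias with limits of agreement, and Lin's concordance correlation coefficient) are standard and easy to reproduce. The paired values below are hypothetical, not the study's data:

```python
import math

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two paired methods."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Hypothetical paired body-fat estimates (kg) from two methods.
bia = [60.1, 72.4, 55.0, 80.2, 66.3]
adp = [61.0, 71.8, 56.2, 81.0, 65.9]
bias, lo, hi = bland_altman(bia, adp)
print(round(bias, 2), round(lo, 2), round(hi, 2))
print(round(lin_ccc(bia, adp), 3))
```

A high CCC with wide Bland-Altman limits, as the study reports, is exactly the pattern these two statistics are designed to distinguish: good average concordance but poor individual-level agreement.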
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ17O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., differentiate equilibrium or kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ17O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging-time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ2H, δ18O and δ17O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging-time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ2H, δ18O and δ17O measurements were high. Across all vapor concentrations, the accuracy of δ2H, δ18O and δ17O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed under moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although the effect was not as large as the sensitivity to concentration. The precision was much less sensitive to averaging-time than to the concentration and delta range effects. The accuracy and precision performance of the T-WVIA depend on concentration but depend less on the delta value and averaging-time.
The instrument can simultaneously and continuously measure δ2H, δ18O and δ17O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
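The dependence of precision on averaging-time can be illustrated by block-averaging a simulated measurement record; the 1 Hz sampling, noise level and window sizes below are assumptions for illustration, not the study's settings or procedure:

```python
import math
import random

def block_precision(series, window):
    """Std. deviation of non-overlapping window means: the precision
    achieved at that averaging time."""
    means = [sum(series[i:i + window]) / window
             for i in range(0, len(series) - window + 1, window)]
    m = sum(means) / len(means)
    return math.sqrt(sum((x - m) ** 2 for x in means) / (len(means) - 1))

# Simulated 1 Hz delta-value record: constant true value + white noise.
random.seed(0)
series = [-9.3 + random.gauss(0, 0.08) for _ in range(7200)]

# For white noise, precision should shrink roughly as 1/sqrt(window).
for window in (1, 10, 100):
    print(window, round(block_precision(series, window), 4))
```

On a real analyzer, drift eventually breaks the 1/sqrt(window) improvement, which is why precision is characterized empirically over a long continuous run rather than extrapolated.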
Feasibility of polymer gel-based measurements of radiation isocenter accuracy in magnetic fields
NASA Astrophysics Data System (ADS)
Dorsch, S.; Mann, P.; Lang, C.; Haering, P.; Runz, A.; Karger, C. P.
2018-06-01
For conventional irradiation devices, the radiation isocenter accuracy is determined by star shot measurements on films. In magnetic resonance (MR)-guided radiotherapy devices, the results of this test may be altered by the magnetic field, and the need to align the radiation and imaging isocenters may require a modification of measurement procedures. Polymer dosimetry gels (PG) may offer a way to perform both the radiation and the imaging isocenter test; however, it first has to be shown that PG yield results comparable to the conventionally applied films. Therefore, star shot measurements were performed at a linear accelerator using PG as well as radiochromic films. PG were evaluated using MR imaging, and the isocircle radius and the distance between the isocircle center and the room isocenter were determined. Two different types of experiments were performed: (i) a standard star-shot isocenter test and (ii) a star shot where the detectors were placed between the pole shoes of an experimental electromagnet operated either at 0 T or 1 T. For the standard star shot, PG evaluation was independent of the time delay after irradiation (1 h, 24 h, 48 h and 216 h) and the results were comparable to those of film measurements. Within the electromagnet, the isocircle radius increased from 0.39 ± 0.01 mm to 1.37 ± 0.01 mm for the film and from 0.44 ± 0.02 mm to 0.97 ± 0.02 mm for the PG measurements, respectively. The isocenter distance was essentially dependent on the alignment of the magnet to the isocenter and was between 0.12 ± 0.02 mm and 0.82 ± 0.02 mm. The study demonstrates that evaluation of the PG directly after irradiation is feasible if only geometrical parameters are of interest. This allows using PG for star shot measurements to evaluate the radiation isocenter accuracy with comparable accuracy as with radiochromic films.
Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael
2015-01-01
Objectives MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Methods Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators – attending (AR) and resident radiologists (RR) as well as medical students (MS) – performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times and 13 Likert scores on system performance were determined (strong agreement: 5.0). Results Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, between-group differences in biopsy times (AR: 4:15, RR: 4:40, MS: 5:06 min:sec) differed significantly (p<0.01). Mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). Lowest agreement was reported for the robustness against external perturbations (2.8). Conclusions The described combination of optical tracking technology with an automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability were demonstrated on a relatively large number of procedures and operators. 
Between groups with different expertise there were significant differences in experimental procedure times but not in the number of successful biopsies. PMID:26222443
NASA Astrophysics Data System (ADS)
Salvaris, Mathew; Sepulveda, Francisco
2010-10-01
Brain-computer interfaces (BCIs) rely on various electroencephalography methodologies that allow the user to convey their desired control to the machine. Common approaches include the use of event-related potentials (ERPs) such as the P300 and modulation of the beta and mu rhythms. All of these methods have their benefits and drawbacks. In this paper, three different selective attention tasks were tested in conjunction with a P300-based protocol (i.e. the standard counting of target stimuli as well as the conduction of real and imaginary movements in sync with the target stimuli). The three tasks were performed by a total of 10 participants, with the majority (7 out of 10) of the participants having never before participated in imaginary movement BCI experiments. Channels and methods used were optimized for the P300 ERP and no sensory-motor rhythms were explicitly used. The classifier used was a simple Fisher's linear discriminant. Results were encouraging, showing that on average the imaginary movement achieved a P300 versus No-P300 classification accuracy of 84.53%. In comparison, mental counting, the standard selective attention task used in previous studies, achieved 78.9% and real movement 90.3%. Furthermore, multiple trial classification results were recorded and compared, with real movement reaching 99.5% accuracy after four trials (12.8 s), imaginary movement reaching 99.5% accuracy after five trials (16 s) and counting reaching 98.2% accuracy after ten trials (32 s).
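The classifier used in this study, Fisher's linear discriminant, can be sketched in a few lines of numpy for a binary P300 vs. no-P300 decision. The data below are synthetic stand-ins for ERP feature vectors, not the study's recordings:

```python
import numpy as np

def fit_fld(X0, X1):
    """Fit Fisher's linear discriminant: w = Sw^-1 (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, lightly regularized for numerical stability
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2          # threshold midway between class means
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)  # 1 = P300 present

# Synthetic two-class data standing in for single-trial ERP features
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (200, 8))   # no-P300 epochs
X1 = rng.normal(1.0, 1.0, (200, 8))   # P300 epochs
w, b = fit_fld(X0, X1)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]
acc = (predict(w, b, X) == y).mean()
```

In practice, the per-trial accuracies reported above would be estimated with cross-validation rather than on the training data as in this sketch.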
Accelerated Fractional Ventilation Imaging with Hyperpolarized Gas MRI
Emami, Kiarash; Xu, Yinan; Hamedani, Hooman; Profka, Harrilla; Kadlecek, Stephen; Xin, Yi; Ishii, Masaru; Rizi, Rahim R.
2013-01-01
PURPOSE: To investigate the utility of accelerated imaging to enhance multi-breath fractional ventilation (r) measurement accuracy using HP gas MRI. Undersampling shortens the breath-hold time, thereby reducing the O2-induced signal decay, and allows subjects to maintain a more physiologically relevant breathing pattern. Additionally, it may improve r estimation accuracy by reducing RF destruction of HP gas. METHODS: Image acceleration was achieved by using an 8-channel phased array coil. Undersampled image acquisition was simulated in a series of ventilation images, and images were reconstructed for various matrix sizes (48–128) using GRAPPA. Parallel accelerated r imaging was also performed on five mechanically ventilated pigs. RESULTS: The optimal acceleration factor was fairly invariable (2.0–2.2×) over the range of simulated resolutions. Estimation accuracy progressively improved with higher resolutions (39–51% error reduction). In vivo r values were not significantly different between the two methods: 0.27±0.09, 0.35±0.06, 0.40±0.04 (standard) versus 0.23±0.05, 0.34±0.03, 0.37±0.02 (accelerated) for anterior, medial and posterior slices, respectively, whereas the difference in the corresponding vertical r gradients was significant (P < 0.001): 0.021±0.007 (standard) versus 0.019±0.005 (accelerated) [cm−1]. CONCLUSION: Quadruple phased array coil simulations resulted in an optimal acceleration factor of ~2×, independent of imaging resolution. The results advocate undersampled image acceleration to improve the accuracy of fractional ventilation measurement with HP gas MRI. PMID:23400938
Montoya, Pablo J.; Lukehart, Sheila A.; Brentlinger, Paula E.; Blanco, Ana J.; Floriano, Florencia; Sairosse, Josefa; Gloyd, Stephen
2006-01-01
OBJECTIVE: Programmes to control syphilis in developing countries are hampered by a lack of laboratory services, delayed diagnosis, and doubts about current screening methods. We aimed to compare the diagnostic accuracy of an immunochromatographic strip (ICS) test and the rapid plasma reagin (RPR) test with the combined gold standard (RPR, Treponema pallidum haemagglutination assay and direct immunofluorescence stain done at a reference laboratory) for the detection of syphilis in pregnancy. METHODS: We included test results from 4789 women attending their first antenatal visit at one of six health facilities in Sofala Province, central Mozambique. We compared diagnostic accuracy (sensitivity, specificity, and positive and negative predictive values) of ICS and RPR done at the health facilities and ICS performed at the reference laboratory. We also made subgroup comparisons by human immunodeficiency virus (HIV) and malaria status. FINDINGS: For active syphilis, the sensitivity of the ICS was 95.3% at the reference laboratory, and 84.1% at the health facility. The sensitivity of the RPR at the health facility was 70.7%. Specificity and positive and negative predictive values showed a similar pattern. The ICS outperformed RPR in all comparisons (P<0.001). CONCLUSION: The diagnostic accuracy of the ICS compared favourably with that of the gold standard. The use of the ICS in Mozambique and similar settings may improve the diagnosis of syphilis in health facilities, both with and without laboratories. PMID:16501726
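The accuracy measures compared in studies like this one follow directly from a 2×2 table of test result versus gold standard. A minimal sketch (the counts below are illustrative only, not the Mozambique study's data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among the diseased
        "specificity": tn / (tn + fp),   # true negatives among the healthy
        "ppv": tp / (tp + fp),           # diseased among test-positives
        "npv": tn / (tn + fn),           # healthy among test-negatives
    }

# Illustrative counts only
m = diagnostic_accuracy(tp=95, fp=10, fn=5, tn=190)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is why such comparisons are made on the same screened population.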
Srivastava, Praveen; Moorthy, Ganesh S; Gross, Robert; Barrett, Jeffrey S
2013-01-01
A selective and highly sensitive method for the determination of the non-nucleoside reverse transcriptase inhibitor (NNRTI), efavirenz, in human plasma has been developed and fully validated based on high performance liquid chromatography tandem mass spectrometry (LC-MS/MS). Sample preparation involved protein precipitation followed by one-to-one dilution with water. The analyte, efavirenz, was separated by high performance liquid chromatography and detected with tandem mass spectrometry in negative ionization mode with multiple reaction monitoring. Efavirenz and ¹³C₆-efavirenz (internal standard) were detected via the MRM transitions m/z 314.20 → 243.90 and m/z 320.20 → 249.90, respectively. A gradient program was used to elute the analytes using 0.1% formic acid in water and 0.1% formic acid in acetonitrile as mobile phase solvents, at a flow-rate of 0.3 mL/min. The total run time was 5 min, and the retention time for both the internal standard (¹³C₆-efavirenz) and efavirenz was approximately 2.6 min. The calibration curves showed linearity (coefficient of regression, r>0.99) over the concentration range of 1.0-2,500 ng/mL. The intraday precision, based on the standard deviation of replicates, was 9.24% at the lower limit of quantification (LLOQ) and ranged from 2.41% to 6.42% for quality control (QC) samples, with accuracy of 112% (LLOQ) and 100-111% (QC samples). The interday precision was 12.3% (LLOQ) and 3.03-9.18% (QC samples), and the accuracy was 108% (LLOQ) and 95.2-108% (QC samples). Stability studies showed that efavirenz was stable under the expected conditions for sample preparation and storage. The lower limit of quantification for efavirenz was 1 ng/mL. The analytical method showed excellent sensitivity, precision, and accuracy. This method is robust and is being successfully applied for therapeutic drug monitoring and pharmacokinetic studies in HIV-infected patients.
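Precision (CV%) and accuracy figures of the kind reported in bioanalytical validations like this one are computed from replicate QC measurements. A minimal sketch (the replicate values below are illustrative, not the study's data):

```python
import statistics

def precision_accuracy(measured, nominal):
    """CV% (relative standard deviation) and accuracy (% of nominal)."""
    mean = statistics.mean(measured)
    cv = 100 * statistics.stdev(measured) / mean      # precision, %
    accuracy = 100 * mean / nominal                   # accuracy, % of nominal
    return cv, accuracy

# Illustrative QC replicates at a 100 ng/mL nominal concentration
cv, acc = precision_accuracy([98.2, 103.5, 101.1, 99.8, 104.0], nominal=100.0)
```

Typical acceptance criteria (e.g. FDA bioanalytical guidance) allow ±15% for QC levels and ±20% at the LLOQ, which is consistent with the figures quoted above.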
Kräenbring, Jona; Monzon Penza, Tika; Gutmann, Joanna; Muehlich, Susanne; Zolk, Oliver; Wojnowski, Leszek; Maas, Renke; Engelhardt, Stefan; Sarikas, Antonio
2014-01-01
The online resource Wikipedia is increasingly used by students for knowledge acquisition and learning. However, the lack of a formal editorial review and the heterogeneous expertise of contributors often results in skepticism by educators whether Wikipedia should be recommended to students as an information source. In this study we systematically analyzed the accuracy and completeness of drug information in the German and English language versions of Wikipedia in comparison to standard textbooks of pharmacology. In addition, references, revision history and readability were evaluated. Analysis of readability was performed using the Amstad readability index and the Erste Wiener Sachtextformel. The data on indication, mechanism of action, pharmacokinetics, adverse effects and contraindications for 100 curricular drugs were retrieved from standard German textbooks of general pharmacology and compared with the corresponding articles in the German language version of Wikipedia. Quantitative analysis revealed that accuracy of drug information in Wikipedia was 99.7%±0.2% when compared to the textbook data. The overall completeness of drug information in Wikipedia was 83.8±1.5% (p<0.001). Completeness varied in-between categories, and was lowest in the category “pharmacokinetics” (68.0%±4.2%; p<0.001) and highest in the category “indication” (91.3%±2.0%) when compared to the textbook data overlap. Similar results were obtained for the English language version of Wikipedia. Of the drug information missing in Wikipedia, 62.5% was rated as didactically non-relevant in a qualitative re-evaluation study. Drug articles in Wikipedia had an average of 14.6±1.6 references and 262.8±37.4 edits performed by 142.7±17.6 editors. Both Wikipedia and textbooks samples had comparable, low readability. Our study suggests that Wikipedia is an accurate and comprehensive source of drug-related information for undergraduate medical education. PMID:25250889
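The Amstad index used in the readability analysis is the German adaptation of the Flesch reading-ease score. Under its commonly cited form (180 minus average sentence length minus 58.5 times syllables per word; verify the constants against Amstad's original work before relying on them), a sketch looks like this, with syllable counts supplied by hand since automatic German syllabification is out of scope:

```python
def amstad_index(n_words, n_sentences, n_syllables):
    """German Flesch reading ease, Amstad's adaptation (commonly cited form).
    Higher values indicate easier text; roughly 60-70 is average difficulty."""
    asl = n_words / n_sentences        # average sentence length (words)
    asw = n_syllables / n_words        # average syllables per word
    return 180 - asl - 58.5 * asw

# Illustrative: a 100-word passage with 6 sentences and 180 syllables
score = amstad_index(n_words=100, n_sentences=6, n_syllables=180)
```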
Britto, Ingrid Schwach Werneck; Sananes, Nicolas; Olutoye, Oluyinka O; Cass, Darrell L; Sangi-Haghpeykar, Haleh; Lee, Timothy C; Cassady, Christopher I; Mehollin-Ray, Amy; Welty, Stephen; Fernandes, Caraciolo; Belfort, Michael A; Lee, Wesley; Ruano, Rodrigo
2015-10-01
The purpose of this study was to evaluate the impact of standardization of the lung-to-head ratio measurements in isolated congenital diaphragmatic hernia on prediction of neonatal outcomes and reproducibility. We conducted a retrospective cohort study of 77 cases of isolated congenital diaphragmatic hernia managed in a single center between 2004 and 2012. We compared lung-to-head ratio measurements that were performed prospectively in our institution without standardization to standardized measurements performed according to a defined protocol. The standardized lung-to-head ratio measurements were statistically more accurate than the nonstandardized measurements for predicting neonatal mortality (area under the receiver operating characteristic curve, 0.85 versus 0.732; P = .003). After standardization, there were no statistical differences in accuracy between measurements regardless of whether we considered observed-to-expected values (P > .05). Standardization of the lung-to-head ratio did not improve prediction of the need for extracorporeal membrane oxygenation (P > .05). Both intraoperator and interoperator reproducibility were good for the standardized lung-to-head ratio (intraclass correlation coefficient, 0.98 [95% confidence interval, 0.97-0.99]; bias, 0.02 [limits of agreement, -0.11 to +0.15], respectively). Standardization of lung-to-head ratio measurements improves prediction of neonatal outcomes. Further studies are needed to confirm these results and to assess the utility of standardization of other prognostic parameters.
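The reproducibility figures quoted above (a bias with limits of agreement) come from Bland-Altman analysis of paired measurements. A minimal numpy sketch, using made-up paired readings rather than the study's data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(a) - np.asarray(b)      # pairwise differences
    bias = d.mean()
    sd = d.std(ddof=1)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired lung-to-head ratio readings from two operators
op1 = [1.02, 0.85, 1.30, 0.95, 1.10, 0.78]
op2 = [1.00, 0.88, 1.25, 0.97, 1.05, 0.80]
bias, (lo, hi) = limits_of_agreement(op1, op2)
```

The limits of agreement bracket where roughly 95% of between-operator differences are expected to fall, which is why they are reported alongside the intraclass correlation coefficient.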
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
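The basic two-period DID point estimate that the paper's specifications build on can be sketched as follows; the data are simulated, and the paper's actual models additionally involve propensity score matching, clustered standard errors, and permutation tests:

```python
import numpy as np

def did_estimate(y_pre_t, y_post_t, y_pre_c, y_post_c):
    """Difference-in-differences: (treated post - pre) - (control post - pre)."""
    return ((np.mean(y_post_t) - np.mean(y_pre_t))
            - (np.mean(y_post_c) - np.mean(y_pre_c)))

# Simulated quality scores with a true program effect of +2.0 on top of a
# shared secular trend of +3.0; the DID should recover roughly +2.0
rng = np.random.default_rng(1)
pre_t  = 70 + rng.normal(0, 1, 500)
post_t = 73 + 2.0 + rng.normal(0, 1, 500)
pre_c  = 68 + rng.normal(0, 1, 500)
post_c = 71 + rng.normal(0, 1, 500)
est = did_estimate(pre_t, post_t, pre_c, post_c)
```

This simple estimator is unbiased here because the simulated groups share the same trend; the paper's central point is that when treatment probability correlates with pre-intervention levels or trends, that assumption fails and specification choices start to matter.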
Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M
2015-07-01
Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. The RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improved the observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of the RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit, used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners and reconstructions were performed using the standard iterative reconstructions and the RM algorithms associated to each scanner: TrueX and SharpIR, respectively. RM determined a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied using both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed for both scanners. Differences in general quantitative performance were observed for the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and a MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest, which rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without a loss of accuracy attributable to the approach.
Yi, Dae Yong; Lee, Kyung Hoon; Park, Sung Bin; Kim, Jee Taek; Lee, Na Mi; Kim, Hyery; Yun, Sin Weon; Chae, Soo Ahn; Lim, In Seok
Because of the radiation hazard, computed tomography should be performed only after careful consideration, and interest in low-dose CT for acute appendicitis has therefore increased recently. Previous studies have been performed in adult and adolescent populations, but no studies have reported on the efficacy of using low-dose CT in children younger than 10 years. Patients (n=475) younger than 10 years who were examined for acute appendicitis were recruited. Subjects were divided into three groups according to the examinations performed: low-dose CT, ultrasonography, and standard-dose CT. Subjects were categorized according to age and body mass index (BMI). Low-dose CT was a contributive tool in diagnosing appendicitis, and it was an adequate method when compared with ultrasonography and standard-dose CT in terms of sensitivity (95.5% vs. 95.0% and 94.5%, p=0.794), specificity (94.9% vs. 80.0% and 98.8%, p=0.024), positive predictive value (96.4% vs. 92.7% and 97.2%, p=0.019), and negative predictive value (93.7% vs. 85.7% and 91.3%, p=0.890). Low-dose CT accurately diagnosed patients with a perforated appendix. Acute appendicitis was effectively diagnosed using low-dose CT in both early and middle childhood. BMI did not influence the accuracy of detecting acute appendicitis on low-dose CT. Low-dose CT is effective and accurate for diagnosing acute appendicitis in childhood, as well as in adolescents and young adults. Additionally, low-dose CT was relatively accurate, irrespective of age or BMI, for detecting acute appendicitis. Therefore, low-dose CT is recommended for assessing children with suspected acute appendicitis. Copyright © 2017. Published by Elsevier Editora Ltda.
The ultimate quantum limits on the accuracy of measurements
NASA Technical Reports Server (NTRS)
Yuen, Horace P.
1992-01-01
A quantum generalization of rate-distortion theory from standard communication and information theory is developed for application to determining the ultimate performance limit of measurement systems in physics. For the estimation of a real or a phase parameter, it is shown that the root-mean-square error obtained in a measurement with a single-mode photon level N cannot do better than approximately N^(-1), while approximately exp(-N) may be obtained for multi-mode fields with the same photon level N. Possible ways to achieve the remarkable exponential performance are indicated.
Michelessi, Manuele; Lucenteforte, Ersilia; Miele, Alba; Oddone, Francesco; Crescioli, Giada; Fameli, Valeria; Korevaar, Daniël A; Virgili, Gianni
2017-01-01
Research has shown a modest adherence of diagnostic test accuracy (DTA) studies in glaucoma to the Standards for Reporting of Diagnostic Accuracy Studies (STARD). We have applied the updated 30-item STARD 2015 checklist to a set of studies included in a Cochrane DTA systematic review of imaging tools for diagnosing manifest glaucoma. Three pairs of reviewers, including one senior reviewer who assessed all studies, independently checked the adherence of each study to STARD 2015. Adherence was analyzed on an individual-item basis. Logistic regression was used to evaluate the effect of publication year and impact factor on adherence. We included 106 DTA studies, published between 2003 and 2014 in journals with a median impact factor of 2.6. Overall adherence was 54.1% for 3,286 individual ratings across 31 items, with a mean of 16.8 (SD: 3.1; range 8-23) items per study. Large variability in adherence to reporting standards was detected across individual STARD 2015 items, ranging from 0 to 100%. Nine items (1: identification as diagnostic accuracy study in title/abstract; 6: eligibility criteria; 10: index test (a) and reference standard (b) definition; 12: cut-off definitions for index test (a) and reference standard (b); 14: estimation of diagnostic accuracy measures; 21a: severity spectrum of diseased; 23: cross-tabulation of the index and reference standard results) were adequately reported in more than 90% of the studies. Conversely, 10 items (3: scientific and clinical background of the index test; 11: rationale for the reference standard; 13b: blinding of index test results; 17: analyses of variability; 18: sample size calculation; 19: study flow diagram; 20: baseline characteristics of participants; 28: registration number and registry; 29: availability of study protocol; 30: sources of funding) were adequately reported in less than 30% of the studies.
Only four items showed a statistically significant improvement over time: missing data (16), baseline characteristics of participants (20), estimates of diagnostic accuracy (24) and sources of funding (30). Adherence to STARD 2015 among DTA studies in glaucoma research is incomplete, and only modestly increasing over time.
Wang, Danyang; Huo, Yanlei; Chen, Suyun; Wang, Hui; Ding, Yingli; Zhu, Xiaochun; Ma, Chao
2018-01-01
18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) is the reference standard in staging of 18F-FDG-avid lymphomas; however, there is no recommended functional imaging modality for indolent lymphomas. Therefore, we aimed to compare the performance of whole-body magnetic resonance imaging (WB-MRI) with that of 18F-FDG PET/CT for lesion detection and initial staging in patients with aggressive or indolent lymphoma. We searched the MEDLINE, EMBASE, and CENTRAL databases for studies that compared WB-MRI with 18F-FDG PET/CT for lymphoma staging or lesion detection. The methodological quality of the studies was assessed using version 2 of the "Quality Assessment of Diagnostic Accuracy Studies" tool. The pooled staging accuracy (μ) of WB-MRI and 18F-FDG PET/CT for initial staging, and possible heterogeneity (χ²) across studies, were calculated using commercially available software. Eight studies comprising 338 patients were included. In terms of staging, the meta-analytic staging accuracies of WB-MRI and 18F-FDG PET/CT for Hodgkin lymphoma and aggressive non-Hodgkin lymphoma (NHL) were both 98% (95% CI, 94%-100%). The pooled staging accuracy of 18F-FDG PET/CT dropped to 87% (95% CI, 72%-97%) for staging in patients with indolent lymphoma, whereas that of WB-MRI remained 96% (95% CI, 91%-100%). Subgroup analysis indicated an even lower staging accuracy of 18F-FDG PET/CT for staging of less FDG-avid indolent NHLs (60%; 95% CI, 23%-92%), in contrast to the superior performance of WB-MRI (98%; 95% CI, 88%-100%). WB-MRI is a promising radiation-free imaging technique that may serve as a viable alternative to 18F-FDG PET/CT for staging of 18F-FDG-avid lymphomas, where 18F-FDG PET/CT remains the standard of care. Additionally, WB-MRI appears to be a less histology-dependent functional imaging test than 18F-FDG PET/CT and may be the imaging test of choice for staging of indolent NHLs with low 18F-FDG avidity.
Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.
Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea
2018-05-01
Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.
Neuhaus, Philipp; Doods, Justin; Dugas, Martin
2015-01-01
Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
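The two mapping strategies compared here, fuzzy similarity search over a large terminology table versus exact lookup in a manually curated repository, can be sketched with the standard library alone. The terms and codes below are hypothetical placeholders, not actual UMLS content, and the real framework exposes this logic behind an HTTP/JSON interface rather than as a local function:

```python
import difflib

# Hypothetical term->code tables standing in for a terminology table
# and a manually curated repository
TERMINOLOGY = {"myocardial infarction": "C0027051",
               "diabetes mellitus": "C0011849",
               "hypertension": "C0020538"}
CURATED = {"heart attack": "C0027051"}

def map_term(term):
    """Try the curated repository first; fall back to similarity search."""
    if term in CURATED:
        return CURATED[term], "curated"
    matches = difflib.get_close_matches(term, TERMINOLOGY, n=1, cutoff=0.6)
    return (TERMINOLOGY[matches[0]], "similarity") if matches else (None, "unmapped")

code, source = map_term("heart attack")            # exact curated hit
code2, source2 = map_term("myocardial infarct")    # fuzzy similarity hit
```

Combining the two strategies this way mirrors the evaluation's finding: curated lookups are precise but sparse, while similarity search generalizes at the cost of limited accuracy.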
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
On-orbit Modulation Transfer Function (MTF) is an important indicator to evaluate the performance of the optical remote sensors in a satellite. There are many methods to estimate MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is affected significantly by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with insensitivity to the initial values. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction and leaning angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
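The core step, fitting a Fermi-function ESF model with Powell's derivative-free method, can be sketched on synthetic edge data as below (the full pipeline would then differentiate the fitted ESF into a line spread function and Fourier-transform it to obtain the MTF; the parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def fermi_esf(x, a, b, c, d):
    """Fermi-function edge spread model: a step of height a centered at b,
    transition width c, baseline d."""
    return a / (1.0 + np.exp(-(x - b) / c)) + d

# Synthetic noisy edge profile
x = np.linspace(-5, 5, 101)
true = (1.2, 0.3, 0.8, 0.1)
rng = np.random.default_rng(2)
y = fermi_esf(x, *true) + rng.normal(0, 0.01, x.size)

# Powell's method needs no gradients and tolerates rough initial guesses
loss = lambda p: np.sum((fermi_esf(x, *p) - y) ** 2)
res = minimize(loss, x0=[1.0, 0.0, 1.0, 0.0], method="Powell")
a_fit, b_fit, c_fit, d_fit = res.x
```

The insensitivity to the initial guess is the point the paper exploits: the same least-squares objective fit with gradient-based methods can stall far from the optimum when started poorly.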
Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.
Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.
1985-01-01
The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions. -Authors
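The linear least-squares comparison described above amounts to fitting an affine transform from image (pixel) coordinates to map coordinates and reporting the residual standard deviation. A numpy sketch with synthetic control points (30 m pixels, a small rotation, and about 10 m of random positional error; not the Landsat assessment's data):

```python
import numpy as np

def affine_residual_std(px, map_xy):
    """Fit map_xy ~ [px, 1] @ coef by least squares; return per-axis residual SD."""
    n = px.shape[0]
    G = np.column_stack([px, np.ones(n)])          # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(G, map_xy, rcond=None)
    resid = map_xy - G @ coef
    return resid.std(axis=0, ddof=3)               # 3 parameters fit per axis

# Synthetic ground control points
rng = np.random.default_rng(3)
px = rng.uniform(0, 6000, (40, 2))                 # pixel coordinates
theta = np.deg2rad(0.5)
R = 30 * np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
map_xy = px @ R.T + np.array([4.0e5, 5.0e6]) + rng.normal(0, 10, (40, 2))
sx, sy = affine_residual_std(px, map_xy)
```

The recovered per-axis residual SD should be close to the injected 10 m error, analogous to the 10-11 m single-dimension standard deviations reported for the Landsat-5 image.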
Ceriotti, Ferruccio; Kaczmarek, Ewa; Guerra, Elena; Mastrantonio, Fabrizio; Lucarelli, Fausto; Valgimigli, Francesco; Mosca, Andrea
2015-03-01
Point-of-care (POC) testing devices for monitoring glucose and ketones can play a key role in the management of dysglycemia in hospitalized diabetes patients. The accuracy of glucose devices can be influenced by biochemical changes that commonly occur in critically ill hospital patients and by the medication prescribed. Little is known about the influence of these factors on ketone POC measurements. The aim of this study was to assess the analytical performance of POC hospital whole-blood glucose and ketone meters and the extent to which known glucose interference factors affect the accuracy of ketone results. StatStrip glucose/ketone, Optium FreeStyle glucose/ketone, and Accu-Chek Performa glucose meters were assessed and their results compared to a central laboratory reference method. The analytical evaluation was performed according to Clinical and Laboratory Standards Institute (CLSI) protocols for precision, linearity, method comparison, and interference. The interferences assessed included acetoacetate, acetaminophen, ascorbic acid, galactose, maltose, uric acid, and sodium. The accuracy of both Optium ketone and glucose measurements was significantly influenced by varying levels of hematocrit and ascorbic acid. StatStrip ketone and glucose measurements were unaffected by the interferences tested, with the exception of ascorbic acid, which reduced the higher-level ketone value. The accuracy of Accu-Chek glucose measurements was affected by hematocrit, by ascorbic acid, and significantly by galactose. The method correlation assessment indicated differences between the meters in compliance with ISO 15197 and CLSI 12-A3 performance criteria. Combined POC glucose/ketone methods are now available. The use of these devices in a hospital setting requires careful consideration with regard to the selection of instruments not sensitive to hematocrit variation and the presence of interfering substances. © 2014 Diabetes Technology Society.
Accuracy of Satellite Optical Observations and Precise Orbit Determination
NASA Astrophysics Data System (ADS)
Shakun, L.; Koshkin, N.; Korobeynikova, E.; Strakhova, S.; Dragomiretsky, V.; Ryabov, A.; Melikyants, S.; Golubovskaya, T.; Terpan, S.
Monitoring of low-orbit space objects (LEO objects) has been performed at the Astronomical Observatory of Odessa I.I. Mechnikov National University (Ukraine) for many years. Decades-long archives of these observations are accessible within the Ukrainian network of optical observers (UMOS). In this work, we give an example of orbit determination for a satellite with a 1500-km orbit height, based on angular observations at our observatory (Int. No. 086). To estimate the measurement accuracy and the accuracy of determination and propagation of satellite position, we analyze observations of the Ajisai satellite, whose orbit is well determined. This allows justified conclusions not only about random errors of individual measurements, but also about the presence of systematic errors, including those external to the measurement process. We show that a single measurement has a standard deviation of about 1 arcsec across the track and 1.4 arcsec along the track, and that systematic shifts in the measurements of one track do not exceed 0.45 arcsec. Ajisai's position within the orbit-fitting interval is predicted with accuracy better than 30 m along the orbit and better than 10 m across the orbit at any point.
Presentation Accuracy of the Web Revisited: Animation Methods in the HTML5 Era
Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego
2014-01-01
Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of the Web for these purposes. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about the presentation accuracy and precision of the Web and extends the study of accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision of visual presentation in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use preferable to more obsolete technologies. PMID:25302791
Illusory expectations can affect retrieval-monitoring accuracy.
McDonough, Ian M; Gallo, David A
2012-03-01
The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved
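The d' accuracy measure this abstract reports can be computed from hit and false-alarm rates with the Python standard library alone. The rates below are invented to show the paradox described: a lower false-alarm rate alone raises d' even when hit rates (and so "actual memory") are identical.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    # d' = z(hit rate) - z(false-alarm rate), z being the inverse normal CDF.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Same hit rate, but stricter retrieval monitoring reduces false alarms.
d_large = d_prime(0.80, 0.10)   # searching memory for "large" words
d_small = d_prime(0.80, 0.20)   # searching memory for "small" words
```

Here `d_large > d_small` purely because of reduced false recognition, mirroring the accuracy effect in the absence of true memory differences.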
A novel automatic method for monitoring Tourette motor tics through a wearable device.
Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe
2010-09-15
The aim of this study was to propose a novel automatic method for quantifying motor tics caused by Tourette Syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze acceleration data by eliminating noise, detecting peaks connected to pathological events, and classifying the intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through least-squares regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrency with the gold standard in estimating the severity indexes. The proposed methodology gave promising performances in terms of automatic motor-tic detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on extending its application to everyday long-term monitoring outside clinical environments. © 2010 Movement Disorder Society.
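The validation metrics above (sensitivity, specificity, accuracy against a video gold standard) reduce to counts over a 2x2 confusion table. A minimal sketch, with invented per-window event labels:

```python
def detection_metrics(detected, truth):
    # detected, truth: equal-length sequences of 0/1 flags per time window.
    tp = sum(d and t for d, t in zip(detected, truth))
    tn = sum(not d and not t for d, t in zip(detected, truth))
    fp = sum(d and not t for d, t in zip(detected, truth))
    fn = sum(not d and t for d, t in zip(detected, truth))
    sens = tp / (tp + fn)      # fraction of true tics detected
    spec = tn / (tn + fp)      # fraction of tic-free windows kept clean
    acc = (tp + tn) / len(truth)
    return sens, spec, acc

truth    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # clinician video labels
detected = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]   # accelerometer algorithm
sens, spec, acc = detection_metrics(detected, truth)
```

A surplus of false positives (`fp`) over misses (`fn`) would reproduce the overestimation tendency the agreement analysis found.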
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial resolution characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
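The Bland-Altman analysis used above to justify interchangeability reduces to a bias and 95% limits of agreement over paired differences. A minimal sketch with invented paired registration errors (mm), not the study's data:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    # Bias and 95% limits of agreement for paired measurements a, b.
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

ba_133 = [0.42, 0.55, 0.61, 0.48, 0.70, 0.52]   # 133Ba method (mm)
ga_67  = [0.45, 0.50, 0.66, 0.44, 0.74, 0.55]   # 67Ga NEMA method (mm)
bias, (lo, hi) = bland_altman(ba_133, ga_67)
```

Interchangeability is argued when the bias is near zero and the limits of agreement are clinically negligible.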
Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James
2017-01-01
Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used a tailored QUADAS-2 tool. In the absence of a reference standard we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI: 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI: 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However, the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood, and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
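The asymmetric positive agreement above (0.93 vs 0.68 depending on which test is the benchmark) follows directly from the 2x2 cross-classification when one test yields extra positives. A sketch with invented counts chosen only to reproduce that asymmetry:

```python
def percent_agreement(both_pos, a_only, b_only, both_neg):
    # Positive agreement taking B as the benchmark: fraction of
    # B-positives that A also calls positive.
    ppa_vs_b = both_pos / (both_pos + b_only)
    # Positive agreement taking A as the benchmark.
    ppa_vs_a = both_pos / (both_pos + a_only)
    return ppa_vs_b, ppa_vs_a

# GPP (A) vs conventional culture (B): 40 GPP-only positives (invented).
ppa_vs_culture, ppa_vs_gpp = percent_agreement(93, 40, 7, 860)
```

Because GPP finds many positives that culture misses, agreement is high against the culture benchmark but low against the GPP benchmark, without any reference standard to say which extra positives are real.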
Budiman, Erwin S; Samant, Navendu; Resch, Ansgar
2013-03-01
Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences among various BGM systems using a modeling approach. We simulated the additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on the results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated the annual additional number of medical interventions required as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM patients (105,000 for T2DM MDI) in the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, comparing patients using the least accurate BGM system with those using the most accurate system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of a BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.
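The economic step of the model above is, at its core, excess episodes times cost per episode. A back-of-envelope sketch; the per-patient episode rate and per-episode cost below are illustrative values chosen only to land in the reported order of magnitude, not the study's actual parameters:

```python
def excess_cost(population, excess_episodes_per_patient, cost_per_episode):
    # Excess severe hypoglycaemic episodes attributed to meter error,
    # scaled to the population, then costed.
    episodes = population * excess_episodes_per_patient
    return episodes, episodes * cost_per_episode

# T1DM population from the abstract; rate and cost are hypothetical.
episodes, cost = excess_cost(958_800, 0.31, 1_145.0)
```

With these invented inputs the sketch yields roughly 297,000 episodes and ~$340 million, close to the magnitudes the abstract reports for T1DM.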
On the Accuracy of Language Trees
Pompei, Simone; Loreto, Vittorio; Tria, Francesca
2011-01-01
Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective, the reconstruction of language trees is an example of an inverse problem: starting from present, incomplete and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further, we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally, we draw some conclusions about where the accuracy of reconstructions in historical linguistics stands and about the leading directions for improving it. PMID:21674034
The performance of flash glucose monitoring in critically ill patients with diabetes.
Ancona, Paolo; Eastwood, Glenn M; Lucchetta, Luca; Ekinci, Elif I; Bellomo, Rinaldo; Mårtensson, Johan
2017-06-01
Frequent glucose monitoring may improve glycaemic control in critically ill patients with diabetes. We aimed to assess the accuracy of a novel subcutaneous flash glucose monitor (FreeStyle Libre [Abbott Diabetes Care]) in these patients. We applied the FreeStyle Libre sensor to the upper arm of eight patients with diabetes in the intensive care unit and obtained hourly flash glucose measurements. Duplicate recordings were obtained to assess test-retest reliability. The reference glucose level was measured in arterial or capillary blood. We determined numerical accuracy using Bland-Altman methods, the mean absolute relative difference (MARD) and whether the International Organization for Standardization (ISO) and Clinical and Laboratory Standards Institute Point of Care Testing (CLSI POCT) criteria were met. Clarke error grid (CEG) and surveillance error grid (SEG) analyses were used to determine clinical accuracy. We compared 484 duplicate flash glucose measurements and observed a Pearson correlation coefficient of 0.97 and a coefficient of repeatability of 1.6 mmol/L. We studied 185 flash readings paired with arterial glucose levels, and 89 paired with capillary glucose levels. Using the arterial glucose level as the reference, we found a mean bias of 1.4 mmol/L (limits of agreement, -1.7 to 4.5 mmol/L). The MARD was 14% (95% CI, 12%-16%) and the proportion of measurements meeting ISO and CLSI POCT criteria was 64.3% and 56.8%, respectively. The proportions of values within a low-risk zone on CEG and SEG analyses were 97.8% and 99.5%, respectively. Using capillary glucose levels as the reference, we found that numerical and clinical accuracy were lower. The subcutaneous FreeStyle Libre blood glucose measurement system showed high test-retest reliability and acceptable accuracy when compared with arterial blood glucose measurement in critically ill patients with diabetes.
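The MARD and ISO-criterion figures above can be sketched as follows. The paired readings are invented, and the criterion shown is the ISO 15197:2003 form (within ±0.83 mmol/L below 4.2 mmol/L, otherwise within ±20%) as one plausible interpretation; the study does not state which ISO edition it applied.

```python
def mard(meas, ref):
    # Mean absolute relative difference, as a percentage.
    return sum(abs(m - r) / r for m, r in zip(meas, ref)) / len(ref) * 100

def iso_2003_ok(m, r):
    # ISO 15197:2003-style criterion (assumed form, mmol/L units).
    return abs(m - r) <= 0.83 if r < 4.2 else abs(m - r) <= 0.20 * r

flash    = [6.1, 9.4, 3.5, 13.0, 5.0]    # sensor readings (mmol/L), invented
arterial = [5.4, 8.1, 3.9, 10.2, 4.9]    # reference values (mmol/L), invented
m = mard(flash, arterial)
pct_iso = 100 * sum(iso_2003_ok(f, a)
                    for f, a in zip(flash, arterial)) / len(flash)
```

A consistent positive bias, as reported above, inflates MARD and drags the percentage of criterion-meeting readings down.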
Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.
Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G
2017-07-01
Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P <.001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. 
Copyright © 2017 by the National Comprehensive Cancer Network.
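The "substantial agreement (κ=0.76)" above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal computation over an invented 2x2 table of screening vs non-screening calls by the two raters:

```python
def cohens_kappa(table):
    # table[i][j]: count of cases rater 1 put in category i and
    # rater 2 put in category j.
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n   # observed agreement
    pe = sum(                                              # chance agreement
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (po - pe) / (1 - pe)

# Invented counts: endoscopist (rows) vs referring provider (columns).
kappa = cohens_kappa([[120, 15], [10, 255]])
```

Because most cases fall in one category, raw percent agreement overstates concordance; kappa discounts that imbalance.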
Lack of Accuracy of Body Temperature for Detecting Serious Bacterial Infection in Febrile Episodes.
De, Sukanya; Williams, Gabrielle J; Teixeira-Pinto, Armando; Macaskill, Petra; McCaskill, Mary; Isaacs, David; Craig, Jonathan C
2015-09-01
Body temperature is a time-honored marker of serious bacterial infection, but there are few studies of its test performance. The aim of our study was to determine the accuracy of temperature measured on presentation to medical care for detecting serious bacterial infection. Febrile children 0-5 years of age presenting to the emergency department of a tertiary care pediatric hospital were sampled consecutively. The accuracy of the axillary temperature measured at presentation was evaluated using logistic regression models to generate receiver operating characteristic curves. Reference standard tests for serious bacterial infection were standard microbiologic/radiologic tests and clinical follow-up. Age, clinicians' impression of appearance of the child (well versus unwell) and duration of illness were assessed as possible effect modifiers. Of 15,781 illness episodes 1120 (7.1%) had serious bacterial infection. The area under the receiver operating characteristic curve for temperature was 0.60 [95% confidence intervals (CI): 0.58-0.62]. A threshold of ≥ 38°C had a sensitivity of 0.67 (95% CI: 0.64-0.70), specificity of 0.45 (95% CI: 0.44-0.46), positive likelihood ratio of 1.2 (95% CI: 1.2-1.3) and negative likelihood ratio of 0.7 (95% CI: 0.7-0.8). Age and illness duration had a small but significant effect on the accuracy of temperature increasing its "rule-in" potential. Measured temperature at presentation to hospital is not an accurate marker of serious bacterial infection in febrile children. Younger age and longer duration of illness increase the rule-in potential of temperature but without substantial overall change in its test accuracy.
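The area under the ROC curve reported above can be computed directly from raw scores via the Mann-Whitney relationship: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties count half). The temperatures below are invented.

```python
def auc(pos_scores, neg_scores):
    # Fraction of (positive, negative) pairs ranked correctly; ties = 0.5.
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

sbi_temps    = [38.6, 39.2, 37.8, 38.1]         # serious bacterial infection
no_sbi_temps = [38.4, 37.9, 38.1, 37.6, 38.9]   # no SBI
a = auc(sbi_temps, no_sbi_temps)
```

An AUC near 0.6, as in this study, means temperature ranks a septic child above a non-septic one only slightly more often than a coin flip.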
Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. 
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2-5 cm. Although the LMC is a low-cost, highly portable system that could facilitate the collection of kinematic data outside traditional laboratory settings, these temporal and spatial errors may limit its use in some settings.
Metal Standards for Waveguide Characterization of Materials
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Kory, Carol L.
2009-01-01
Rectangular-waveguide inserts that are made of non-ferromagnetic metals and are sized and shaped to function as notch filters have been conceived as reference standards for use in the rectangular- waveguide method of characterizing materials with respect to such constitutive electromagnetic properties as permittivity and permeability. Such standards are needed for determining the accuracy of measurements used in the method, as described below. In this method, a specimen of a material to be characterized is cut to a prescribed size and shape and inserted in a rectangular- waveguide test fixture, wherein the specimen is irradiated with a known source signal and detectors are used to measure the signals reflected by, and transmitted through, the specimen. Scattering parameters [also known as "S" parameters (S11, S12, S21, and S22)] are computed from ratios between the transmitted and reflected signals and the source signal. Then the permeability and permittivity of the specimen material are derived from the scattering parameters. Theoretically, the technique for calculating the permeability and permittivity from the scattering parameters is exact, but the accuracy of the results depends on the accuracy of the measurements from which the scattering parameters are obtained. To determine whether the measurements are accurate, it is necessary to perform comparable measurements on reference standards, which are essentially specimens that have known scattering parameters. To be most useful, reference standards should provide the full range of scattering-parameter values that can be obtained from material specimens. Specifically, measurements of the backscattering parameter (S11) from no reflection to total reflection and of the forward-transmission parameter (S21) from no transmission to total transmission are needed. 
A reference standard that functions as a notch (band-stop) filter can satisfy this need because as the signal frequency is varied across the frequency range for which the filter is designed, the scattering parameters vary over the ranges of values between the extremes of total reflection and total transmission. A notch-filter reference standard in the form of a rectangular-waveguide insert that has a size and shape similar to that of a material specimen is advantageous because the measurement configuration used for the reference standard can be the same as that for a material specimen. Typically a specimen is a block of material that fills a waveguide cross-section but occupies only a small fraction of the length of the waveguide. A reference standard of the present type (see figure) is a metal block that fills part of a waveguide cross section and contains a slot, the long dimension of which can be chosen to tailor the notch frequency to a desired value. The scattering parameters and notch frequency can be estimated with high accuracy by use of commercially available electromagnetic-field-simulating software. The block can be fabricated to the requisite precision by wire electrical-discharge machining. In use, the accuracy of measurements is determined by comparison of (1) the scattering parameters calculated from the measurements with (2) the scattering parameters calculated by the aforementioned software.
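At the measurement level, the scattering parameters described above are complex ratios of the reflected and transmitted waves to the incident wave. A sketch with invented wave amplitudes; the lossless power check in the last line is an idealisation that holds only for a loss-free two-port:

```python
import cmath

# Invented complex wave amplitudes at one frequency point.
incident = 1.0 + 0.0j
reflected = 0.6 * cmath.exp(1j * 2.1)     # measured reflected wave
transmitted = 0.8 * cmath.exp(-1j * 0.4)  # measured transmitted wave

s11 = reflected / incident      # backscattering parameter
s21 = transmitted / incident    # forward-transmission parameter

# For a lossless passive two-port, |S11|^2 + |S21|^2 = 1 at each frequency;
# deviation from 1 indicates loss or measurement error.
power_check = abs(s11) ** 2 + abs(s21) ** 2
```

Sweeping the frequency across a notch-filter standard would carry these ratios between the total-reflection and total-transmission extremes, which is exactly the range the reference standard is meant to exercise.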
2010-01-01
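The abstract does not name the inversion algorithm, but permittivity and permeability are conventionally derived from S11 and S21 with the Nicolson-Ross-Weir (NRW) relations. The following Python sketch is illustrative only: the forward model, the 5 mm sample length, and the WR-90 cutoff frequency are assumptions, and the principal-branch logarithm limits it to electrically thin samples.

```python
import cmath
import math

C = 299_792_458.0  # speed of light (m/s)

def forward_s_params(eps_r, mu_r, freq_hz, length_m, fc_hz):
    """S11/S21 of a slab filling the waveguide cross-section (TE10 mode);
    used here only to generate synthetic test data."""
    lam0 = C / freq_hz
    g0 = 2j * math.pi * cmath.sqrt(1 / lam0**2 - (fc_hz / C)**2)
    g = 2j * math.pi * cmath.sqrt(eps_r * mu_r / lam0**2 - (fc_hz / C)**2)
    refl = (mu_r * g0 - g) / (mu_r * g0 + g)   # interface reflection
    trans = cmath.exp(-g * length_m)           # one-pass transmission
    den = 1 - refl**2 * trans**2
    return refl * (1 - trans**2) / den, trans * (1 - refl**2) / den

def nrw_extract(s11, s21, freq_hz, length_m, fc_hz):
    """Nicolson-Ross-Weir inversion, principal log branch only
    (valid for electrically thin samples)."""
    lam0 = C / freq_hz
    lamc = C / fc_hz
    x = (s11**2 - s21**2 + 1) / (2 * s11)
    gamma = x - cmath.sqrt(x**2 - 1)
    if abs(gamma) > 1:                         # pick the physical root
        gamma = x + cmath.sqrt(x**2 - 1)
    t = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)
    inv_lam2 = -(cmath.log(1 / t) / (2 * math.pi * length_m))**2
    inv_lam = cmath.sqrt(inv_lam2)
    root = cmath.sqrt(1 / lam0**2 - 1 / lamc**2)
    mu_r = (1 + gamma) * inv_lam / ((1 - gamma) * root)
    eps_r = lam0**2 * (1 / lamc**2 + inv_lam2) / mu_r
    return eps_r, mu_r

# Round-trip check: invert synthetic data for an assumed material in
# WR-90 waveguide (cutoff ~6.557 GHz) at 10 GHz, 5 mm sample length.
s11, s21 = forward_s_params(4 - 0.1j, 1.0, 10e9, 0.005, 6.557e9)
eps_r, mu_r = nrw_extract(s11, s21, 10e9, 0.005, 6.557e9)
```

The round trip recovers the assumed εr and μr, which mirrors how a notch-filter reference standard would be used: measured S-parameters are compared against simulated ones.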
Background Acute urinary tract infections (UTI) are one of the most common bacterial infections among women presenting to primary care. However, there is a lack of consensus regarding the optimal reference standard threshold for diagnosing UTI. The objective of this systematic review is to determine the diagnostic accuracy of symptoms and signs in women presenting with suspected UTI, across three different reference standards (10², 10³, or 10⁵ CFU/ml). We also examine the diagnostic value of individual symptoms and signs combined with dipstick test results in terms of clinical decision making. Methods Searches were performed through PubMed (1966 to April 2010), EMBASE (1973 to April 2010), the Cochrane Library (1973 to April 2010), Google Scholar, and reference checking. Studies that assessed the diagnostic accuracy of symptoms and signs of an uncomplicated UTI using a urine culture from a clean-catch or catheterised urine specimen as the reference standard, with a reference standard of at least ≥ 10² CFU/ml, were included. Synthesised data from a high-quality systematic review were used regarding dipstick results. Studies were combined using a bivariate random effects model. Results Sixteen studies incorporating 3,711 patients are included. The weighted prior probability of UTI varies across diagnostic thresholds: 65.1% at ≥ 10² CFU/ml, 55.4% at ≥ 10³ CFU/ml, and 44.8% at ≥ 10⁵ CFU/ml. Six symptoms are identified as useful diagnostic symptoms when a threshold of ≥ 10² CFU/ml is the reference standard. Presence of dysuria (+LR 1.30, 95% CI 1.20-1.41), frequency (+LR 1.10, 95% CI 1.04-1.16), hematuria (+LR 1.72, 95% CI 1.30-2.27), nocturia (+LR 1.30, 95% CI 1.08-1.56), and urgency (+LR 1.22, 95% CI 1.11-1.34) all increase the probability of UTI. The presence of vaginal discharge (+LR 0.65, 95% CI 0.51-0.83) decreases the probability of UTI.
Presence of hematuria has the highest diagnostic utility, raising the post-test probability of UTI to 75.8% at ≥ 10² CFU/ml and 67.4% at ≥ 10³ CFU/ml. Probability of UTI increases to 93.3% and 90.1% at ≥ 10² CFU/ml and ≥ 10³ CFU/ml, respectively, when presence of hematuria is combined with a positive dipstick result for nitrites. Subgroup analysis shows improved diagnostic accuracy using the lower reference standards ≥ 10² CFU/ml and ≥ 10³ CFU/ml. Conclusions Individual symptoms and signs have a modest ability to raise the pre-test risk of UTI. Diagnostic accuracy improves considerably when they are combined with dipstick tests, particularly tests for nitrites. PMID:20969801
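The post-test probabilities above follow from the standard likelihood-ratio update on odds. A minimal sketch (the review's 75.8% figure comes from pooled study-level data, so this naive calculation lands slightly higher, near 76%):

```python
def post_test_probability(prior, lr):
    """Convert a pre-test probability and a likelihood ratio into a
    post-test probability via odds (odds = p / (1 - p))."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# Prior probability of UTI at the >= 10^2 CFU/ml threshold (65.1%)
# combined with the +LR for hematuria (1.72) reported in the review.
p = post_test_probability(0.651, 1.72)
```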
Giesen, Leonie G M; Cousins, Gráinne; Dimitrov, Borislav D; van de Laar, Floris A; Fahey, Tom
2010-10-24
Standardization of noncontact 3D measurement
NASA Astrophysics Data System (ADS)
Takatsuji, Toshiyuki; Osawa, Sonko; Sato, Osamu
2008-08-01
As global R&D competition intensifies, faster measurement instruments are required both in laboratories and in production processes. In machinery areas, while contact-type coordinate measuring machines (CMMs) have been widely used, noncontact CMMs, which are capable of measuring an enormous number of points at once, are gaining market share. Nevertheless, since no industrial standard concerning an accuracy test of noncontact CMMs exists, each manufacturer states the accuracy of its products according to its own rules, and this situation confuses customers. The working group ISO/TC 213/WG 10 is developing a new ISO standard that stipulates an accuracy test for noncontact CMMs. The concept and the state of discussion of this new standard will be explained. At the National Metrology Institute of Japan (NMIJ), we are collecting measurement data that serve as a technical background for the standard, together with a consortium formed by users and manufacturers. This activity will also be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, Lori A.; Marshall, Michael S.; Sherrill, C. David, E-mail: sherrill@gatech.edu
2014-12-21
A systematic examination of noncovalent interactions as modeled by wavefunction theory is presented in comparison to gold-standard quality benchmarks available for 345 interaction energies of 49 bimolecular complexes. Quantum chemical techniques examined include spin-component-scaling (SCS) variations on second-order perturbation theory (MP2) [SCS, SCS(N), SCS(MI)] and coupled cluster singles and doubles (CCSD) [SCS, SCS(MI)]; also, method combinations designed to improve dispersion contacts [DW-MP2, MP2C, MP2.5, DW-CCSD(T)-F12]; where available, explicitly correlated (F12) counterparts are also considered. Dunning basis sets augmented by diffuse functions are employed for all accessible ζ-levels; truncations of the diffuse space are also considered. After examination of both accuracy and performance for 394 model chemistries, SCS(MI)-MP2/cc-pVQZ can be recommended for general use, having good accuracy at low cost and no ill-effects such as imbalance between hydrogen-bonding and dispersion-dominated systems or non-parallelity across dissociation curves. Moreover, when benchmarking accuracy is desirable but gold-standard computations are unaffordable, this work recommends silver-standard [DW-CCSD(T**)-F12/aug-cc-pVDZ] and bronze-standard [MP2C-F12/aug-cc-pVDZ] model chemistries, which support accuracies of 0.05 and 0.16 kcal/mol and efficiencies of 97.3 and 5.5 h for adenine·thymine, respectively. Choice comparisons of wavefunction results with the best symmetry-adapted perturbation theory [T. M. Parker, L. A. Burns, R. M. Parrish, A. G. Ryno, and C. D. Sherrill, J. Chem. Phys. 140, 094106 (2014)] and density functional theory [L. A. Burns, Á. Vázquez-Mayagoitia, B. G. Sumpter, and C. D. Sherrill, J. Chem. Phys. 134, 084107 (2011)] methods previously studied for these databases are provided for readers' guidance.
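Spin-component scaling re-weights the same-spin and opposite-spin parts of the MP2 correlation energy; the SCS, SCS(N), and SCS(MI) variants above differ only in their coefficients. A minimal sketch using Grimme's original SCS-MP2 weights (the component energies below are hypothetical, for illustration only):

```python
def scs_correlation(e_os, e_ss, c_os=6/5, c_ss=1/3):
    """Spin-component-scaled correlation energy: a weighted sum of the
    opposite-spin (OS) and same-spin (SS) MP2 correlation components.
    Default weights are Grimme's original SCS-MP2 values; the SCS(N)
    and SCS(MI) variants mentioned in the abstract use other weights."""
    return c_os * e_os + c_ss * e_ss

# Hypothetical component energies (hartree), illustration only
e_total = scs_correlation(e_os=-0.300, e_ss=-0.100)
```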
Prehospital lung ultrasound for the diagnosis of cardiogenic pulmonary oedema: a pilot study.
Laursen, Christian B; Hänselmann, Anja; Posth, Stefan; Mikkelsen, Søren; Videbæk, Lars; Berg, Henrik
2016-08-02
An improved prehospital diagnostic accuracy of cardiogenic pulmonary oedema could potentially improve initial treatment, triage, and outcome. A pilot study was conducted to assess the feasibility, time-use, and diagnostic accuracy of prehospital lung ultrasound (PLUS) for the diagnosis of cardiogenic pulmonary oedema. A prospective observational study was conducted in a prehospital setting. Patients were included if the physician-based prehospital mobile emergency care unit was activated and one or more of the following two were present: respiratory rate >30/min, oxygen saturation <90%. Exclusion criteria were: age <18 years, permanent mental disability, or PLUS causing a delay in life-saving treatment or transportation. Following clinical assessment, PLUS was performed and the presence or absence of interstitial syndrome was registered. Audit by three physicians using predefined diagnostic criteria for cardiogenic pulmonary oedema was used as the gold standard. A total of 40 patients were included in the study. Feasibility of PLUS was 100% and median time used was 3 min. The gold standard diagnosed 18 (45.0%) patients with cardiogenic pulmonary oedema. The diagnostic accuracy of PLUS for the diagnosis of cardiogenic pulmonary oedema was: sensitivity 94.4% (95% confidence interval (CI) 72.7-99.9%), specificity 77.3% (95% CI 54.6-92.2%), positive predictive value 77.3% (95% CI 54.6-92.2%), negative predictive value 94.4% (95% CI 72.7-99.9%). The sensitivity of PLUS is high, making it a potential tool for ruling out cardiogenic pulmonary oedema. The observed specificity was lower than what has been described in previous studies. Performed as part of a physician-based prehospital emergency service, PLUS seems fast and highly feasible in patients with respiratory failure. Due to its diagnostic accuracy, PLUS may have potential as a prehospital tool, especially to rule out cardiogenic pulmonary oedema.
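The reported values are mutually consistent with a 2x2 table of 17 true positives, 1 false negative, 17 true negatives, and 5 false positives. A short sketch of the standard calculations (counts reconstructed from the percentages, not taken directly from the paper):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Counts reconstructed from the reported percentages (18 patients with
# oedema, 22 without; sens 94.4% -> 17/18, spec 77.3% -> 17/22).
m = diagnostic_metrics(tp=17, fp=5, fn=1, tn=17)
```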
Ferrando, Carlos; Romero, Carolina; Tusman, Gerardo; Suarez-Sipmann, Fernando; Canet, Jaume; Dosdá, Rosa; Valls, Paola; Villena, Abigail; Serralta, Ferran; Jurado, Ana; Carrizo, Juan; Navarro, Jose; Parrilla, Cristina; Romero, Jose E; Pozo, Natividad; Soro, Marina; Villar, Jesús; Belda, Francisco Javier
2017-01-01
Objective To assess the diagnostic accuracy of peripheral capillary oxygen saturation (SpO2) while breathing room air for 5 min (the ‘Air-Test’) in detecting postoperative atelectasis. Design Prospective cohort study. Diagnostic accuracy was assessed by measuring the agreement between the index test and the reference standard CT scan images. Setting Postanaesthetic care unit in a tertiary hospital in Spain. Participants Three hundred and fifty patients were assessed from 12 January to 7 February 2015; of these, 170 patients scheduled for surgery under general anaesthesia who were admitted into the postsurgical unit were included. Intervention The Air-Test was performed in conscious extubated patients after a 30 min stabilisation period during which they received supplemental oxygen therapy via a venturi mask. The Air-Test was defined as positive when SpO2 was ≤96% and negative when SpO2 was ≥97%. Arterial blood gases were measured in all patients at the end of the Air-Test. In the subsequent 25 min, the presence of atelectasis was evaluated by performing a CT scan in 59 randomly selected patients. Main outcome measures The primary study outcome was assessment of the accuracy of the Air-Test for detecting postoperative atelectasis compared with the reference standard. The secondary outcome was the incidence of positive Air-Test results. Results The Air-Test diagnosed postoperative atelectasis with an area under the receiver operating characteristic curve of 0.90 (95% CI 0.82 to 0.98), with a sensitivity of 82.6% and a specificity of 87.8%. The presence of atelectasis was confirmed by CT scans in all patients (30/30) with positive and in 5 patients (17%) with negative Air-Test results. Based on the Air-Test, postoperative atelectasis was present in 36% of the patients (62 out of 170). Conclusion The Air-Test may represent an accurate, simple, inexpensive and non-invasive method for diagnosing postoperative atelectasis. Trial Registration NCT02650037. PMID:28554935
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes.
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
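The decoding-accuracy measure quoted above, variance accounted for (VAF), can be sketched as follows; the grasp-aperture signal here is synthetic, and the paper's exact preprocessing is not reproduced:

```python
import numpy as np

def variance_accounted_for(actual, predicted):
    """Variance accounted for (VAF): 1 - var(residual) / var(actual),
    the decoding-accuracy measure quoted in the abstract."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.var(actual - predicted) / np.var(actual)

# Toy check on a synthetic grasp-aperture trace: a perfect prediction
# gives VAF = 1; a uniformly attenuated prediction gives VAF = 0.96.
t = np.linspace(0, 1, 200)
aperture = np.sin(2 * np.pi * t)
vaf_perfect = variance_accounted_for(aperture, aperture)
vaf_noisy = variance_accounted_for(aperture, 0.8 * aperture)
```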
Low-speed airspeed calibration data for a single-engine research-support aircraft
NASA Technical Reports Server (NTRS)
Holmes, B. J.
1980-01-01
A standard service airspeed system on a single-engine research-support airplane was calibrated by the trailing anemometer method. The effects of flaps, power, sideslip, and lag were evaluated. The factory-supplied airspeed calibrations were not sufficiently accurate for high-accuracy flight research applications. The trailing anemometer airspeed calibration was conducted to provide the capability to use the research-support airplane to perform pace-aircraft airspeed calibrations.
Standardized UXO Technology Demonstration Site Blind Grid Scoring Record No. 764
2006-04-01
Attainable accuracy of depth (z): ±0.3 meter. Detection performance for ferrous and nonferrous metals: will detect ammunition components 20-mm... Technology type/platform: Multi Channel Detector System (AMOS)/Towed.
Liu, Ying; Xu, Fengguo; Zhang, Zunjian; Song, Rui; Tian, Yuan
2008-07-01
To quantify naringenin and hesperetin in rat plasma after oral administration of Da-Cheng-Qi decoction, a famous purgative traditional Chinese medicine, a high-performance liquid chromatography-tandem mass spectrometry method was developed and validated. The HPLC separation was carried out on a Zorbax SB-C(18) column using 0.1% formic acid-methanol as the mobile phase and estazolam as the internal standard, after the rat plasma samples had been cleaned up by one-step protein precipitation with methanol. Atmospheric pressure chemical ionization in the positive ion mode with selected reaction monitoring was used to determine the active components. This method was validated in terms of recovery, linearity, accuracy, and precision (intra- and inter-batch variation). The recoveries of naringenin and hesperetin were 72.8-76.6% and 75.7-77.2%, respectively. Linearity in rat plasma was observed over the range of 0.5-250 ng/mL (r2 > 0.99) for both naringenin and hesperetin. The accuracy and precision were well within the acceptable range, and the relative standard deviation of the measured rat plasma samples was less than 15% (n = 5). The validated method was successfully applied for the evaluation of the pharmacokinetics of naringenin and hesperetin administered to six rats.
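The precision criterion above (relative standard deviation below 15%, n = 5) is the standard %RSD calculation; a sketch with hypothetical replicate values:

```python
import statistics

def relative_std_dev(values):
    """Percent relative standard deviation (%RSD), the precision
    measure used in bioanalytical method validation."""
    mean = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / abs(mean)

# Five hypothetical replicate plasma measurements (ng/mL); the
# validation criterion in the abstract is RSD < 15% (n = 5).
replicates = [9.8, 10.2, 10.1, 9.9, 10.0]
rsd = relative_std_dev(replicates)
passes = rsd < 15.0
```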
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
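A gPC surrogate expands the target property in polynomials orthogonal to the input distribution. The sketch below is a deliberately minimal version: one standard-normal conformational variable, a probabilists' Hermite basis, and an ordinary least-squares fit instead of the paper's compressive-sensing solver; the target function is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def hermite_design(x, order):
    """Probabilists' Hermite polynomials He_0..He_order evaluated at x,
    the orthogonal gPC basis for a standard-normal input."""
    cols = [np.ones_like(x), x]
    for n in range(1, order):
        # Recurrence: He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)
        cols.append(x * cols[-1] - n * cols[-2])
    return np.column_stack(cols[: order + 1])

def target(x):
    """Hypothetical property of one conformational variable, standing in
    for e.g. solvent-accessible surface area; exactly a He_0/He_1/He_2 mix."""
    return 1.0 + 0.5 * x + 0.25 * (x**2 - 1)

# Fit the surrogate from sampled conformational states
xs = rng.standard_normal(200)
A = hermite_design(xs, order=3)
coeffs, *_ = np.linalg.lstsq(A, target(xs), rcond=None)
# By orthogonality, coeffs[0] estimates the mean of the property
```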
A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.
Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing
2018-03-07
The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimations using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimations of the current noise covariance of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with the estimations to form the new noise covariance matrices. Finally, the state estimations are corrected according to the new noise covariance matrices and previous state estimations. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
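The weighting-factor update described above can be sketched as follows. The innovation-based estimator here is a bare sample covariance; the full RAUKF formulas also involve the predicted measurement covariance, which is omitted in this illustration.

```python
import numpy as np

def blend_covariance(cov_old, cov_est, weight):
    """Weighted noise-covariance update used in adaptive Kalman
    filtering: new = (1 - w) * old + w * estimate."""
    return (1.0 - weight) * cov_old + weight * cov_est

def innovation_covariance_estimate(innovations):
    """Sample covariance of a window of recent innovation vectors,
    the raw ingredient of innovation-based noise estimation."""
    d = np.atleast_2d(np.asarray(innovations, dtype=float))
    return d.T @ d / d.shape[0]

# Hypothetical 1-D example: assumed variance 1.0, but the recent
# innovations suggest the actual variance is near 4.0.
innov = np.array([[2.1], [-1.9], [2.0], [-2.2], [1.8]])
r_est = innovation_covariance_estimate(innov)
r_new = blend_covariance(np.array([[1.0]]), r_est, weight=0.3)
```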
Madanat, Rami; Moritz, Niko; Aro, Hannu T
2007-01-01
Physical phantom models have conventionally been used to determine the accuracy and precision of radiostereometric analysis (RSA) in various orthopaedic applications. Using a phantom model of a fracture of the distal radius, it has previously been shown that RSA is a highly accurate and precise method for measuring both translation and rotation in three dimensions (3-D). The main shortcoming of a physical phantom model is its inability to mimic complex 3-D motion. The goal of this study was to create a realistic computer model for preoperative planning of RSA studies and to test the accuracy of RSA in measuring complex movements in fractures of the distal radius using this new model. The 3-D computer model was created from a set of tomographic scans. The simulation of the radiographic imaging was performed using ray-tracing software (POV-Ray). RSA measurements were performed according to standard protocol. Using a two-part fracture model (AO/ASIF type A2), it was found that for simple movements in one axis, translations in the range of 25 μm to 2 mm could be measured with an accuracy of ±2 μm. Rotations ranging from 16° to 2° could be measured with an accuracy of ±0.015°. Using a three-part fracture model, the corresponding values of accuracy were found to be ±4 μm and ±0.031° for translation and rotation, respectively. For complex 3-D motion in a three-part fracture model (AO/ASIF type C1), the accuracy was ±6 μm for translation and ±0.120° for rotation. The use of 3-D computer modelling can provide a method for preoperative planning of RSA studies in complex fractures of the distal radius and in other clinical situations in which the RSA method is applicable.
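RSA ultimately reduces to estimating the rigid-body rotation and translation of a marker set between exams. The paper uses standard RSA software; as an illustration of the underlying computation only, here is the Kabsch/SVD algorithm applied to noise-free synthetic markers (units arbitrary).

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t mapping point set p
    onto q (Kabsch algorithm via SVD)."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                  # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

# Synthetic markers of a fragment before/after a known small motion:
# a 2 degree rotation about z plus a small in-plane translation.
rng = np.random.default_rng(1)
markers = rng.random((5, 3))
theta = np.deg2rad(2.0)
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = markers @ rz.T + np.array([0.025, 0.0, 0.0])
r, t = rigid_transform(markers, moved)
```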
Apirakviriya, Chayanis; Rungruxsirivorn, Tassawan; Phupong, Vorapong; Wisawasukmongchol, Wirach
2016-05-01
To assess diagnostic accuracy of 3D transvaginal ultrasound (3D-TVS) compared with hysteroscopy in detecting uterine cavity abnormalities in infertile women. This prospective observational cross-sectional study was conducted during the July 2013 to December 2013 study period. Sixty-nine women with infertility were enrolled. In the mid to late follicular phase of each subject's menstrual cycle, 3D transvaginal ultrasound and hysteroscopy were performed on the same day in each patient. Hysteroscopy is widely considered to be the gold standard method for investigation of the uterine cavity. Uterine cavity characteristics and abnormalities were recorded. Diagnostic accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios were evaluated. Hysteroscopy was successfully performed in all subjects. Hysteroscopy diagnosed pathological findings in 22 of 69 cases (31.8%). There were 18 endometrial polyps, 3 submucous myomas, and 1 septate uterus. Three-dimensional transvaginal ultrasound in comparison with hysteroscopy had 84.1% diagnostic accuracy, 68.2% sensitivity, 91.5% specificity, 79% positive predictive value, and 86% negative predictive value. The positive and negative likelihood ratios were 8.01 and 0.3, respectively. 3D-TVS successfully detected every case of submucous myoma and uterine anomaly. For detection of endometrial polyps, 3D-TVS had 61.1% sensitivity, 91.5% specificity, and 83.1% diagnostic accuracy. 3D-TVS demonstrated 84.1% diagnostic accuracy for detecting uterine cavity abnormalities in infertile women. A significant percentage of infertile patients had evidence of uterine cavity pathology. Hysteroscopy is, therefore, recommended for accurate detection and diagnosis of uterine cavity lesion. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
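The reported likelihood ratios follow directly from sensitivity and specificity; recomputing them from the rounded percentages reproduces the paper's 8.01 and 0.3 to within rounding:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios:
    +LR = sens / (1 - spec), -LR = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Values reported for 3D-TVS vs hysteroscopy (68.2% sens, 91.5% spec)
pos_lr, neg_lr = likelihood_ratios(0.682, 0.915)
```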
Olateju, Tolu; Begley, Joseph; Flanagan, Daniel; Kerr, David
2012-07-01
Most manufacturers of blood glucose monitoring equipment do not give advice regarding the use of their meters and strips onboard aircraft, and some airlines carry blood glucose testing equipment in the aircraft cabin medical bag. Previous studies using older blood glucose meters (BGMs) have shown conflicting results on the performance of both glucose oxidase (GOX)- and glucose dehydrogenase (GDH)-based meters at high altitude. The aim of our study was to evaluate the performance of four new-generation BGMs at sea level and at a simulated altitude equivalent to that used in the cabin of commercial aircraft. Blood glucose measurements obtained by two GDH and two GOX BGMs at sea level and a simulated altitude of 8000 feet in a hypobaric chamber were compared with measurements obtained using a YSI 2300 blood glucose analyzer as a reference method. Spiked venous blood samples of three different glucose levels were used. The accuracy of each meter was determined by calculating the percentage error of each meter compared with the YSI reference and was also assessed against standard International Organization for Standardization (ISO) criteria. Clinical accuracy was evaluated using the consensus error grid method. The percentage (standard deviation) error for GDH meters at sea level and altitude was 13.36% (8.83%; for meter 1) and 12.97% (8.03%; for meter 2) with p = .784, and for GOX meters was 5.88% (7.35%; for meter 3) and 7.38% (6.20%; for meter 4) with p = .187. There was variation in the number of times individual meters met the standard ISO criteria, ranging from 72% to 100%. Results from all four meters at both sea level and simulated altitude fell within zones A and B of the consensus error grid, using YSI as the reference. Overall, at simulated altitude, no differences were observed between the performance of GDH and GOX meters.
Overestimation of blood glucose concentration was seen with individual meters, but none of the results obtained would have led to a dangerous failure to detect and treat blood glucose errors or to treatment contrary to that actually required. © 2012 Diabetes Technology Society.
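A sketch of how the per-reading percentage error and the ISO criteria used above can be applied. The limits coded here are the commonly cited ISO 15197:2003 ones (within ±15 mg/dl of the reference below 75 mg/dl, within ±20% at or above), and all readings are hypothetical:

```python
def within_iso15197_2003(meter, reference):
    """True if one reading meets the ISO 15197:2003 accuracy limit:
    ±15 mg/dl below a 75 mg/dl reference, ±20% at or above it."""
    if reference < 75:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.20 * reference

def percent_error(meter, reference):
    """Signed percentage error of a meter reading versus the reference."""
    return 100.0 * (meter - reference) / reference

# hypothetical (meter, YSI reference) pairs in mg/dl
pairs = [(60, 70), (110, 100), (260, 200)]
met = [within_iso15197_2003(m, r) for m, r in pairs]
print(met)  # → [True, True, False]
print([round(percent_error(m, r), 1) for m, r in pairs])  # → [-14.3, 10.0, 30.0]
```

The third reading fails because a 30% error exceeds the ±20% band that applies at 200 mg/dl.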
Long-Term Performance of Readers Trained in Grading Crohn Disease Activity Using MRI.
Puylaert, Carl A J; Tielbeek, Jeroen A W; Bipat, Shandra; Boellaard, Thierry N; Nio, C Yung; Stoker, Jaap
2016-12-01
We aim to evaluate the long-term performance of readers who had participated in previous magnetic resonance imaging (MRI) reader training in grading Crohn disease activity. Fourteen readers (8 women; 12 radiologists, 2 residents; mean age 40; range 31-59), who had participated in a previous MRI reader training, participated in a follow-up evaluation after a mean interval of 29 months (range 25-34 months). Follow-up evaluation comprised 25 MRI cases of suspected or known Crohn disease patients with direct feedback; cases were identical to the evaluation set used in the initial reader training (of which readers were unaware). Grading accuracy, overstaging, and understaging were compared between training and follow-up using a consensus score by two experienced abdominal radiologists as the reference standard. In the follow-up evaluation, overall grading accuracy was 73% (95% confidence interval [CI]: 62%-81%), which was comparable to reader training grading accuracy (72%, 95% CI: 61%-80%) (P = .66). Overstaging decreased significantly from 19% (95% CI: 12%-27%) to 13% (95% CI: 8%-21%) between training and follow-up (P = .03), whereas understaging increased significantly from 9% (95% CI: 4%-21%) to 14% (95% CI: 7%-26%) (P < .01). Readers have consistent long-term accuracy for grading Crohn disease activity after case-based reader training with direct feedback. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
GOCI image enhancement using an MTF compensation technique for coastal water applications.
Oh, Eunsong; Choi, Jong-Kuk
2014-11-03
The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without loss of accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted a point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis for finding the optimal degradation model, a σ value of 0.4 was determined to be optimal. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degradation of the accuracy of the ocean color product. GOCI images enhanced by MTF compensation are expected to enable recognition of small-scale ocean features in coastal areas with sharpened geometric performance.
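The compensation pipeline described (a Gaussian-modelled PSF plus a Wiener filter) can be sketched generically as below. The synthetic image, the σ value, and the noise-to-signal ratio are illustrative choices for the demonstration, not the GOCI parameters:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian point spread function, normalized to unit sum."""
    yy = np.arange(shape[0]) - shape[0] // 2
    xx = np.arange(shape[1]) - shape[1] // 2
    gx, gy = np.meshgrid(xx, yy)
    psf = np.exp(-(gx**2 + gy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_restore(image, psf, nsr=1e-4):
    """Wiener deconvolution in the Fourier domain: F = G * H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# blur a synthetic edge image with the Gaussian PSF, then restore it
img = np.zeros((64, 64))
img[:, 32:] = 1.0
psf = gaussian_psf(img.shape, sigma=1.0)
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_restore(blurred, psf)
print(np.abs(blurred - img).mean(), np.abs(restored - img).mean())
```

The noise-to-signal ratio term plays the role of the degradation-model tuning the paper performs: too large a value suppresses genuine detail, too small a value amplifies noise.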
Image analysis software versus direct anthropometry for breast measurements.
Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako
2014-10-01
To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
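The accuracy figures quoted above rest on reprojection error, the standard calibration quality metric: the distance between detected image points and the points reprojected through the fitted camera model. A minimal computation, with hypothetical point coordinates:

```python
import numpy as np

def rms_reprojection_error(observed, reprojected):
    """RMS Euclidean distance (pixels) between detected image points and
    points reprojected through the calibrated camera model."""
    d = observed - reprojected
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

# hypothetical detections vs. model reprojections, in pixels
obs = np.array([[100.0, 200.0], [150.0, 250.0], [300.0, 120.0]])
rep = np.array([[100.1, 199.9], [149.8, 250.2], [300.0, 120.3]])
print(round(rms_reprojection_error(obs, rep), 4))  # → 0.2517
```

A five-fold accuracy improvement, as claimed for active targets, corresponds to this number dropping by a factor of five for the same point set.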
Diagnostic needle arthroscopy and the economics of improved diagnostic accuracy: a cost analysis.
Voigt, Jeffrey D; Mosier, Michael; Huber, Bryan
2014-10-01
Hundreds of thousands of surgical arthroscopy procedures are performed annually in the United States (US) based on MRI findings. These MRI findings are sometimes equivocal or indeterminate, and because of this clinicians commonly perform arthroscopy in order not to miss pathology. Recently, a less invasive needle arthroscopy system has been introduced that is commonly performed in the physician office setting and that may help improve the accuracy of diagnostic findings. This in turn may prevent unnecessary follow-on arthroscopy procedures from being performed. The purpose of this analysis is to determine whether the in-office diagnostic needle arthroscopy system can provide cost savings by reducing unnecessary follow-on arthroscopy procedures. Data obtained from a recent trial and from a systematic review were used to compare the accuracy of MRI and VisionScope needle arthroscopy (VSI) with standard arthroscopy (the gold standard). The resultant false positive and false negative findings were then used to evaluate the costs of follow-on procedures. These differences were then modeled for the US patient population diagnosed and treated for meniscal knee pathology (the most common disorder) to determine whether a technology such as VSI could save the US healthcare system money. Data on surgical arthroscopy procedures in the US for meniscal knee pathology were used (calendar year [CY] 2010). The costs of performing diagnostic and surgical arthroscopy procedures (using CY 2013 Medicare reimbursement amounts), costs associated with false negative findings, and the costs of treating associated complications arising from diagnostic and therapeutic arthroscopy procedures were assessed. 
In patients presenting with medial meniscal pathology (International Classification of Diseases, 9th edition, Clinical Modification [ICD9CM] diagnosis 836.0), VSI in place of MRI (standard of care) resulted in a net cost savings to the US system of US$115-US$177 million (CY 2013) (use of systematic review and study data, respectively). In patients presenting with lateral meniscus pathology (ICD9CM 836.1), VSI in place of MRI cost the healthcare system an additional US$14-US$97 million (CY 2013). Overall aggregate savings for meniscal (lateral plus medial) pathology were identified in representative care models, along with more appropriate care as fewer patients were exposed to higher-risk surgical procedures. Since in-office arthroscopy is significantly more accurate, patients can be treated more appropriately and the US healthcare system can save money, especially in medial meniscal pathology.
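The cost logic of an analysis like this can be sketched as an expected per-patient cost that charges each diagnostic pathway for its false positives (unnecessary surgery) and false negatives (missed pathology). Every number below is illustrative, not a figure from the study:

```python
def expected_cost(test_cost, prevalence, sensitivity, specificity,
                  fp_followup_cost, fn_miss_cost):
    """Expected per-patient cost of a diagnostic pathway:
    test cost, plus the chance-weighted cost of a false-positive
    follow-on procedure and of a missed (false-negative) lesion."""
    p_fp = (1 - prevalence) * (1 - specificity)
    p_fn = prevalence * (1 - sensitivity)
    return test_cost + p_fp * fp_followup_cost + p_fn * fn_miss_cost

# illustrative inputs only: costs in US$, accuracies as fractions
mri = expected_cost(500, 0.40, 0.90, 0.80, 4000, 3000)
vsi = expected_cost(600, 0.40, 0.97, 0.95, 4000, 3000)
print(round(mri, 2), round(vsi, 2))  # → 1100.0 756.0
```

With these made-up inputs, the more expensive but more accurate test wins because its lower false-positive and false-negative rates avoid costly downstream events, which is the mechanism the analysis describes.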
Thomas, Christoph; Brodoefel, Harald; Tsiflikas, Ilias; Bruckner, Friederike; Reimann, Anja; Ketelsen, Dominik; Drosch, Tanja; Claussen, Claus D; Kopp, Andreas; Heuschmid, Martin; Burgstahler, Christof
2010-02-01
To prospectively evaluate the influence of clinical pretest probability, assessed by the Morise score, on image quality and diagnostic accuracy in coronary dual-source computed tomography angiography (DSCTA). In 61 patients, DSCTA and invasive coronary angiography were performed. Subjective image quality and accuracy for stenosis detection (>50%) of DSCTA, with invasive coronary angiography as the gold standard, were evaluated. The influence of pretest probability on image quality and accuracy was assessed by logistic regression and chi-square testing. Correlations of image quality and accuracy with the Morise score were determined using linear regression. Thirty-eight patients were categorized into the high, 21 into the intermediate, and 2 into the low probability group. Accuracies for the detection of significant stenoses were 0.94, 0.97, and 1.00, respectively. Logistic regressions and chi-square tests showed statistically significant correlations between Morise score and image quality (P < .0001 and P < .001) and accuracy (P = .0049 and P = .027). Linear regression revealed a cutoff Morise score for a good image quality of 16 and a cutoff for a barely diagnostic image quality beyond the upper Morise scale. Pretest probability is a weak predictor of image quality and diagnostic accuracy in coronary DSCTA. A sufficient image quality for diagnostic images can be reached with all pretest probabilities. Therefore, coronary DSCTA might also be suitable for patients with a high pretest probability. Copyright 2010 AUR. Published by Elsevier Inc. All rights reserved.
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetric analysis (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second method used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the elementary geometrical shape model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.
Percutaneous CT-guided biopsy of the spine: results of 430 biopsies
Rimondi, Eugenio; Errani, Costantino; Bianchi, Giuseppe; Casadei, Roberto; Alberghini, Marco; Malaguti, Maria Cristina; Rossi, Giuseppe; Durante, Stefano; Mercuri, Mario
2008-01-01
Biopsies of lesions in the spine are often challenging procedures with significant risk of complications. CT-guided needle biopsies could lower these risks, but uncertainties still exist about their diagnostic accuracy. The aim of this retrospective study was to evaluate the diagnostic accuracy of CT-guided needle biopsies for bone lesions of the spine. We reviewed 430 core needle biopsies carried out over the past fifteen years at the authors' institute. Of the 430 biopsies performed, in 401 cases the correct diagnosis was made with the first CT-guided needle biopsy (93.3% accuracy rate). The highest accuracy rates were obtained in primary and secondary malignant lesions. Most false negative results were found in cervical lesions and in benign, pseudotumoral, inflammatory, and systemic pathologies. There were only 9 complications (5 transient pareses, 4 haematomas that resolved spontaneously), which had no influence on the treatment strategy or on the patient's outcome. In conclusion, this technique is reliable and safe and should be considered the gold standard for biopsies of the spine. PMID:18463900
Accuracy Assessment of Recent Global Ocean Tide Models around Antarctica
NASA Astrophysics Data System (ADS)
Lei, J.; Li, F.; Zhang, S.; Ke, H.; Zhang, Q.; Li, W.
2017-09-01
Due to the coverage limitation of T/P-series altimeters, the lack of bathymetric data under large ice shelves, and the inaccurate definitions of coastlines and grounding lines, the accuracy of ocean tide models around Antarctica is poorer than in deep oceans. Using tidal measurements from tide gauges, gravimetric data and GPS records, the accuracy of seven state-of-the-art global ocean tide models (DTU10, EOT11a, GOT4.8, FES2012, FES2014, HAMTIDE12, TPXO8) is assessed, as well as the most widely used conventional model, FES2004. Four regions (Antarctic Peninsula region, Amery ice shelf region, Filchner-Ronne ice shelf region and Ross ice shelf region) are reported separately. The standard deviations of the eight main constituents between the selected models are large in polar regions, especially under the big ice shelves, suggesting that the uncertainty in these regions remains large. Comparisons with in situ tidal measurements show that the most accurate model is TPXO8, and all models perform worst in the Weddell Sea and Filchner-Ronne ice shelf regions. The accuracy of tidal predictions around Antarctica is gradually improving.
Lee, Chau Hung; Haaland, Benjamin; Earnest, Arul; Tan, Cher Heng
2013-09-01
To determine whether positive oral contrast agents improve the accuracy of abdominopelvic CT compared with no, neutral or negative oral contrast agent. The literature was searched for studies evaluating the diagnostic performance of abdominopelvic CT with positive oral contrast agents against imaging with no, neutral or negative oral contrast agent. The meta-analysis reviewed studies correlating CT findings of blunt abdominal injury, with and without positive oral contrast agents, against surgical, autopsy or clinical outcome, allowing derivation of pooled sensitivity and specificity. A systematic review was performed on studies with a common design and reference standard. Thirty-two studies were divided into two groups. Group 1 comprised 15 studies comparing CT with and without positive oral contrast agents. Meta-analysis of five studies from group 1 showed no difference in sensitivity or specificity between CT with and without positive oral contrast agents. Group 2 comprised 17 studies comparing CT with positive and with neutral or negative oral contrast agents. Systematic review of 12 studies from group 2 indicated that neutral or negative oral contrast agents were as effective as positive oral contrast agents for bowel visualisation. There is no difference in accuracy between CT performed with positive oral contrast agents and CT with no, neutral or negative oral contrast agent. • There is no difference in the accuracy of CT with or without oral contrast agent. • There is no difference in the accuracy of CT with Gastrografin or water. • Omission of oral contrast, or use of a neutral or negative oral contrast agent, saves time and costs and decreases the risk of aspiration.
Schneller, Mikkel B; Pedersen, Mogens T; Gupta, Nidhi; Aadahl, Mette; Holtermann, Andreas
2015-03-13
We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen participants performed a standardized and semi-standardized protocol including seven daily life activity types, while having their EE measured by indirect calorimetry. Simultaneously, physical activity was quantified by an ActivPAL3, two ActiGraph GT3X+'s and an Actiheart. EE was estimated by the standard ActivPAL3 software (ActivPAL), ActiGraph GT3X+ (ActiGraph) and Actiheart (Actiheart), and by a combination of activity type recognition via Acti4 software and activity counts per minute (CPM) of either a hip- or thigh-worn ActiGraph GT3X+ (AGhip + Acti4 and AGthigh + Acti4). At group level, estimated physical activity EE by Actiheart (MSE = 2.05) and AGthigh + Acti4 (MSE = 0.25) did not differ significantly from EE measured by indirect calorimetry, whereas EE was significantly underestimated by ActiGraph, ActivPAL and AGhip + Acti4. AGthigh + Acti4 and Actiheart explained 77% and 45% of the individual variations in physical activity EE measured by indirect calorimetry, respectively. This study concludes that combining accelerometer data from a thigh-worn ActiGraph GT3X+ with activity type recognition improved the accuracy of activity-specific EE estimation against indirect calorimetry in semi-standardized settings, compared with previously validated methods using CPM only.
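The mean squared error (MSE) comparison against indirect calorimetry used above reduces to the following; the EE values are invented for illustration and are not data from the study:

```python
import numpy as np

def mse(estimated, reference):
    """Mean squared error of estimated EE against the reference
    (indirect calorimetry) measurements."""
    e = np.asarray(estimated) - np.asarray(reference)
    return float(np.mean(e**2))

reference = [5.2, 7.8, 3.1, 9.4]   # measured EE, illustrative units
method_a  = [5.0, 7.5, 3.3, 9.0]   # hypothetical close estimator
method_b  = [4.0, 6.2, 2.5, 7.9]   # hypothetical underestimating estimator
print(round(mse(method_a, reference), 4), round(mse(method_b, reference), 4))
# → 0.0825 1.6525
```

A systematic underestimation, as reported for the hip-worn methods, inflates MSE through a persistent bias term rather than through random scatter.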
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Kohlhase, Naja; Näppi, Janne J.; Hironaka, Toru; Ota, Junko; Ishida, Takayuki; Regge, Daniele; Yoshida, Hiroyuki
2016-03-01
Accurate electronic cleansing (EC) for CT colonography (CTC) enables the visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier is used to label the images into the regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For pilot evaluation, 384 volumes of interest (VOIs), which represented sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric for EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and the SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated less subtraction artifacts than did DE-EC and SE-EC. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts on non-cathartic ultra-low-dose DE-CTC images.
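The overlap-ratio metric can be read as a per-label intersection-over-union (Jaccard) averaged over labels; the abstract does not state whether Jaccard or Dice was used, so this sketch assumes the former, on a toy voxel array:

```python
import numpy as np

def mean_overlap_ratio(pred, ref, labels):
    """Mean per-label overlap |pred ∩ ref| / |pred ∪ ref| between a predicted
    labeling and a reference-standard labeling (Jaccard form)."""
    ratios = []
    for lab in labels:
        p, r = (pred == lab), (ref == lab)
        union = np.logical_or(p, r).sum()
        if union:
            ratios.append(np.logical_and(p, r).sum() / union)
    return float(np.mean(ratios))

# toy 1-D "voxel" labelings: 0 = lumen air, 1 = soft tissue, 2 = tagging
ref  = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
print(round(mean_overlap_ratio(pred, ref, labels=[0, 1, 2]), 4))  # → 0.5
```

A ratio of 1.0 per label means perfect agreement with the manually labeled reference; misclassified boundary voxels pull the mean down, which is how subtraction artifacts show up in this metric.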
Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use
Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil
2013-01-01
The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram matching classifiers were evaluated and compared to the standard nearest neighbor to mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram matching classifiers consistently performed better than the one based on the standard nearest neighbor to mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
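A minimal sketch of the histogram curve matching idea: instead of comparing an object's mean to class means, the object's full digital-number histogram is matched to per-class reference histograms, here by L1 distance. The class names and spectral values are synthetic, and the paper's exact matching metric may differ:

```python
import numpy as np

def histogram(values, bins, range_):
    """Normalized digital-number histogram, so objects of different pixel
    counts can be compared fairly."""
    h, _ = np.histogram(values, bins=bins, range=range_)
    return h / h.sum()

def classify_by_histogram(obj_values, class_histograms, bins=16, range_=(0, 255)):
    """Assign the class whose reference histogram is closest in L1 distance."""
    h = histogram(obj_values, bins, range_)
    dists = {c: np.abs(h - ref).sum() for c, ref in class_histograms.items()}
    return min(dists, key=dists.get)

# synthetic per-class pixel samples (digital numbers)
rng = np.random.default_rng(0)
water = rng.normal(40, 5, 500).clip(0, 255)
urban = rng.normal(160, 40, 500).clip(0, 255)
refs = {"water": histogram(water, 16, (0, 255)),
        "urban": histogram(urban, 16, (0, 255))}

obj = rng.normal(45, 6, 200).clip(0, 255)   # an unseen water-like object
print(classify_by_histogram(obj, refs))      # → water
```

Matching the whole histogram preserves shape information (skew, multimodality) that a single mean per object discards, which is the advantage the study reports.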
NASA Technical Reports Server (NTRS)
Woodring, D. G.; Nichols, S. A.; Swanson, R.
1979-01-01
During 1978 and 1979, an Air Force C-135 test aircraft was flown to various locations in the North and South Atlantic and Pacific Oceans for satellite communications experiments. A part of the equipment tested on the aircraft was the SEACOM spread spectrum modem. The SEACOM modem operated at X band frequency from the aircraft via the DSCS II satellite to a ground station. For data to be phased successfully, it was necessary to maintain independent time and frequency accuracy over relatively long periods of time (up to two weeks) on the aircraft and at the ground station. To achieve this goal, two Efratom atomic frequency standards were used. The performance of these frequency standards as used in the spread spectrum modem is discussed, including the effects of high relative velocity and synchronization, and the influence of the frequency standards on data performance. The aircraft environment, which includes extremes of temperature as well as long periods of shutdown followed by rapid warmup requirements, is also discussed.
Naveed, Mariam; Siddiqui, Ali A.; Kowalski, Thomas E.; Loren, David E.; Khalid, Ammara; Soomro, Ayesha; Mazhar, Syed M.; Yoo, Joseph; Hasan, Raza; Yalamanchili, Silpa; Tarangelo, Nicholas; Taylor, Linda J.; Adler, Douglas G.
2018-01-01
Background and Objectives: The ability to obtain adequate tissue from solid pancreatic lesions by EUS-guided sampling remains a challenge. The aim of this study was to compare the performance characteristics and safety of EUS-FNA for evaluating solid pancreatic lesions using the standard 22-gauge needle versus a novel EUS biopsy needle. Methods: This was a multicenter retrospective study of EUS-guided sampling of solid pancreatic lesions between 2009 and 2015. Patients underwent EUS-guided sampling with a 22-gauge SharkCore (SC) needle or a standard 22-gauge FNA needle. Technical success, performance characteristics of EUS-FNA, the number of needle passes required to obtain a diagnosis, diagnostic accuracy, and complications were compared. Results: A total of 1088 patients (mean age = 66 years; 49% female) with pancreatic masses underwent EUS-guided sampling with a 22-gauge SC needle (n = 115) or a standard 22-gauge FNA needle (n = 973). Technical success was 100%. The frequency of obtaining adequate cytology by EUS-FNA was similar with the SC and the standard needle (94.1% vs. 92.7%, respectively). The sensitivity, specificity, and diagnostic accuracy of EUS-FNA for tissue diagnosis were not significantly different between the two needles. Adequate sample collection leading to a definite diagnosis was achieved by the 1st, 2nd, and 3rd pass in 73%, 92%, and 98% of procedures using the SC needle and 20%, 37%, and 94% of procedures using the standard needle (P < 0.001), respectively. The median number of passes to obtain a tissue diagnosis using the SC needle was significantly lower than with the standard needle (1 and 3, respectively; P < 0.001). Conclusions: The EUS SC biopsy needle is safe and technically feasible for EUS-FNA of solid pancreatic mass lesions. Preliminary results suggest that the SC needle has a diagnostic yield similar to the standard EUS needle and significantly reduces the number of needle passes required to obtain a tissue diagnosis. PMID:29451167
Type testing of the Siemens Plessey electronic personal dosemeter.
Hirning, C R; Yuen, P S
1995-07-01
This paper presents the results of a laboratory assessment of the performance of a new type of personal dosimeter, the Electronic Personal Dosemeter made by Siemens Plessey Controls Limited. Twenty pre-production dosimeters and a reader were purchased by Ontario Hydro for the assessment. Tests were performed on radiological performance, including reproducibility, accuracy, linearity, detection threshold, energy response, angular response, neutron response, and response time. There were also tests on the effects of a variety of environmental factors, such as temperature, humidity, pulsed magnetic and electric fields, low- and high-frequency electromagnetic fields, light exposure, drop impact, vibration, and splashing. Other characteristics that were tested were alarm volume, clip force, and battery life. The test results were compared with the relevant requirements of three standards: an Ontario Hydro standard for personal alarming dosimeters, an International Electrotechnical Commission draft standard for direct reading personal dose monitors, and an International Electrotechnical Commission standard for thermoluminescence dosimetry systems for personal monitoring. In general, the performance of the Electronic Personal Dosemeter was found to be quite acceptable: it met most of the relevant requirements of the three standards. However, the following deficiencies were found: slow response time; sensitivity to high-frequency electromagnetic fields; poor resistance to dropping; and an alarm that was not loud enough. In addition, the response of the electronic personal dosimeter to low-energy beta rays may be too low for some applications. Problems were experienced with the reliability of operation of the pre-production dosimeters used in these tests.
Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar
2015-01-01
Objective: The present study was designed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both methods, over the range of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The % relative standard deviation values for method precision were found to be 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy of the methods was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and several formulations containing it as an ingredient. PMID:26681890
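The two validation statistics reported above, % relative standard deviation for precision and percentage recovery for accuracy, are computed as follows; the peak responses and spike amounts are illustrative numbers only:

```python
import statistics

def percent_rsd(measurements):
    """Relative standard deviation: 100 * sample SD / mean, the usual
    precision statistic for replicate injections."""
    return 100.0 * statistics.stdev(measurements) / statistics.mean(measurements)

def percent_recovery(found, added, baseline=0.0):
    """Recovery study statistic: 100 * (amount found - baseline) / amount spiked."""
    return 100.0 * (found - baseline) / added

peaks = [101.2, 99.8, 100.5, 100.9, 99.6]   # illustrative replicate peak responses
print(round(percent_rsd(peaks), 2))                       # → 0.69
print(round(percent_recovery(found=60.3, added=60.0), 1))  # → 100.5
```

Values like the reported 0.63–1.97% RSD and ~100% recovery indicate tight replicate precision and negligible systematic loss or gain of analyte.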
Black Carbon Measurement Intercomparison during the 2017 Black Carbon Shootout
NASA Astrophysics Data System (ADS)
Shingler, T.; Moore, R.; Winstead, E.; Robinson, C. E.; Shook, M.; Crosbie, E.; Ziemba, L. D.; Thornhill, K. L., II; Sorooshian, A.; Anderson, B. E.
2017-12-01
The NASA Langley Aerosol Research Group (LARGE) provides multiple black carbon (BC) based aerosol particle measurements and engine emission factors for airborne and ground-based field campaigns and laboratory studies. These datasets are made available to the general public, where accuracy is key to enable further use in environmental assessments, models, and validation studies. Studies are needed to establish the accuracy and precision of BC measurements of particles with varying physical properties using a variety of detection techniques. Work is also needed to develop calibration and correction schemes for new sensors and to link these measurements to heritage instruments on which our understanding of BC emissions and characteristics has been established. A BC measurement intercomparison was performed at Langley Research Center using particles generated from a mini-CAST (Jing) diffusion flame soot generator. The particles were passed to instruments measuring optical absorption, extinction, scattering and black carbon mass. Filter-based measurements of optical absorption were performed using a PSAP (Radiance Research) and a TAP (BMI). Absorption was also measured using two photoacoustic-based instruments: the MSS-plus (AVL) and PASS-3 (DMT). Measurements of aerosol extinction were performed using three CAPS PM-ex (Aerodyne Research) instruments at multiple wavelengths. Two Artium LII-300 units (standard and high-sensitivity) were used to measure black carbon mass via laser incandescence. Black carbon measurements were correlated to mass collected concurrently on a filter and analyzed by OC/EC analysis (Sunset Labs). Black carbon quantification measurements are compared between instruments to assess agreement across platforms, using the manufacturers' calibration settings as well as after calibration to a single standard soot source (mini-CAST). 
Sampling was also performed from behind a Falcon aircraft at multiple thrust settings and downwind of runway at an international airport with commercial takeoffs and landings.
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, the extended methods being correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h²=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work showed that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is the method of choice for GS of threshold traits. PMID:23149458
Acquisition and evaluation of verb subcategorization resources for biomedicine.
Rimell, Laura; Lippincott, Thomas; Verspoor, Karin; Johnson, Helen L; Korhonen, Anna
2013-04-01
Biomedical natural language processing (NLP) applications that have access to detailed resources about the linguistic characteristics of biomedical language demonstrate improved performance on tasks such as relation extraction and syntactic or semantic parsing. Such applications are important for transforming the growing unstructured information buried in the biomedical literature into structured, actionable information. In this paper, we address the creation of linguistic resources that capture how individual biomedical verbs behave. We specifically consider verb subcategorization, or the tendency of verbs to "select" co-occurrence with particular phrase types, which influences the interpretation of verbs and identification of verbal arguments in context. There are currently a limited number of biomedical resources containing information about subcategorization frames (SCFs), and these are the result of either labor-intensive manual collation, or automatic methods that use tools adapted to a single biomedical subdomain. Either method may result in resources that lack coverage. Moreover, the quality of existing verb SCF resources for biomedicine is unknown, due to a lack of available gold standards for evaluation. This paper presents three new resources related to verb subcategorization frames in biomedicine, and four experiments making use of the new resources. We present the first biomedical SCF gold standards, capturing two different but widely-used definitions of subcategorization, and a new SCF lexicon, BioCat, covering a large number of biomedical sub-domains. We evaluate the SCF acquisition methodologies for BioCat with respect to the gold standards, and compare the results with the accuracy of the only previously existing automatically-acquired SCF lexicon for biomedicine, the BioLexicon. Our results show that the BioLexicon has greater precision while BioCat has better coverage of SCFs. 
Finally, we explore the definition of subcategorization using these resources and its implications for biomedical NLP. All resources are made publicly available. The SCF resources we have evaluated still show considerably lower accuracy than that reported with general English lexicons, demonstrating the need for domain- and subdomain-specific SCF acquisition tools for biomedicine. Our new gold standards reveal major differences when annotators use the different definitions. Moreover, evaluation of BioCat yields major differences in accuracy depending on the gold standard, demonstrating that the definition of subcategorization adopted will have a direct impact on perceived system accuracy for specific tasks. Copyright © 2013 Elsevier Inc. All rights reserved.
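The precision/coverage comparison between BioCat and the BioLexicon reduces to set overlap with a gold standard; a minimal sketch (the verb and frame labels are hypothetical, not drawn from the actual lexicons):

```python
# Evaluating an acquired SCF lexicon against a gold standard (sketch):
# precision = fraction of acquired frames that appear in the gold standard,
# recall/coverage = fraction of gold-standard frames that were acquired.
def precision_recall(acquired, gold):
    acquired, gold = set(acquired), set(gold)
    tp = len(acquired & gold)
    precision = tp / len(acquired) if acquired else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical frames for one verb (labels illustrative only).
gold = {"NP", "NP_PP", "NP_SCOMP"}
acquired = {"NP", "NP_PP", "PP"}
p, r = precision_recall(acquired, gold)
print(round(p, 2), round(r, 2))  # → 0.67 0.67
```

In practice such scores would be computed per verb and averaged, and the numbers shift with the subcategorization definition adopted, which is exactly the sensitivity the paper reports.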
Optimetrics for Precise Navigation
NASA Technical Reports Server (NTRS)
Yang, Guangning; Heckler, Gregory; Gramling, Cheryl
2017-01-01
Optimetrics for Precise Navigation will be implemented on existing optical communication links, with ranging and Doppler measurements conducted over the communication data frames and clock. The measurement accuracy is two orders of magnitude better than TDRSS. The high optical carrier frequency enables (1) immunity from the ionospheric and interplanetary plasma noise floor, which limits the performance of RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on CubeSats and deep-space smallsats. High optical pointing precision provides spacecraft orientation; minimal additional hardware is needed to implement precise optimetrics over the optical communication link; and continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestermanov, K.E.; Vasiliev, A.N; /Serpukhov, IHEP
2005-12-01
A precise measurement of the angle α in the CKM triangle is very important for a complete test of the Standard Model. A theoretically clean method to extract α is provided by B0 → ρπ decays. Monte Carlo simulations to obtain the BTeV reconstruction efficiency and to estimate the signal-to-background ratio for these decays were performed. Finally, a time-dependent Dalitz plot analysis, using the isospin amplitude formalism for tree and penguin contributions, was carried out. It was shown that in one year of data taking BTeV could achieve an accuracy on α better than 5°.
Validation of a quantized-current source with 0.2 ppm uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Friederike; Fricke, Lukas, E-mail: lukas.fricke@ptb.de; Scherer, Hansjörg
2015-09-07
We report on high-accuracy measurements of quantized current, sourced by a tunable-barrier single-electron pump at frequencies f up to 1 GHz. The measurements were performed with an ultrastable picoammeter instrument, traceable to the Josephson and quantum Hall effects. Current quantization according to I = ef, with e being the elementary charge, was confirmed at f = 545 MHz with a total relative uncertainty of 0.2 ppm, improving the state of the art by about a factor of 5. The accuracy of a possible future quantum current standard based on single-electron transport was experimentally validated to be better than the best (indirect) realization of the ampere within the present SI.
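The quantization law I = ef fixes the sourced current; a quick back-of-the-envelope check at the validated drive frequency (using the exact elementary-charge value of the 2019 SI redefinition, which post-dates this paper):

```python
# Ideal quantized current from a single-electron pump: I = e * f.
e = 1.602176634e-19  # elementary charge, C (exact in the revised SI)
f = 545e6            # pump drive frequency, Hz
I = e * f            # ideal quantized current, A
print(round(I * 1e12, 2))  # current in pA → 87.32
```

A current below 100 pA validated at 0.2 ppm illustrates why an ultrastable, quantum-traceable picoammeter was essential.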
Tsai, I-Lin; Kuo, Ching-Hua; Sun, Hsin-Yun; Chuang, Yu-Chung; Chepyala, Divyabharathi; Lin, Shu-Wen; Tsai, Yun-Jung
2017-10-25
Outbreaks of multidrug-resistant Gram-negative bacterial infections have been reported worldwide. Colistin, an antibiotic with known nephrotoxicity and neurotoxicity, is now being used to treat multidrug-resistant Gram-negative strains. In this study, we applied an on-spot internal standard addition approach coupled with an ultra-high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify colistin A and B from dried blood spots (DBSs). Only 15 μL of whole blood was required for each sample. An internal standard with the same extraction recovery as colistin was added to the spot before sample extraction for accurate quantification. Formic acid in water (0.15%) with an equal volume of acetonitrile (50:50, v/v) was used as the extraction solution. With the optimized extraction process and LC-MS/MS conditions, colistin A and B could be quantified from a DBS with respective limits of quantification of 0.13 and 0.27 μg/mL, and the retention times were <2 min. The relative standard deviations of within-run and between-run precisions for peak area ratios were all <17.3%. Accuracies were 91.5-111.2% for the lower limit of quantification and the low, medium, and high QC samples. The stability of the easily hydrolyzed prodrug, colistin methanesulfonate, was investigated in DBSs. Less than 4% of the prodrug was found to be hydrolyzed in DBSs at room temperature after 48 h. The developed method applied an on-spot internal standard addition approach, which benefited the precision and accuracy. Results showed that DBS sampling coupled with the sensitive LC-MS/MS method has the potential to be an alternative approach for colistin quantification, where the bias from prodrug hydrolysis in liquid samples is decreased. Copyright © 2017 Elsevier B.V. All rights reserved.
Accuracy of smartphone apps for heart rate measurement.
Coppetti, Thomas; Brauchlin, Andreas; Müggler, Simon; Attinger-Toller, Adrian; Templin, Christian; Schönrath, Felix; Hellermann, Jens; Lüscher, Thomas F; Biaggi, Patric; Wyss, Christophe A
2017-08-01
Background Smartphone manufacturers offer mobile health monitoring technology to their customers, including apps using the built-in camera for heart rate assessment. This study aimed to test the diagnostic accuracy of such heart rate measuring apps in clinical practice. Methods The feasibility and accuracy of measuring heart rate was tested on four commercially available apps using both iPhone 4 and iPhone 5. 'Instant Heart Rate' (IHR) and 'Heart Fitness' (HF) work with contact photoplethysmography (contact of fingertip to built-in camera), while 'Whats My Heart Rate' (WMH) and 'Cardiio Version' (CAR) work with non-contact photoplethysmography. The measurements were compared to electrocardiogram and pulse oximetry-derived heart rate. Results Heart rate measurement using app-based photoplethysmography was performed on 108 randomly selected patients. The electrocardiogram-derived heart rate correlated well with pulse oximetry (r = 0.92), IHR (r = 0.83) and HF (r = 0.96), but somewhat less with WMH (r = 0.62) and CAR (r = 0.60). The accuracy of app-measured heart rate as compared to electrocardiogram, reported as mean absolute error (in bpm ± standard error), was 2 ± 0.35 (pulse oximetry), 4.5 ± 1.1 (IHR), 2 ± 0.5 (HF), 7.1 ± 1.4 (WMH) and 8.1 ± 1.4 (CAR). Conclusions We found substantial performance differences between the four studied heart rate measuring apps. The two contact photoplethysmography-based apps had higher feasibility and better accuracy for heart rate measurement than the two non-contact photoplethysmography-based apps.
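The two agreement figures reported (Pearson r and mean absolute error) can be computed as below; the heart rate values are invented illustration data, not study measurements:

```python
import math

# Agreement metrics of the kind used in the study (sketch): Pearson
# correlation and mean absolute error between app-measured and
# ECG-derived heart rates, both in beats per minute.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_abs_error(x, y):
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

ecg = [62, 75, 88, 95, 110]   # made-up reference readings
app = [60, 78, 85, 97, 108]   # made-up app readings
print(round(pearson_r(ecg, app), 3), mean_abs_error(ecg, app))  # → 0.989 2.4
```

Note that a high r can coexist with a clinically meaningful bias, which is why the study reports both statistics.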
Abawajy, Jemal; Kelarev, Andrei; Chowdhury, Morshed U; Jelinek, Herbert F
2016-01-01
Blood biochemistry attributes form an important class of tests, routinely collected several times per year for many patients with diabetes. The objective of this study is to investigate the role of blood biochemistry in improving the predictive accuracy of the diagnosis of cardiac autonomic neuropathy (CAN) progression. Blood biochemistry contributes to CAN, and so it is a causative factor that can provide additional power for the diagnosis of CAN, especially in the absence of a complete set of Ewing tests. We introduce automated iterative multitier ensembles (AIME) and investigate their performance in comparison to base classifiers and standard ensemble classifiers for blood biochemistry attributes. AIME incorporate diverse ensembles into several tiers simultaneously and combine them into one automatically generated integrated system, so that one ensemble acts as an integral part of another. We carried out extensive experimental analysis using large datasets from the Diabetes Screening Research Initiative (DiScRi) project. The results of our experiments show that several blood biochemistry attributes can be used to supplement the Ewing battery for the detection of CAN in situations where one or more of the Ewing tests cannot be completed because of the individual difficulties faced by each patient in performing the tests. The results show that AIME provide higher accuracy as a multitier CAN classification paradigm. The best predictive accuracy of 99.57% was obtained by the AIME combining Decorate on the top tier with bagging on the middle tier, based on random forest. Practitioners can use these findings to increase the accuracy of CAN diagnosis.
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
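To give the flavor of such a calculation, here is the textbook two-proportion sample-size formula under a normal approximation. This is not the paper's derivation (which additionally models reference standard quality and within-subject correlation); α = 0.05 two-sided and 80% power are hardcoded via their z-values, and the accuracy figures are illustrative:

```python
import math

# Classical two-proportion sample size (sketch), for detecting a difference
# in accuracy (proportion of voxels matching the reference standard)
# between two algorithms treated as independent groups.
def sample_size(p1, p2):
    z_a, z_b = 1.959964, 0.841621  # z for alpha/2 = 0.025 and for 80% power
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)  # per group

# Detecting a two-point difference in accuracy (0.90 vs 0.92):
print(sample_size(0.90, 0.92))  # → 3213
```

The large n here reflects the independence assumption; exploiting paired, per-subject comparisons (as the paper does) shrinks the required sample substantially.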
St-Germain, G; Lapierre, S; Tessier, D
1989-01-01
We compared the accuracy and precision of two microbiological methods and one high-pressure liquid chromatography (HPLC) procedure used to measure the concentrations of flucytosine in serum. On the basis of an analysis of six standards, all methods were judged reliable within acceptable limits for clinical use. With the biological methods, a slight loss of linearity was observed in the 75- to 100-micrograms/ml range. Compared with the bioassays, the HPLC method did not present linearity problems and was more precise and accurate in the critical zone of 100 micrograms/ml. On average, results obtained with patient sera containing 50 to 100 micrograms of flucytosine per ml were 10.6% higher with the HPLC method than with the bioassays. Standards for the biological assays may be prepared in serum or water. PMID:2802566
Liu, Xiao-Qi; Peng, Dan-Hong; Wang, Yan-Ping; Xie, Rong; Chen, Xin-Lin; Yu, Chun-Quan; Li, Xian-Tao
2018-05-03
Phlegm and blood stasis syndrome (PBSS) is one of the main syndromes in coronary heart disease (CHD). Syndromes of Chinese medicine (CM) lack quantitative, easily implemented diagnostic standards. To quantify and standardize the diagnosis of PBSS, scales are usually applied. This study aims to evaluate the diagnostic accuracy of the CM diagnosis scale of PBSS in CHD. Six hundred patients with stable angina pectoris of CHD, 300 in the case group and 300 in the control group, will be recruited from 5 hospitals across China. Diagnosis by 2 experts will be considered the "gold standard". The study design consists of 2 phases: a pilot test is used to evaluate the reliability and validity, and a diagnostic test is used to assess the diagnostic accuracy of the scale, including sensitivity, specificity, likelihood ratio, and area under the receiver operator characteristic (ROC) curve. This study will evaluate the diagnostic accuracy of the CM diagnosis scale of PBSS in CHD. The consensus of 2 experts may not be ideal as a "gold standard", and itself still requires further study. (No. ChiCTR-OOC-15006599).
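The planned diagnostic-test metrics follow directly from a 2×2 table of scale result against the expert gold standard; a sketch with made-up counts (not trial data):

```python
# Diagnostic accuracy metrics from a 2x2 table (sketch): tp/fp/fn/tn are
# counts of scale-positive/negative results among gold-standard
# cases and controls.
def diagnostics(tp, fp, fn, tn):
    sens = tp / (tp + fn)        # sensitivity
    spec = tn / (tn + fp)        # specificity
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Illustrative counts for 300 cases and 300 controls.
sens, spec, lr_pos, lr_neg = diagnostics(tp=240, fp=45, fn=60, tn=255)
print(round(sens, 2), round(spec, 2), round(lr_pos, 2), round(lr_neg, 2))
# → 0.8 0.85 5.33 0.24
```

The ROC area would then summarize the sensitivity/specificity trade-off across all possible scale cut-offs.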
Factors influencing use of an e-health website in a community sample of older adults.
Czaja, Sara J; Sharit, Joseph; Lee, Chin Chin; Nair, Sankaran N; Hernández, Mario A; Arana, Neysarí; Fu, Shih Hua
2013-01-01
The use of the internet as a source of health information and link to healthcare services has raised concerns about the ability of consumers, especially vulnerable populations such as older adults, to access these applications. This study examined the influence of training on the ability of adults (aged 45+ years) to use the Medicare.gov website to solve problems related to health management. The influence of computer experience and cognitive abilities on performance was also examined. Seventy-one participants, aged 47-92, were randomized into a Multimedia training, Unimodal training, or Cold Start condition and completed three healthcare management problems. MEASUREMENT AND ANALYSES: Computer/internet experience was measured via questionnaire, and cognitive abilities were assessed using standard neuropsychological tests. Performance metrics included measures of navigation, accuracy and efficiency. Data were analyzed using analysis of variance, χ² and regression techniques. The data indicate that there was no difference among the three conditions on measures of accuracy, efficiency, or navigation. However, results of the regression analyses showed that, overall, people who received training performed better on the tasks, as evidenced by greater accuracy and efficiency. Performance was also significantly influenced by prior computer experience and cognitive abilities. Participants with more computer experience and higher cognitive abilities performed better. The findings indicate that training, experience, and abilities are important when using complex health websites. However, training alone is not sufficient. The complexity of web content needs to be considered to ensure successful use of these websites by those with lower abilities.
Colbert-Getz, Jorie M; Fleishman, Carol; Jung, Julianna; Shilkofski, Nicole
2013-01-01
Research suggests that medical students are not accurate in self-assessment, but it is not clear whether students over- or underestimate their skills or how certain characteristics correlate with accuracy in self-assessment. The goal of this study was to determine the effect of gender and anxiety on accuracy of students' self-assessment and on actual performance in the context of a high-stakes assessment. Prior to their fourth year of medical school, two classes of medical students at Johns Hopkins University School of Medicine completed a required clinical skills exam in fall 2010 and 2011, respectively. Two hundred two students rated their anxiety in anticipation of the exam and predicted their overall scores in the history taking and physical examination performance domains. A self-assessment deviation score was calculated by subtracting each student's predicted score from his or her score as rated by standardized patients. When students self-assessed their data gathering performance, there was a weak negative correlation between their predicted scores and their actual scores on the examination. Additionally, there was an interaction effect of anxiety and gender on both self-assessment deviation scores and actual performance. Specifically, females with high anxiety were more accurate in self-assessment and achieved higher actual scores compared with males with high anxiety. No differences by gender emerged for students with moderate or low anxiety. Educators should take into account not only gender but also the role of emotion, in this case anxiety, when planning interventions to help improve accuracy of students' self-assessment.
2017-01-01
Background As commercially available activity trackers are being utilized in clinical trials, the research community remains uncertain about reliability of the trackers, particularly in studies that involve walking aids and low-intensity activities. While these trackers have been tested for reliability during walking and running activities, there has been limited research on validating them during low-intensity activities and walking with assistive tools. Objective The aim of this study was to (1) determine the accuracy of 3 Fitbit devices (ie, Zip, One, and Flex) at different wearing positions (ie, pants pocket, chest, and wrist) during walking at 3 different speeds, 2.5, 5, and 8 km/h, performed by healthy adults on a treadmill; (2) determine the accuracy of the mentioned trackers worn at different sites during activities of daily living; and (3) examine whether intensity of physical activity (PA) impacts the choice of optimal wearing site of the tracker. Methods We recruited 15 healthy young adults to perform 6 PAs while wearing 3 Fitbit devices (ie, Zip, One, and Flex) on their chest, pants pocket, and wrist. The activities include walking at 2.5, 5, and 8 km/h, pushing a shopping cart, walking with aid of a walker, and eating while sitting. We compared the number of steps counted by each tracker with gold standard numbers. We performed multiple statistical analyses to compute descriptive statistics (ie, ANOVA test), intraclass correlation coefficient (ICC), mean absolute error rate, and correlation by comparing the tracker-recorded data with that of the gold standard. Results All the 3 trackers demonstrated good-to-excellent (ICC>0.75) correlation with the gold standard step counts during treadmill experiments. The correlation was poor (ICC<0.60), and the error rate was significantly higher in walker experiment compared to other activities. There was no significant difference between the trackers and the gold standard in the shopping cart experiment. 
The wrist-worn tracker, Flex, erroneously registered steps during eating (P<.01). The chest was identified as the most promising wearing site for capturing steps in more intense activities, while the wrist was the optimal wearing site in less intense activities. Conclusions This feasibility study focused on 6 PAs and demonstrated that Fitbit trackers were most accurate when walking on a treadmill and least accurate during walking with a walking aid and for low-intensity activities. This may suggest excluding participants with assistive devices from studies that focus on PA interventions using commercially available trackers. This study also indicates that the wearing site of the tracker is an important factor affecting accuracy. A larger-scale study with a more diverse population, various activity tracker vendors, and a larger activity set is warranted to generalize our results. PMID:28801304
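The per-tracker error rate against gold-standard counts can be sketched as a mean absolute error rate; the step counts below are invented illustration values, not study data:

```python
# Step-count agreement metric (sketch): mean absolute error rate of a
# tracker against manually counted gold-standard steps, averaged
# over trials.
def mean_error_rate(tracker, gold):
    return sum(abs(t - g) / g for t, g in zip(tracker, gold)) / len(gold)

gold = [500, 510, 495]      # manually counted steps per trial
tracker = [480, 515, 470]   # tracker-recorded steps per trial
print(round(mean_error_rate(tracker, gold), 3))  # → 0.033
```

A complementary intraclass correlation coefficient, as used in the study, would additionally capture systematic bias between tracker and gold standard.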
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold: we describe an experimental methodology using a data structure called the debugging graph, and we apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, the fact that a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further, we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but those rates can be affected by the particular debugging stage at which they are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Performance analysis of the OneTouch UltraVue blood glucose monitoring system.
Chang, Anna; Orth, Alice; Le, Bryan; Menchavez, Perla; Miller, Lupe
2009-09-01
OneTouch UltraVue is a new meter for self-monitoring of blood glucose that includes a color display, used-strip ejector, and no-button interface. The system uses an electrochemical biosensor technology based on glucose oxidase chemistry to detect glucose concentrations from 20 to 600 mg/dl (1.1 to 33.3 mmol/liter). Accuracy and reproducibility were evaluated over a wide range of glucose concentrations according to standard criteria. Clinical accuracy was assessed by health care providers (HCPs) in two studies and by diabetes patients in the second study. Reference glucose levels were determined by a YSI 2300 analyzer. Same-day reproducibility and day-to-day reproducibility were also evaluated. In the accuracy studies, 99.7% and 98.7% of tests by HCPs and 97.0% of tests by patients were within ±15 mg/dl (±0.8 mmol/liter) of the YSI reference for blood glucose <75 mg/dl (<4.2 mmol/liter), and within ±20% for blood glucose ≥75 mg/dl (≥4.2 mmol/liter), respectively. Consensus error grid analysis showed that 99.7% and 95.3% of tests by HCPs and 97.0% of tests by patients fell within zone A (i.e., has no effect on clinical action); all other results were in zone B (i.e., altered clinical action, little or no effect on clinical outcome). In the reproducibility studies, the standard deviation was <1.5 mg/dl (<0.1 mmol/liter) for glucose concentrations <100 mg/dl (<5.6 mmol/liter), and the coefficient of variation was <2% for concentrations ≥100 mg/dl (≥5.6 mmol/liter). OneTouch UltraVue meets standard acceptability criteria for accuracy and reproducibility across a wide range of glucose concentrations. Its simple interface and lack of contact with used strips make it a viable option for older patients and their caregivers. © 2009 Diabetes Technology Society.
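The per-reading acceptability rule applied in the accuracy studies (within ±15 mg/dl of the YSI reference below 75 mg/dl, within ±20% at or above it) can be expressed directly; the example readings are invented:

```python
# Per-reading acceptability check used in ISO 15197:2003-style accuracy
# criteria (sketch): an absolute band at low glucose, a relative band
# otherwise. Values in mg/dl.
def within_criteria(meter, reference):
    if reference < 75:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.20 * reference

print(within_criteria(68, 60))    # low range: |68-60| = 8 <= 15, prints True
print(within_criteria(250, 200))  # high range: |250-200| = 50 > 40, prints False
```

The study statistic is then simply the fraction of readings for which this check passes, reported separately for HCPs and patients.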
Evaluation of reliability and validity of three dental color-matching devices.
Tsiliagkou, Aikaterini; Diamantopoulou, Sofia; Papazoglou, Efstratios; Kakaboura, Afrodite
2016-01-01
To assess the repeatability and accuracy of three dental color-matching devices under standardized and freehand measurement conditions. Two shade guides (Vita Classical A1-D4, Vita; and Vita Toothguide 3D-Master, Vita) and three color-matching devices (Easyshade, Vita; SpectroShade, MHT Optic Research; and ShadeVision, X-Rite) were used. Five shade tabs were selected from the Vita Classical A1-D4 (A2, A3.5, B1, C4, D3) and five from the Vita Toothguide 3D-Master (1M1, 2R1.5, 3M2, 4L2.5, 5M3) shade guides. Each shade tab was recorded 15 consecutive times with each device under two different measurement conditions (standardized and freehand). Both qualitative (color shade) and quantitative (L, a, and b) color characteristics were recorded. The color difference (ΔE) between each recorded value and the known values of the shade tab was calculated. The repeatability of each device was evaluated by the coefficient of variance. The accuracy of each device was determined by comparing the recorded values with the known values of the reference shade tab (one-sample t test; α = 0.05). The agreement between the recorded shade and the reference shade tab was calculated. The influence of the parameters (devices and conditions) on ΔE was investigated (two-way ANOVA). Comparison of the devices was performed with Bonferroni pairwise post-hoc analysis. Under standardized conditions, the repeatability of all three devices was very good, except for ShadeVision with Vita Classical A1-D4. Accuracy ranged from good to fair, depending on the device and the shade guide. Under freehand conditions, repeatability and accuracy for Easyshade and ShadeVision were negatively influenced, but not for SpectroShade, regardless of the shade guide. Based on the total of the color parameters assessed per device, SpectroShade was the most reliable of the three color-matching devices studied.
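The ΔE used here is, presumably, the Euclidean distance in the L, a, b color space between a device reading and the known values of the reference shade tab; a sketch with illustrative coordinates (not study data):

```python
import math

# Color difference between two (L, a, b) triples (sketch): the classic
# Delta-E, i.e. the Euclidean distance in Lab color space.
def delta_e(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

shade_tab = (76.0, 1.5, 18.0)  # known reference values (illustrative)
reading = (74.0, 2.5, 16.0)    # one device measurement (illustrative)
print(round(delta_e(shade_tab, reading), 2))  # → 3.0
```

Repeatability then corresponds to the spread of ΔE over the 15 repeated readings of a tab, and accuracy to its mean offset from zero.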
Eusebio, Lidia; Capelli, Laura; Sironi, Selena
2016-01-01
Despite initial enthusiasm towards electronic noses and their possible application in different fields, and quite a few promising results, several critical issues emerge from most published research studies, and, as a matter of fact, the diffusion of electronic noses in real-life applications is still very limited. In general, a first step towards the large-scale diffusion of an analysis method is standardization. The aim of this paper is to describe the experimental procedure adopted to evaluate electronic nose performance, with the final purpose of establishing minimum performance requirements, which is considered a first crucial step towards standardization in the specific case of electronic nose application for environmental odor monitoring at receptors. Based on the experimental results of performance testing of a commercialized electronic nose type with respect to three criteria (i.e., response invariability under variable atmospheric conditions, instrumental detection limit, and odor classification accuracy), it was possible to propose a logic that could be adopted for the definition of minimum performance requirements, according to the idea that these should be technologically achievable. PMID:27657086
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has proven effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of the target states. Various methods have been proposed for multiple-model smoothing in the literature. In this paper, a new smoothing method is proposed that performs forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of the target states. Simulation studies demonstrate the improved performance in a maneuvering-target scenario. Comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique for accounting for model switching during smoothing is key to improving the performance.
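Each mode-conditioned smoother in the backward pass uses the standard Kalman smoothing recursion, i.e., the Rauch-Tung-Striebel (RTS) backward sweep. A minimal single-mode sketch of that recursion (the interacting mixing across modes, which is the paper's contribution, is not shown):

```python
import numpy as np

def rts_smoother(F, xs_f, Ps_f, xs_p, Ps_p):
    """Rauch-Tung-Striebel backward pass for one mode-conditioned Kalman filter.

    F            : state-transition matrix
    xs_f, Ps_f   : filtered means and covariances from the forward pass
    xs_p, Ps_p   : one-step-ahead predicted means and covariances
    Returns smoothed means and covariances (improved but delayed estimates).
    """
    n = len(xs_f)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]   # last smoothed = last filtered
    for k in range(n - 2, -1, -1):
        G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])        # smoother gain
        xs_s[k] = xs_f[k] + G @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + G @ (Ps_s[k + 1] - Ps_p[k + 1]) @ G.T
    return xs_s, Ps_s
```

In the full IMM smoother sketched by the paper, one such recursion runs per motion model, and the interacting step mixes the mode-conditioned smoothed estimates using backward mode probabilities.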
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT) and requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physical correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuations from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method thus enables accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
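The separation idea can be illustrated as follows: under three-phase symmetry, a primary-side fluctuation moves all three phase measurements together, so it is captured by the first principal component, while a metering anomaly in one channel shows up in the residual. A toy simulation of that decomposition (the synthetic data, noise levels, and drift are illustrative assumptions, not the paper's algorithm or parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
common = 0.02 * rng.standard_normal(T)        # primary-side fluctuation, seen by all phases
noise = 0.001 * rng.standard_normal((T, 3))   # independent per-channel measurement noise
X = common[:, None] + noise                   # three-phase measurement deviations
X[T // 2:, 1] += 0.01                         # inject a metering drift on phase B only

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                                   # common-mode (primary fluctuation) direction
residual = Xc - np.outer(Xc @ pc1, pc1)       # variation PC1 cannot explain
stats = residual.std(axis=0)                  # per-channel anomaly statistic
print(stats.argmax())                         # index of the phase with anomalous deviation
```

The anomaly statistic for phase B dominates because its drift is orthogonal to the common-mode direction; monitoring changes in such statistics over time is the spirit of the evaluation described above.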
NASA Astrophysics Data System (ADS)
Calzetti, D.; Dickinson, M. E.; Bergeron, L. E.; Colina, L.
1998-12-01
We summarize the performance of the NICMOS instrument and discuss its measured sensitivity, photometric performance, and stability. We also present a method for removing an instrument artifact termed "pedestal", a bias instability that is present at a low level in most NICMOS images. The characteristics of dark frames are also discussed, in particular as they relate to pedestal correction. NICMOS is capable of achieving the advertised performance in most areas. As an example, typical 3-sigma detection limits for a 5-orbit observation with NIC2 are 1.47 mJy arcsec(-2) in F110W, 1.67 mJy arcsec(-2) in F160W, and 12.6 mJy arcsec(-2) in F222M. The absence of time-dependent backgrounds makes infrared photometry from NICMOS highly stable, reaching an accuracy of 2% or better. NICMOS absolute calibration has been accomplished with a combination of solar-analog stars and white dwarf standard stars and achieves 5% absolute photometry. An exception to these accuracies occurs for NIC3 at short wavelengths, where intra-pixel sensitivity variations produce variations in relative photometry as large as 20%.
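For comparison with other instruments, the quoted surface-brightness limits can be converted to AB magnitudes per arcsec² using the standard AB zero point of 3631 Jy; this conversion is a standard definition, not part of the summary above:

```python
import math

def ab_mag(f_mjy):
    """AB magnitude for a flux density given in mJy (zero point 3631 Jy)."""
    return -2.5 * math.log10(f_mjy * 1e-3 / 3631.0)

# Quoted 3-sigma NIC2 limits (mJy arcsec^-2) -> AB mag arcsec^-2
for band, f in [("F110W", 1.47), ("F160W", 1.67), ("F222M", 12.6)]:
    print(band, round(ab_mag(f), 2))
```

The same function applies to point-source flux densities; only the per-arcsec² interpretation changes for surface brightness.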