NASA Technical Reports Server (NTRS)
Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.
1975-01-01
This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes accurate and efficient performance evaluation possible.
Error Reduction Program [combustor performance evaluation codes]
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.
1985-01-01
The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation into 3D-TEACH of a scheme which is usually more accurate than the hybrid differencing method and never less accurate.
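For context, the hybrid differencing scheme that the study uses as its baseline can be sketched for a single interior cell of a 1D convection-diffusion equation (a minimal illustration under uniform fluxes, not the TEACH implementation): central differencing is retained where the cell Peclet number |Pe| = |F/D| is below 2, and first-order upwinding with diffusion dropped is used otherwise.

```python
def hybrid_coeffs(F, D):
    """Neighbour coefficients (aW, aE) for one interior cell of a 1D
    convection-diffusion equation under the hybrid scheme (Patankar form).
    F: convective mass flux (rho*u*A); D: diffusion conductance (Gamma*A/dx),
    both assumed uniform across the two cell faces.
    For |F/D| < 2 the max() selects the central-differencing coefficient;
    otherwise it selects pure upwinding with diffusion suppressed."""
    aE = max(-F, D - F / 2.0, 0.0)  # east (downstream for F > 0) neighbour
    aW = max(F, D + F / 2.0, 0.0)   # west (upstream for F > 0) neighbour
    return aW, aE

# Pe = 1: central differencing is kept
print(hybrid_coeffs(1.0, 1.0))   # (1.5, 0.5)
# Pe = 4: pure upwinding; downstream influence drops to zero
print(hybrid_coeffs(4.0, 1.0))   # (4.0, 0.0)
```

The switch at |Pe| = 2 is exactly the property the study's candidate schemes were meant to improve on: hybrid differencing discards diffusion entirely in convection-dominated cells, which is robust but only first-order accurate there.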
A Primer on Building Teacher Evaluation Instruments.
ERIC Educational Resources Information Center
Bitner, Ted; Kratzner, Ron
This paper presents a primer on building a scientifically oriented teacher evaluation instrument. It stresses the importance of accurate measures and accepts the presupposition that scientific approaches provide the most accurate measures of student teacher performance. The paper discusses the scientific concepts of validity and reliability, and…
NASA Astrophysics Data System (ADS)
Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.
2010-04-01
An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper and its sequel, new theoretical work is contributed that substantially enhances existing performance analysis formulations. Major contributions include substantial computational complexity reduction, including a priori BER accuracy bounding, and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate BER performance in a variety of scenarios. In this paper, the generalized system model is used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, an induction argument supports the generality of the system model. The results in this paper and its sequel support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be performed over the full range of binary coding with minimal computational complexity.
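The sequency property the abstract relies on, namely that the N sequences of a Walsh-Hadamard code set take every spectral-spreading value from 0 to N - 1, is easy to check numerically. A small sketch using the Sylvester construction, where `sequency` counts sign changes along each code sequence:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H_{2k} = [[H_k, H_k], [H_k, -H_k]].
    n must be a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sequency(row):
    """Number of sign changes along a code sequence
    (the 'spectral spreading value' in the abstract's sense)."""
    return int(np.sum(row[:-1] != row[1:]))

N = 8
H = hadamard(N)
# the N rows cover every sequency value 0 .. N-1 exactly once
print(sorted(sequency(r) for r in H))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The rows are also mutually orthogonal (H H^T = N I), which is what makes them usable as spreading codes in the first place.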
NPAC-Nozzle Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple: continuity, momentum, energy, state, and other relations which permit fast and accurate calculations of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of: over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
A multi-method approach to evaluate health information systems.
Yu, Ping
2010-01-01
Systematic evaluation of the introduction and impact of health information systems (HIS) is a challenging task. Because implementation is a dynamic process in which diverse issues emerge at various stages of system introduction, it is difficult to weigh the contribution of various factors and differentiate the critical ones. A conceptual framework is helpful in guiding the evaluation effort; otherwise, data collection may not be comprehensive and accurate, which may in turn lead to inadequate interpretation of the phenomena under study. Based on comprehensive literature research and the author's own practice in evaluating health information systems, the author proposes a multi-method approach that incorporates both quantitative and qualitative measurement and is centered on the DeLone and McLean Information System Success Model. This approach aims to quantify the performance of HIS and its impact, and to provide comprehensive and accurate explanations of the causal relationships among the different factors. It will provide decision makers with accurate and actionable information for improving the performance of the introduced HIS.
Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.
2007-01-01
The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
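The semi-global ("fitting") alignment idea described above, a complete domain aligned against any subsequence of the protein, can be illustrated with a minimal score-only dynamic program: starting and ending anywhere in the sequence is free, but the domain itself must be consumed in full. This is an illustrative sketch with toy scoring values, not the profile-HMM machinery that GLOBAL or HMMer actually use:

```python
def fit_align(domain, seq, match=1, mismatch=-1, gap=-2):
    """Score-only fitting alignment: the whole `domain` is aligned
    against some subsequence of `seq`. Toy linear gap penalties."""
    m, n = len(domain), len(seq)
    # dp[i][j]: best score of domain[:i] ending at seq position j.
    # Row 0 is all zeros: the alignment may start anywhere in seq for free.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + gap  # the domain itself is never free
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if domain[i - 1] == seq[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # (mis)match
                           dp[i - 1][j] + gap,     # gap in sequence
                           dp[i][j - 1] + gap)     # gap in domain
    return max(dp[m])  # free end: best score anywhere along the sequence

print(fit_align("ABC", "XXABCXX"))  # 3: the full domain fits exactly
```

Local alignment would instead allow row 0 and column 0 to reset to zero everywhere, which is precisely how it ends up matching pieces of domains rather than complete ones.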
ERIC Educational Resources Information Center
Metzger, Christa; Lynch, Steven B.
1974-01-01
This paper describes the Performance Evaluation of the Education Leader (PEEL) program, initiated from a study to define the competent school administrator and to develop an instrument to measure administrative competence objectively and accurately. The resulting PEEL materials include the following: (a) "Guidelines for Evaluation: The School…
Unrealistic Optimism in the Pursuit of Academic Success
ERIC Educational Resources Information Center
Lewine, Rich; Sommers, Alison A.
2016-01-01
Although the ability to evaluate one's own knowledge and performance is critical to learning, the correlation between students' self-evaluation and actual performance measures is modest at best. In this study we examine the effect of offering extra credit for students' accurate prediction (self-accuracy) of their performance on four exams in two…
42 CFR 421.201 - Performance criteria and standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... funds. (2) The standards evaluate the specific requirements of each functional responsibility or... performance of functional responsibilities such as— (i) Accurate and timely payment determinations; (ii...
Resident Evaluation and Remediation: A Comprehensive Approach
Wu, Jim S.; Siewert, Bettina; Boiselle, Phillip M.
2010-01-01
Background A comprehensive evaluation and remediation program is an essential component of any residency program. The evaluation system should identify problems accurately and early and allow residents with problems to be assigned to a remediation program that effectively deals with them. Elements of a proactive remediation program include a process for outlining deficiencies, providing resources for improvement, communicating clear goals for acceptable performance, and reevaluating performance against these goals. Intervention In recognition of the importance of early detection and prompt remediation of the struggling resident, we sought to develop a multifaceted approach to resident evaluation with the aim of early identification and prompt remediation of difficulties. This article describes our comprehensive evaluation program and remediation program, which uses resources within our radiology department and institutional graduate medical education office. Discussion An effective evaluation system should identify problems accurately and early, whereas a proactive remediation program should effectively deal with issues once they are identified. PMID:21975628
Bristow, Tony; Constantine, Jill; Harrison, Mark; Cavoit, Fabien
2008-04-01
Orthogonal-acceleration quadrupole time-of-flight (oa-QTOF) mass spectrometers, employed for accurate mass measurement, have been commercially available for well over a decade. A limitation of the early instruments of this type was the narrow ion abundance range over which accurate mass measurements could be made with a high degree of certainty. Recently, a new generation of oa-QTOF mass spectrometers has been developed that allows accurate mass measurements to be recorded over a much greater range of ion abundances. This development has resulted from new ion detection technology and improved electronic stability, or from accurate control of the number of ions reaching the detector. In this report we describe the results of experiments performed to evaluate the mass measurement performance of the Bruker micrOTOF-Q, a member of the new generation of oa-QTOFs. The relationship between mass accuracy and ion abundance was extensively evaluated, and mass measurement accuracy remained stable (±1.5 milli-m/z units) over approximately 3-4 orders of magnitude of ion abundance. The second feature of the Bruker micrOTOF-Q evaluated was the SigmaFit function of the software. This isotope pattern-matching algorithm provides an exact numerical comparison of the theoretical and measured isotope patterns as an identification tool complementary to accurate mass measurement: the smaller the value, the closer the match between theoretical and measured isotope patterns. This information is then employed to reduce the number of potential elemental formulae produced from the mass measurements. A relationship between the SigmaFit value and ion abundance was established. The results of the study for both mass accuracy and SigmaFit were employed to define the performance criteria for the micrOTOF-Q, providing increased confidence in the selection of elemental formulae resulting from accurate mass measurements.
USDA-ARS?s Scientific Manuscript database
Successful monitoring of pollutant transport through the soil profile requires accurate, reliable, and appropriate instrumentation to measure amount of drainage water or flux within the vadose layer. We evaluated the performance and accuracy of automated passive capillary wick samplers (PCAPs) for ...
DOT National Transportation Integrated Search
2014-05-01
Accurate, consistent, and repeatable distress evaluation surveys can be performed by using the Distress Identification Manual for the Long-Term Pavement Performance Program. Color photographs and drawings illustrate the distresses found in three basi...
Children's perception of their synthetically corrected speech production.
Strömbergsson, Sofia; Wengelin, Asa; House, David
2014-06-01
We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.
Evaluating Equating Accuracy and Assumptions for Groups that Differ in Performance
ERIC Educational Resources Information Center
Powers, Sonya; Kolen, Michael J.
2014-01-01
Accurate equating results are essential when comparing examinee scores across exam forms. Previous research indicates that equating results may not be accurate when group differences are large. This study compared the equating results of frequency estimation, chained equipercentile, item response theory (IRT) true-score, and IRT observed-score…
Evaluation of a new disposable silicon limbal relaxing incision knife by experienced users.
Albanese, John; Dugue, Geoffrey; Parvu, Valentin; Bajart, Ann M; Lee, Edwin
2009-12-21
Previous research has suggested that the silicon BD Atomic Edge knife has superior performance characteristics when compared to a metal knife and performance similar to a diamond knife when making various incisions. This study was designed to determine whether a silicon accurate-depth knife has performance characteristics equivalent to a diamond limbal relaxing incision (LRI) knife and superior to a steel accurate-depth knife when creating limbal relaxing incisions. Sixty-five ophthalmic surgeons with limbal relaxing incision experience created limbal relaxing incisions in ex vivo porcine eyes with silicon and steel accurate-depth knives and diamond LRI knives. The surgeons rated multiple performance characteristics of the knives on visual analog scales. The observed differences between the silicon knife and the diamond knife were insignificant, and the mean ratio between the performance of the silicon knife and the diamond knife was shown to be greater than 90% (with 95% confidence). The silicon knife's mean performance was significantly higher than that of the steel knife for all characteristics (p < .05). For experienced users, the silicon accurate-depth knife was found to be equivalent in performance to the diamond LRI knife and superior to the steel accurate-depth knife when making limbal relaxing incisions in ex vivo porcine eyes. Disposable silicon LRI knives may be an alternative to diamond LRI knives.
Neural network classification of clinical neurophysiological data for acute care monitoring
NASA Technical Reports Server (NTRS)
Sgro, Joseph
1994-01-01
The purpose of neurophysiological monitoring of the 'acute care' patient is to allow the accurate recognition of changing or deteriorating neurological function as close to the moment of occurrence as possible, thus permitting immediate intervention. Results confirm that: (1) neural networks can accurately identify electroencephalogram (EEG) patterns and evoked potential (EP) wave components and can measure EP waveform latencies and amplitudes; (2) neural networks can accurately detect EP and EEG recordings that have been contaminated by noise; (3) the best performance was obtained consistently with the back-propagation network for EPs and the HONN for EEGs; (4) neural networks performed consistently better than the other methods evaluated; and (5) neural network EEG and EP analyses are readily performed on multichannel data.
NASA Astrophysics Data System (ADS)
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work that enhances turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by a modified Rayleigh-Plesset equation. Prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. The experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
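For reference, the classical Rayleigh-Plesset equation that such cavitation bubble models modify (the abstract does not state the specific modification used) relates the bubble radius R(t) to the pressure difference driving growth or collapse:

```latex
\rho_L \left( R\ddot{R} + \frac{3}{2}\dot{R}^{2} \right)
  = p_B(t) - p_\infty(t) - \frac{2\sigma}{R} - \frac{4\mu_L \dot{R}}{R}
```

where \(\rho_L\) and \(\mu_L\) are the liquid density and dynamic viscosity, \(\sigma\) the surface tension, \(p_B\) the pressure inside the bubble, and \(p_\infty\) the far-field liquid pressure. CFD cavitation models typically simplify this equation (e.g. neglecting the second-order, viscous, and surface-tension terms) to obtain a mass-transfer rate between the phases.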
Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting
2016-01-01
This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs), enabling rapid and accurate simulations. Inverter-based TDSTSs have been developed because they offer low cost and a simple structure for temperature-to-digital conversion. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) with which the temperature behavior of TDSTSs is evaluated. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed that substantially accelerates simulation and simplifies the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator agree favorably with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)
2017-06-01
approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted
"That never happened": adults' discernment of children's true and false memory reports.
Block, Stephanie D; Shestowsky, Donna; Segovia, Daisy A; Goodman, Gail S; Schaaf, Jennifer M; Alexander, Kristen Weede
2012-10-01
Adults' evaluations of children's reports can determine whether legal proceedings are undertaken and whether they ultimately lead to justice. The current study involved 92 undergraduates and 35 laypersons who viewed and evaluated videotaped interviews of 3- and 5-year-olds providing true or false memory reports. The children's reports fell into the following categories based on a 2 (event type: true vs. false) × 2 (child report: assent vs. denial) factorial design: accurate reports, false reports, accurate denials, and false denials. Results revealed that adults were generally better able to correctly judge accurate reports, accurate denials, and false reports compared with false denials: For false denials, adults were, on average, "confident" that the event had not occurred, even though the event had in fact been experienced. Participant age predicted performance. These findings underscore the greater difficulty adults have in evaluating young children's false denials compared with other types of reports. Implications for law-related situations in which adults are called upon to evaluate children's statements are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T; Kumaraswamy, L
Purpose: Detection of treatment delivery errors is important in radiation therapy; however, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N, and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift errors (a uniform 3-degree shift). 2D and 3D gamma evaluations were performed for all plans with SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of the 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation appeared to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original plan and the error-induced plans in both the Eclipse and 3DVH analyses. Conclusion: The 3DVH software is shown to accurately quantify errors in delivered dose based on clinically relevant DVH metrics, which a conventional gamma-based pre-treatment QA might not necessarily detect.
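The gamma evaluation underlying the pass rates discussed above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D global-gamma sketch (illustrative only, without the interpolation and 2D/3D geometry that clinical tools such as SNC Patient implement):

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """1D global gamma index (Low et al. formulation, no interpolation).
    dd:  dose-difference criterion as a fraction of the max reference dose
    dta: distance-to-agreement criterion in mm
    A point passes when its gamma value is <= 1."""
    dmax = ref_dose.max()
    out = np.empty(len(ref_pos))
    for k, (xr, dr) in enumerate(zip(ref_pos, ref_dose)):
        # generalized distance to every evaluated point; gamma is the minimum
        cap = np.sqrt(((eval_pos - xr) / dta) ** 2
                      + ((eval_dose - dr) / (dd * dmax)) ** 2)
        out[k] = cap.min()
    return out

# identical distributions pass trivially (gamma = 0 everywhere)
x = np.linspace(0.0, 20.0, 21)
d = 100.0 * np.exp(-((x - 10.0) / 6.0) ** 2)
g = gamma_1d(x, d, x, d)
print((g <= 1.0).mean())  # 1.0 -> 100% pass rate
```

The study's key observation follows directly from this construction: gamma is a pointwise geometric tolerance, so a plan can pass gamma at high rates while its cumulative DVH metrics still shift clinically meaningfully.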
ERIC Educational Resources Information Center
Hershberg, Theodore; Robertson-Kraft, Claire
2010-01-01
Pay-for-performance systems in public schools have long been burdened with controversy. Critics of performance pay systems contend that because teachers' impact cannot be measured without error, it is impossible to create fair and accurate systems for evaluating and rewarding performance. By this standard, however, current practice fails on both…
Incorporation of a Cumulus Fraction Scheme in the GRAPES_Meso and Evaluation of Its Performance
NASA Astrophysics Data System (ADS)
Zheng, X.
2016-12-01
Accurate simulation of cloud cover fraction is a key and difficult issue in numerical modeling studies. Preliminary evaluations have indicated that cloud fraction is generally underestimated in GRAPES_Meso simulations, while the cloud fraction scheme (CFS) of ECMWF provides more realistic results. Therefore, the ECMWF cumulus fraction scheme was introduced into GRAPES_Meso to replace the original CFS, and the model's performance with the new CFS was evaluated based on simulated three-dimensional cloud fractions and surface temperature. Results indicate that the simulated cloud fractions increase and become more accurate with the new CFS, the simulated vertical cloud structure is improved, and errors in the simulated surface temperature are reduced. These results suggest that the new CFS has a positive impact on cloud fraction and surface temperature simulation.
Computer Aided Evaluation of Higher Education Tutors' Performance
ERIC Educational Resources Information Center
Xenos, Michalis; Papadopoulos, Thanos
2007-01-01
This article presents a method for computer-aided tutor evaluation: Bayesian Networks are used for organizing the collected data about tutors and for enabling accurate estimations and predictions about future tutor behavior. The model provides indications about each tutor's strengths and weaknesses, which enables the evaluator to exploit strengths…
Characterization of Alaskan HMA mixtures with the simple performance tester.
DOT National Transportation Integrated Search
2014-05-01
Material characterization provides basic and essential information for pavement design and the evaluation of hot mix asphalt (HMA). : This study focused on the accurate characterization of an Alaskan HMA mixture using an asphalt mixture performance t...
WASTE REDUCTION TECHNOLOGY EVALUATIONS OF THE U.S. EPA WRITE PROGRAM
The Waste Reduction Innovative Technology Evaluation (WRITE) Program was established in 1989 to provide objective, accurate performance and cost data about waste-reducing technologies for a variety of industrial and commercial applications. EPA's Risk Reduction Engineering Laborato...
A comprehensive evaluation of strip performance in multiple blood glucose monitoring systems.
Katz, Laurence B; Macleod, Kirsty; Grady, Mike; Cameron, Hilary; Pfützner, Andreas; Setford, Steven
2015-05-01
Accurate self-monitoring of blood glucose is a key component of effective self-management of glycemic control, and accurate results are required for optimal insulin dosing and detection of hypoglycemia. However, blood glucose monitoring systems may be susceptible to error from test strip, user, environmental, and pharmacological factors. This report evaluated 5 blood glucose monitoring systems that each use Verio glucose test strips for precision, effect of hematocrit, and interferences in laboratory testing, and for lay-user and system accuracy in clinical testing, according to the guidelines in ISO 15197:2013(E). Performance of OneTouch(®) VerioVue™ met or exceeded the standards described in ISO 15197:2013 for precision, hematocrit performance, and interference testing in a laboratory setting. OneTouch(®) Verio IQ™, OneTouch(®) Verio Pro™, OneTouch(®) Verio™, OneTouch(®) VerioVue™, and OmniPod each met or exceeded the accuracy standards for user performance and system accuracy in a clinical setting set forth in ISO 15197:2013(E).
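The ISO 15197:2013 system-accuracy criterion referenced above can be sketched as a simple paired-results check: each meter reading must fall within ±15 mg/dL of the reference value below 100 mg/dL, or within ±15% at or above 100 mg/dL, with at least 95% of results required to comply. This is a simplified reading of the standard; the full protocol prescribes sample counts, glucose ranges, and lot handling as well.

```python
def iso15197_accuracy(ref, meas):
    """Fraction of paired results meeting the ISO 15197:2013 limits:
    within +/-15 mg/dL of reference when ref < 100 mg/dL,
    within +/-15% of reference otherwise.
    The standard requires at least 95% of results to comply."""
    ok = sum(abs(m - r) <= (15.0 if r < 100.0 else 0.15 * r)
             for r, m in zip(ref, meas))
    return ok / len(ref)

# hypothetical paired readings (reference, meter) in mg/dL
ref  = [60.0, 80.0, 150.0, 250.0]
meas = [70.0, 94.0, 160.0, 240.0]
print(iso15197_accuracy(ref, meas))  # 1.0 -> all pairs within limits
```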
76 FR 4912 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... reliable evaluation design to produce accurate evidence of the effect of HPOG on individuals and health job training programs systems. The goals of the HPOG evaluation are to establish a performance management... grantee organizations (higher education Institutions, workforce investment boards, private training...
Evaluation of Turbulence-Model Performance as Applied to Jet-Noise Prediction
NASA Technical Reports Server (NTRS)
Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.
1998-01-01
The accurate prediction of jet noise is possible only if the jet flow field can be predicted accurately. Predictions for the mean velocity and turbulence quantities in the jet flowfield are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in predicting those quantities most important to jet noise prediction, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data is available.
ERIC Educational Resources Information Center
Stinson, Terrye A.; Zhao, Xiaofeng
2008-01-01
Past studies indicate that students are frequently poor judges of their likely academic performance in the classroom. The difficulty a student faces in accurately predicting performance on a classroom exam may be due to unrealistic optimism or may be due to an inability to self-evaluate academic performance, but the resulting disconnect between…
New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiss, T.; Chaney, L.; Meyer, J.
Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as Autonomie, has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through thorough consideration of transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
NASA Astrophysics Data System (ADS)
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-01
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
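The consensus step this abstract describes follows a well-known expectation-maximization scheme. A minimal binary-label sketch (our own simplification — the published STAPLE algorithm handles multi-label maps, spatial priors, and a specific implementation configuration not reproduced here) might look like:

```python
import numpy as np

def staple_binary(D, prior=0.5, iters=50):
    """Minimal binary STAPLE: D is a (raters, voxels) 0/1 array.
    Returns the soft consensus W plus per-rater (sensitivity, specificity)."""
    J, N = D.shape
    p = np.full(J, 0.9)   # initial sensitivities
    q = np.full(J, 0.9)   # initial specificities
    for _ in range(iters):
        # E-step: posterior probability that each voxel is foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate rater performance against the soft consensus
        p = (D * W).sum(axis=1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()
    return W, p, q
```

On synthetic delineations where each rater flips only a few voxels, the soft consensus thresholded at 0.5 recovers the underlying truth, illustrating why the consensus can beat any single manual delineation.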
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists in evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
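The selection-style fusion described above — score every sensor reading against an error model and keep the most trustworthy one — can be sketched as follows. The error model below is purely illustrative (invented numbers), not the one fitted in the paper:

```python
# Hypothetical per-sensor error model: predicted standard deviation as a
# function of the measured range. The base errors and growth rate are
# illustrative assumptions, not values from the study.
def predicted_sigma(sensor_id, range_m):
    base = {0: 0.05, 1: 0.12, 2: 0.08}[sensor_id]  # assumed floor error (m)
    return base * (1.0 + 0.1 * range_m)            # error grows with range

def select_best(measurements):
    """measurements: list of (sensor_id, range_m) pairs; return the reading
    whose modeled uncertainty is lowest (selection-style fusion)."""
    return min(measurements, key=lambda m: predicted_sigma(m[0], m[1]))
```

A genie selector would instead pick the reading with the lowest *actual* error; comparing the two quantifies how close the model-driven selection comes to optimal.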
A simple video-based timing system for on-ice team testing in ice hockey: a technical report.
Larson, David P; Noonan, Benjamin C
2014-09-01
The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive, timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocell, and high-speed (240 frames per second) video. These results were then compared to allow for accuracy calculations of the stopwatch and video as compared with filtered photocell timing that was used as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% as compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of very accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.
Stephenson, D J; Lillquist, D R
2001-04-01
Occupational hygienists perform air sampling to characterize airborne contaminant emissions, assess occupational exposures, and establish allowable workplace airborne exposure concentrations. To perform these air sampling applications, occupational hygienists often compare an airborne exposure concentration to a corresponding American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) or an Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL). To perform such comparisons, one must understand the physiological assumptions used to establish these occupational exposure limits, the relationship between a workplace airborne exposure concentration and its associated TLV or PEL, and the effect of temperature and pressure on the performance of an accurate compliance evaluation. This article illustrates the correct procedure for performing compliance evaluations using airborne exposure concentrations expressed in both parts per million and milligrams per cubic meter. In so doing, a brief discussion is given on the physiological assumptions used to establish TLVs and PELs. It is further shown how an accurate compliance evaluation is fundamentally based on comparison of a measured work site exposure dose (derived from the sampling site exposure concentration estimate) to an estimated acceptable exposure dose (derived from the occupational exposure limit concentration). In addition, this article correctly illustrates the effect that atmospheric temperature and pressure have on airborne exposure concentrations and the eventual performance of a compliance evaluation. This article also reveals that under fairly moderate conditions of temperature and pressure, 30 degrees C and 670 torr, a misunderstanding of how varying atmospheric conditions affect concentration values can lead to a 15 percent error in assessing compliance.
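The conversion and correction the article discusses can be illustrated with a short sketch, assuming the common 24.45 L/mol molar volume at NTP (25 °C, 760 torr); the function names are ours:

```python
def molar_volume_L(temp_C, pressure_torr):
    """Ideal-gas molar volume in L/mol, scaled from 24.45 L/mol
    at NTP (25 degrees C, 760 torr)."""
    return 24.45 * ((temp_C + 273.15) / 298.15) * (760.0 / pressure_torr)

def ppm_to_mg_m3(ppm, mol_weight, temp_C=25.0, pressure_torr=760.0):
    """Convert a gas-phase concentration in ppm to mg/m^3 at the
    sampling-site temperature and pressure."""
    return ppm * mol_weight / molar_volume_L(temp_C, pressure_torr)
```

At 30 °C and 670 torr the molar volume grows by roughly 15% relative to NTP, which is the error magnitude the article quotes for a compliance evaluation that ignores atmospheric conditions.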
Wireless data collection system for travel time estimation and traffic performance evaluation.
DOT National Transportation Integrated Search
2010-09-01
Having accurate and continually updated travel time and other performance data for the road and highway system has many benefits. From the perspective of the road users, having real-time updates on travel times will permit better travel and route pla...
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structure light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on calibration plates and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
In situ corrective action technologies are being proposed and installed at an increasing number of underground storage tank (UST) sites contaminated with petroleum products in saturated and unsaturated zones. It is often difficult to accurately assess the performance of these sy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
2018-01-23
Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.
ERIC Educational Resources Information Center
Gohara, Sabry; Shapiro, Joseph I.; Jacob, Adam N.; Khuder, Sadik A.; Gandy, Robyn A.; Metting, Patricia J.; Gold, Jeffrey; Kleshinski, James
2011-01-01
The purpose of this study was to evaluate whether models based on pre-admission testing, including performance on the Medical College Admission Test (MCAT), performance on required courses in the medical school curriculum, or a combination of both could accurately predict performance of medical students on the United States Medical Licensing…
Williams, Matthew R.; Kirsch, Robert F.
2013-01-01
We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, EMG from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2D, center-out, Fitts’ Law style task and performance was evaluated using several measures. Overall, head orientation commanded motion resembled mouse commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in both directions. While the relative performance of each user interface differs, each has specific advantages depending on the application. PMID:18990652
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. By the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by the primary fluctuation and the EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the change in statistics. The experimental results show that the method successfully monitors the metering deviation of a Class 0.2 EVT accurately. The method demonstrates the accurate evaluation of on-line monitoring of the metering performance on an EVT without a standard voltage transformer.
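A toy version of the cyber-physical idea — the primary fluctuation is common to all three phases, so removing the first principal component isolates a single transformer's anomaly — might look like this (signal shapes and magnitudes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Shared primary-side fluctuation seen identically by all three EVTs
common = 1.0 + 0.02 * np.sin(np.linspace(0.0, 20.0, n))
X = np.tile(common, (3, 1)) + 0.001 * rng.standard_normal((3, n))
X[2] += np.linspace(0.0, 0.01, n)  # simulated metering drift on phase C

# PCA via SVD: the first principal component captures the common fluctuation
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = np.outer(U[:, 0], S[0] * Vt[0])
residual = Xc - pc1

# The channel with the largest residual variance is the suspect EVT
scores = residual.var(axis=1)
suspect = int(np.argmax(scores))
```

After subtracting the common component, the drifting channel stands out in the residual statistics — no standard voltage transformer is needed as a reference, which is the point of the on-line method.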
Detection of CMOS bridging faults using minimal stuck-at fault test sets
NASA Technical Reports Server (NTRS)
Ijaz, Nabeel; Frenzel, James F.
1993-01-01
The performance of minimal stuck-at fault test sets at detecting bridging faults are evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.
Principal Evaluation: Standards, Rubrics, and Tools for Effective Performance
ERIC Educational Resources Information Center
Stronge, James H.; Xu, Xianxuan; Leeper, Lauri M.; Tonneson, Virginia C.
2013-01-01
Effective principals run effective schools--this much we know. Accurately measuring principal effectiveness, however, has long been an elusive goal for school administrators. In this indispensable book, author James H. Stronge details the steps and resources necessary for designing a comprehensive principal evaluation system that is based on sound…
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
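The numerical-evaluation step can be illustrated generically: averaging a conditional error probability over negative-exponential irradiance fading. The sketch below uses a simple single-branch Q-function integrand as a stand-in for the paper's exact MIMO expression, and cross-checks the quadrature against Monte Carlo simulation:

```python
import math
import random

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_bep_numeric(snr, steps=20000, upper=30.0):
    """Average Q(sqrt(snr)*I) over I ~ Exp(1) (negative-exponential
    irradiance) by trapezoidal integration of Q(.) * exp(-I)."""
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        i = k * h
        f = q_func(math.sqrt(snr) * i) * math.exp(-i)
        total += f if 0 < k < steps else f / 2.0
    return total * h

def avg_bep_mc(snr, n=200000, seed=1):
    """Monte Carlo estimate of the same average, for validation."""
    rnd = random.Random(seed)
    return sum(q_func(math.sqrt(snr) * rnd.expovariate(1.0)) for _ in range(n)) / n
```

The agreement between the deterministic integral and the simulation mirrors the Letter's validation of its analytical expression against computer simulation.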
Significant accomplishments, 2009
DOT National Transportation Integrated Search
2009-01-01
BTS data is vital to improve transportation decision making, as well as evaluations of performance and what works in the nation's transportation systems. As BTS continues its course of producing relevant, accurate, and timely transpor...
NASA Astrophysics Data System (ADS)
Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung
2018-06-01
SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate because the reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed using quantitatively accurate images. Hence, it is important to evaluate performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured recovery coefficient, uniformity, spatial resolution, system sensitivity and calibration factor for the mouse whole body standard aperture. We also performed the experiments using Tc-99m to compare the results with those of Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolutions of Lu-177 with and without adding uniform background were comparable to those of Tc-99m in the axial, radial and tangential directions. System sensitivity measured for Lu-177 was roughly one-third of that measured for Tc-99m.
Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.
Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan
2015-03-01
A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICUs). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICUs, and provide a superior method of risk adjustment compared to logistic regression.
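The observed-to-expected comparison at the heart of risk adjustment is easy to sketch in a few lines. The risk model and patient data below are invented for illustration; the paper's contribution is how the expected probabilities are estimated (logistic regression versus tree ensembles), not the O/E arithmetic itself:

```python
# Hypothetical risk model: probability of the adverse outcome given a
# severity category (values are illustrative, not from the paper).
RISK = {"low": 0.02, "medium": 0.08, "high": 0.25}

def standardized_ratio(patients):
    """patients: list of (severity, outcome) pairs for one hospital.
    Returns the observed/expected outcome ratio under the risk model;
    values above 1 suggest worse-than-expected performance."""
    observed = sum(outcome for _, outcome in patients)
    expected = sum(RISK[severity] for severity, _ in patients)
    return observed / expected

hospital_a = [("high", 1), ("high", 0), ("medium", 0), ("low", 0)]  # sicker mix
hospital_b = [("low", 1), ("low", 0), ("low", 0), ("medium", 0)]    # healthier mix
```

Both toy hospitals have one adverse outcome in four patients, but hospital A's sicker case mix gives it a much larger expected count, so its standardized ratio is far lower than B's — exactly the unfair-penalty problem risk adjustment exists to avoid.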
ERIC Educational Resources Information Center
Chiarenza, Giuseppe Augusto
1990-01-01
Eight reading-disordered and 9 nondisabled males (age 10) performed a skilled motor-perceptual task. The children with reading disorders were slower, less accurate, and achieved a smaller number of target performances. Their brain macropotentials associated with motor programing, processing of sensory information, and evaluation of the results…
Gimenes, Fernanda Raphael Escobar; Motta, Ana Paula Gobbo; da Silva, Patrícia Costa dos Santos; Gobbo, Ana Flora Fogaça; Atila, Elisabeth; de Carvalho, Emilia Campos
2017-01-01
ABSTRACT Objective: to identify the nursing interventions associated with the most accurate and frequently used NANDA International, Inc. (NANDA-I) nursing diagnoses for patients with liver cirrhosis. Method: this is a descriptive, quantitative, cross-sectional study. Results: a total of 12 nursing diagnoses were evaluated, seven of which showed high accuracy (IVC ≥ 0.8); 70 interventions were identified and 23 (32.86%) were common to more than one diagnosis. Conclusion: in general, nurses often perform nursing interventions suggested in the NIC for the seven highly accurate nursing diagnoses identified in this study to care for patients with liver cirrhosis. Accurate and valid nursing diagnoses guide the selection of appropriate interventions that nurses can perform to enhance patient safety and thus improve patient health outcomes.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model rigorously evaluated as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
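The deterministic part of the model relies on Matching Pursuit, which greedily projects the signal onto dictionary atoms. Below is a minimal MP sketch with a toy sinusoid dictionary standing in for the heart-sound atoms; the paper's actual dictionary and the LPC modeling of the stochastic residual are omitted:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: dictionary rows are unit-norm atoms. Returns the
    deterministic reconstruction and the residual (stochastic part)."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlate atoms with residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        recon += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return recon, residual

# Toy dictionary: unit-norm sinusoids at distinct frequencies
n = 256
t = np.arange(n)
atoms = np.stack([np.sin(2 * np.pi * f * t / n) for f in (3, 7, 12, 19)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# A "signal" built from two atoms is recovered exactly in two iterations
sig = 2.0 * atoms[1] + 0.5 * atoms[3]
recon, residual = matching_pursuit(sig, atoms, n_atoms=2)
```

In the paper the residual left after MP is modeled as an autoregressive process via LPC; here the toy residual is numerically zero because the test signal lies entirely in the dictionary's span.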
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
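The two algorithms differ only in which day's energy density prices the body-energy increment. A stripped-down sketch of the budget-balancing step (the assimilation efficiency and units are illustrative, not the Wisconsin model's full parameterization):

```python
def consumption(day_mass, day_ed, respiration, assimilation=0.7):
    """Estimate daily consumption (J) from body mass (g), energy density
    ED (J/g), and respiration costs (J), two ways:
      algorithm 1 prices day t+1 body energy with day t ED;
      algorithm 2 uses day t+1 ED (the variant the study recommends).
    Returns a list of (c_alg1, c_alg2) per day."""
    out = []
    for t in range(len(day_mass) - 1):
        e_t = day_mass[t] * day_ed[t]
        c1 = (day_mass[t + 1] * day_ed[t] - e_t + respiration[t]) / assimilation
        c2 = (day_mass[t + 1] * day_ed[t + 1] - e_t + respiration[t]) / assimilation
        out.append((c1, c2))
    return out
```

With constant energy density the two estimates coincide; when energy density rises, algorithm 1 under-prices the growth term and returns a smaller consumption estimate, consistent with the underestimation the study reports for the day-t formulation.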
Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L
2015-05-01
The accurate prediction of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) score is well adapted to mechanically ventilated patients and does not depend on verbal response. Our objective was to evaluate the ability of the FOUR score, assessed by intensivists, to accurately predict outcome in OHCA. We prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and evolution of the Glasgow Coma Scale, evolution of the FOUR score over the first three days was able to predict unfavorable outcome more precisely. Thus, absence of improvement or worsening from day 1 to day 3 of FOUR had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV to predict unfavorable outcome. Similarly, a brainstem response of FOUR score at 0 evaluated at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV to predict unfavorable outcome. The absence of improvement or worsening from day 1 to day 3 of FOUR evaluated by intensivists provides an accurate prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring the nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce the nodules both accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate within an error of 20% for nodules >5 mm with a nodule-to-background (lung) density difference of 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
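The core of the proposed method — an ideal nodule convolved with the scanner's point spread function — can be sketched in 2D with a Gaussian stand-in for the PSF (the study measures the real PSF of the CT system; sizes, sigma, and the 400 HU contrast here are illustrative):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel (sums to 1)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur2d(img, kernel):
    """Separable convolution standing in for the (assumed isotropic) PSF."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

# Ideal disc "nodule", 400 HU above background, then PSF-blurred
size, radius_px = 64, 6
yy, xx = np.mgrid[:size, :size]
nodule = 400.0 * (((yy - 32) ** 2 + (xx - 32) ** 2) <= radius_px ** 2)
simulated = blur2d(nodule, gaussian_kernel(sigma=1.5, radius=6))
```

Because the kernel is normalized, blurring preserves the integrated signal while softening the edge — the simulated nodule can then be added onto clinical images and fed to the volumetry software under test.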
ERIC Educational Resources Information Center
Goomas, David T.
2012-01-01
The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Fast and Exact Continuous Collision Detection with Bernstein Sign Classification
Tang, Min; Tong, Ruofeng; Wang, Zhendong; Manocha, Dinesh
2014-01-01
We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bézier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. Our algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits. PMID:25568589
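The reduction to sign evaluation can be illustrated with the Bernstein-basis sign test: if all Bernstein coefficients of a polynomial share a strict sign, the polynomial cannot vanish on [0, 1], so no collision time exists in that interval. A floating-point sketch (the paper's exact-arithmetic machinery and geometric setup are omitted):

```python
from math import comb

def power_to_bernstein(a):
    """Convert power-basis coefficients a[i] (p(t) = sum a[i] * t**i) to
    Bernstein coefficients of the same degree on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
            for k in range(n + 1)]

def no_root_in_unit_interval(a):
    """Conservative CCD-style culling test: uniform strict sign of the
    Bernstein coefficients certifies p(t) != 0 for all t in [0, 1]."""
    b = power_to_bernstein(a)
    return all(c > 0 for c in b) or all(c < 0 for c in b)
```

Two useful properties fall out of the basis change: the first and last Bernstein coefficients equal p(0) and p(1), and mixed-sign coefficients merely mean a root *may* exist, which is where the paper's exact classification takes over.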
[Aneurysm of the atrial septum diagnosed by trans-esophageal echocardiography].
Juszczyk, Z; Attir, A; Kamińska, M
1991-01-01
We report an uncommon case of atrial septal aneurysm associated with mitral valve prolapse. A 28 year old woman was studied with transthoracic and transesophageal echocardiography (TEE). Transthoracic echocardiography suggested mitral valve prolapse. TEE with color mapping was performed. Atrial septal aneurysm and mitral valve prolapse were found. The study has shown that TEE can evaluate accurately some of the anatomic features of atrial septal aneurysm and color flow mapping can provide accurate information about the blood flow in the lesion. We believe that TEE may be the safest and most accurate investigative technique for diagnosing this rare lesion.
NASA Astrophysics Data System (ADS)
Naserpour, Mahin; Zapata-Rodríguez, Carlos J.
2018-01-01
The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally, we discuss how our procedure can be favorably implemented in scattering problems.
Does More Accurate Knowledge of Course Grade Impact Teaching Evaluation?
ERIC Educational Resources Information Center
Cho, Donghun; Cho, Joonmo
2017-01-01
Students' different standards may yield different kinds of bias, such as self-directed (higher than their past performance) bias and peer-directed (higher than their classmates) bias. Utilizing data obtained from a natural experiment where some students were able to see their grades prior to teacher evaluations, and to investigate possible sources…
Evaluation of seeding depth and gauge-wheel load effects on maize emergence and yield
USDA-ARS?s Scientific Manuscript database
Planting represents perhaps the most important field operation, with errors likely to negatively affect crop yield and thereby farm profitability. Performance of row-crop planters is evaluated by their ability to accurately place seeds into the soil at an adequate and pre-determined depth, the goal ...
Strapdown system performance optimization test evaluations (SPOT), volume 1
NASA Technical Reports Server (NTRS)
Blaha, R. J.; Gilmore, J. P.
1973-01-01
A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
Branda, John A; Rychert, Jenna; Burnham, Carey-Ann D; Bythrow, Maureen; Garner, Omai B; Ginocchio, Christine C; Jennemann, Rebecca; Lewinski, Michael A; Manji, Ryhana; Mochon, A Brian; Procop, Gary W; Richter, Sandra S; Sercia, Linda F; Westblade, Lars F; Ferraro, Mary Jane
2014-02-01
The VITEK MS v2.0 MALDI-TOF mass spectrometry system's performance in identifying fastidious gram-negative bacteria was evaluated in a multicenter study. Compared with the reference method (DNA sequencing), the VITEK MS system provided an accurate, species-level identification for 96% of 226 isolates; an additional 1% were accurately identified to the genus level. © 2013.
Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise
NASA Technical Reports Server (NTRS)
Housman, Jeffrey A.; Kiris, Cetin C.
2015-01-01
Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.
Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules
NASA Astrophysics Data System (ADS)
Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus
2009-08-01
As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations substantially increases, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction and project financing. However, recent publications have shown that it is very difficult to get accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. In order to acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we closely examine two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.
Performance Indexing: Assessing the Nonmonetized Returns on Investment in Military Equipment
2016-05-17
investment’s value (return) because it cannot be objectively quantified. To support resource allocation decisions, our mission was to provide accurate and ... timely analyses with readily available information. In fiscal year 2014, the Marine Corps evaluated its strategic equipment investment initiatives ... battle outcomes (Department of the Air Force, 1996). Elaborate operational testing and evaluation events are created to evaluate these measures
ERIC Educational Resources Information Center
Au, Wayne
2011-01-01
Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selective patients with complex malunions around the tibia plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
A Proposal for Modeling Real Hardware, Weather and Marine Conditions for Underwater Sensor Networks
Climent, Salvador; Capella, Juan Vicente; Blanc, Sara; Perles, Angel; Serrano, Juan José
2013-01-01
Network simulators are useful for researching protocol performance, appraising new hardware capabilities and evaluating real application scenarios. However, these tasks can only be achieved when using accurate models and real parameters that enable the extraction of trustworthy results and conclusions. This paper presents an underwater wireless sensor network ecosystem for the ns-3 simulator. This ecosystem is composed of a new energy-harvesting model and a low-cost, low-power underwater wake-up modem model that, alongside existing models, enables accurate simulations by providing real weather and marine conditions from the location where the real application is to be deployed. PMID:23748171
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
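The co-association step that CC-style ensemble methods build on can be sketched in a few lines. This is a generic illustration, not the MOCC algorithm itself: the `coassociation_consensus` helper and its 0.5 threshold are assumptions for exposition, and MOCC's optimised agreement-separation criterion and multi-optimisation framework are more elaborate.

```python
import numpy as np

def coassociation_consensus(labelings, threshold=0.5):
    """Ensemble consensus by co-association: points that co-cluster in more
    than `threshold` of the base clusterings are linked, and connected
    components of the resulting graph form the consensus clusters."""
    labelings = np.asarray(labelings)            # shape (n_runs, n_points)
    r, n = labelings.shape
    co = np.zeros((n, n))
    for lab in labelings:                        # accumulate co-cluster counts
        co += (lab[:, None] == lab[None, :])
    co /= r                                      # fraction of runs co-clustered
    adj = co >= threshold
    # label connected components with a simple depth-first search
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur
        while stack:
            j = stack.pop()
            for k in np.nonzero(adj[j])[0]:
                if labels[k] < 0:
                    labels[k] = cur
                    stack.append(k)
        cur += 1
    return labels

# Three base clusterings of four points; the consensus splits {0,1} from {2,3}
labels = coassociation_consensus([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]])
```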
NASA Astrophysics Data System (ADS)
Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter
2017-02-01
Freeform optical systems are playing an important role in the field of illumination engineering for redistributing light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Now, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation and differential equations are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated by using a numerical simulation of optical ray tracing. For evaluation, the results are compared with another recently published design method, showing the accurate performance of the proposed method using a reduced number of mapping angles. As part of the tolerance analyses of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source while designing a freeform lens using the proposed method is discussed.
Evaluation of methods for freeway operational analysis.
DOT National Transportation Integrated Search
2001-10-01
The ability to estimate accurately the operational performance of roadway segments has become increasingly critical as we move from a period of new construction into one of operations, maintenance, and, in some cases, reconstruction. In addition to m...
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
Plus and Minus Grading Options: Toward Accurate Student Performance Evaluations.
ERIC Educational Resources Information Center
California Community Colleges, Sacramento. Academic Senate.
Although both the University of California and the California State University systems have the option to use plus or minus grades in student evaluations, the Board of Governors of the California Community Colleges (CCC) does not allow the use of such a grading system. Since 1985, the CCC's Academic Senate has lobbied the Board to allow local…
Mothers' negative evaluations of their children's performances enhance boys' memories for crafts.
Dunsmore, Julie C; Halberstadt, Amy G; Robinson, Megan L
2004-12-01
The authors predicted that mothers' evaluative comments would affect their preschool-aged children's learning during a craft-making activity. Each mother (N = 67) taught her child 6 crafts in a playroom in a university setting. Three weeks later, the child returned to the playroom to redo the crafts. Evaluators tested the children's memories of the procedures for completing the crafts. The authors used videotapes of the mothers' teaching to code for statements that positively evaluated their children's performances (praise) or that negatively evaluated their children's performances (criticism). Maternal praise did not affect children's memories. Maternal criticism did not affect their daughters' memories. However, sons were more likely to accurately redo craft steps for which their mothers had made at least 1 comment criticizing their performances. The authors proposed that emotional arousal was a reason for girls' and boys' differential responses to maternal criticism.
1985-09-01
assume they will result in a sweatshop atmosphere. Workers may have fears that management will tighten standards when performance improves or ... the shop’s performance can have a major impact on overall shipyard performance. In addition, the potential for accurate performance measurement was ... impact of this experimental productivity improvement technique on participants’ job attitudes is supported in the literature. White et al. (in
The Continual Intercomparison of Radiation Codes: Results from Phase I
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Cole, Jason; Iacono, Michael; Jin, Zhonghai; Li, Jiangnan; Manners, James; Raisanen, Petri;
2011-01-01
The computer codes that calculate the energy budget of solar and thermal radiation in Global Climate Models (GCMs), our most advanced tools for predicting climate change, have to be computationally efficient so as not to impose undue computational burden on climate simulations. By using approximations to gain execution speed, these codes sacrifice accuracy compared to more accurate, but also much slower, alternatives. International efforts to evaluate the approximate schemes have taken place in the past, but they have suffered from the drawback that the accurate standards were not themselves validated for performance. The manuscript summarizes the main results of the first phase of an effort called "Continual Intercomparison of Radiation Codes" (CIRC), where the cases chosen to evaluate the approximate models are based on observations and where we have ensured that the accurate models perform well when compared to solar and thermal radiation measurements. The effort is endorsed by international organizations such as the GEWEX Radiation Panel and the International Radiation Commission and has a dedicated website (i.e., http://circ.gsfc.nasa.gov) where interested scientists can freely download data and obtain more information about the effort's modus operandi and objectives. In a paper published in the March 2010 issue of the Bulletin of the American Meteorological Society, only a brief overview of CIRC was provided with some sample results. In this paper the analysis of submissions of 11 solar and 13 thermal infrared codes relative to accurate reference calculations obtained by so-called "line-by-line" radiation codes is much more detailed. We demonstrate that, while performance of the approximate codes continues to improve, significant issues still remain to be addressed for satisfactory performance within GCMs.
We hope that by identifying and quantifying shortcomings, the paper will help establish performance standards to objectively assess radiation code quality, and will guide the development of future phases of CIRC
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
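The symmetric rank-one update mentioned in the abstract is a standard quasi-Newton formula; a minimal sketch (with the customary skip safeguard, not the authors' RBDO code) is:

```python
import numpy as np

def sr1_update(H, s, y, eps=1e-8):
    """Symmetric rank-one (SR1) Hessian update:
        H+ = H + r r^T / (r^T s),  with  r = y - H s,
    where s is the step and y the change in gradient. The update is
    skipped when the denominator is tiny (the standard safeguard)."""
    r = y - H @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(s) * np.linalg.norm(r):
        return H  # skip to avoid numerical blow-up
    return H + np.outer(r, r) / denom

# For a quadratic with Hessian A, y = A s exactly; SR1 updates along
# directions spanning the space recover A without any second derivatives.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
H = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    H = sr1_update(H, s, A @ s)
```

This finite-termination property on quadratics is what makes SR1 attractive for cheaply approximating the Hessians of probabilistic constraints.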
Evaluating the accuracy of wear formulae for acetabular cup liners.
Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng
2010-02-01
This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.
GPS radio collar 3D performance as influenced by forest structure and topography
R. Scott Gamo; Mark A. Rumble; Fred Lindzey; Matt Stefanich
2000-01-01
Global Positioning System (GPS) telemetry enables biologists to obtain accurate and systematic locations of animals. Vegetation can block signals from satellites to GPS radio collars. Therefore, a vegetation dependent bias to telemetry data may occur which if quantified, could be accounted for. We evaluated the performance of GPS collars in 6 structural stage...
Faustmann and the forestry tradition of outcome-based performance measures
Peter J. Ince
1999-01-01
The concept of land expectation value developed by Martin Faustmann may serve as a paradigm for outcome-based performance measures in public forest management if the concept of forest equity value is broadened to include social and environmental benefits and costs, and sustainability. However, anticipation and accurate evaluation of all benefits and costs appears to...
Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates
ERIC Educational Resources Information Center
Im, Piljae
2009-01-01
A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to easily and accurately evaluate energy-efficient measures for K-5 schools, which would contribute to the…
ERIC Educational Resources Information Center
Dupeyrat, Caroline; Escribe, Christian; Huet, Nathalie; Regner, Isabelle
2011-01-01
The study examined how biases in self-evaluations of math competence relate to achievement goals and progress in math achievement. It was expected that performance goals would be related to overestimation and mastery goals to accurate self-assessments. A sample of French high-school students completed a questionnaire measuring their math…
Activity-based costing and its application in a Turkish university hospital.
Yereli, Ayşe Necef
2009-03-01
Resource management in hospitals is of increasing importance in today's global economy. Traditional accounting systems have become inadequate for managing hospital resources and accurately determining service costs. In contrast, the activity-based costing approach to hospital accounting is an effective cost management model that determines costs and evaluates financial performance across departments. Obtaining more accurate costs can enable hospitals to analyze and interpret costing decisions and make more accurate budgeting decisions. Traditional and activity-based costing approaches were compared using a cost analysis of gall bladder surgeries in the general surgery department of one university hospital in Manisa, Turkey. Copyright (c) AORN, Inc, 2009.
The solution of the point kinetics equations via converged accelerated Taylor series (CATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.; Picca, P.; Previti, A.
This paper deals with finding accurate solutions of the point kinetics equations, including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark-quality results are reported for several cases of interest, including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger-than-usual number of digits (9) is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)
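The Wynn-epsilon acceleration that CATS employs is itself a compact algorithm. The following generic sketch (not the CATS implementation) accelerates the partial sums of the slowly converging alternating series for ln 2:

```python
import math

def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm. Builds the epsilon table column by column
    from eps_{-1} = 0 and eps_0 = the partial sums, using
        eps_{k+1}^(n) = eps_{k-1}^(n+1) + 1 / (eps_k^(n+1) - eps_k^(n)).
    Even-order columns approximate the limit; the last entry of the final
    even column is the most accelerated estimate."""
    n = len(partial_sums)
    prev_prev = [0.0] * (n + 1)        # eps_{-1} column (all zeros)
    prev = list(partial_sums)          # eps_0 column
    for _ in range(n - 1):
        cur = [prev_prev[i + 1] + 1.0 / (prev[i + 1] - prev[i])
               for i in range(len(prev) - 1)]
        prev_prev, prev = prev, cur
    # odd-order columns are only intermediates; report the even one
    return prev if n % 2 == 1 else prev_prev

# ln 2 = 1 - 1/2 + 1/3 - ...: the raw partial sums converge very slowly
s, sums = 0.0, []
for k in range(11):
    s += (-1) ** k / (k + 1)
    sums.append(s)
estimate = wynn_epsilon(sums)[-1]
```

With only 11 terms the raw series is accurate to a few percent, while the accelerated estimate agrees with ln 2 to better than single-precision; CATS applies the same idea to the Taylor expansions of the neutron and precursor densities.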
[µCT analysis of mandibular molars before and after instrumentation by Reciproc files].
Ametrano, Gianluca; Riccitiello, Francesco; Amato, Massimo; Formisano, Anna; Muto, Massimo; Grassi, Roberta; Valletta, Alessandra; Simeone, Michele
2013-01-01
Cleaning and shaping are important steps in root canal treatment. A number of different methodologies have been developed to overcome these problems, including the introduction of nickel-titanium (NiTi) rotary instruments. In endodontics, NiTi instruments have been shown to significantly reduce procedural errors compared with manual instrumentation techniques. The efficiency of files is related to many factors. Previous investigations that used µCT analysis were hampered by insufficient resolution or incorrect projection. The new generation of μCT offers the best performance, with micron resolution and accurate measurement software for evaluating the anatomy of the root canal. The aim of the paper was to evaluate the efficiency of Reciproc files in root canal treatment, assessed before and after instrumentation by μCT analysis.
NASA Technical Reports Server (NTRS)
Saus, Joseph R.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.
2012-01-01
At the NASA Glenn Research Center, a characterization rig was designed and constructed for the purpose of evaluating high bandwidth liquid fuel modulation devices to determine their suitability for active combustion control research. Incorporated into the rig's design are features that approximate conditions similar to those that would be encountered by a candidate device if it were installed on an actual combustion research rig. The characterized dynamic performance measures obtained through testing in the rig are planned to be accurate indicators of expected performance in an actual combustion testing environment. To evaluate how well the characterization rig predicts fuel modulator dynamic performance, characterization rig data were compared with performance data for a fuel modulator candidate when the candidate was in operation during combustion testing. Specifically, the nominal and off-nominal performance data for a magnetostrictive-actuated proportional fuel modulation valve are described. Valve performance data were collected with the characterization rig configured to emulate two different combustion rig fuel feed systems. Fuel mass flows and pressures, fuel feed line lengths, and fuel injector orifice sizes were approximated in the characterization rig. Valve performance data were also collected with the valve modulating the fuel into the two combustor rigs. Comparison of the predicted and actual valve performance data shows that when the valve is operated near its design condition, the characterization rig can appropriately predict the installed performance of the valve. Improvements to the characterization rig and accompanying modeling activities are underway to more accurately predict performance, especially for the devices under development to modulate fuel into the much smaller fuel injectors anticipated in future lean-burning low-emissions aircraft engine combustors.
DOT National Transportation Integrated Search
2015-02-01
Evaluation of the actual performance (quality) of pavements requires : in situ nondestructive testing (NDT) techniques that can accurately : measure the most critical, objective, and sensitive properties of : pavement systems.
Sisco, Howard; Leventhal, Gloria
2007-12-01
The importance of accurate performance appraisals is central to many aspects of personnel activities in organizations. This study examined how past performance threatens the accuracy of evaluations of subsequent performance by raters differing in scores on field dependence. 162 college students were classified as Field-dependent (n = 81) or Field-independent (n = 81), using a median split on the Group Embedded Figures Test. Past performance (a lecture) was good or poor, presented directly via a videotape or indirectly via a written evaluation to the Field-independent or Field-dependent groups. Analysis indicated the hypothesized contrast effect (ratings in the opposite direction from that of prior ratings) in the Direct condition and an unexpected, albeit smaller, contrast effect in the Indirect condition. There were also differential effects of performance, presentation, and field dependency on ratings of the lecturer's style and ability.
MDCT assessment of resectability in hilar cholangiocarcinoma.
Ni, Qihong; Wang, Haolu; Zhang, Yunhe; Qian, Lijun; Chi, Jiachang; Liang, Xiaowen; Chen, Tao; Wang, Jian
2017-03-01
The purpose of this study is to investigate the value of multidetector computed tomography (MDCT) assessment of resectability in hilar cholangiocarcinoma, and to identify the factors associated with unresectability and accurate evaluation of resectability. From January 2007 to June 2015, a total of 77 consecutive patients were included. All patients had preoperative MDCT (with MPR and MinIP) and surgical treatment, and were pathologically proven with hilar cholangiocarcinoma. The MDCT images were reviewed retrospectively by two senior radiologists and one hepatobiliary surgeon. The surgical findings and pathologic results were considered to be the gold standard. The Chi square test was used to identify factors associated with unresectability and accurate evaluation of resectability. The sensitivity, specificity, and overall accuracy of MDCT assessment were 83.3 %, 75.9 %, and 80.5 %, respectively. The main causes of inaccuracy were incorrect evaluation of N2 lymph node metastasis (4/15) and distant metastasis (4/15). Bismuth type IV tumor, main or bilateral hepatic artery involvement, and main or bilateral portal vein involvement were highly associated with unresectability (P < 0.001). Patients without biliary drainage had higher accuracy of MDCT evaluation of resectability compared to those with biliary drainage (P < 0.001). MDCT is reliable for preoperative assessment of resectability in hilar cholangiocarcinoma. Bismuth type IV tumor and main or bilateral vascular involvement highly suggest the unresectability of hilar cholangiocarcinoma. Patients without biliary drainage have a more accurate MDCT evaluation of resectability. We suggest MDCT should be performed before biliary drainage to achieve an accurate evaluation of resectability in hilar cholangiocarcinoma.
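The reported operating characteristics follow from a standard 2x2 table. The snippet below recomputes them; the raw counts are reconstructed from the percentages and the cohort size of 77 (an assumption — the abstract reports only the percentages, and "resectable" is taken here as the positive class):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and overall accuracy from a 2x2 table
    (tp/fn: positives correctly/incorrectly called; tn/fp likewise)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Reconstructed counts (assumption): 48 resectable tumours, 40 correctly
# called resectable; 29 unresectable tumours, 22 correctly called so.
sens, spec, acc = diagnostic_metrics(tp=40, fn=8, tn=22, fp=7)
```

These counts reproduce the reported 83.3 % sensitivity, 75.9 % specificity, and 80.5 % overall accuracy, and are the unique integer table consistent with a cohort of 77.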
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.
Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.
2018-02-16
Silva, Ana Rita; Pinho, Maria Salomé; Macedo, Luís; Souchay, Céline; Moulin, Christopher
2017-06-01
There is a debate about the ability of patients with Alzheimer's disease to build an up-to-date representation of their memory function, which has been termed mnemonic anosognosia. This form of anosognosia is typified by accurate online evaluations of performance, but dysfunctional or outmoded representations of function more generally. We tested whether people with Alzheimer's disease could adapt or change their representations of memory performance across three different six-week memory training programs using global judgements of learning. We showed that whereas online assessments of performance were accurate, patients continued to make inaccurate overestimations of their memory performance. This was despite the fact that the magnitude of predictions shifted according to the memory training. That is, on some level patients showed an ability to change and retain a representation of performance over time, but it was a dysfunctional one. For the first time in the literature, we were able to support this claim with a correlational analysis, based on a large heterogeneous sample of 51 patients with Alzheimer's disease. The results point not to a failure to retain online metamemory information, but rather to this information never being used or incorporated into longer-term representations, supporting but refining the mnemonic anosognosia hypothesis.
NASA Astrophysics Data System (ADS)
Gao, Chen; Ding, Zhongan; Deng, Bofa; Yan, Shengteng
2017-10-01
Based on the characteristics of the electric energy data acquisition system (EEDAS), and considering the availability of each index and the connections among indices, a performance evaluation index system for the EEDAS is established covering three aspects: the master station system, the communication channel, and the terminal equipment. The comprehensive weight of each index is determined by combining a triangular-fuzzy-number analytic hierarchy process with the entropy weight method, so that both subjective preference and objective attributes are taken into account, making the comprehensive performance evaluation more reasonable and reliable. An example analysis shows that establishing a comprehensive evaluation index system by combining the analytic hierarchy process (AHP) and triangular fuzzy numbers (TFN) with the entropy method yields evaluation results that are not only convenient and practical but also more objective and accurate.
Bio-inspired adaptive feedback error learning architecture for motor control.
Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo
2012-10-01
This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, namely a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term makes it possible to use the proposed architecture even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales to simulated plants with a high number of degrees of freedom (7 DOFs).
NASA Technical Reports Server (NTRS)
Mckay, Charles
1991-01-01
This is the Configuration Management Plan for the AdaNet Repository Based Software Engineering (RBSE) contract. This document establishes the requirements and activities needed to ensure that the products developed for the AdaNet RBSE contract are accurately identified, that proposed changes to the product are systematically evaluated and controlled, that the status of all change activity is known at all times, and that the product achieves its functional performance requirements and is accurately documented.
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
ERIC Educational Resources Information Center
Willoughby, Meg; And Others
This report presents the findings of a collaboration between the Evaluation and Research Departments and the Arts Education Department of the Wake County (North Carolina) Public School System with teachers of the district to pilot performance assessments in art programs. In an effort to accurately assess the arts, various alternative…
Lesnik, Keaton Larson; Liu, Hong
2017-09-19
The complex interactions that occur in mixed-species bioelectrochemical reactors, like microbial fuel cells (MFCs), make accurate predictions of performance outcomes under untested conditions difficult. While direct correlations between any individual waste stream characteristic or microbial community structure and reactor performance have not been directly established, the increase in sequencing data and readily available computational power enables the development of alternate approaches. In the current study, 33 MFCs were evaluated under a range of conditions including eight separate substrates and three different wastewaters. Artificial Neural Networks (ANNs) were used to establish mathematical relationships between wastewater/solution characteristics, biofilm communities, and reactor performance. ANN models that incorporated biotic interactions predicted reactor performance outcomes more accurately than those that did not. The average percent error of power density predictions was 16.01 ± 4.35%, while the average percent errors of Coulombic efficiency and COD removal rate predictions were 1.77 ± 0.57% and 4.07 ± 1.06%, respectively. Predictions of power density improved to within 5.76 ± 3.16% percent error when taxonomic data were classified at the family rather than the class level. Results suggest that the microbial communities and performance of bioelectrochemical systems can be accurately predicted using data-mining, machine-learning techniques.
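The basic machinery behind such ANN regression models can be sketched with a single-hidden-layer network trained by gradient descent. Everything below is an illustrative stand-in — the feature set, target, and architecture are invented for the sketch and are not the study's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 33 "reactors", 5 features (e.g. conductivity, COD,
# family-level abundances); the target plays the role of power density.
X = rng.random((33, 5))
true_w = np.array([1.5, -0.8, 2.0, 0.3, -1.2])
y = X @ true_w + 0.05 * rng.standard_normal(33)

# One hidden tanh layer, full-batch gradient descent on squared error.
W1 = 0.5 * rng.standard_normal((5, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8)
b2 = 0.0
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    err = h @ W2 + b2 - y             # prediction residuals
    gW2 = h.T @ err / len(y)          # backpropagated MSE gradients
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

After training, `rmse` approaches the injected noise level, i.e. the network has learned the synthetic input-output mapping.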
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
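For the ANOVA-based procedure (one of the four compared in the abstract), the repeatability coefficient and the number of measurements needed to reach a target coefficient of determination can be sketched with the standard one-way ANOVA formulas. The synthetic data below are illustrative, not the soursop measurements:

```python
import numpy as np

def repeatability_anova(data):
    """r = var_g / (var_g + var_e) from a genotypes x measurements
    matrix, using one-way ANOVA expected mean squares."""
    g, m = data.shape
    grand = data.mean()
    ms_genotype = m * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    ms_error = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))
    var_g = (ms_genotype - ms_error) / m
    return var_g / (var_g + ms_error)

def min_measurements(r, target_r2=0.90):
    """Measurements needed for the genotype mean to reach determination target_r2."""
    return target_r2 * (1 - r) / ((1 - target_r2) * r)

# Synthetic check: 71 genotypes x 16 measurements, true r = 4 / (4 + 1) = 0.8
rng = np.random.default_rng(1)
effects = 2.0 * rng.standard_normal((71, 1))            # genotypic var = 4
data = 10.0 + effects + rng.standard_normal((71, 16))   # residual var = 1
r_hat = repeatability_anova(data)
```

For example, with r = 0.5 and a target determination of 90%, the formula gives 9 measurements.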
Luo, Mei; Wang, Hao; Lyu, Zhi
2017-12-01
Species distribution models (SDMs) are widely used by researchers and conservationists. Results of prediction from different models vary significantly, which makes model selection difficult for users. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., the area under the ROC curve (AUC), the true skill statistic (TSS), and Cohen's kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions given adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required a longer running time, and had less data-processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend the use of Biomod2 as much as possible.
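Two of the three agreement indicators used here come straight from the 2×2 confusion table of predicted vs. observed presence/absence (AUC additionally needs the ranked suitability scores, so it is omitted). A minimal sketch with made-up counts:

```python
def tss_and_kappa(tp, fn, tn, fp):
    """True skill statistic and Cohen's kappa from a 2x2
    presence/absence confusion table."""
    n = tp + fn + tn + fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    tss = sensitivity + specificity - 1.0
    p_observed = (tp + tn) / n
    # Chance agreement from the marginal totals.
    p_chance = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1.0 - p_chance)
    return tss, kappa

tss, kappa = tss_and_kappa(tp=40, fn=10, tn=35, fp=15)
print(round(tss, 3), round(kappa, 3))  # 0.5 0.5
```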
Evaluation of Low-Cost, Centimeter-Level Accuracy OEM GNSS Receivers
DOT National Transportation Integrated Search
2018-02-02
This report discusses the results of a study to quantify the performance of low-cost, centimeter-level accurate Global Navigation Satellite Systems (GNSS) receivers that have appeared on the market in the last few years. Centimeter-level accuracy is ...
HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.
Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo
2014-10-01
To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. The design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were performed. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods, which are fundamental for human sexual intercourse performance analysis. HuMOVE was validated through characterization using a controlled experimental test bench and evaluated in a human model during simulated sexual intercourse conditions. HuMOVE was shown to be a robust and quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization analysis on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model during simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The outcomes also met the project expectations in terms of usability and comfort, as evidenced by questionnaires that highlighted the low invasiveness and acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.
Lie, Désirée; May, Win; Richter-Lagha, Regina; Forest, Christopher; Banzali, Yvonne; Lohenry, Kevin
2015-01-01
Current scales for interprofessional team performance do not provide adequate behavioral anchors for performance evaluation. The Team Observed Structured Clinical Encounter (TOSCE) provides an opportunity to adapt and develop an existing scale for this purpose. We aimed to test the feasibility of using a retooled scale to rate performance in a standardized patient encounter and to assess faculty ability to accurately rate both individual students and teams. The 9-point McMaster-Ottawa Scale developed for a TOSCE was converted to a 3-point scale with behavioral anchors. Students from four professions were trained a priori to perform in teams of four at three different levels as individuals and teams. Blinded faculty raters were trained to use the scale to evaluate individual and team performances. G-theory was used to analyze ability of faculty to accurately rate individual students and teams using the retooled scale. Sixteen faculty, in groups of four, rated four student teams, each participating in the same TOSCE station. Faculty expressed comfort rating up to four students in a team within a 35-min timeframe. Accuracy of faculty raters varied (38-81% individuals, 50-100% teams), with errors in the direction of over-rating individual, but not team performance. There was no consistent pattern of error for raters. The TOSCE can be administered as an evaluation method for interprofessional teams. However, faculty demonstrate a 'leniency error' in rating students, even with prior training using behavioral anchors. To improve consistency, we recommend two trained faculty raters per station.
The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface
Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.
2014-01-01
Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users’ cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, and were able to place and label targets on maps more accurately, and more accurately estimate the virtual environment size. PMID:22184262
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
NASA Technical Reports Server (NTRS)
Baker, David P.
2002-01-01
The extent to which pilot instructors are trained to assess crew resource management (CRM) skills accurately during Line-Oriented Flight Training (LOFT) and Line Operational Evaluation (LOE) scenarios is critical. Pilot instructors must make accurate performance ratings to ensure that proper feedback is provided to flight crews and appropriate decisions are made regarding certification to fly the line. Furthermore, the Federal Aviation Administration's (FAA) Advanced Qualification Program (AQP) requires that instructors be trained explicitly to evaluate both technical and CRM performance (i.e., rater training) and also requires that proficiency and standardization of instructors be verified periodically. To address the critical need for effective pilot instructor training, the American Institutes for Research (AIR) reviewed the relevant research on rater training and, based on "best practices" from this research, developed a new strategy for training pilot instructors to assess crew performance. In addition, we explored new statistical techniques for assessing the effectiveness of pilot instructor training. The results of our research are briefly summarized below. This summary is followed by abstracts of articles and book chapters published under this grant.
A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations
Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary
2016-01-01
There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus, yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate a good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time to process all z-stacks was 42.3 h. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699
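The Dice similarity index used to score these segmentations is 2|A∩B| / (|A| + |B|) for two binary masks. A minimal sketch on toy 3D masks (the masks below are invented for illustration, not the study's z-stacks):

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity between two binary segmentation masks."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 3D masks (z-stacks) with partial overlap:
a = np.zeros((4, 8, 8), dtype=bool)
b = np.zeros((4, 8, 8), dtype=bool)
a[:, 2:6, 2:6] = True   # 4 * 4 * 4 = 64 voxels
b[:, 3:7, 3:7] = True   # 64 voxels, overlapping a in 4 * 3 * 3 = 36 voxels
print(round(dice_index(a, b), 3))  # 0.562
```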
Performance evaluation : balloon-type breath alcohol self tester for personal use
DOT National Transportation Integrated Search
1984-01-01
The accuracy of the only breath alcohol balloon-type self test device being marketed for personal use (Luckey Laboratories DM-2) was assessed in the laboratory. Data regarding this self-test device's ability to accurately classify an individual as ha...
EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPLANE
Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...
NASA Astrophysics Data System (ADS)
Karton, Amir; Martin, Jan M. L.
2012-10-01
Accurate isomerization energies are obtained for a set of 45 C8H8 isomers by means of the high-level, ab initio W1-F12 thermochemical protocol. The 45 isomers involve a range of hydrocarbon functional groups, including (linear and cyclic) polyacetylene, polyyne, and cumulene moieties, as well as aromatic, anti-aromatic, and highly-strained rings. Performance of a variety of DFT functionals for the isomerization energies is evaluated. This proves to be a challenging test: only six of the 56 tested functionals attain root mean square deviations (RMSDs) below 3 kcal mol-1 (the performance of MP2), namely: 2.9 (B972-D), 2.8 (PW6B95), 2.7 (B3PW91-D), 2.2 (PWPB95-D3), 2.1 (ωB97X-D), and 1.2 (DSD-PBEP86) kcal mol-1. Isomers involving highly-strained fused rings or long cumulenic chains provide a 'torture test' for most functionals. Finally, we evaluate the performance of composite procedures (e.g. G4, G4(MP2), CBS-QB3, and CBS-APNO), as well as that of standard ab initio procedures (e.g. MP2, SCS-MP2, MP4, CCSD, and SCS-CCSD). Both connected triples and post-MP4 singles and doubles are important for accurate results. SCS-MP2 actually outperforms MP4(SDQ) for this problem, while SCS-MP3 yields similar performance as CCSD and slightly bests MP4. All the tested empirical composite procedures show excellent performance with RMSDs below 1 kcal mol-1.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Commutability of food microbiology proficiency testing samples.
Abdelmassih, M; Polet, M; Goffaux, M-J; Planchon, V; Dierick, K; Mahillon, J
2014-03-01
Food microbiology proficiency testing (PT) is a useful tool to assess the analytical performances among laboratories. PT items should be close to routine samples to accurately evaluate the acceptability of the methods. However, most PT providers distribute exclusively artificial samples such as reference materials or irradiated foods. This raises the issue of the suitability of these samples because the equivalence, or 'commutability', between results obtained on artificial vs. authentic food samples has not been demonstrated. In the clinical field, the use of noncommutable PT samples has led to erroneous evaluation of the performances when different analytical methods were used. This study aimed to provide a first assessment of the commutability of samples distributed in food microbiology PT. REQUASUD and IPH organized 13 food microbiology PTs involving 10-28 participants. Three types of PT items were used: genuine food samples, sterile food samples and reference materials. The commutability of the artificial samples (reference material or sterile samples) was assessed by plotting the distribution of the results on natural and artificial PT samples. This comparison highlighted matrix-correlated issues when nonfood matrices, such as reference materials, were used. Artificially inoculated food samples, on the other hand, raised only isolated commutability issues. In the organization of a PT scheme, authentic or artificially inoculated food samples are necessary to accurately evaluate the analytical performances. Reference materials, used as PT items because of their convenience, may present commutability issues leading to inaccurate, penalizing conclusions for methods that would have provided accurate results on food samples. For the first time, the commutability of food microbiology PT samples was investigated. The nature of the samples provided by the organizer turned out to be an important factor because matrix effects can impact the analytical results.
© 2013 The Society for Applied Microbiology.
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
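For context, the common baseline these authors argue against is the Wolpaw information-transfer rate, which bakes in exactly the three assumptions listed: equiprobable characters, memoryless selection, and uniformly random errors. A sketch of that baseline (the paper's mutual-information metric replaces these assumptions with language-model priors and a systematic-error model, which is not reproduced here):

```python
import math

def wolpaw_bits_per_selection(n, p):
    """Bits per selection for an n-character speller with accuracy p,
    assuming equiprobable, memoryless selections and uniform errors."""
    if not (0.0 < p <= 1.0) or n < 2:
        raise ValueError("need n >= 2 and 0 < p <= 1")
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

# A 6x6 P300 matrix speller (36 characters) at 90% selection accuracy:
print(round(wolpaw_bits_per_selection(36, 0.9), 3))  # 4.188
```

Multiplying by selections per minute gives the familiar bits/min figure; the abstract's point is that this number overstates throughput when errors are systematic or characters are predictable from context.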
Characteristic Evaluation on Cooling Performance of Thermoelectric Modules.
Seo, Sae Rom; Han, Seungwoo
2015-10-01
The aim of this work is to develop a performance evaluation system for thermoelectric cooling modules. We describe the design of such a system, composed of a vacuum chamber with a heat sink along with a metal block to measure the absorbed heat Qc. The system has a simpler structure than existing water-cooled or air-cooled systems. The temperature difference between the cold and hot sides of the thermoelectric module ΔT can be accurately measured without any effects due to convection, and the temperature equilibrium time is minimized compared to a water-cooled system. The evaluation system described here can be used to measure characteristic curves of Qc as a function of ΔT, as well as the current-voltage relations. High-performance thermoelectric systems can therefore be developed using optimal modules evaluated with this system.
Ranking Reputation and Quality in Online Rating Systems
Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng
2014-01-01
How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of spammers. In the literature, many well-performing iterative ranking methods have been proposed. These methods can effectively recognize unreliable users and reduce their weight in judging the quality of objects, and finally lead to a more accurate evaluation of the online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users, and two penalty factors make the algorithm resistant to malicious behaviors. Validation of our method is performed in both artificial and real user-object bipartite networks. PMID:24819119
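The core loop of such reputation-based iterative ranking alternates between reputation-weighted quality estimates and deviation-based user reputations. The sketch below is a generic version for illustration only; the paper's specific reputation-redistribution step and penalty factors are not reproduced:

```python
import numpy as np

def iterative_ranking(ratings, n_iter=50):
    """ratings: users x objects matrix (every user rates every object
    here, for simplicity). Alternates quality <- reputation-weighted
    mean rating, reputation <- inverse mean squared deviation of a
    user's ratings from the current quality estimates."""
    n_users, _ = ratings.shape
    reputation = np.ones(n_users)
    for _ in range(n_iter):
        quality = reputation @ ratings / reputation.sum()
        mse = ((ratings - quality) ** 2).mean(axis=1)
        reputation = 1.0 / (mse + 1e-6)
    return quality, reputation

# Synthetic check: five honest raters plus one random spammer.
rng = np.random.default_rng(2)
true_quality = 5.0 * rng.random(20)
honest = true_quality + 0.1 * rng.standard_normal((5, 20))
spammer = 5.0 * rng.random((1, 20))
quality, reputation = iterative_ranking(np.vstack([honest, spammer]))
```

On this toy network the spammer ends up with a far lower reputation than any honest rater, so the final quality estimates track the true values closely.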
Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge
2014-04-01
The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.
Injection-depth-locking axial motion guided handheld micro-injector using CP-SSOCT.
Cheon, Gyeong Woo; Huang, Yong; Kwag, Hye Rin; Kim, Ki-Young; Taylor, Russell H; Gehlbach, Peter L; Kang, Jin U
2014-01-01
This paper presents a handheld micro-injector system using common-path swept source optical coherence tomography (CP-SSOCT) as a distal sensor with highly accurate injection-depth locking. To achieve real-time, highly precise, and intuitive freehand control, the system used a graphics processing unit (GPU) to process the oversampled OCT signal with high throughput, together with a smart customized motion-monitoring control algorithm. A performance evaluation was conducted with 60 insertions and fluorescein dye injection tests to show how accurately the system can guide the needle and lock to the target depth. The evaluation tests show our system can guide the injection needle to the desired depth with an average deviation error of 4.12 µm while injecting 50 nl of fluorescein dye.
Temporal Relatedness: Personality and Behavioral Correlates
ERIC Educational Resources Information Center
Getsinger, Stephen H.
1975-01-01
Two studies explored the relationship of temporal relatedness to self actualization, sex, and certain temporal behaviors. Subjects who obtained higher time-relatedness scores demonstrated greater self-actualization, evaluated the present time mode more positively, overestimated time intervals in an estimation task, and performed less accurately in…
NASA Astrophysics Data System (ADS)
Streiter, R.; Wanielik, G.
2013-07-01
The construction of highways and federal roadways is subject to many restrictions and design rules, with a focus on safety, comfort, and smooth driving. Unfortunately, the planning information for roadways and their actual constitution, course, number of lanes, and lane widths is often uncertain or unavailable. Because digital road-map databases have attracted much interest in recent years and have become a major cornerstone of innovative Advanced Driver Assistance Systems (ADAS), the demand for accurate and detailed road information has increased considerably. Within this project, a measurement system for collecting highly accurate road data was developed. This paper gives an overview of the sensor configuration of the measurement vehicle, introduces the implemented algorithms, and shows some applications implemented in the postprocessing platform. The aim is to recover the original parametric description of the roadway; the performance of the measurement system is evaluated against several original road-construction records.
Evaluation of various thrust calculation techniques on an F404 engine
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1990-01-01
In support of performance testing of the X-29A aircraft at NASA Ames, various thrust calculation techniques were developed and evaluated for use on the F404-GE-400 engine. The engine was thrust calibrated at NASA Lewis. Results from these tests were used to correct the manufacturer's in-flight thrust program to calculate thrust more accurately for the specific test engine. Data from these tests were also used to develop an independent, simplified technique for real-time thrust calculation. Comparisons were also made to thrust values predicted by the engine specification model. Results indicate uninstalled gross thrust accuracies on the order of 1 to 4 percent for the various in-flight thrust methods. The various thrust calculations are described, and their usage, uncertainty, and measured accuracies are explained. In addition, the advantages of a real-time thrust algorithm for flight-test use and the importance of an accurate thrust calculation to aircraft performance analysis are described. Finally, actual data obtained from flight test are presented.
Properties of targeted preamplification in DNA and cDNA quantification.
Andersson, Daniel; Akrap, Nina; Svec, David; Godfrey, Tony E; Kubista, Mikael; Landberg, Göran; Ståhlberg, Anders
2015-01-01
Quantification of small numbers of molecules often requires preamplification to generate enough copies for accurate downstream quantification. Here, we studied experimental parameters in targeted preamplification and their effects on downstream quantitative real-time PCR (qPCR). To evaluate different strategies, we monitored the preamplification reaction in real time using SYBR Green detection chemistry followed by melting-curve analysis. Furthermore, individual targets were evaluated by qPCR. The preamplification reaction performed best when a large number of primer pairs was included in the primer pool. In addition, preamplification efficiency, reproducibility and specificity were found to depend on the number of template molecules present, primer concentration, annealing time and annealing temperature. The amount of nonspecific PCR products could also be reduced about 1000-fold by including bovine serum albumin, glycerol and formamide in the preamplification. On the basis of our findings, we provide recommendations on how to perform robust and highly accurate targeted preamplification in combination with qPCR or next-generation sequencing.
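Preamplification and qPCR performance of the kind evaluated here is commonly summarized by the amplification efficiency derived from the slope of a dilution-series standard curve, E = 10^(-1/slope) - 1. This is the standard qPCR relationship; the example data below are invented for illustration, not taken from this study.

```python
import numpy as np

def amplification_efficiency(log10_dilutions, cq_values):
    # Standard curve: Cq vs. log10(input); E = 10**(-1/slope) - 1.
    slope, _ = np.polyfit(log10_dilutions, cq_values, 1)
    return 10 ** (-1.0 / slope) - 1.0

# Ten-fold dilution series; a 100%-efficient reaction shifts Cq by
# ~3.32 cycles per 10-fold dilution (values below are invented).
dilutions = np.array([0.0, -1.0, -2.0, -3.0])   # log10 relative input
cqs = np.array([18.0, 21.32, 24.64, 27.96])
eff = amplification_efficiency(dilutions, cqs)
```

An efficiency near 1.0 (100%) indicates doubling per cycle; deviations of the kind discussed in the abstract (template amount, primer concentration, annealing conditions) show up as a changed slope.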
Suspected leaking abdominal aortic aneurysm: use of sonography in the emergency room.
Shuman, W P; Hastrup, W; Kohler, T R; Nyberg, D A; Wang, K Y; Vincent, L M; Mack, L A
1988-07-01
To determine the value of sonography in the emergent evaluation of suspected leaking abdominal aortic aneurysms, the authors examined 60 patients in the emergency department using sonography and a protocol involving advance radio notification from the ambulance; arrival of sonographic personnel and equipment in the triage room before patient arrival; and, during other triage activities, rapid sonographic evaluation of the aorta for aneurysm and of the paraaortic region for extraluminal blood. Sonographic findings were correlated with surgical results and clinical outcome. When performed under these circumstances, sonography was accurate in demonstrating presence or absence of aneurysm (98%), but its sensitivity for extraluminal blood was poor (4%). A combination of sonographic confirmation of aneurysm, abdominal pain, and unstable hemodynamic condition resulted in the correct decision to perform emergent surgery in 21 of 22 patients (95%). An abbreviated sonographic examination done in the emergency room can provide accurate, useful information about the presence of aneurysm; this procedure does not significantly delay triage of these patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
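A minimal sketch of the surrogate-model idea: expand a scalar property in probabilists' Hermite polynomials of a Gaussian "conformational" variable and fit the coefficients from samples. For brevity, the fit below uses ordinary least squares where the paper employs compressive sensing (sparse L1 recovery), and the one-dimensional toy property is invented.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_gpc(xi, y, degree=3):
    """xi: (n,) standard-normal samples, y: (n,) property values."""
    V = hermevander(xi, degree)              # basis He_0 .. He_degree
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coef

def eval_gpc(coef, xi):
    return hermevander(xi, len(coef) - 1) @ coef

# Toy "property": exactly 1 + 0.5*He_1(xi) + 0.1*He_2(xi).
rng = np.random.default_rng(1)
xi = rng.standard_normal(200)
y = 1.0 + 0.5 * xi + 0.1 * (xi ** 2 - 1)
coef = fit_gpc(xi, y)
# For an orthonormal gPC basis, the He_0 coefficient is the property's mean.
```

The response surface `eval_gpc` can then be queried at arbitrary conformational states, which is the capability the abstract contrasts with sparse-grid collocation.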
MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology
Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota
2015-01-01
We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from miRBase 19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes, and MiRdup, as well as a Simple Geometric Locator, when applied on the same training datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and generalizes very well to plant hairpins without any special optimization; (c) the tool has a number of important applications, such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation to recent confidence evaluation methods used in miRBase, MiRduplexSVM was successful in identifying high-confidence potential miRNAs. PMID:25961860
Frame-of-Reference Training: Establishing Reliable Assessment of Teaching Effectiveness.
Newman, Lori R; Brodsky, Dara; Jones, Richard N; Schwartzstein, Richard M; Atkins, Katharyn Meredith; Roberts, David H
2016-01-01
Frame-of-reference (FOR) training has been used successfully to teach faculty how to produce accurate and reliable workplace-based ratings when assessing a performance. We engaged 21 Harvard Medical School faculty members in our pilot and implementation studies to determine the effectiveness of using FOR training to assess health professionals' teaching performances. All faculty were novices at rating their peers' teaching effectiveness. Before FOR training, we asked participants to evaluate a recorded lecture using a criterion-based peer assessment of medical lecturing instrument. At the start of training, we discussed the instrument and emphasized its precise behavioral standards. During training, participants practiced rating lectures and received immediate feedback on how well they categorized and scored performances as compared with expert-derived scores of the same lectures. At the conclusion of the training, we asked participants to rate a post-training recorded lecture to determine agreement with the experts' scores. Participants and experts had greater rating agreement for the post-training lecture compared with the pretraining lecture. Through this investigation, we determined that FOR training is a feasible method to teach faculty how to accurately and reliably assess medical lectures. Medical school instructors and continuing education presenters should have the opportunity to be observed and receive feedback from trained peer observers. Our results show that it is possible to use FOR rater training to teach peer observers how to accurately rate medical lectures. The process is time efficient and offers the prospect for assessment and feedback beyond traditional learner evaluation of instruction.
Wave Rotor Research and Technology Development
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1998-01-01
Wave rotor technology offers the potential to increase the performance of gas turbine engines significantly, within the constraints imposed by current material temperature limits. The wave rotor research at the NASA Lewis Research Center is a three-element effort: 1) Development of design and analysis tools to accurately predict the performance of wave rotor components; 2) Experiments to characterize component performance; 3) System integration studies to evaluate the effect of wave rotor topping on the gas turbine engine system.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-03-24
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Quality of Online Resources for Pancreatic Cancer Patients.
De Groot, Lauren; Harris, Ilene; Regehr, Glenn; Tekian, Ara; Ingledew, Paris-Ann
2017-10-18
The Internet is increasingly a source of information for pancreatic cancer patients. This disease is usually diagnosed at an advanced stage; therefore, timely access to high-quality information is critical. Our purpose is to systematically evaluate the information available to pancreatic cancer patients on the Internet. An Internet search using the term "pancreatic cancer" was performed with the meta-search engines "Dogpile", "Yippy" and "Google". The top 100 websites returned by the search engines were evaluated using a validated structured rating tool. Inter-rater reliability was evaluated using kappa statistics, and results were analyzed using descriptive statistics. Among the 100 websites evaluated, etiology/risk factors and symptoms were the most accurately covered topics (70% and 67% of websites). Prevention, treatment and prognosis were the least accurate sections (55%, 55% and 43% of websites). Prevention and prognosis were also the least likely to be covered, with 63 and 51 websites covering these topics, respectively. Only 40% of websites identified an author. Twenty-two percent of websites were at a university reading level. The majority of online information is accurate but incomplete. Websites may lack information on prognosis. Many websites are outdated and lack author information, and readability levels are inappropriate. This knowledge can inform the dialogue between healthcare providers and patients.
Accurate evaluation and analysis of functional genomics data and methods
Greene, Casey S.; Troyanskaya, Olga G.
2016-01-01
The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Steve E.
The accuracy and precision of a new Isolok sampler configuration were evaluated using a recirculation flow loop. The evaluation was performed using two slurry simulants of Hanford high-level tank waste. Sample concentrations were compared to reference samples collected simultaneously by a two-stage Vezin sampler. The capability of the Isolok sampler to collect samples that accurately reflect the contents of the test loop improved; biases between the Isolok and Vezin samples were greatly reduced for fast-settling particles.
Standardized Technical Data Survey (STDS) for Aerial Refueling
2016-09-06
the KC-135 and the German Tornado. The tanker/receiver combination was certified by a technical evaluation of the performance interface survey, face...another. That document was first used in assessing the compatibility of the KC-135 and the German Tornado. The survey questions, when accurately answered
DOT National Transportation Integrated Search
2009-07-01
A dog-bone direct tension test (DBDT) to accurately determine tensile properties of asphalt concrete, : including OGFC, was conceived, developed and validated. Resilient modulus, creep, and strength tests : were performed at multiple temperatures on ...
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.
2005-01-01
An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
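The core idea of a constant-gain EKF, reduced to a toy scalar example: a single precomputed gain is reused at every measurement update instead of propagating a covariance matrix online. The sensor map and gain value below are illustrative stand-ins, not an engine model.

```python
import numpy as np

def cgekf(measurements, h, K, x0=0.5):
    """Estimate a slowly varying state from nonlinear measurements y = h(x) + v,
    using one fixed gain K for every update (no covariance propagation)."""
    x = x0
    estimates = []
    for y in measurements:
        # Prediction: random-walk model, the state is assumed to persist.
        # Update: correct with the constant gain times the innovation.
        x = x + K * (y - h(x))
        estimates.append(x)
    return np.array(estimates)

# Toy mildly nonlinear sensor map and a constant true state of 2.0.
h = lambda x: x + 0.3 * np.sin(x)
rng = np.random.default_rng(2)
true_x = 2.0
ys = h(true_x) + 0.01 * rng.standard_normal(300)
est = cgekf(ys, h, K=0.1)
```

The appeal, as in the abstract, is computational: the per-step cost is a single model evaluation and a gain multiplication, at the price of a gain that is only optimal near the design condition.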
NASA Technical Reports Server (NTRS)
Hippensteele, S. A.; Russell, L. M.; Stepka, F. S.
1981-01-01
Commercially available elements of a composite consisting of a plastic sheet coated with liquid crystal, another sheet with a thin layer of a conducting material (gold or carbon), and copper bus bar strips were evaluated and found to provide a simple, convenient, accurate, and low-cost measuring device for use in heat transfer research. The particular feature of the composite is its ability to obtain local heat transfer coefficients and isotherm patterns that provide visual evaluation of the thermal performances of turbine blade cooling configurations. Examples of the use of the composite are presented.
Adaptation of Mesoscale Weather Models to Local Forecasting
NASA Technical Reports Server (NTRS)
Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.
2003-01-01
Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program.
The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.
Structural Loads Analysis for Wave Energy Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
2017-06-03
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analyses and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.
Kahan, Meldon; Liu, Eleanor; Borsoi, Diane; Wilson, Lynn; Brewster, Joan M; Sobell, Mark B; Sobell, Linda C
2004-12-01
Simulated patients (SPs) are commonly used to evaluate medical trainees, and unannounced SPs provide an accurate measure of physician performance. The objectives were to determine the effects of detection of SPs on physician performance and to identify factors leading to detection. Fifty-six family medicine residents were each visited by two unannounced SPs presenting with alcohol-induced hypertension or insomnia. Residents were then surveyed on their detection of SPs. SPs were detected on 45 of 104 visits. Inner-city clinics had higher detection rates than middle-class clinics. Residents' checklist and global rating scores were substantially higher on detected than undetected visits, for both between-subject and within-subject comparisons. The most common reasons for detection concerned SP demographics and behavior: the SP "did not act like a drinker" or was of a different social class than the typical clinic patient. Multi-clinic studies involving residents experienced with SPs should ensure that the SP role and behavior conform to physician expectations and the demographics of the clinic. SP station testing does not accurately reflect physicians' actual clinical behavior and should not be relied on as the primary method of evaluation. The study also suggests that physicians' poor performance in identifying and managing alcohol problems is not entirely due to lack of skill, as they demonstrated greater clinical skills when they became aware that they were being evaluated. Physicians' clinical priorities, sense of responsibility and other attitudinal determinants of their behavior should be addressed when training physicians on the management of alcohol problems.
Moseley, Lorimer
2003-05-01
To identify why reconceptualization of the problem is difficult in chronic pain, this study aimed to evaluate whether (1) health professionals and patients can understand currently accurate information about the neurophysiology of pain and (2) health professionals accurately estimate the ability of patients to understand the neurophysiology of pain. Knowledge tests were completed by 276 patients with chronic pain and 288 professionals either before (untrained) or after (trained) education about the neurophysiology of pain. Professionals estimated typical patient performance on the test. Untrained participants performed poorly (mean +/- standard deviation, 55% +/- 19% and 29% +/- 12% for professionals and patients, respectively), compared to their trained counterparts (78% +/- 21% and 61% +/- 19%, respectively). The estimated patient score (46% +/- 18%) was less than the actual patient score (P <.005). The results suggest that professionals and patients can understand the neurophysiology of pain but professionals underestimate patients' ability to understand. The implications are that (1) a poor knowledge of currently accurate information about pain and (2) the underestimation of patients' ability to understand currently accurate information about pain represent barriers to reconceptualization of the problem in chronic pain within the clinical and lay arenas.
Analysis Tools for CFD Multigrid Solvers
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Diskin, Boris
2004-01-01
Analysis tools are needed to guide the development and evaluate the performance of multigrid solvers for the fluid flow equations. Classical analysis tools, such as local mode analysis, often fail to accurately predict performance. Two-grid analysis tools, herein referred to as Idealized Coarse Grid and Idealized Relaxation iterations, have been developed and evaluated within a pilot multigrid solver. These new tools are applicable to general systems of equations and/or discretizations and point to problem areas within an existing multigrid solver. Idealized Relaxation and Idealized Coarse Grid are applied in developing textbook-efficient multigrid solvers for incompressible stagnation flow problems.
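For context, a textbook two-grid cycle for the 1D Poisson problem, the setting in which two-grid analysis tools of this kind are usually explained. This is a generic sketch (weighted Jacobi smoothing, full-weighting restriction, direct coarse solve, linear interpolation), not the solver from the report.

```python
import numpy as np

def jacobi(u, f, h, nu, omega=2.0 / 3.0):
    # Weighted Jacobi smoothing for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i.
    for _ in range(nu):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    n = len(u)                      # n = 2**k + 1 points, zero boundaries
    u = jacobi(u, f, h, nu=3)       # pre-smoothing
    r = np.zeros(n)                 # residual of the discrete -u'' = f
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    nc = (n + 1) // 2
    rc = np.zeros(nc)               # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    H, m = 2 * h, nc - 2            # direct solve on the coarse grid
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (H * H)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros(n)                 # linear-interpolation prolongation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, nu=3)  # coarse correction + post-smoothing

# Demo: -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x).
n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, h)
```

Idealized-relaxation and idealized-coarse-grid analyses isolate the two halves of this cycle (the smoother and the coarse-grid correction) to locate which half degrades a solver's convergence.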
Performance Evaluation of Real-Time Precise Point Positioning Method
NASA Astrophysics Data System (ADS)
Alcay, Salih; Turgut, Muzeyyen
2017-12-01
Post-processed Precise Point Positioning (PPP) is a well-known zero-difference positioning method that provides accurate and precise results. After experimental tests, the IGS Real Time Service (RTS) officially began providing real-time orbit and clock products to the GNSS community, enabling real-time (RT) PPP applications. Different software packages can be used for RT-PPP. In this study, three IGS stations are used to evaluate the performance of RT-PPP. Results obtained with the BKG Ntrip Client (BNC) software v2.12 are examined in terms of both accuracy and precision.
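RT-PPP evaluations of this kind typically report accuracy as the RMS error of the position series against the known station coordinate and precision as the scatter of the series about its own mean. A small sketch with invented numbers (the bias and noise levels are illustrative, not from this study):

```python
import numpy as np

def accuracy_precision(series, truth):
    err = series - truth
    accuracy = np.sqrt(np.mean(err ** 2))   # RMS error vs. the known truth
    precision = np.std(series)              # repeatability about the mean
    return accuracy, precision

# Demo: a simulated coordinate series with a 2 cm bias and 1 cm noise.
rng = np.random.default_rng(3)
truth = 10.0                                # known station coordinate, m
series = truth + 0.02 + 0.01 * rng.standard_normal(1000)
acc, prec = accuracy_precision(series, truth)
# Accuracy reflects both bias and noise; precision reflects noise only.
```

Reporting both quantities separates a systematic offset in the real-time products from the repeatability of the solution, which is why the abstract distinguishes accuracy from precision.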
Thermal and optical performance of encapsulation systems for flat-plate photovoltaic modules
NASA Technical Reports Server (NTRS)
Minning, C. P.; Coakley, J. F.; Perrygo, C. M.; Garcia, A., III; Cuddihy, E. F.
1981-01-01
The electrical power output from a photovoltaic module is strongly influenced by the thermal and optical characteristics of the module encapsulation system. Described are the methodology and computer model for performing fast and accurate thermal and optical evaluations of different encapsulation systems. The computer model is used to evaluate cell temperature, solar energy transmittance through the encapsulation system, and electric power output for operation in a terrestrial environment. Extensive results are presented for both superstrate-module and substrate-module design schemes which include different types of silicon cell materials, pottants, and antireflection coatings.
[Using on-farm records to evaluate the reproductive performance in dairy herds].
Iwersen, M; Klein, D; Drillich, M
2012-01-01
The designated abolition of the European milk quota system on April 1, 2015 is expected to have tremendous effects on the business environment of most dairy farms. Meanwhile, farmers should use weak-point analyses to identify "bottlenecks" within their production and herd-management systems. As experts in herd health and herd performance, veterinarians should give advice to their clients based on sound analyses of production data. Therefore, accurate and reliable on-farm records are needed. This paper focuses on data management, especially data collection, and addresses concepts for the evaluation of reproduction records.
Dotson, Wesley H; Rasmussen, Eric E; Shafer, Autumn; Colwell, Malinda; Densley, Rebecca L; Brewer, Adam T; Alonzo, Marisol C; Martinez, Laura A
2017-03-01
Daniel Tiger's Neighborhood is a children's television show incorporating many elements of video modeling, an intervention that can teach skills to children with autism spectrum disorders (ASD). This study evaluated the impact of watching Daniel Tiger's Neighborhood episodes on the accurate performance of trying new foods and stopping play politely by two five-year-old children with ASD. Both children showed improved performance of the skills only following exposure to episodes of Daniel Tiger's Neighborhood, suggesting that watching episodes can help children with ASD learn specific skills.
Quasi-optical grids with thin rectangular patch/aperture elements
NASA Technical Reports Server (NTRS)
Wu, Te-Kao
1993-01-01
Theoretical analysis is presented for an efficient and accurate performance evaluation of quasi-optical grids composed of thin rectangular patch/aperture elements with/without a dielectric substrate/superstrate. The convergence rate of this efficient technique is improved by an order of magnitude by incorporating the approximate edge conditions in the basis functions of the integral equation solution. Also presented are applications of this analytical technique to the design and performance evaluation of coupling grids and beam splitters in optical systems, as well as thermal protection sunshields used in the communication systems of satellites and spacecraft.
Number of repetitions for evaluating technological traits in cotton genotypes.
Carvalho, L P; Farias, F J C; Morello, C L; Rodrigues, J I S; Teodoro, P E
2016-08-19
With the changes in spinning technology, technological cotton traits, such as fiber length, fiber uniformity, fiber strength, fineness, fiber maturity, percentage of fibers, and short fiber index, are of great importance for selecting cotton genotypes. However, for accurate discrimination of genotypes, it is important that these traits are evaluated with the best possible accuracy. The aim of this study was to determine the number of measurements (repetitions) needed to accurately assess technological traits of cotton genotypes. Seven experiments were conducted in four Brazilian states (Ceará, Rio Grande do Norte, Goiás, and Mato Grosso do Sul). We used nine brown and two white colored fiber lines in a randomized block design with four replications. After verifying the assumptions of residual normality and homogeneity of variances, analysis of variance was performed to estimate the repeatability coefficient and to calculate the number of repetitions. Trials with four replications were found to be sufficient to identify superior cotton genotypes for all measured traits except the short fiber index, with a selective accuracy >90% and at least 81% accuracy in predicting their actual value. These results allow more accurate and reliable results in future research evaluating technological traits in cotton genotypes.
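The link between a repeatability coefficient and the number of repetitions needed can be sketched with the standard Spearman-Brown-type relationship: the reliability of the mean of n repetitions rises with n, and solving for n gives the number of measurements required for a target reliability. This is a generic illustration of the calculation, not the authors' code; the function names are ours.

```python
import math

def reliability_of_mean(r1, n):
    """Spearman-Brown: reliability of the mean of n repetitions,
    given the single-measurement repeatability coefficient r1."""
    return n * r1 / (1.0 + (n - 1.0) * r1)

def repetitions_needed(r1, target):
    """Smallest number of repetitions whose mean reaches the target reliability."""
    n = target * (1.0 - r1) / (r1 * (1.0 - target))
    # small tolerance guards against floating-point round-up at integer solutions
    return math.ceil(n - 1e-9)
```

For example, a trait with single-measurement repeatability 0.5 needs nine repetitions to reach 90% reliability of the mean.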
Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sager, D.
1973-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.
Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J
2008-07-20
PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data process performed by digital radiographic systems based on flat panel detectors (FPDs). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate methods for measuring the basic imaging properties of the FPD system. CONTENT OF THE EXHIBIT: 1. The relationship between the matrix sizes of the output image and the image data acquired on the FPD, which changes automatically depending on the selected image size (FOV). 2. An explanation of the image data processing of "resampling". 3. The evaluation results for the basic imaging properties of the FPD system using two types of DICOM images to which "resampling" was applied: characteristic curves, presampled MTFs, noise power spectra, and detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded when evaluating the basic imaging properties of a flat panel detector system. 2. The basic imaging properties should be measured using DICOM images to which no "resampling" has been applied.
Performance of vegetation indices from Landsat time series in deforestation monitoring
NASA Astrophysics Data System (ADS)
Schultz, Michael; Clevers, Jan G. P. W.; Carter, Sarah; Verbesselt, Jan; Avitabile, Valerio; Quang, Hien Vu; Herold, Martin
2016-10-01
The performance of Landsat time series (LTS) of eight vegetation indices (VIs) was assessed for monitoring deforestation across the tropics. Three sites were selected based on differing remote sensing observation frequencies, deforestation drivers and environmental factors. The LTS of each VI was analysed using the Breaks For Additive Season and Trend (BFAST) Monitor method to identify deforestation. A robust reference database was used to evaluate the performance regarding spatial accuracy, sensitivity to observation frequency and combined use of multiple VIs. The canopy cover sensitive Normalized Difference Fraction Index (NDFI) was the most accurate. Among those tested, wetness related VIs (Normalized Difference Moisture Index (NDMI) and the Tasselled Cap wetness (TCw)) were spatially more accurate than greenness related VIs (Normalized Difference Vegetation Index (NDVI) and Tasselled Cap greenness (TCg)). When VIs were fused on feature level, spatial accuracy was improved and overestimation of change reduced. NDVI and NDFI produced the most robust results when observation frequency varies.
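The normalized-difference indices compared above all follow one simple pattern over pairs of spectral bands. A minimal sketch (not code from the paper; NDFI additionally requires spectral-unmixing endmember fractions and is not shown, and band choices are the conventional Landsat ones):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndmi(nir, swir1):
    """Normalized Difference Moisture Index: (NIR - SWIR1) / (NIR + SWIR1)."""
    nir, swir1 = np.asarray(nir, float), np.asarray(swir1, float)
    return (nir - swir1) / (nir + swir1)
```

Both functions accept scalars or whole reflectance arrays, so an index image for a Landsat scene is one call per time step.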
Test techniques for model development of repetitive service energy storage capacitors
NASA Astrophysics Data System (ADS)
Thompson, M. C.; Mauldin, G. H.
1984-03-01
The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature, creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.
INKER, Lesley A; WYATT, Christina; CREAMER, Rebecca; HELLINGER, James; HOTTA, Matthew; LEPPO, Maia; LEVEY, Andrew S; OKPARAVERO, Aghogho; GRAHAM, Hiba; SAVAGE, Karen; SCHMID, Christopher H; TIGHIOUART, Hocine; WALLACH, Fran; KRISHNASAMI, Zipporah
2013-01-01
Objective To evaluate the performance of CKD-EPI creatinine, cystatin C and creatinine-cystatin C estimating equations in HIV-positive patients. Methods We evaluated the performance of the MDRD Study and CKD-EPI creatinine 2009, CKD-EPI cystatin C 2012 and CKD-EPI creatinine-cystatin C 2012 glomerular filtration rate (GFR) estimating equations compared to GFR measured using plasma clearance of iohexol in 200 HIV-positive patients on stable antiretroviral therapy. Creatinine and cystatin C assays were standardized to certified reference materials. Results Of the 200 participants, median (IQR) CD4 count was 536 (421) and 61% had an undetectable HIV viral load. Mean (SD) measured GFR (mGFR) was 87 (26) ml/min/1.73m2. All CKD-EPI equations performed better than the MDRD Study equation. All three CKD-EPI equations had similar bias and precision. The cystatin C equation was not more accurate than the creatinine equation. The creatinine-cystatin C equation was significantly more accurate than the cystatin C equation and there was a trend toward greater accuracy than the creatinine equation. Accuracy was equal or better in most subgroups with the combined equation compared to either alone. Conclusions The CKD-EPI cystatin C equation does not appear to be more accurate than the CKD-EPI creatinine equation in patients who are HIV-positive, supporting the use of the CKD-EPI creatinine equation for routine clinical care in North American populations with HIV. The use of both filtration markers together as a confirmatory test for decreased estimated GFR based on creatinine in individuals who are HIV-positive requires further study. PMID:22842844
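The bias, precision, and accuracy comparisons described above are conventionally computed from paired estimated and measured GFR values. A hedged sketch of those agreement metrics (the definitions follow common practice in this literature — median difference for bias, IQR of differences for precision, P30 for accuracy — and are not code from the study):

```python
import numpy as np

def gfr_agreement(egfr, mgfr):
    """Agreement metrics for an estimated-GFR equation against measured GFR."""
    egfr, mgfr = np.asarray(egfr, float), np.asarray(mgfr, float)
    diff = egfr - mgfr
    bias = float(np.median(diff))                                    # systematic error
    iqr = float(np.percentile(diff, 75) - np.percentile(diff, 25))   # precision
    p30 = float(np.mean(np.abs(diff) <= 0.30 * mgfr) * 100.0)        # % within 30% of mGFR
    return bias, iqr, p30
```

Comparing two equations then reduces to comparing their (bias, IQR, P30) triples on the same cohort.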
A Framework for Quality Assurance in Child Welfare.
ERIC Educational Resources Information Center
O'Brien, Mary; Watson, Peter
In their search for new ways to assess their agencies' success in working with children and families, child welfare administrators and senior managers are increasingly seeking regular and reliable sources of information that help them evaluate agency performance, make ongoing decisions, and provide an accurate picture for agency staff and external…
Evaluating Large-Scale Studies to Accurately Appraise Children's Performance
ERIC Educational Resources Information Center
Ernest, James M.
2012-01-01
Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international…
A Simple Close Range Photogrammetry Technique to Assess Soil Erosion in the Field
USDA-ARS?s Scientific Manuscript database
Evaluating the performance of a soil erosion prediction model depends on the ability to accurately measure the gain or loss of sediment in an area. Recent development in acquiring detailed surface elevation data (DEM) makes it feasible to assess soil erosion and deposition spatially. Digital photogr...
Bangalore Revisited: A Reluctant Complaint.
ERIC Educational Resources Information Center
Greenwood, John
1985-01-01
Discusses the Bangalore Project in South India and responds to three articles on it, particularly the one by C. J. Brumfit (ELT, 1984). Argues that more information on teacher and learner performance and more explicit and illustrative evidence of materials and methodology are needed in order to evaluate the project accurately. (SED)
Development and Evaluation of a Sandia Cooler-based Refrigerator Condenser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Terry A.; Kariya, Harumichi Arthur; Leick, Michael T.
This report describes the first design of a refrigerator condenser using the Sandia Cooler, i.e., an air-bearing-supported rotating heat-sink impeller. The project included baseline performance testing of a residential refrigerator, analysis and design development of a Sandia Cooler condenser assembly including a spiral-channel baseplate, and performance measurement and validation of this condenser system as incorporated into the residential refrigerator. Comparable performance was achieved in a 60% smaller volume package. The improved modeling parameters can now be used to guide more optimized designs and more accurately predict performance.
Performance of commercial platforms for rapid genotyping of polymorphisms affecting warfarin dose.
King, Cristi R; Porche-Sorbet, Rhonda M; Gage, Brian F; Ridker, Paul M; Renaud, Yannick; Phillips, Michael S; Eby, Charles
2008-06-01
Initiation of warfarin therapy is associated with bleeding owing to its narrow therapeutic window and unpredictable therapeutic dose. Pharmacogenetic-based dosing algorithms can improve accuracy of initial warfarin dosing but require rapid genotyping for cytochrome P-450 2C9 (CYP2C9) *2 and *3 single nucleotide polymorphisms (SNPs) and a vitamin K epoxide reductase (VKORC1) SNP. We evaluated 4 commercial systems: INFINITI analyzer (AutoGenomics, Carlsbad, CA), Invader assay (Third Wave Technologies, Madison, WI), Tag-It Mutation Detection assay (Luminex Molecular Diagnostics, formerly Tm Bioscience, Toronto, Canada), and Pyrosequencing (Biotage, Uppsala, Sweden). We genotyped 112 DNA samples and resolved any discrepancies with bidirectional sequencing. The INFINITI analyzer was 100% accurate for all SNPs and required 8 hours. Invader and Tag-It were 100% accurate for CYP2C9 SNPs, 99% accurate for VKORC1 -1639/3673 SNP, and required 3 hours and 8 hours, respectively. Pyrosequencing was 99% accurate for CYP2C9 *2, 100% accurate for CYP2C9 *3, and 100% accurate for VKORC1 and required 4 hours. Current commercial platforms provide accurate and rapid genotypes for pharmacogenetic dosing during initiation of warfarin therapy.
Deulofeu, R; Bodí, M A; Twose, J; López, P
2010-06-01
We are accustomed to comparing activity between regions or countries using donation or transplantation rates per million population (pmp), without further evaluation of the process. But crude pmp rates do not clearly reflect real transplantation capacity, because organ procurement does not end with the donation step; it is also necessary to know the utilization of the obtained organs. The objective of this study was to present methods and indicators deemed necessary to evaluate the effectiveness of the process. We propose the use of simple definitions and indicators to more accurately measure and compare the effectiveness of the total organ procurement process. To illustrate the use and performance of these indicators, we present the donation and transplantation activity in Catalonia from 2002 to 2007.
Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory
NASA Technical Reports Server (NTRS)
Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.
2013-01-01
Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It then describes the developed measurement method in detail and evaluates the results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.
Verma, Surendra P; Díaz-González, Lorena; Rosales-Rivera, Mauricio; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.
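The power-function estimation described above can be illustrated on a much smaller scale for the Grubbs N2 test: simulate null samples to estimate the critical value, then simulate contaminated samples (one observation slipped by δ standard deviations) and count detections. This sketch uses far fewer replications than the paper's 20,000,000 and is an illustration of the approach, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def grubbs_stat(x):
    """Grubbs N2 statistic: largest absolute deviation in sample-SD units."""
    x = np.asarray(x, float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def critical_value(n, alpha=0.05, reps=20000):
    """Null critical value estimated by simulation."""
    stats = [grubbs_stat(rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(stats, 1.0 - alpha)

def power(n, delta, crit, reps=20000):
    """Probability of detection when one observation slips by delta sigma."""
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        x[0] += delta            # contaminate a single observation
        hits += grubbs_stat(x) > crit
    return hits / reps
```

With no contamination the detection rate should sit near the nominal α, and it should rise toward 1 as the slippage grows.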
Predicting falls in older adults using the four square step test.
Cleary, Kimberly; Skornyakov, Elena
2017-10-01
The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.
Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevalier, Scott; Schopf, Jennifer M.; Miller, Kenneth
2016-07-01
Today's science collaborations depend on reliable, high-performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets. The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends. This work explores low-cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. Finally, we present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.
NASA Technical Reports Server (NTRS)
Tranter, W. H.
1979-01-01
A technique for estimating the signal-to-noise ratio at a point in a digital simulation of a communication system is described; the technique is essentially a digital realization of a technique proposed by Shepertycki (1964) for the evaluation of analog communication systems. Signals having lowpass or bandpass spectra may be used. Simulation results show the technique to be accurate over a wide range of signal-to-noise ratios.
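The general idea behind such an estimator can be sketched as follows: project the noisy simulation output onto a noise-free reference waveform, then take the ratio of the projected (signal) power to the residual (noise) power. This is an illustration of the principle only, not Shepertycki's exact algorithm; the function name is ours.

```python
import numpy as np

def estimate_snr_db(measured, reference):
    """Estimate SNR by projecting the measured waveform onto a noise-free reference."""
    x, s = np.asarray(measured, float), np.asarray(reference, float)
    gain = np.dot(x, s) / np.dot(s, s)   # least-squares system gain
    signal = gain * s                    # component correlated with the reference
    noise = x - signal                   # everything uncorrelated with it
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))
```

Because the split is done by correlation, the same code works whether the spectra are lowpass or bandpass, provided the reference is time-aligned with the measured waveform.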
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
Accurate and efficient spin integration for particle accelerators
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...
2015-02-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
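The quaternion representation at the heart of such spin integrators can be illustrated in a few lines: each lattice element contributes a rotation quaternion, rotations compose by quaternion multiplication, and the spin vector is rotated as S' = q S q*. This is a generic sketch of quaternion rotation, not GPUSPINTRACK code.

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotation_quat(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about a unit `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate_spin(q, spin):
    """Apply q to a spin vector: S' = q S q* (conjugation)."""
    qs = (0.0, spin[0], spin[1], spin[2])
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, qs), qc)[1:]
```

Composing the per-element quaternions with `quat_mul` and applying the product once per turn is what makes the representation cheap; only a single renormalization is needed to control round-off.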
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
Virtual tape measure for the operating microscope: system specifications and performance evaluation.
Kim, M Y; Drake, J M; Milgram, P
2000-01-01
The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate method for calibrating multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain. PMID:24803954
Evaluation of the performance of spectacle lens "transmittance meters".
Stephens, G L; Pitts, D G
1994-03-01
Inexpensive transmittance meters have recently been developed for measuring the mean ultraviolet (UV) radiant transmittance and luminous transmittance of spectacle lenses. Our purpose was to determine how accurately these meters measure transmittance. The mean UV transmittance and the luminous transmittance of a series of lenses were determined using a spectrophotometer; transmittance meters were then used to measure the same lenses. In general, the meters overestimated total (mean) UV transmittance. Luminous transmittance was measured relatively accurately by those meters with this capability. Although the meters do not measure UV transmittance accurately, they are still useful for determining whether a lens transmits any UV radiation. The relatively narrow response range of the meters, centered at 360 to 380 nm, is responsible for the measurement error in mean UV transmittance.
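The two quantities being compared can be sketched from spectral data: mean UV transmittance is an unweighted average over the UV band, while luminous transmittance weights the spectrum by the eye's photopic sensitivity. This is a simplified illustration (a rigorous luminous transmittance also weights by the illuminant spectrum, and standards define the exact UV band); the function names are ours.

```python
import numpy as np

def mean_uv_transmittance(wl, T, lo=290.0, hi=380.0):
    """Unweighted mean spectral transmittance over the UV band (uniform wl grid assumed)."""
    wl, T = np.asarray(wl, float), np.asarray(T, float)
    band = (wl >= lo) & (wl <= hi)
    return float(T[band].mean())

def luminous_transmittance(T, V):
    """Spectral transmittance weighted by the photopic efficiency V(lambda)."""
    T, V = np.asarray(T, float), np.asarray(V, float)
    return float(np.sum(T * V) / np.sum(V))
```

A meter whose spectral response is narrow and centered near 360-380 nm effectively replaces the flat UV-band weighting with a peaked one, which is exactly the error mode the abstract describes.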
Segmentation of bone pixels from EROI Image using clustering method for bone age assessment
NASA Astrophysics Data System (ADS)
Bakthula, Rajitha; Agarwal, Suneeta
2016-03-01
Human bone age can be estimated from the ossification of the carpal and epiphyseal bones, an approach that is applicable up to the teenage years. Accurate age estimation depends on the best possible separation of bone pixels from soft-tissue pixels in the ROI image. Traditional approaches such as Canny and Sobel edge detection, clustering, region growing, and watershed segmentation can be applied, but these methods require proper pre-processing and accurate initial seed-point estimation to provide accurate results. This paper therefore proposes a new approach to segment bone from soft-tissue and background pixels. First, pixels are enhanced using BPE and edges are identified by HIPI. A K-Means clustering is then applied for segmentation. The performance of the proposed approach has been evaluated and compared with existing methods.
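The K-Means segmentation step can be sketched on grayscale intensities with a minimal two-cluster Lloyd's algorithm (a generic illustration; the paper's BPE enhancement and HIPI edge steps are not reproduced, and the function name is ours):

```python
import numpy as np

def kmeans_threshold(pixels, iters=50):
    """Two-class Lloyd's algorithm on grayscale intensities.

    Returns (centers, labels): label 0 for the darker cluster (e.g. soft
    tissue/background), label 1 for the brighter cluster (e.g. bone)."""
    px = np.asarray(pixels, float).ravel()
    c = np.array([px.min(), px.max()], dtype=float)   # darkest and brightest start
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = (np.abs(px - c[0]) > np.abs(px - c[1])).astype(int)
        # move each center to the mean of its cluster
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = px[labels == k].mean()
    return c, labels
```

On a radiograph, thresholding at the midpoint of the two converged centers gives the bone mask without any manual seed point.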
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method with a guided image filter for accurate and robust night fusion image tracking. First, frame differencing is applied to produce a coarse target, which helps to generate observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
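The frame-differencing step that produces the coarse target can be sketched directly; the guided-filter refinement (a local linear model over the source image) is not reproduced here. Function names and the threshold value are ours.

```python
import numpy as np

def coarse_target_mask(prev_frame, cur_frame, thresh=25):
    """Coarse moving-target mask from the absolute frame difference."""
    diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > thresh

def bounding_box(mask):
    """Tight bounding box (rmin, rmax, cmin, cmax) of the True pixels."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]
```

The resulting mask and box are exactly the kind of coarse localization a subsequent edge-preserving filter can refine into an accurate foreground boundary.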
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.
2015-01-01
Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time and as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location of peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry
2018-03-29
The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four classification methods were used to evaluate fish diets: Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), and k-Nearest Neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras could be employed as a fast, accurate and non-invasive sensor for classifying rainbow trout based on their diets. Furthermore, there was a close association between image-based features and the diet the fish received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating a diet's effects on fish skin. PMID:29596375
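The reported CCR of 82% alongside a Kappa of 0.65 is what a balanced two-class design (80 fish per diet) produces, since Cohen's kappa discounts chance agreement. A small sketch of how kappa follows from a confusion matrix; the matrix values are illustrative, not the study's:

```python
def cohens_kappa(cm):
    """Cohen's kappa from a 2x2 confusion matrix [[TP, FN], [FP, TN]]."""
    n = sum(sum(row) for row in cm)
    po = (cm[0][0] + cm[1][1]) / n     # observed agreement (the CCR)
    # expected chance agreement from row/column marginals
    pe = sum((cm[i][0] + cm[i][1]) * (cm[0][i] + cm[1][i])
             for i in range(2)) / n ** 2
    return (po - pe) / (1 - pe)

# illustrative: 66 of 80 fish classified correctly in each diet group
cm = [[66, 14], [14, 66]]              # CCR = 132/160 = 82.5%
kappa = cohens_kappa(cm)               # 0.65 for this balanced matrix
```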
NASA. Marshall Space Flight Center Hydrostatic Bearing Activities
NASA Technical Reports Server (NTRS)
Benjamin, Theodore G.
1991-01-01
The basic approach for analyzing hydrostatic bearing flows at the Marshall Space Flight Center (MSFC) is briefly discussed. The Hydrostatic Bearing Team has responsibility for assessing and evaluating flow codes; evaluating friction, ignition, and galling effects; evaluating wear; and performing tests. The Office of Aerospace and Exploration Technology Turbomachinery Seals Tasks consist of tests and analysis. The MSFC in-house analyses utilize one-dimensional bulk-flow codes. Computational fluid dynamics (CFD) analysis is used to enhance understanding of bearing flow physics or to perform parametric analyses that are outside the bulk-flow database. As long as the bulk-flow codes are accurate enough for most needs, they will be utilized accordingly and supported by CFD analysis on an as-needed basis.
NASA Astrophysics Data System (ADS)
Sumiya, H.; Hamaki, K.; Harano, K.
2018-05-01
Ultra-hard and high-strength spherical indenters with high precision and sphericity were successfully prepared from nanopolycrystalline diamond (NPD) synthesized by direct conversion sintering from graphite under high pressure and high temperature. It was shown that highly accurate and stable microfracture strength tests can be performed on various super-hard diamond materials by using the NPD spherical indenters. It was also verified that this technique enables quantitative evaluation of the strength characteristics of single crystal diamonds and NPDs which have been quite difficult to evaluate.
English Language Teachers' Ideology of ELT Assessment Literacy
ERIC Educational Resources Information Center
Hakim, Badia
2015-01-01
Deep understanding, clear perception and accurate use of assessment methodology play an integral role in the success of a language program. Use of various assessment techniques to evaluate and improve the performance of learners has been the focal point of interest in the field of English Language Teaching (ELT). Equally researchers are interested…
USDA-ARS?s Scientific Manuscript database
Accurate and rapid assays for glucose are desirable for analysis of glucose and starch in food and feedstuffs. An established colorimetric glucose oxidase-peroxidase method for glucose was modified to reduce analysis time, and evaluated for factors that affected accuracy. Time required to perform t...
Children's Reasoning about Evaluative Feedback
ERIC Educational Resources Information Center
Heyman, Gail D.; Fu, Genyue; Sweet, Monica A.; Lee, Kang
2009-01-01
Children's reasoning about the willingness of peers to convey accurate positive and negative performance feedback to others was investigated among a total of 179 6- to 11-year-olds from the USA and China. In Study 1, which was conducted in the USA only, participants responded that peers would be more likely to provide positive feedback than…
Estimating the Accuracy of Neurocognitive Effort Measures in the Absence of a "Gold Standard"
ERIC Educational Resources Information Center
Mossman, Douglas; Wygant, Dustin B.; Gervais, Roger O.
2012-01-01
Psychologists frequently use symptom validity tests (SVTs) to help determine whether evaluees' test performance or reported symptoms accurately represent their true functioning and capability. Most studies evaluating the accuracy of SVTs have used either known-group comparisons or simulation designs, but these approaches have well-known…
NASA Technical Reports Server (NTRS)
Aumann, Hartmut H.; Manning, Evan; Barnet, Chris; Maddy, Eric; Blackwell, William
2009-01-01
With the availability of very accurate forecasts, the metric of accuracy alone for the evaluation of the performance of a retrieval system can produce misleading results. A useful characterization of the quality of a retrieval system, and of its potential to contribute to an improved weather forecast, is its skill, which we define as the ability to make retrievals of geophysical parameters that are closer to the truth than the six-hour forecast when the truth differs significantly from the forecast. We illustrate retrieval skill using one day of AMSU and AIRS data with three different retrieval algorithms, which produce retrievals for more than 90% of the potential retrievals under clear and cloudy conditions. Two of the three algorithms have better than 1 K rms "RAOB quality" accuracy in the troposphere, but only one has skill between 900 and 100 mb. AIRS was launched on the EOS Aqua spacecraft in May 2002 into a 705 km polar sun-synchronous orbit with an accurately maintained 1:30 PM ascending node. Essentially uninterrupted data have been freely available since September 2002.
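The skill metric defined above — the fraction of significant cases in which the retrieval lands closer to the truth than the six-hour forecast — can be sketched directly; the temperature values and 1 K significance threshold below are illustrative, not from the paper:

```python
def retrieval_skill(retrieved, forecast, truth, threshold):
    """Fraction of significant cases where the retrieval beats the forecast.

    A case is 'significant' when |truth - forecast| > threshold; the
    retrieval 'wins' when it is closer to the truth than the forecast is.
    """
    cases = [(r, f, t) for r, f, t in zip(retrieved, forecast, truth)
             if abs(t - f) > threshold]
    if not cases:
        return 0.0
    wins = sum(1 for r, f, t in cases if abs(t - r) < abs(t - f))
    return wins / len(cases)

# illustrative temperatures (K): two significant forecast errors,
# the retrieval beats the forecast in one of them
skill = retrieval_skill([280.5, 275.0, 294.0, 260.0],
                        [278.0, 275.2, 293.0, 263.0],
                        [280.0, 275.1, 290.0, 263.5], threshold=1.0)
```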
MRI-guidance in percutaneous core decompression of osteonecrosis of the femoral head.
Kerimaa, Pekka; Väänänen, Matti; Ojala, Risto; Hyvönen, Pekka; Lehenkari, Petri; Tervonen, Osmo; Blanco Sequeiros, Roberto
2016-04-01
The purpose of this study was to evaluate the usefulness of MRI guidance for core decompression of avascular necrosis of the femoral head. Twelve MRI-guided core decompressions were performed on patients with different stages of avascular necrosis of the femoral head. The patients were asked to evaluate their pain and their ability to function before and after the procedure, and imaging findings were reviewed correspondingly. Technical success in reaching the target was 100% without complications. The mean duration of the procedure itself was 54 min. All patients with ARCO stage 1 osteonecrosis experienced clinical benefit, and pathological MRI findings were seen to diminish. Patients with more advanced disease gained less, if any, benefit, and total hip arthroplasty was eventually performed on four patients. MRI guidance seems technically feasible, accurate and safe for core decompression of avascular necrosis of the femoral head. Patients with early-stage osteonecrosis may benefit from the procedure. • MRI is a useful guidance method for minimally invasive musculoskeletal interventions. • Bone drilling seems beneficial at early stages of avascular necrosis. • MRI-guidance is safe and accurate for bone drilling.
Engine isolation for structural-borne interior noise reduction in a general aviation aircraft
NASA Technical Reports Server (NTRS)
Unruh, J. F.; Scheidt, D. C.
1981-01-01
Engine vibration isolation for structure-borne interior noise reduction is investigated. A laboratory-based test procedure to simulate engine-induced structure-borne noise transmission, the testing of a range of candidate isolators for relative performance data, and the development of an analytical model of the transmission phenomena for isolator design evaluation are addressed. The isolator relative performance test data show that the elastomeric isolators do not appear to operate as single-degree-of-freedom systems with respect to noise isolation. Noise isolation beyond 150 Hz levels off and begins to decrease somewhat above 600 Hz. Coupled analytical and empirical models were used to study the structure-borne noise transmission phenomena. Correlation of predicted results with measured data shows that (1) the modeling procedures are reasonably accurate for isolator design evaluation, and (2) the frequency-dependent properties of the isolators must be included in the model if reasonably accurate noise prediction beyond 150 Hz is desired. The experimental and analytical studies were carried out in the frequency range from 10 Hz to 1000 Hz.
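The finding that the elastomeric isolators do not behave as single-degree-of-freedom systems is best read against the classical SDOF transmissibility curve, which predicts isolation only above a frequency ratio of √2. A minimal sketch (the damping ratio and frequency ratios are illustrative, not measured values from the study):

```python
import math

def transmissibility(freq_ratio, zeta):
    """Force transmissibility of an ideal SDOF isolator.

    freq_ratio: excitation frequency / natural frequency (r)
    zeta: damping ratio
    """
    r2 = freq_ratio ** 2
    num = 1 + (2 * zeta * freq_ratio) ** 2
    den = (1 - r2) ** 2 + (2 * zeta * freq_ratio) ** 2
    return math.sqrt(num / den)

# isolation (T < 1) begins only above r = sqrt(2), regardless of damping
t_crossover = transmissibility(math.sqrt(2), 0.1)    # = 1.0 at the crossover
t_isolated = transmissibility(3.0, 0.05)             # well below 1
```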
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed-bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation was conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
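A critical-shear-stress exceedance map of the kind used for evaluation above can be sketched from the depth-slope product and a Shields-type threshold. This is a generic illustration, not the study's Delft3D shear-stress formulation; the grain size, slope, and critical Shields number are illustrative values:

```python
import numpy as np

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m3), gravity

def exceedance_map(depth, slope, d50, shields_crit=0.047):
    """Boolean map of cells where bed shear stress exceeds the critical value.

    depth: 2-D array of flow depths (m); slope: energy slope (-);
    d50: median grain size (m); shields_crit: critical Shields number.
    """
    tau = RHO_W * G * depth * slope                        # depth-slope product (Pa)
    tau_crit = shields_crit * (RHO_S - RHO_W) * G * d50    # Shields criterion (Pa)
    return tau > tau_crit

# tiny illustrative depth grid (m); only the shallowest cell stays immobile
depth = np.array([[0.01, 0.05], [0.10, 0.002]])
active = exceedance_map(depth, slope=0.01, d50=0.001)
```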
Pulmonary tumor measurements from x-ray computed tomography in one, two, and three dimensions.
Villemaire, Lauren; Owrangi, Amir M; Etemad-Rezai, Roya; Wilson, Laura; O'Riordan, Elaine; Keller, Harry; Driscoll, Brandon; Bauman, Glenn; Fenster, Aaron; Parraga, Grace
2011-11-01
We evaluated the accuracy and reproducibility of three-dimensional (3D) measurements of lung phantoms and patient tumors from x-ray computed tomography (CT) and compared these to one-dimensional (1D) and two-dimensional (2D) measurements. CT images of three spherical and three irregularly shaped tumor phantoms were evaluated by three observers who performed five repeated measurements. Additionally, three observers manually segmented 29 patient lung tumors five times each. Follow-up imaging was performed for 23 tumors and response criteria were compared. For a single subject, imaging was performed on nine occasions over 2 years to evaluate multidimensional tumor response. To evaluate measurement accuracy, we compared imaging measurements to ground truth using analysis of variance. For estimates of precision, intraobserver and interobserver coefficients of variation and intraclass correlations (ICC) were used. Linear regression and Pearson correlations were used to evaluate agreement, and tumor response was descriptively compared. For spherically shaped phantoms, all measurements were highly accurate, but for irregularly shaped phantoms, only 3D measurements were in high agreement with ground truth measurements. All phantom and patient measurements showed high intra- and interobserver reproducibility (ICC >0.900). Over a 2-year period for a single patient, there was disagreement between tumor response classifications based on 3D measurements and those generated using 1D and 2D measurements. Tumor volume measurements were highly reproducible and accurate for irregularly and spherically shaped phantoms and for patient tumors with nonuniform dimensions. Response classifications obtained from multidimensional measurements suggest that 3D measurements provide higher sensitivity to tumor response. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
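The reproducibility figure quoted above (ICC > 0.900) can be computed from repeated measurements with a one-way random-effects intraclass correlation. A minimal sketch of the ICC(1,1) form; the exact ICC variant the authors used is not stated, and the measurement data here are illustrative:

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1); x has shape (targets, repeats)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# three tumors measured twice each; repeats agree closely, so ICC is near 1
icc = icc_oneway([[10.0, 10.1], [20.0, 19.9], [30.0, 30.1]])
```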
In Vivo, High-Frequency Three-Dimensional Cardiac MR Elastography: Feasibility in Normal Volunteers
Arani, Arvin; Glaser, Kevin L.; Arunachalam, Shivaram P.; Rossman, Phillip J.; Lake, David S.; Trzasko, Joshua D.; Manduca, Armando; McGee, Kiaran P.; Ehman, Richard L.; Araoz, Philip A.
2016-01-01
Purpose: Noninvasive stiffness imaging techniques (elastography) can image myocardial tissue biomechanics in vivo. For cardiac MR elastography (MRE) techniques, the optimal vibration frequency for in vivo experiments is unknown. Furthermore, the accuracy of cardiac MRE has never been evaluated in a geometrically accurate phantom. Therefore, the purpose of this study was to determine the driving frequency necessary to obtain accurate three-dimensional (3D) cardiac MRE stiffness estimates in a geometrically accurate diastolic cardiac phantom and to determine the optimal vibration frequency that can be introduced in healthy volunteers. Methods: 3D cardiac MRE was performed on eight healthy volunteers using 80 Hz, 100 Hz, 140 Hz, 180 Hz, and 220 Hz vibration frequencies. These frequencies were tested in a geometrically accurate diastolic heart phantom and compared with dynamic mechanical analysis (DMA). Results: 3D cardiac MRE was shown to be feasible in volunteers at frequencies as high as 180 Hz. MRE and DMA agreed within 5% at frequencies greater than 180 Hz in the cardiac phantom. However, octahedral shear strain signal-to-noise ratios and myocardial coverage were shown to be highest at a frequency of 140 Hz across all subjects. Conclusion: This study motivates future evaluation of high-frequency 3D MRE in patient populations. PMID:26778442
NASA Astrophysics Data System (ADS)
Lehman, Donald Clifford
Today's medical laboratories are dealing with cost containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading permits students to get results quicker and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-12
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis. PMID:28787839
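When a cylindrical fiber is cut by a sectional plane, its image is an ellipse, and the classical direct estimate of the fiber's inclination is θ = arccos(minor/major). The technique above improves on this direct measurement; a short sketch makes the baseline concrete (axis lengths are illustrative):

```python
import math

def fiber_inclination_deg(major_axis, minor_axis):
    """Inclination (deg) of a cylindrical fiber from its elliptical cross-section.

    A fiber normal to the section appears as a circle (angle 0); a more
    elongated ellipse indicates a more steeply inclined fiber.
    """
    ratio = min(minor_axis / major_axis, 1.0)   # guard against measurement noise
    return math.degrees(math.acos(ratio))

angle_circle = fiber_inclination_deg(2.0, 2.0)   # circular section: 0 degrees
angle_tilted = fiber_inclination_deg(4.0, 2.0)   # 2:1 ellipse: ~60 degrees
```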
Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
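As a baseline for the methods compared above, Fisher's method combines k independent p-values via X = −2 Σ ln pᵢ ~ χ²(2k); Brown's method, favored for correlated p-values, rescales this statistic using the covariance of the −2 ln pᵢ terms. A stdlib-only sketch of the independent case (the closed-form χ² survival function below is valid only for even degrees of freedom, which Fisher's method always yields):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square variable with even df (closed form)."""
    k = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k))

def fisher_combine(pvals):
    """Combine independent p-values with Fisher's method."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return chi2_sf_even_df(stat, 2 * len(pvals))

combined = fisher_combine([0.05, 0.05])   # ~0.017, stronger than either alone
```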
Evaluation and comparison of predictive individual-level general surrogates.
Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth
2018-07-01
An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
SU-F-T-552: A One-Year Evaluation of the QABeamChecker+ for Use with the CyberKnife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Spectrum Medical Physics, LLC, Greenville, SC
Purpose: By attaching an adapter plate with fiducial markers to the QA BeamChecker+ (Standard Imaging, Inc., Middleton, WI), the output of the CyberKnife can be accurately, efficiently, and consistently evaluated. The adapter plate, known as the Cutting Board, allows for automated alignment of the QABC+ using the CK's stereoscopic kV image-based treatment localization system (TLS). Described herein is an evaluation of the system following a year of clinical utilization. Methods: Based on a CT scan of the QABC+ and CB, a treatment plan is generated which delivers a beam to each of the 5 plane-parallel ionization chambers. Following absolute calibration of the CK, the QA plan is delivered, and baseline measurements are acquired (and automatically corrected for temperature and pressure). This test was performed at the beginning of each treatment day for a year. A calibration evaluation (using a water-equivalent slab and short thimble chamber) is performed every four weeks, or whenever the QABC+ detects a deviation of more than 1.0%. Results: During baseline evaluation, repeat measurements (n=10) were performed, with an average output deviation of 0.25% and an SD of 0.11%. As a test of the repositioning of the QABC+ and CB, ten additional measurements were performed in which the entire system was removed and re-positioned using the TLS between acquisitions. The average output deviation was 0.30% with an SD of 0.13%. During the course of the year, 187 QABC+ measurements and 13 slab-based measurements were performed. The output measurements of the QABC+ correlated well with slab-based measurements (R2=0.909). Conclusion: By using the QABC+ and CB, daily output was evaluated accurately, efficiently, and consistently. From setup to break-down (including analysis), this test required 5 minutes instead of approximately 15 using traditional techniques (collimator-mounted ionization chambers). Additionally, by automatically saving resultant output deviation to a database, trend analysis was simplified. Spectrum Medical Physics, LLC of Greenville, SC has a consulting contract with Standard Imaging of Middleton, WI.
List, Susan M; Starks, Nykole; Baum, John; Greene, Carmine; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Cuddihy, Robert
2011-09-01
This study evaluated the performance and product labeling of CONTOUR® USB, a new blood glucose monitoring system (BGMS) with integrated diabetes management software and a universal serial bus (USB) port, in the hands of untrained lay users and health care professionals (HCPs). Subjects and HCPs tested subjects' finger stick capillary blood in parallel using CONTOUR USB meters; deep finger stick blood was tested on a Yellow Springs Instruments (YSI) glucose analyzer for reference. Duplicate results from both subjects and HCPs were obtained to assess system precision. System accuracy was assessed according to International Organization for Standardization (ISO) 15197:2003 guidelines [within ±15 mg/dl of mean YSI results (samples <75 mg/dl) and ±20% (samples ≥75 mg/dl)]. Clinical accuracy was determined by Parkes error grid analysis. Subject labeling comprehension was assessed by HCP ratings of subject proficiency. Key system features and ease of use were evaluated by subject questionnaires. All subjects who completed the study (N = 74) successfully performed blood glucose measurements, connected the meter to a laptop computer, and used key features of the system. The system was accurate: 98.6% (146/148) of subject results and 96.6% (143/148) of HCP results exceeded ISO 15197:2003 criteria. All subject and HCP results were clinically accurate (97.3%; zone A) or associated with benign errors (2.7%; zone B). The majority of subjects rated features of the BGMS as "very good" or "excellent." CONTOUR USB exceeded ISO 15197:2003 system performance criteria in the hands of untrained lay users. Subjects understood the product labeling, found the system easy to use, and successfully performed blood glucose testing. PMID:22027308
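The ISO 15197:2003 accuracy criterion applied above has a simple two-branch form: a meter reading passes if it is within ±15 mg/dl of the reference for samples below 75 mg/dl, and within ±20% otherwise. A small sketch (the readings are illustrative, not study data):

```python
def iso15197_2003_ok(meter, reference):
    """True if a meter reading meets ISO 15197:2003 accuracy vs. a reference value."""
    if reference < 75.0:
        return abs(meter - reference) <= 15.0            # absolute band, mg/dl
    return abs(meter - reference) <= 0.20 * reference    # relative band

# illustrative (meter, reference) pairs in mg/dl
readings = [(70, 60), (92, 75), (250, 210), (60, 80)]
n_ok = sum(iso15197_2003_ok(m, r) for m, r in readings)
```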
Using Rasch model to analyze the ability of pre-university students in vector
NASA Astrophysics Data System (ADS)
Ibrahim, Faridah Mohamed; Shariff, Asma Ahmad; Tahir, Rohayatimah Muhammad
2015-10-01
Evaluating students' performance only from overall examination marks does not give accurate evidence of their achievement in a particular subject. For a more detailed analysis, an instrument called the Rasch Measurement Model (Rasch Model), widely used in education research, may be applied. Using the analysis map, the level of each student's ability and the level of difficulty of each question can be measured. This paper describes how the Rasch Model is used to evaluate students' achievement and performance in Vector, a subject taken by students enrolled in the Physical Science Program at the Centre for Foundation Studies in Science, University of Malaya. Usually, students' understanding of the subject and their performance are assessed at the end of the semester in the final examination, apart from continuous assessment done throughout the course. In order to evaluate individual achievement and obtain better and more accurate evidence of performance, the marks of 28 male and 28 female students were taken randomly from the final examination results and analysed using the Rasch Model. Observation of the map showed that more than half of the questions were categorized as difficult, while the two most difficult questions could be answered correctly by 33.9% of the students. Results showed that the students performed very well and their achievement was above expectation. About 27% of the students could be considered as having very high ability in answering all the questions, with one student obtaining a perfect score. However, two students were found to be misfits, since they were able to answer difficult questions but gave poor responses to easy ones.
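Under the Rasch model, the probability that a student of ability θ answers an item of difficulty b correctly is P = exp(θ − b) / (1 + exp(θ − b)), which places abilities and difficulties on the same scale, as the analysis map above does. A short sketch, including a simple Newton-Raphson maximum-likelihood ability estimate for fixed item difficulties (all values illustrative):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=50):
    """Newton-Raphson MLE of ability theta given 0/1 responses to known items.

    Assumes a mixed response pattern (all-correct or all-wrong has no
    finite MLE).
    """
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(r - p for r, p in zip(responses, ps))   # score function
        hess = -sum(p * (1 - p) for p in ps)               # negative information
        theta -= grad / hess
    return theta

# a student whose ability equals an item's difficulty has a 50% chance on it
p_half = rasch_p(1.2, 1.2)   # 0.5
```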
A quality assurance phantom for the performance evaluation of volumetric micro-CT systems
NASA Astrophysics Data System (ADS)
Du, Louise Y.; Umoh, Joseph; Nikolov, Hristo N.; Pollmann, Steven I.; Lee, Ting-Yim; Holdsworth, David W.
2007-12-01
Small-animal imaging has recently become an area of increased interest because more human diseases can be modeled in transgenic and knockout rodents. As a result, micro-computed tomography (micro-CT) systems are becoming more common in research laboratories, due to their ability to achieve spatial resolution as high as 10 µm, giving highly detailed anatomical information. Most recently, a volumetric cone-beam micro-CT system using a flat-panel detector (eXplore Ultra, GE Healthcare, London, ON) has been developed that combines the high resolution of micro-CT and the fast scanning speed of clinical CT, so that dynamic perfusion imaging can be performed in mice and rats, providing functional physiological information in addition to anatomical information. This and other commercially available micro-CT systems all promise to deliver precise and accurate high-resolution measurements in small animals. However, no comprehensive quality assurance phantom has been developed to evaluate the performance of these micro-CT systems on a routine basis. We have designed and fabricated a single comprehensive device for the purpose of performance evaluation of micro-CT systems. This quality assurance phantom was applied to assess multiple image-quality parameters of a current flat-panel cone-beam micro-CT system accurately and quantitatively, in terms of spatial resolution, geometric accuracy, CT number accuracy, linearity, noise and image uniformity. Our investigations show that 3D images can be obtained with a limiting spatial resolution of 2.5 mm⁻¹ and noise of ±35 HU, using an acquisition interval of 8 s at an entrance dose of 6.4 cGy.
NASA Astrophysics Data System (ADS)
Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar
2017-12-01
Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.
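The AUC criterion used to rank these models can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; a generic sketch, not tied to the paper's models or data:

```python
def auc_score(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive cell receives a higher susceptibility score than a randomly
    chosen negative cell (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical gully (1) / non-gully (0) cells and model scores:
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
auc = auc_score(labels, scores)  # 8 of 9 positive/negative pairs ranked correctly
```

An AUC well above 0.9, as reported for the RF, RBF-SVM, BRT, and P-SVM models, means almost every such pair is ranked correctly.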
Machine Learning and Neurosurgical Outcome Prediction: A Systematic Review.
Senders, Joeky T; Staples, Patrick C; Karhade, Aditya V; Zaki, Mark M; Gormley, William B; Broekman, Marike L D; Smith, Timothy R; Arnaout, Omar
2018-01-01
Accurate measurement of surgical outcomes is highly desirable to optimize surgical decision-making. An important element of surgical decision making is identification of the patient cohort that will benefit from surgery before the intervention. Machine learning (ML) enables computers to learn from previous data to make accurate predictions on new data. In this systematic review, we evaluate the potential of ML for neurosurgical outcome prediction. A systematic search in the PubMed and Embase databases was performed to identify all potential relevant studies up to January 1, 2017. Thirty studies were identified that evaluated ML algorithms used as prediction models for survival, recurrence, symptom improvement, and adverse events in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus. Depending on the specific prediction task evaluated and the type of input features included, ML models predicted outcomes after neurosurgery with a median accuracy and area under the receiver operating characteristic curve of 94.5% and 0.83, respectively. Compared with logistic regression, ML models performed significantly better and showed a median absolute improvement in accuracy and area under the receiver operating characteristic curve of 15% and 0.06, respectively. Some studies also demonstrated a better performance in ML models compared with established prognostic indices and clinical experts. In the research setting, ML has been studied extensively, demonstrating an excellent performance in outcome prediction for a wide range of neurosurgical conditions. However, future studies should investigate how ML can be implemented as a practical tool supporting neurosurgical care. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, the simple single-board computers that control the motion of a robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase-correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. Such an approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated using the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
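Phase correlation, the core of the estimation step described above, recovers a translation between two frames from the phase of their cross-power spectrum. A minimal CPU sketch in NumPy (the paper's GPU implementation and layered segmentation are not reproduced here):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) such that a ≈ np.roll(b, (dy, dx),
    axis=(0, 1)), using the phase of the cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12              # keep phase only
    corr = np.fft.ifft2(R).real         # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts past half the frame size to negative displacements
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Applied block-wise between consecutive frames, such shifts form a coarse optical flow field; sub-pixel refinement and layer segmentation would sit on top of this.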
Feeling-of-knowing for proper names.
Izaute, Marie; Chambres, Patrick; Larochelle, Serge
2002-12-01
The main objective of the present study was to examine feeling-of-knowing (FOK) in proper name retrieval. Many studies show that FOK can predict performance on a subsequent criterion test. Although feeling-of-knowing studies often involve questions about proper names, none distinguish between proper names and common names. Nevertheless, the specific character of a proper name as a unique label referring to a person should allow participants to target precisely the desired verbal label. Our idea here was that the unique character of proper name information should result in more accurate FOK evaluations. In the experiment, participants evaluated feeling-of-knowing for proper and common name descriptions. The study demonstrates that FOK judgments are more accurate for proper names than for common names. The implications of the findings for proper names are briefly discussed in terms of feeling-of-knowing hypotheses.
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite-difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error-reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite-volume schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Laboratory and clinical evaluation of on-site urine drug testing.
Beck, Olof; Carlsson, Sten; Tusic, Marinela; Olsson, Robert; Franzen, Lisa; Hulten, Peter
2014-11-01
Products for on-site urine drug testing offer the possibility of performing screening for drugs of abuse directly at the point of care. This is a well-established routine in emergency and dependency clinics, but further evaluation of performance is needed due to inherent limitations of the available products. Urine drug testing by an on-site product was compared with routine laboratory methods. First, on-site testing was performed at the laboratory in addition to the routine method. Second, on-site testing was performed at a dependency clinic, and urine samples were subsequently sent to the laboratory for additional analytical investigation. The on-site testing products did not perform at the assigned cut-off levels. The subjective distinction between the presence of a spot (i.e. a negative test result) and no spot (a positive result) was difficult in 3.2% of the cases, and occurred for all parameters. The tests performed more accurately on drug-negative samples (specificity 96%) but less accurately in detecting positives (sensitivity 79%). Of all incorrect results by the on-site test, the proportion of false negatives was 42%. The overall agreement between on-site and laboratory testing was 95% in the laboratory study and 98% in the clinical study. Although a high degree of agreement was observed between on-site and routine laboratory urine drug testing, the performance of on-site testing was not acceptable due to a significant number of false-negative results. The limited sensitivity of on-site testing compared to laboratory testing reduces the applicability of these tests.
Assessment of human respiration patterns via noncontact sensing using Doppler multi-radar system.
Gu, Changzhan; Li, Changzhi
2015-03-16
Human respiratory patterns at chest and abdomen are associated with both physical and emotional states. Accurate measurement of the respiratory patterns provides an approach to assess and analyze the physical and emotional states of the subject persons. Not many research efforts have been made to wirelessly assess different respiration patterns, largely due to the inaccuracy of the conventional continuous-wave radar sensor to track the original signal pattern of slow respiratory movements. This paper presents the accurate assessment of different respiratory patterns based on noncontact Doppler radar sensing. This paper evaluates the feasibility of accurately monitoring different human respiration patterns via noncontact radar sensing. A 2.4 GHz DC coupled multi-radar system was used for accurate measurement of the complete respiration patterns without any signal distortion. Experiments were carried out in the lab environment to measure the different respiration patterns when the subject person performed natural breathing, chest breathing and diaphragmatic breathing. The experimental results showed that accurate assessment of different respiration patterns is feasible using the proposed noncontact radar sensing technique.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
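The spectral evaluation of spatial derivatives described above reduces, in one dimension, to multiplying the Fourier transform by ik. A minimal 1-D illustration (the paper's method is three-dimensional with staggered grids; this sketch shows only the spectral-derivative idea):

```python
import numpy as np

def spectral_derivative(f, dx):
    """First derivative of a periodic sampled function f, computed by
    multiplying its Fourier transform by i*k and transforming back."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

# For a band-limited field the result is accurate to machine precision:
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
df = spectral_derivative(np.sin(x), x[1] - x[0])  # ≈ cos(x)
```

This spectral accuracy is what allows k-space methods to propagate waves over hundreds of wavelengths without the cumulative dispersion error of low-order finite differences.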
Cognitive Change Questionnaire as a method for cognitive impairment screening
Damin, Antonio Eduardo; Nitrini, Ricardo; Brucki, Sonia Maria Dozzi
2015-01-01
The Cognitive Change Questionnaire (CCQ) was created as an effective measure of cognitive change that is easy to use and suitable for application in Brazil. Objective To evaluate whether the CCQ can accurately distinguish normal subjects from individuals with Mild Cognitive Impairment (MCI) and/or early stage dementia and to develop a briefer questionnaire, based on the original 22-item CCQ (CCQ22), that contains fewer questions. Methods A total of 123 individuals were evaluated: 42 healthy controls, 40 patients with MCI and 41 with mild dementia. The evaluation was performed using cognitive tests based on individual performance and on questionnaires administered to informants. The CCQ22 was created based on a selection of questions that experts deemed useful in screening for early stage dementia. Results The CCQ22 showed good accuracy for distinguishing between the groups. Statistical models selected the eight questions with the greatest power to discriminate between the groups. The area under the ROC curve for the final version of the 8-item CCQ (CCQ8) demonstrated good accuracy in differentiating between groups, good correlation with the final diagnosis (r=0.861) and adequate internal consistency (Cronbach's α=0.876). Conclusion The CCQ8 can be used to accurately differentiate between normal subjects and individuals with cognitive impairment, constituting a brief and appropriate instrument for cognitive screening. PMID:29213967
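The internal-consistency figure quoted above (Cronbach's α) is a simple variance ratio over the item-score matrix; a generic sketch with hypothetical ratings, not the paper's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # per-item variance, summed
    total_var = items.sum(axis=1).var(ddof=1)     # variance of subject totals
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Hypothetical 4 subjects x 3 questionnaire items (0/1/2 ratings):
scores = [[2, 2, 1], [1, 1, 1], [2, 1, 2], [0, 0, 0]]
alpha = cronbach_alpha(scores)
```

Values around 0.8 or higher, such as the 0.876 reported for the CCQ8, are conventionally taken to indicate adequate internal consistency.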
NASA Astrophysics Data System (ADS)
Benveniste, J.; Cotton, D.; Moreau, T.; Varona, E.; Roca, M.; Cipollini, P.; Cancet, M.; Martin, F.; Fenoglio-Marc, L.; Naeije, M.; Fernandes, J.; Restano, M.; Ambrozio, A.
2016-12-01
The ESA Sentinel-3 satellite, launched in February 2016 as part of the Copernicus programme, is the second satellite to operate a SAR-mode altimeter. The Sentinel-3 Synthetic Aperture Radar Altimeter (SRAL) is based on the heritage of CryoSat-2, but this time complemented by a Microwave Radiometer (MWR) to provide a wet troposphere correction, and operating at Ku- and C-bands to provide an accurate along-track ionospheric correction. Together, this instrument package, including both GPS and DORIS instruments for accurate positioning, allows accurate measurements of sea surface height over the ocean, as well as measurements of significant wave height and surface wind speed. SCOOP (SAR Altimetry Coastal & Open Ocean Performance) is a project funded under the ESA SEOM (Scientific Exploitation of Operational Missions) Programme Element, started in September 2015, to characterise the expected performance of Sentinel-3 SRAL SAR-mode altimeter products in the coastal zone and open ocean, and then to develop and evaluate enhancements to the baseline processing scheme in terms of improvements to ocean measurements. There is also a work package to develop and evaluate an improved wet troposphere correction for Sentinel-3, based on measurements from the on-board MWR, further enhanced in coastal and polar regions using third-party data, and to provide recommendations for its use. At the end of the project, recommendations for further developments and implementations will be provided through a scientific roadmap. In this presentation we provide an overview of the SCOOP project, highlighting the key deliverables and discussing the potential impact of the results in terms of the application of delay-Doppler (SAR) altimeter measurements over the open ocean and coastal zone.
We also present the initial results from the project, including: key findings from a review of the current state of the art for SAR altimetry; specification of the initial "reference" delay-Doppler and echo modelling/retracking processing schemes; evaluation of the initial test data set in the open ocean and coastal zone; and an overview of modifications planned to the reference delay-Doppler and echo modelling/retracking processing schemes.
Evaluation of modal pushover-based scaling of one component of ground motion: Tall buildings
Kalkan, Erol; Chopra, Anil K.
2012-01-01
Nonlinear response history analysis (RHA) is now increasingly used for performance-based seismic design of tall buildings. Required for nonlinear RHAs is a set of ground motions selected and scaled appropriately so that analysis results would be accurate (unbiased) and efficient (having relatively small dispersion). This paper evaluates accuracy and efficiency of recently developed modal pushover–based scaling (MPS) method to scale ground motions for tall buildings. The procedure presented explicitly considers structural strength and is based on the standard intensity measure (IM) of spectral acceleration in a form convenient for evaluating existing structures or proposed designs for new structures. Based on results presented for two actual buildings (19 and 52 stories, respectively), it is demonstrated that the MPS procedure provided a highly accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. In addition, the MPS procedure is shown to be superior to the scaling procedure specified in the ASCE/SEI 7-05 document.
Application of a novel multispectral nanoparticle tracking technique
NASA Astrophysics Data System (ADS)
McElfresh, Cameron; Harrington, Tyler; Vecchio, Kenneth S.
2018-06-01
Fast, reliable, and accurate particle-size analysis techniques must meet the demands of evolving industrial and academic research in areas of functionalized nanoparticle synthesis, advanced materials development, and other nanoscale-enabled technologies. In this study, a new multispectral particle tracking analysis (m-PTA) technique enabled by the ViewSizer™ 3000 (MANTA Instruments, USA) was evaluated using solutions of monomodal and multimodal gold and polystyrene latex nanoparticles, as well as a spark-eroded polydisperse 316L stainless steel nanopowder and large (non-Brownian) borosilicate particles. It was found that m-PTA performed comparably to dynamic light scattering (DLS) in the evaluation of monomodal particle size distributions. When measuring bimodal, trimodal, and polydisperse solutions, the m-PTA technique overwhelmingly outperformed traditional DLS in both peak detection and relative particle-concentration analysis. It was also observed that the m-PTA technique is less susceptible to large-particle overexpression errors. The ViewSizer™ 3000 was also found to accurately evaluate the sizes and concentrations of monomodal and bimodal sinking borosilicate particles.
Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic
NASA Astrophysics Data System (ADS)
Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.
2017-10-01
Ash-laden hot air from clinker in the cement industry is used to reduce the water content of coal; however, it may contain a large amount of ash even after treatment by a preduster. This study investigated preduster performance as a cyclone separator in the cement industry using the Computational Fluid Dynamics method. In general, the best cyclone performance combines relatively high efficiency with low pressure drop. The most accurate yet simple turbulence model, the Reynolds-Averaged Navier-Stokes (RANS) standard k-ε model, combined with a Lagrangian particle-tracking model, was used to solve the problem. The quantities measured in the simulation are the flow pattern in the cyclone, the outlet pressure, and the collection efficiency of the preduster. The applied model predicted well when compared with the most accurate empirical model and with outlet pressures from experimental measurement.
Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver
2013-01-01
Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of cohorts. A comparison of the laboratory assay costs of the MAAs was also performed, and we found that the costs associated with the optimal two-assay MAA were substantially less than those of the four-assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff.
MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116
Subjective evaluation of next-generation video compression algorithms: a case study
NASA Astrophysics Data System (ADS)
De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio
2010-08-01
This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies has been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
a New Golf-Swing Robot Model Utilizing Shaft Elasticity
NASA Astrophysics Data System (ADS)
Suzuki, S.; Inooka, H.
1998-10-01
The performance of golf clubs and balls is generally evaluated by using golf-swing robots that conventionally have two or three joints with completely interrelated motion. This interrelation allows the user of this robot to specify only the initial posture and swing velocity of the robot and therefore the swing motion of this type of robot cannot be subtly adjusted to the specific characteristics of individual golf clubs. Consequently, golf-swing robots cannot accurately emulate advanced golfers, and this causes serious problems for the evaluation of golf club performance. In this study, a new golf-swing robot that can adjust its motion to both a specified value of swing velocity and the specific characteristics of individual golf clubs was analytically investigated. This robot utilizes the dynamic interference force produced by its swing motion and by shaft vibration and can therefore emulate advanced golfers and perform highly reliable evaluations of golf clubs.
Sliding Mode Control of Real-Time PNU Vehicle Driving Simulator and Its Performance Evaluation
NASA Astrophysics Data System (ADS)
Lee, Min Cheol; Park, Min Kyu; Yoo, Wan Suk; Son, Kwon; Han, Myung Chul
This paper introduces an economical and effective full-scale driving simulator for the study of human sensibility, the development of new vehicle parts, and their control. Robust real-time control that accurately reproduces various vehicle motions is a difficult task because the motion platform is a complex nonlinear system. This study proposes a sliding mode controller with a perturbation compensator using an observer-based fuzzy adaptive network (FAN). The control algorithm is designed to solve the chattering problem of sliding mode control and to select adequate fuzzy parameters for the perturbation compensator. To evaluate the trajectory-control performance of the proposed approach, a tracking-control experiment with the developed simulator, named PNUVDS, is carried out. The driving performance of the simulator is then evaluated using the perception and sensibility of several drivers under various driving conditions.
Wilson, Glenn F; Russell, Christopher A
The functional state of the human operator is critical to optimal system performance. Degraded states of operator functioning can lead to errors and overall suboptimal system performance. Accurate assessment of operator functional state is crucial to the successful implementation of an adaptive aiding system. One method of determining operators' functional state is by monitoring their physiology. In the present study, artificial neural networks using physiological signals were used to continuously monitor, in real time, the functional state of 7 participants while they performed the Multi-Attribute Task Battery with two levels of task difficulty. Six channels of brain electrical activity and eye, heart and respiration measures were evaluated on line. The accuracy of the classifier was determined to test its utility as an on-line measure of operator state. The mean classification accuracies were 85%, 82%, and 86% for the baseline, low task difficulty, and high task difficulty conditions, respectively. The high levels of accuracy suggest that these procedures can be used to provide accurate estimates of operator functional state that can be used to provide adaptive aiding. The relative contribution of each of the 43 psychophysiological features was also determined. Actual or potential applications of this research include test and evaluation and adaptive aiding implementation.
Advanced Ultrasonic Diagnosis of Extremity Trauma: The Faster Exam
NASA Technical Reports Server (NTRS)
Dulchavsky, S. A.; Henry, S. E.; Moed, B. R.; Diebel, L. N.; Marshburn, T.; Hamilton, D. R.; Logan, J.; Kirkpatrick, A. W.; Williams, D. R.
2002-01-01
Ultrasound is of proven accuracy in abdominal and thoracic trauma and may be useful for diagnosing extremity injury in situations where radiography is not available, such as military and space applications. We prospectively evaluated the utility of extremity ultrasound performed by trained, non-physician personnel in patients with extremity trauma, to simulate remote aerospace or military applications. Methods: Patients with extremity trauma were identified by history, physical examination, and radiographic studies. Ultrasound examination was performed bilaterally by non-physician personnel with a portable ultrasound device using a 10-5 MHz linear probe. Images were video-recorded for later analysis against radiography by Fisher's exact test. The average time of examination was 4 minutes. Ultrasound accurately diagnosed extremity injury in 94% of patients with no false-positive exams; accuracy was greatest in mid-shaft locations and least in the metacarpals/metatarsals. Soft tissue/tendon injury was readily visualized. Extremity ultrasound can be performed quickly by non-physician personnel with excellent accuracy. Blinded verification of the utility of ultrasound in patients with extremity injury should be done to determine whether extremity and respiratory evaluation should be added to the FAST examination (the FASTER exam) and to verify the technique in remote locations such as military and aerospace applications.
Synthesized multi-station tribo-test system for bio-tribological evaluation in vitro
NASA Astrophysics Data System (ADS)
Wu, Tonghai; Du, Ying; Li, Yang; Wang, Shuo; Zhang, Zhinan
2016-07-01
Tribological tests play an important role in the evaluation of the long-term bio-tribological performance of prosthetic materials for commercial fabrication. Existing tests focus on the motion simulation of a real joint in vitro with only normal loads and constant velocities, which are far from the real friction behavior of human joints, characterized by variable loads and multiple directions of motion. In order to accurately obtain the bio-tribological performance of artificial joint materials, a tribological tester with a miniature four-station tribological system is proposed, with four distinctive features. First, the comparability and repeatability of a test are ensured by the four identical stations of the tester. Second, cross-linked scratching between the tribo-pairs of human joints can be simulated by using a gear-rack meshing mechanism to produce composite motions; with this mechanism, the friction tracks can be designed by varying the reciprocating and rotating speeds. Third, a variable loading system is realized using a ball-screw mechanism driven by a stepper motor, by which the loads of different gaits during walking are simulated. Fourth, dynamic friction force and normal load can be measured simultaneously. Verification of the developed tester shows that the variable friction tracks produce different wear debris compared with one-directional tracks, and that the accuracy of loading and friction-force measurement is within ±5%; thus, high consistency among the different stations can be obtained. In practice, the proposed tester system could provide more comprehensive and accurate bio-tribological evaluations of prosthetic materials.
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for simulation of the Ray-Scan 64 PET scanner and to model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on the National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates, and noise equivalent count rate (NECR). Moreover, an accurate dead-time module was investigated to simulate the counting-rate performance. Overall, the results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model for evaluating the major performance characteristics of the Ray-Scan 64 PET system, providing a useful tool for a wide range of research applications.
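The NECR figure of merit validated above has a standard NEMA NU-2 closed form. A minimal sketch of that textbook formula follows; the count rates are made up for illustration, not values from the Ray-Scan 64 study.

```python
def necr(trues, scatters, randoms, k=1.0):
    """Noise equivalent count rate (NEMA NU-2 style):
    NECR = T^2 / (T + S + k*R), where k = 1 for a noiseless
    randoms estimate and k = 2 for delayed-window subtraction."""
    return trues ** 2 / (trues + scatters + k * randoms)

# Illustrative rates in kcps: 100 trues, 40 scatter, 60 randoms.
rate = necr(100.0, 40.0, 60.0, k=2.0)  # -> 100^2 / 260
```

As the formula makes clear, NECR penalizes scatter and (doubly, with delayed-window subtraction) random coincidences, which is why an accurate dead-time and randoms model matters when simulating counting-rate performance.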
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations, and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily concentrations of particulate matter ≤ 2.5 micrometers (PM2.5) across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in mean square error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error in CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying that both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
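The distinction drawn above between systematic and random error rests on the identity MSE = bias² + residual variance. A minimal numpy sketch with synthetic data (not CMAQ output) illustrates the decomposition:

```python
import numpy as np

def error_decomposition(model, obs):
    """Split mean square error into a systematic part (bias^2)
    and a random part (variance of the residuals)."""
    resid = np.asarray(model) - np.asarray(obs)
    systematic = resid.mean() ** 2
    random_part = resid.var()   # ddof=0, so systematic + random == MSE
    return systematic, random_part

# Synthetic "observations" and a model with a small constant bias
# plus larger random noise (illustrative values only).
rng = np.random.default_rng(1)
obs = rng.normal(12.0, 3.0, size=5000)
model = obs + 0.5 + rng.normal(0.0, 2.0, size=5000)
sys_err, rnd_err = error_decomposition(model, obs)  # random error dominates
```

A constant domain-wide correction can only remove the single global bias term; a regionalized correction like RAMP can remove locally varying bias, which is where the reported MSE reduction comes from.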
Recognizing Disguised Faces: Human and Machine Evaluation
Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay
2014-01-01
Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors which can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database, which contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm outperforms a popular commercial system and compares favorably with human observers in matching disguised face images. PMID:25029188
Reaction Wheel Disturbance Model Extraction Software - RWDMES
NASA Technical Reports Server (NTRS)
Blaurock, Carl
2009-01-01
The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. 
The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, the Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
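The "typically speed-squared" harmonic forcing described above can be sketched as a sum of tones at harmonics of the wheel spin rate. The harmonic numbers, coefficients, and phases below are illustrative placeholders, not extracted wheel data:

```python
import numpy as np

def harmonic_disturbance(t, omega, harmonics, coeffs, phases):
    """Tonal reaction-wheel forcing: each harmonic h contributes
    C_h * omega^2 * sin(h * omega * t + phi_h), i.e. amplitude
    scales with wheel speed squared (a common empirical law)."""
    t = np.asarray(t, float)
    f = np.zeros_like(t)
    for h, c, p in zip(harmonics, coeffs, phases):
        f += c * omega ** 2 * np.sin(h * omega * t + p)
    return f

t = np.linspace(0.0, 1.0, 1000)
omega = 2 * np.pi * 30.0  # 30 rev/s wheel speed, in rad/s
# Fundamental, second harmonic, and a non-integer bearing harmonic.
force = harmonic_disturbance(t, omega, [1.0, 2.0, 5.37],
                             [1e-6, 2e-7, 5e-8], [0.0, 0.4, 1.1])
```

In the full hybrid model this empirical forcing would then be filtered through the wheel's structural dynamics (including gyroscopic terms), which is what shifts the whirl-mode amplification as spin rate changes.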
Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach
Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A ‘quartet’ is an unrooted tree over four taxa; hence, quartet-based supertree methods combine many four-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses) without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-01-01
Purpose: The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. Methods: The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Results: Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. 
Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device; while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device, and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles, resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used, as common clinical thresholds typically worked very poorly to identify unacceptable plans. PMID:25471949
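For a single score such as percent of pixels passing gamma, the ROC AUC used above reduces to the Mann-Whitney statistic: the probability that an acceptable plan scores higher than an unacceptable one. A small sketch with hypothetical pass rates (not the study's measurements):

```python
import numpy as np

def roc_auc(scores, labels):
    """Empirical ROC AUC for a scalar score: the fraction of
    (acceptable, unacceptable) pairs in which the acceptable plan
    (label 0) has the higher score, counting ties as half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]   # unacceptable plans
    neg = scores[labels == 0]   # acceptable plans
    wins = (neg[:, None] > pos[None, :]).sum()
    ties = (neg[:, None] == pos[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical percent-of-pixels-passing scores for eight plans,
# with 1 marking plans that failed the ion-chamber gold standard.
pass_rates = [99, 97, 95, 92, 88, 85, 80, 76]
unacceptable = [0, 0, 0, 0, 1, 1, 1, 1]
auc = roc_auc(pass_rates, unacceptable)  # -> 1.0, perfect separation here
```

An AUC near 0.5 would mean the pass rate carries almost no information about true acceptability, which is the failure mode the paper reports for common clinical thresholds.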
NASA Astrophysics Data System (ADS)
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. The index-based methods showed different performance in different regions. Among the AI methods, PSO performed best, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicate the applicability of the acquired band combinations for accurately and stably extracting water extents in Landsat imagery.
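One widely used member of the family of water spectral indices evaluated in studies like this is McFeeters' NDWI. The sketch below uses toy reflectances; the study's own nine indices, band combinations, and thresholds are not reproduced here:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR).
    Water reflects in the green band and absorbs strongly in the
    near-infrared, so NDWI > 0 is a common (scene-dependent)
    water threshold."""
    green = np.asarray(green, float)
    nir = np.asarray(nir, float)
    return (green - nir) / (green + nir)

# Toy surface reflectances (not from the study):
water = ndwi(0.12, 0.03)  # water pixel: strongly positive
veg = ndwi(0.08, 0.40)    # vegetation pixel: strongly negative
mask = ndwi(np.array([0.12, 0.08]), np.array([0.03, 0.40])) > 0.0
```

The AI approach in the paper generalizes this idea: instead of a fixed two-band ratio, the meta-heuristics search for coefficients of a multi-band combination that separates the training water and non-water pixels.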
ERIC Educational Resources Information Center
Mouzakitis, Angela; Codding, Robin S.; Tryon, Georgiana
2015-01-01
Accurate implementation of individualized behavior intervention plans (BIPs) is a critical aspect of evidence-based practice. Research demonstrates that neither training nor consultation is sufficient to improve and maintain high rates of treatment integrity (TI). Therefore, evaluation of ongoing support strategies is needed. The purpose of this…
Closed-Form Evaluation of Mutual Coupling in a Planar Array of Circular Apertures
NASA Technical Reports Server (NTRS)
Bailey, M. C.
1996-01-01
The integral expression for the mutual admittance between circular apertures in a planar array is evaluated in closed form. Very good accuracy is realized when compared with values that were obtained by numerical integration. Utilization of this closed-form expression, for all element pairs that are separated by more than one element spacing, yields extremely accurate results and significantly reduces the computation time that is required to analyze the performance of a large electronically scanning antenna array.
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines; a real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described, and both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines; the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, rendering it able to accurately evaluate the derivatives of time-varying output functions. The complex Taylor series expansion (CTSE) technique is reviewed, and two formulations are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite-difference (FD) approximations: the CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are therefore proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
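The CTSE mentioned above is the complex-step differentiation idea: perturb the input along the imaginary axis and read the derivative from the imaginary part of the output. A minimal sketch on a generic test function (not the turbine solver):

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """Complex Taylor series expansion (complex-step) derivative:
    f'(x) ~= Im(f(x + i*h)) / h.  Unlike finite differences there
    is no subtractive cancellation, so h can be taken extremely
    small and the result is accurate to machine precision."""
    return f(x + 1j * h).imag / h

# A classic analytic test function, often used to demonstrate the method.
f = lambda x: cmath.exp(x) / (cmath.cos(x) ** 3 + cmath.sin(x) ** 3)

d_cs = complex_step_derivative(f, 1.5)
d_fd = (f(1.5 + 1e-8).real - f(1.5 - 1e-8).real) / 2e-8  # central difference
# d_cs and d_fd agree to several digits; the complex step is the
# more accurate of the two because it has no cancellation error.
```

The cost noted in the abstract comes from the same property: every real operation in the solver must be carried out in complex arithmetic, roughly doubling storage and more than doubling run time, which motivates the selective differentiation the author proposes.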
Lange, Belinda; Chang, Chien-Yen; Suma, Evan; Newman, Bradley; Rizzo, Albert Skip; Bolas, Mark
2011-01-01
The use of the commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.
NASA Technical Reports Server (NTRS)
Thompson, R. H.; Gambardella, P. J.
1980-01-01
The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating attitude determination accuracies achievable with tracking instruments such as fixed head star trackers (FHSTs). As part of its payload, SMM carries a highly accurate fine pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8 degree by 8 degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy performance of the in-flight calibrated FHSTs is evaluated.
Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis
2014-05-01
The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is now being encouraged in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Lyons, J.E.; Collazo, J.A.; Guglielmo, C.
2005-01-01
Assessing stopover habitat quality and refueling performance of individual birds is crucial to the conservation and management of migratory shorebirds. Plasma lipid metabolites indicate the trajectory of mass change in individuals and may be a more accurate measure of refueling performance at a particular site than static measures such as nutrient reserves. We measured lipid metabolites of Semipalmated Sandpipers at 4 coastal stopover sites during northward migration: Merritt Island, FL; Georgetown, SC; Pea Island, NC; and Delaware Bay, NJ. We described spatial and temporal variation in metabolic profiles among the 4 stopovers and evaluated the effects of body mass, age, and date on metabolite concentrations. Triglyceride concentration, an indicator of fat deposition, declined during the migration, whereas B-OH-butyrate, a measure of fasting, increased. Triglyceride concentration correlated positively with phospholipid concentration and inversely with B-OH-butyrate, but was not related to body mass or age. Triglyceride levels and estimated percent fat were greater at Delaware Bay than at any of the stopovers to the south. Plasma metabolite profiles accurately reflected stopover refueling performance and provide an important new technique for assessing stopover habitat quality for migratory shorebirds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, S.
Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to determine the experimental data to which their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].
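The Johnson-Cook model named above has the standard flow-stress form sigma = (A + B*eps^n)(1 + C*ln(epsdot*))(1 - T*^m). The sketch below uses commonly quoted copper-like parameters purely as placeholders, not the optimized values from this work:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A, B, n, C, m,
                        T_room, T_melt, eps0=1.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(epsdot/eps0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room).
    Strain hardening, strain-rate hardening, and thermal softening
    enter as three multiplicative factors."""
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / eps0))
            * (1.0 - t_star ** m))

# Placeholder copper-like parameters (MPa), at room temperature and
# the reference strain rate, so the rate and thermal factors are 1.
sigma = johnson_cook_stress(strain=0.1, strain_rate=1.0, T=298.0,
                            A=90.0, B=292.0, n=0.31, C=0.025, m=1.09,
                            T_room=298.0, T_melt=1356.0)
```

Fitting a model like this to data at one temperature/strain-rate regime and applying it in another is exactly the extrapolation risk the study's evaluation addresses.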
Donnell, Deborah; Komárek, Arnošt; Omelka, Marek; Mullis, Caroline E.; Szekeres, Greg; Piwowar-Manning, Estelle; Fiamma, Agnes; Gray, Ronald H.; Lutalo, Tom; Morrison, Charles S.; Salata, Robert A.; Chipato, Tsungai; Celum, Connie; Kahle, Erin M.; Taha, Taha E.; Kumwenda, Newton I.; Karim, Quarraisha Abdool; Naranbhai, Vivek; Lingappa, Jairam R.; Sweat, Michael D.; Coates, Thomas; Eshleman, Susan H.
2013-01-01
Background Accurate methods of HIV incidence determination are critically needed to monitor the epidemic and determine the population level impact of prevention trials. One such trial, Project Accept, a Phase III, community-randomized trial, evaluated the impact of enhanced, community-based voluntary counseling and testing on population-level HIV incidence. The primary endpoint of the trial was based on a single, cross-sectional, post-intervention HIV incidence assessment. Methods and Findings Test performance of HIV incidence determination was evaluated for 403 multi-assay algorithms [MAAs] that included the BED capture immunoassay [BED-CEIA] alone, an avidity assay alone, and combinations of these assays at different cutoff values with and without CD4 and viral load testing on samples from seven African cohorts (5,325 samples from 3,436 individuals with known duration of HIV infection [1 month to >10 years]). The mean window period (average time individuals appear positive for a given algorithm) and performance in estimating an incidence estimate (in terms of bias and variance) of these MAAs were evaluated in three simulated epidemic scenarios (stable, emerging and waning). The power of different test methods to detect a 35% reduction in incidence in the matched communities of Project Accept was also assessed. A MAA was identified that included BED-CEIA, the avidity assay, CD4 cell count, and viral load that had a window period of 259 days, accurately estimated HIV incidence in all three epidemic settings and provided sufficient power to detect an intervention effect in Project Accept. Conclusions In a Southern African setting, HIV incidence estimates and intervention effects can be accurately estimated from cross-sectional surveys using a MAA. The improved accuracy in cross-sectional incidence testing that a MAA provides is a powerful tool for HIV surveillance and program evaluation. PMID:24236054
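The window-period logic above underlies the basic cross-sectional incidence estimator: the number of MAA-recent infections divided by the person-time contributed by the susceptible population over the mean window. A simplified sketch with toy numbers (the corrections real estimators apply, e.g. for false-recent results, are omitted):

```python
def cross_sectional_incidence(n_recent, n_negative, window_days):
    """Snapshot incidence estimate (per person-year):
    incidence ~= n_recent / (n_negative * window_in_years).
    This is the basic form behind multi-assay algorithms; published
    estimators add adjustments not modeled here."""
    window_years = window_days / 365.25
    return n_recent / (n_negative * window_years)

# Toy numbers, not Project Accept data: 50 MAA-recent results among
# 9000 HIV-negative participants, with the 259-day mean window above.
inc = cross_sectional_incidence(50, 9000, 259.0)  # ~0.0078 per person-year
```

The formula also shows why a long, well-characterized window period matters: for a fixed true incidence, a longer window yields more recent cases in the survey and hence a less variable estimate.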
NASA Astrophysics Data System (ADS)
Chadwick, M. B.; Capote, R.; Trkov, A.; Herman, M. W.; Brown, D. A.; Hale, G. M.; Kahler, A. C.; Talou, P.; Plompen, A. J.; Schillebeeckx, P.; Pigni, M. T.; Leal, L.; Danon, Y.; Carlson, A. D.; Romain, P.; Morillon, B.; Bauge, E.; Hambsch, F.-J.; Kopecky, S.; Giorginis, G.; Kawano, T.; Lestone, J.; Neudecker, D.; Rising, M.; Paris, M.; Nobre, G. P. A.; Arcilla, R.; Cabellos, O.; Hill, I.; Dupont, E.; Koning, A. J.; Cano-Ott, D.; Mendoza, E.; Balibrea, J.; Paradela, C.; Durán, I.; Qian, J.; Ge, Z.; Liu, T.; Hanlin, L.; Ruan, X.; Haicheng, W.; Sin, M.; Noguere, G.; Bernard, D.; Jacqmin, R.; Bouland, O.; De Saint Jean, C.; Pronyaev, V. G.; Ignatyuk, A. V.; Yokoyama, K.; Ishikawa, M.; Fukahori, T.; Iwamoto, N.; Iwamoto, O.; Kunieda, S.; Lubitz, C. R.; Salvatores, M.; Palmiotti, G.; Kodeli, I.; Kiedrowski, B.; Roubtsov, D.; Thompson, I.; Quaglioni, S.; Kim, H. I.; Lee, Y. O.; Fischer, U.; Simakov, S.; Dunn, M.; Guber, K.; Márquez Damián, J. I.; Cantargi, F.; Sirakov, I.; Otuka, N.; Daskalakis, A.; McDermott, B. J.; van der Marck, S. C.
2018-02-01
The CIELO collaboration has studied neutron cross sections on nuclides that significantly impact criticality in nuclear technologies - 235,238U, 239Pu, 56Fe, 16O and 1H - with the aim of improving the accuracy of the data and resolving previous discrepancies in our understanding. This multi-laboratory pilot project, coordinated via the OECD/NEA Working Party on Evaluation Cooperation (WPEC) Subgroup 40 with support also from the IAEA, has motivated experimental and theoretical work and led to suites of new evaluated libraries that accurately reflect measured data and also perform well in integral criticality benchmarks.
Vandenplas, J; Janssens, S; Buys, N; Gengler, N
2013-06-01
The aim of this study was to test the integration of external information, i.e. foreign estimated breeding values (EBV) and the associated reliabilities (REL), for stallions into the Belgian genetic evaluation for jumping horses. The Belgian model is a bivariate repeatability Best Linear Unbiased Prediction animal model based only on Belgian performances, while Belgian breeders import horses from neighbouring countries; hence, external information is needed as a prior to achieve more accurate EBV. Pedigree and performance data contained 101382 horses and 712212 performances, respectively. After conversion to the Belgian trait, external information on 98 French and 67 Dutch stallions was integrated into the Belgian evaluation. The resulting Belgian rankings of the foreign stallions were more similar to the foreign rankings, with rank correlations increasing by at least 12%, and the REL of their EBV improved by at least 2% on average. The external information was equivalent, partially to totally, to 4 years of contemporary horses' performances or to all of a stallion's own performances. These results show the value of integrating external information into the Belgian evaluation. © 2012 Blackwell Verlag GmbH.
A new head phantom with realistic shape and spatially varying skull resistivity distribution.
Li, Jian-Bo; Tang, Chi; Dai, Meng; Liu, Geng; Shi, Xue-Tao; Yang, Bin; Xu, Can-Hua; Fu, Feng; You, Fu-Sheng; Tang, Meng-Xing; Dong, Xiu-Zhen
2014-02-01
Brain electrical impedance tomography (EIT) is an emerging method for monitoring brain injuries. To effectively evaluate brain EIT systems and reconstruction algorithms, we have developed a novel head phantom that features realistic anatomy and spatially varying skull resistivity. The head phantom was created with three layers, representing scalp, skull, and brain tissues. The fabrication process entailed 3-D printing of the anatomical geometry for mold creation followed by casting to ensure high geometrical precision and accuracy of the resistivity distribution. We evaluated the accuracy and stability of the phantom. Results showed that the head phantom achieved high geometric accuracy, accurate skull resistivity values, and good stability over time and in the frequency domain. Experimental impedance reconstructions performed using the head phantom and computer simulations were found to be consistent for the same perturbation object. In conclusion, this new phantom could provide a more accurate test platform for brain EIT research.
Injuries to the shoulder in the throwing athlete. Part two: evaluation/treatment.
Meister, K
2000-01-01
In part one of this three-part series (March/April 2000), I concentrated on summarizing the biomechanics of the normal throwing shoulder and the pathophysiology of injury. A classification of injury was presented that was based on the principles contained in that article. Part two of this series will focus on the evaluation and treatment of injuries, expanded from an understanding of the principles learned in part one. The ability to perform a skillful examination, and thus develop an accurate diagnosis, is the foundation for treatment. Fortunately, many difficulties encountered in a thrower's shoulder can be treated with a nonoperative approach. However, in instances where conservative measures fail, an improved understanding of the pathophysiology of injury and the development of improved surgical techniques are leading to more accurate diagnoses and more successful rates of return of the athlete to a premorbid level of activity.
Issack, Bilkiss B; Roy, Pierre-Nicholas
2005-08-22
An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.
Peroneal tendon pathology: Pre- and post-operative high resolution US and MR imaging.
Kumar, Yogesh; Alian, Ali; Ahlawat, Shivani; Wukich, Dane K; Chhabra, Avneesh
2017-07-01
Peroneal tendon pathology is an important cause of lateral ankle pain and instability. Typical peroneal tendon disorders include tendinitis, tenosynovitis, partial and full thickness tendon tears, peroneal retinacular injuries, and tendon subluxations and dislocations. Surgery is usually indicated when conservative treatment fails. Familiarity with peroneal tendon surgeries and the expected postoperative imaging findings is essential for accurate assessment and to avoid diagnostic pitfalls. Cross-sectional imaging, especially ultrasound and MRI, provides accurate pre-operative and post-operative evaluation of peroneal tendon pathology. In this review article, the normal anatomy, clinical presentation, imaging features, pitfalls and commonly performed surgical treatments for peroneal tendon abnormalities will be reviewed. The role of dynamic ultrasound and kinematic MRI for the evaluation of peroneal tendons will be discussed. Normal and abnormal postsurgical imaging appearances will be illustrated. Copyright © 2017 Elsevier B.V. All rights reserved.
Actinic imaging and evaluation of phase structures on EUV lithography masks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mochi, Iacopo; Goldberg, Kenneth; Huh, Sungmin
2010-09-28
The authors describe the implementation of a phase-retrieval algorithm to reconstruct the phase and complex amplitude of structures on EUV lithography masks. Many native defects commonly found on EUV reticles are difficult to detect and review accurately because they have a strong phase component. Understanding the complex amplitude of mask features is essential for predictive modeling of defect printability and defect repair. Besides printing in a stepper, the most accurate way to characterize such defects is with actinic inspection, performed at the design EUV wavelength. Phase defects and phase structures show a distinct through-focus behavior that enables qualitative evaluation of the object phase from two or more high-resolution intensity measurements. For the first time, the phase of structures and defects on EUV masks was quantitatively reconstructed based on aerial image measurements, using a modified version of a phase-retrieval algorithm developed to test optical phase-shifting reticles.
Accuracy of specific BIVA for the assessment of body composition in the United States population.
Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C; Marini, Elisabetta
2013-01-01
Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. The aim was to evaluate the accuracy of specific BIVA in the adult U.S. population, compared to the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple regression. Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61, respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.
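The ROC areas reported in the abstract above have a simple probabilistic reading: the ROC area equals the probability that a randomly chosen case from the positive group scores higher than a randomly chosen case from the negative group (the Mann-Whitney U statistic). A minimal sketch of that computation follows; the group labels and score values are invented for illustration, not taken from the study.

```python
def roc_auc(scores_neg, scores_pos):
    """ROC area via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, counting ties as half a win."""
    n_pairs = len(scores_neg) * len(scores_pos)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / n_pairs

# Hypothetical bioelectrical scores for subjects below / above a
# percent-fat percentile cutoff (invented numbers):
low_fm = [0.20, 0.30, 0.35, 0.40]   # "negative" group
high_fm = [0.50, 0.55, 0.60, 0.70]  # "positive" group
print(roc_auc(low_fm, high_fm))     # perfectly separated groups -> 1.0
```

An area of 0.5 corresponds to chance-level discrimination, which is why the classic-BIVA range of 0.49-0.61 for FMDXA% indicates poor accuracy.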
Evaluation of wet tantalum capacitors after exposure to extended periods of ripple current, volume 1
NASA Technical Reports Server (NTRS)
Watson, G. W.; Lasharr, J. C.; Shumaker, M. J.
1974-01-01
The application of tantalum capacitors in the Viking Lander includes both dc voltage and ripple current electrical stress, high temperature during nonoperating times (sterilization), and high vibration and shock loads. The capacitors must survive these severe environments without any degradation if reliable performance is to be achieved. A test program was established to evaluate both wet-slug tantalum and wet-foil capacitors under conditions accurately duplicating actual Viking applications. Test results of the electrical performance characteristics during extended periods of ripple current, the characteristics of the internal silver migration as a function of extended periods of ripple current, and the existence of any memory characteristics are presented.
Evaluation of wet tantalum capacitors after exposure to extended periods of ripple current, volume 2
NASA Technical Reports Server (NTRS)
Ward, C. M.
1975-01-01
The application of tantalum capacitors in the Viking Lander includes dc voltage and ripple current electrical stress, high temperature during nonoperating times (sterilization), and high vibration and shock loads. The capacitors must survive these severe environments without any degradation if reliable performance is to be achieved. A test program was established to evaluate both wet-slug tantalum and wet-foil capacitors under conditions accurately duplicating actual Viking applications. Test results of the electrical performance characteristics during extended periods of ripple current, the characteristics of the internal silver migration as a function of extended periods of ripple current, and the existence of any memory characteristics are presented.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
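The central pitfall the abstract above warns about is that leave-one-out only yields an unbiased estimate when the held-out case contributes nothing to training, including any feature selection or parameter tuning step. A minimal sketch of a correctly nested leave-one-out loop is shown below, using a toy nearest-centroid classifier; the classifier and data are invented stand-ins, not the CAD schemes studied in the paper.

```python
from collections import defaultdict

def nearest_centroid_predict(train, test_point):
    """Assign the label of the class whose feature mean is closest
    (squared Euclidean distance) to the test point."""
    groups = defaultdict(list)
    for x, y in train:
        groups[y].append(x)
    best = None
    for label, xs in groups.items():
        mean = [sum(col) / len(xs) for col in zip(*xs)]
        d = sum((a - m) ** 2 for a, m in zip(test_point, mean))
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]

def leave_one_out_accuracy(data):
    """Correct LOO: the held-out case contributes nothing to training --
    not to the class means, nor to any selection/tuning step. Computing
    the means once on the full set and then 'holding out' cases would
    leak information and bias the estimate optimistically."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]  # everything except case i
        hits += (nearest_centroid_predict(train, x) == y)
    return hits / len(data)

data = [([0.0], 'a'), ([0.1], 'a'), ([1.0], 'b'), ([1.1], 'b')]
print(leave_one_out_accuracy(data))  # 1.0 for these well-separated classes
```

The same nesting rule applies to k-fold cross-validation: anything fitted to the data must be refitted inside each fold.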
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved it. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors in the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
Accurate monitoring leads to effective control and greater learning of patient education materials.
Rawson, Katherine A; O'Neil, Rochelle; Dunlosky, John
2011-09-01
Effective management of chronic diseases (e.g., diabetes) can depend on the extent to which patients can learn and remember disease-relevant information. In two experiments, we explored a technique motivated by theories of self-regulated learning for improving people's learning of information relevant to managing a chronic disease. Materials were passages from patient education booklets on diabetes from NIDDK. Session 1 included an initial study trial, Session 2 included self-regulated restudy, and Session 3 included a final memory test. The key manipulation concerned the kind of support provided for self-regulated learning during Session 2. In Experiment 1, participants either were prompted to self-test and then evaluate their learning before selecting passages to restudy, were shown the prompt questions but did not overtly self-test or evaluate learning prior to selecting passages, or were not shown any prompts and were simply given the menu for selecting passages to restudy. Participants who self-tested and evaluated learning during Session 2 had a small but significant advantage over the other groups on the final test. Secondary analyses provided evidence that the performance advantage may have been modest because of inaccurate monitoring. Experiment 2 included a group who also self-tested but who evaluated their learning using idea-unit judgments (i.e., by checking their responses against a list of key ideas from the correct response). Participants who self-tested and made idea-unit judgments exhibited a sizable advantage on final test performance. Secondary analyses indicated that the performance advantage was attributable in part to more accurate monitoring and more effective self-regulated learning. An important practical implication is that learning of patient education materials can be enhanced by including appropriate support for learners' self-regulatory processes. (c) 2011 APA, all rights reserved.
Imitating manual curation of text-mined facts in biomedicine.
Rodriguez-Esteban, Raul; Iossifov, Ivan; Rzhetsky, Andrey
2006-09-08
Text-mining algorithms make mistakes in extracting facts from natural-language texts. In biomedical applications, which rely on use of text-mined data, it is critical to assess the quality (the probability that the message is correctly extracted) of individual facts--to resolve data conflicts and inconsistencies. Using a large set of almost 100,000 manually produced evaluations (most facts were independently reviewed more than once, producing independent evaluations), we implemented and tested a collection of algorithms that mimic human evaluation of facts provided by an automated information-extraction system. The performance of our best automated classifiers closely approached that of our human evaluators (ROC score close to 0.95). Our hypothesis is that, were we to use a larger number of human experts to evaluate any given sentence, we could implement an artificial-intelligence curator that would perform the classification job at least as accurately as an average individual human evaluator. We illustrated our analysis by visualizing the predicted accuracy of the text-mined relations involving the term cocaine.
Mioni, Giovanna; Bertucci, Erica; Rosato, Antonella; Terrett, Gill; Rendell, Peter G; Zamuner, Massimo; Stablum, Franca
2017-06-01
Previous studies have shown that traumatic brain injury (TBI) patients have difficulties with prospective memory (PM). Considering that PM is closely linked to independent living it is of primary interest to develop strategies that can improve PM performance in TBI patients. This study employed Virtual Week task as a measure of PM, and we included future event simulation to boost PM performance. Study 1 evaluated the efficacy of the strategy and investigated possible practice effects. Twenty-four healthy participants performed Virtual Week in a no strategy condition, and 24 healthy participants performed it in a mixed condition (no strategy - future event simulation). In Study 2, 18 TBI patients completed the mixed condition of Virtual Week and were compared with the 24 healthy controls who undertook the mixed condition of Virtual Week in Study 1. All participants also completed a neuropsychological evaluation to characterize the groups on level of cognitive functioning. Study 1 showed that participants in the future event simulation condition outperformed participants in the no strategy condition, and these results were not attributable to practice effects. Results of Study 2 showed that TBI patients performed PM tasks less accurately than controls, but that future event simulation can substantially reduce TBI-related deficits in PM performance. The future event simulation strategy also improved the controls' PM performance. These studies showed the value of future event simulation strategy in improving PM performance in healthy participants as well as in TBI patients. TBI patients performed PM tasks less accurately than controls, confirming prospective memory impairment in these patients. Participants in the future event simulation condition out-performed participants in the no strategy condition. Future event simulation can substantially reduce TBI-related deficits in PM performance. Future event simulation strategy also improved the controls' PM performance. 
© 2017 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo Ortega, Juan Francisco, E-mail: jfcdrr@yahoo.es; Moragues, Sandra; Pozo, Miquel
2014-07-01
The aim of this study is to assess the accuracy of a convolution-based algorithm (anisotropic analytical algorithm [AAA]) implemented in the Eclipse planning system for intensity-modulated radiosurgery (IMRS) planning of small cranial targets by using a 5-mm leaf-width multileaf collimator (MLC). Overall, 24 patient-based IMRS plans for cranial lesions of variable size (0.3 to 15.1 cc) were planned (Eclipse, AAA, version 10.0.28) using fixed field-based IMRS produced by a Varian linear accelerator equipped with a 120 MLC (5-mm width on central leaves). Plan accuracy was evaluated according to phantom-based measurements performed with radiochromic film (EBT2, ISP, Wayne, NJ). Film 2D dose distributions were analyzed with the FilmQA Pro software (version 2011, Ashland, OH) by using the triple-channel dosimetry method. Comparison between computed and measured 2D dose distributions was performed using the gamma method (3%/1 mm). Performance of the MLC was checked by inspection of the DynaLog files created by the linear accelerator during the delivery of each dynamic field. The absolute difference between the calculated and measured isocenter doses for all the IMRS plans was 2.5% ± 2.1%. The gamma evaluation method resulted in high average passing rates of 98.9% ± 1.4% (red channel) and 98.9% ± 1.5% (blue and green channels). DynaLog file analysis revealed a maximum root mean square error of 0.46 mm. According to our results, we conclude that the Eclipse/AAA algorithm provides accurate cranial IMRS dose distributions that can be accurately delivered by a Varian linac equipped with a Millennium 120 MLC.
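The gamma method used above combines a dose-difference tolerance (3%) and a distance-to-agreement tolerance (1 mm) into a single pass/fail index per measurement point. A simplified 1D sketch of the idea follows; it uses an absolute dose tolerance on uniformly spaced profiles and exhaustive search, whereas clinical tools apply the criterion in 2D/3D with normalization and interpolation options this toy version omits.

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm, dose_tol, dta_mm):
    """Simplified 1D gamma index: a measured point passes if some
    calculated point lies inside the combined dose-difference /
    distance-to-agreement acceptance ellipse (gamma <= 1).
    Profiles are assumed uniformly sampled at spacing_mm."""
    passes = 0
    for i, dm in enumerate(measured):
        gamma = min(
            math.sqrt(((i - j) * spacing_mm / dta_mm) ** 2
                      + ((dm - dc) / dose_tol) ** 2)
            for j, dc in enumerate(calculated))
        passes += (gamma <= 1.0)
    return 100.0 * passes / len(measured)

# Identical measured and calculated profiles: every point passes.
profile = [0.0, 0.5, 1.0, 0.5, 0.0]
print(gamma_pass_rate(profile, profile,
                      spacing_mm=1.0, dose_tol=0.03, dta_mm=1.0))  # 100.0
```

A point in a steep dose gradient can fail the dose criterion alone yet still pass gamma, because a nearby calculated point within the 1-mm distance tolerance may match its dose; that is the method's rationale.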
In vivo, high-frequency three-dimensional cardiac MR elastography: Feasibility in normal volunteers.
Arani, Arvin; Glaser, Kevin L; Arunachalam, Shivaram P; Rossman, Phillip J; Lake, David S; Trzasko, Joshua D; Manduca, Armando; McGee, Kiaran P; Ehman, Richard L; Araoz, Philip A
2017-01-01
Noninvasive stiffness imaging techniques (elastography) can image myocardial tissue biomechanics in vivo. For cardiac MR elastography (MRE) techniques, the optimal vibration frequency for in vivo experiments is unknown. Furthermore, the accuracy of cardiac MRE has never been evaluated in a geometrically accurate phantom. Therefore, the purpose of this study was to determine the driving frequency necessary to obtain accurate three-dimensional (3D) cardiac MRE stiffness estimates in a geometrically accurate diastolic cardiac phantom and to determine the optimal vibration frequency that can be introduced in healthy volunteers. 3D cardiac MRE was performed on eight healthy volunteers using 80 Hz, 100 Hz, 140 Hz, 180 Hz, and 220 Hz vibration frequencies. These frequencies were tested in a geometrically accurate diastolic heart phantom and compared with dynamic mechanical analysis (DMA). 3D cardiac MRE was shown to be feasible in volunteers at frequencies as high as 180 Hz. MRE and DMA agreed within 5% at frequencies greater than 180 Hz in the cardiac phantom. However, octahedral shear strain signal-to-noise ratios and myocardial coverage were highest at a frequency of 140 Hz across all subjects. This study motivates future evaluation of high-frequency 3D MRE in patient populations. Magn Reson Med 77:351-360, 2017. © 2016 Wiley Periodicals, Inc.
A gantry-based tri-modality system for bioluminescence tomography
Yan, Han; Lin, Yuting; Barber, William C.; Unlu, Mehmet Burcin; Gulsen, Gultekin
2012-01-01
A gantry-based tri-modality system that combines bioluminescence (BLT), diffuse optical (DOT), and x-ray computed tomography (XCT) into the same setting is presented here. The purpose of this system is to perform bioluminescence tomography using a multi-modality imaging approach. As parts of this hybrid system, XCT and DOT provide anatomical information and background optical property maps. This structural and functional a priori information is used to guide and restrain bioluminescence reconstruction algorithm and ultimately improve the BLT results. The performance of the combined system is evaluated using multi-modality phantoms. In particular, a cylindrical heterogeneous multi-modality phantom that contains regions with higher optical absorption and x-ray attenuation is constructed. We showed that a 1.5 mm diameter bioluminescence inclusion can be localized accurately with the functional a priori information while its source strength can be recovered more accurately using both structural and the functional a priori information. PMID:22559540
Jernick, Michael; Walker Gallego, Edward; Nuzzo, Michael
2017-12-01
Ultrasound (US)-guided intra-articular hip injections have been proposed in the literature to be accurate, reliable, and safe alternatives to fluoroscopy-guided injections. To evaluate the accuracy of US-guided magnetic resonance (MR) arthrogram injections of the hip performed in the office setting by a single orthopaedic surgeon and elucidate the potential effects that patient age, sex, and body mass index (BMI) have on contrast placement. Case series; Level of evidence, 4. From a review of the senior author's office database, 89 patients (101 hips) who had US-guided MR arthrogram injections performed between December 2014 and June 2016 were identified. Official radiology reports were evaluated to determine whether extra-articular contrast was noted. Patient variables, including BMI, age, and sex, were evaluated between patients who had inappropriately placed contrast and those who did not. Of the 101 hip injections, there were 6 cases that demonstrated inadequate contrast placement within the joint, likely secondary to extravasation or incorrect placement; however, an MR arthrogram was adequately interpreted in all cases. There were no significant differences noted between those with appropriate versus inappropriate contrast placement when evaluating BMI ( P = .57), age ( P = .33), or sex ( P = .67), and neither group had an adverse event. US-guided injections are safe and accurate alternatives to fluoroscopy-guided injections in the office setting, with 94% accuracy. Furthermore, BMI, age, and sex did not play a statistically significant role among patients with inappropriately placed contrast.
Paes, Thaís; Machado, Felipe Vilaça Cavallari; Cavalheri, Vinícius; Pitta, Fabio; Hernandes, Nidia Aparecida
2017-07-01
People with chronic obstructive pulmonary disease (COPD) present symptoms such as dyspnea and fatigue, which hinder their performance in activities of daily living (ADL). A few multitask protocols have been developed to assess ADL performance in this population, although the measurement properties of such protocols had not yet been systematically reviewed. Areas covered: Studies were included if an assessment of the ability to perform ADL was conducted in people with COPD using an objective, performance-based protocol. The search was conducted in the following databases: Pubmed, EMBASE, Cochrane Library, PEDro, CINAHL and LILACS. Furthermore, hand searches were conducted. Expert commentary: To date, only three protocols have had their measurement properties described: the Glittre ADL Test, the Monitored Functional Task Evaluation and the Londrina ADL Protocol were shown to be valid and reliable, whereas only the Glittre ADL Test was shown to be responsive to change after pulmonary rehabilitation. These protocols can be used in laboratory settings and clinical practice to evaluate ADL performance in people with COPD, although there is need for more in-depth information on their validity, reliability and especially responsiveness, given the growing interest in the accurate assessment of ADL performance in this population.
Advanced Technology Composite Fuselage-Structural Performance
NASA Technical Reports Server (NTRS)
Walker, T. H.; Minguet, P. J.; Flynn, B. W.; Carbery, D. J.; Swanson, G. D.; Ilcewicz, L. B.
1997-01-01
Boeing is studying the technologies associated with the application of composite materials to commercial transport fuselage structure under the NASA-sponsored contracts for Advanced Technology Composite Aircraft Structures (ATCAS) and Materials Development Omnibus Contract (MDOC). This report addresses the program activities related to structural performance of the selected concepts, including both the design development and subsequent detailed evaluation. Design criteria were developed to ensure compliance with regulatory requirements and typical company objectives. Accurate analysis methods were selected and/or developed where practical, and conservative approaches were used where significant approximations were necessary. Design sizing activities supported subsequent development by providing representative design configurations for structural evaluation and by identifying the critical performance issues. Significant program efforts were directed towards assessing structural performance predictive capability. The structural database collected to perform this assessment was intimately linked to the manufacturing scale-up activities to ensure inclusion of manufacturing-induced performance traits. Mechanical tests were conducted to support the development and critical evaluation of analysis methods addressing internal loads, stability, ultimate strength, attachment and splice strength, and damage tolerance. Unresolved aspects of these performance issues were identified as part of the assessments, providing direction for future development.
Nonlinear analysis and performance evaluation of the Annular Suspension and Pointing System (ASPS)
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1978-01-01
The Annular Suspension and Pointing System (ASPS) can provide highly accurate fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during space shuttle orbital missions. In this report, a detailed nonlinear mathematical model is developed for the ASPS/Space Shuttle system. The equations are augmented with nonlinear models of components such as magnetic actuators and gimbal torquers. Control systems and payload attitude state estimators are designed in order to obtain satisfactory pointing performance, and statistical pointing performance is predicted in the presence of measurement noise and disturbances.
Gymnastic judges benefit from their own motor experience as gymnasts.
Pizzera, Alexandra
2012-12-01
Gymnastic judges have the difficult task of evaluating highly complex skills. My purpose in the current study was to examine evidence that judges use their sensorimotor experiences to enhance their perceptual judgments. In a video test, 58 judges rated 31 gymnasts performing a balance beam skill. I compared decision quality between judges who could perform the skill themselves on the balance beam (specific motor experience = SME) and those who could not. Those with SME showed better performance than those without SME. These data suggest that judges use their personal experiences as information to accurately assess complex gymnastic skills. [corrected].
Increasing money-counting skills with a student with brain injury: skill and performance deficits.
Fienup, Daniel M; Mudgal, Dipti; Pace, Gary
2013-01-01
Two studies examined the effectiveness of interventions designed to increase money-counting skills of a student with brain injury. Both skill and performance hypotheses were examined. Single subject designs were used to evaluate interventions, including a multiple-baseline across counting paper and coin money (study 1) and a changing criterion design (study 2). In study 1, it was hypothesized that the student had a skill deficit; thus, the participant was taught organizational strategies for counting money. In study 2, a performance deficit was hypothesized and the effects of contingent rewards were evaluated. In study 1, organizational strategies increased organized counting of money, but did not affect counting accuracy. In study 2, contingent rewards increased accurate money counting. When dealing with multi-step behaviours, different components of behaviour can be controlled by different variables, such as skill and performance deficits. Effective academic interventions may need to consider both types of deficits.
Test Vehicle Forebody Wake Effects on CPAS Parachutes
NASA Technical Reports Server (NTRS)
Ray, Eric S.
2017-01-01
Parachute drag performance has been reconstructed for a large number of Capsule Parachute Assembly System (CPAS) flight tests. This allows for determining forebody wake effects indirectly through statistical means. When data are available in a "clean" wake, such as behind a slender test vehicle, the relative degradation in performance for other test vehicles can be computed as a Pressure Recovery Fraction (PRF). All four CPAS parachute types were evaluated: Forward Bay Cover Parachutes (FBCPs), Drogues, Pilots, and Mains. Many tests used the missile-shaped Parachute Compartment Drop Test Vehicle (PCDTV) to obtain data at high airspeeds. Other tests used the Orion "boilerplate" Parachute Test Vehicle (PTV) to evaluate parachute performance in a representative heatshield wake. Drag data from both vehicles are normalized to a "capsule" forebody equivalent for Orion simulations. A separate database of PCDTV-specific performance is maintained to accurately predict flight tests. Data are shared among analogous parachutes whenever possible to maximize statistical significance.
Performance Appraisal Interview: A Review of Research.
1987-01-01
and valued outcomes than did supervisory feedback. Ivancevich and McMahon (1982) found that self-generated feedback on goal accomplishment was... Ivancevich (1980) found that engineers rated with behavior expectation scales perceived their interviews as providing more clarity, more meaningful feedback... accurate than evaluative, emotionally-toned feedback. In addition, the previously-discussed studies by Ivancevich (1980) and Hom et al. (1982) suggest
ERIC Educational Resources Information Center
Chingos, Matthew M.; Henderson, Michael; West, Martin R.
2010-01-01
Conventional models of democratic accountability hinge on citizens' ability to evaluate government performance accurately, yet there is little evidence on the degree to which citizen perceptions of the quality of government services correspond to actual service quality. Using nationally representative survey data, we find that citizens'…
ERIC Educational Resources Information Center
Steiner, Lucy
2010-01-01
The United States' education system needs to take its critical next step: fairly and accurately measuring teacher performance. Successful reforms to teacher pay, career advancement, professional development, retention, and other human capital systems that lead to better student outcomes depend on it. Where can the U.S. find the best-practice…
ERIC Educational Resources Information Center
Karkee, Thakur; Choi, Seung
2005-01-01
Proper maintenance of a scale established in the baseline year would assure the accurate estimation of growth in subsequent years. Scale maintenance is especially important when the state performance standards must be preserved for future administrations. To ensure proper maintenance of a scale, the selection of anchor items and evaluation of…
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
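The semantic-similarity criterion described above rests on comparing word embeddings. A minimal sketch of that idea follows; it is not the authors' implementation, and the toy 4-dimensional vectors and the 0.6 threshold are illustrative stand-ins for real word2vec embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings" standing in for real word2vec vectors.
vectors = {
    "cat": np.array([0.9, 0.1, 0.0, 0.2]),
    "dog": np.array([0.8, 0.2, 0.1, 0.3]),
    "car": np.array([0.0, 0.9, 0.8, 0.1]),
}

def is_semantic_error(target, production, threshold=0.6):
    """Flag a real-word error as semantically related if its embedding
    is close enough to the target word's embedding."""
    return cosine_similarity(vectors[target], vectors[production]) >= threshold

# "dog" for "cat" is plausibly a semantic paraphasia; "car" for "cat" is not.
```

With real embeddings the threshold would be tuned against human-scored paraphasias rather than chosen a priori.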
Influence of 2D and 3D view on performance and time estimation in minimal invasive surgery.
Blavier, A; Nyssen, A S
2009-11-01
This study aimed to evaluate the impact of two-dimensional (2D) and three-dimensional (3D) images on time performance and time estimation during a surgical motor task. A total of 60 subjects without any surgical experience (nurses) and 20 expert surgeons performed a fine surgical task with a new laparoscopic technology (da Vinci robotic system). The 80 subjects were divided into two groups, one using the 3D view option and the other using the 2D view option. We measured time performance and asked subjects to estimate their time performance verbally. Our results showed faster performance in 3D than in 2D view for novice subjects, while performance in 2D and 3D was similar in the expert group. We obtained a significant interaction between time performance and time evaluation: in the 2D condition, all subjects accurately estimated their time performance, while they overestimated it in the 3D condition. Our results emphasise the role of 3D in improving performance and the contradictory feeling about time evaluation in 2D and 3D. This finding is discussed with regard to the retrospective paradigm and suggests that 2D and 3D images are processed and memorised differently.
Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S
2009-04-01
The recent development in Oxford lateral unicompartmental knee arthroplasty (UKA) design requires a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA), and no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison to total knee implants. A phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKAs. The rotational degrees of freedom for the femur showed the greatest disparity between the UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, while the out-of-plane translations were less accurate, at 4.1 mm RMS. The rotational accuracies were between 0.6 degrees and 2.3 degrees, which is less accurate than those reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.
The applicability and effectiveness of cluster analysis
NASA Technical Reports Server (NTRS)
Ingram, D. S.; Actkinson, A. L.
1973-01-01
An insight into the characteristics which determine the performance of a clustering algorithm is presented. In order for the techniques which are examined to accurately cluster data, two conditions must be simultaneously satisfied. First, the data must have a particular structure, and second, the parameters chosen for the clustering algorithm must be correct. By examining the structure of the data from the Cl flight line, it is clear that no single set of parameters can be used to accurately cluster all the different crops. The effectiveness of either a noniterative or iterative clustering algorithm to accurately cluster data representative of the Cl flight line is questionable. Thus extensive a priori knowledge is required in order to use cluster analysis in its present form for applications like assisting in the definition of field boundaries and evaluating the homogeneity of a field. New or modified techniques are necessary for clustering to be a reliable tool.
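The abstract's central point, that clustering succeeds only when the data have suitable structure and the algorithm's parameters are chosen correctly, can be illustrated with a minimal k-means sketch. The synthetic data and the deliberately wrong cluster count below are illustrative only; this is not the algorithm evaluated in the study:

```python
import numpy as np

def kmeans(X, init):
    """Minimal Lloyd's k-means with explicit initial centroids;
    returns labels and the within-cluster sum of squares."""
    centroids = init.astype(float).copy()
    for _ in range(20):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    inertia = float(((X - centroids[labels]) ** 2).sum())
    return labels, inertia

# Two well-separated synthetic "crop" clusters; the correct parameter
# choice (k=2, one seed per group) recovers them, while k=1 lumps
# them together and fits the data poorly.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(10.0, 0.5, (20, 2))])

_, inertia_k1 = kmeans(X, X[[0]])      # wrong parameter choice
_, inertia_k2 = kmeans(X, X[[0, 20]])  # parameters match the structure
```

The large gap between the two inertia values is the sketch's analogue of the parameter sensitivity the abstract describes.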
A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.
Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D
2014-02-01
In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants.
Nunes, Matheus Henrique; Görgens, Eric Bastos
2016-01-01
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074
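The traditional baseline referred to above is a taper equation relating relative diameter to relative height along the stem. A hedged sketch of fitting one simple polynomial form by least squares follows; the stem profile is synthetic, not the study's measurements:

```python
import numpy as np

# Synthetic stem profile: relative diameter (d/D) falls from 1 at the
# stem base to 0 at the tip. The "true" form and noise level are
# assumptions for illustration only.
rng = np.random.default_rng(0)
rel_h = np.linspace(0.0, 1.0, 50)                    # relative height h/H
rel_d = (1.0 - rel_h) ** 1.5 + rng.normal(0.0, 0.01, rel_h.size)

# Fit the polynomial taper model d/D = b0 + b1*(h/H) + b2*(h/H)^2
A = np.column_stack([np.ones_like(rel_h), rel_h, rel_h ** 2])
coef, *_ = np.linalg.lstsq(A, rel_d, rcond=None)

# Residual error of the fitted taper equation on the profile data.
taper_rmse = float(np.sqrt(np.mean((A @ coef - rel_d) ** 2)))
```

The machine-learning approaches in the study replace this fixed functional form with flexible regressors fitted to the same diameter-height pairs.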
Bansal, Vasudha; Kumar, Pawan; Kwon, Eilhann E; Kim, Ki-Hyun
2017-10-13
There is a growing need for accurate detection of trace-level PAHs in food products due to the numerous detrimental effects caused by their contamination (e.g., toxicity, carcinogenicity, and teratogenicity). This review discusses up-to-date knowledge on the measurement techniques available for PAHs contained in food and related products, with the aim of helping to reduce their deleterious impacts on human health through accurate quantification. The main part of this review is dedicated to the opportunities and practical options for the treatment of various food samples and for accurate quantification of PAHs contained in those samples. Basic information regarding all available analytical measurement techniques for PAHs in food samples is also evaluated with respect to their performance in terms of quality assurance.
"Performance Of A Wafer Stepper With Automatic Intra-Die Registration Correction."
NASA Astrophysics Data System (ADS)
van den Brink, M. A.; Wittekoek, S.; Linders, H. F. D.; van Hout, F. J.; George, R. A.
1987-01-01
An evaluation of a wafer stepper with the new improved Philips/ASM-L phase grating alignment system is reported. It is shown that an accurate alignment system needs an accurate X-Y-θ wafer stage and an accurate reticle Z stage to realize optimum overlay accuracy. This follows from a discussion of the overlay budget and an alignment procedure model. The accurate wafer stage permits high overlay accuracy using global alignment only, thus eliminating the throughput penalty of align-by-field schemes. The accurate reticle Z stage enables intra-die magnification control with respect to the wafer scale. Various overlay data are reported, which have been measured with the automatic metrology program of the stepper. It is demonstrated that the new dual alignment system (with the external spatial filter) has improved the ability to align to weakly reflecting layers. The results are supported by a Fourier analysis of the alignment signal. Resolution data are given for the PAS 2500 projection lenses, which show that the high overlay accuracy of the system is properly matched with submicron linewidth control. The results of a recently introduced 20 mm i-line lens with a numerical aperture of 0.4 (Zeiss 10-78-58) are included.
Performance comparison of extracellular spike sorting algorithms for single-channel recordings.
Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert
2012-01-30
Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
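The optimization step above compared each sorter's output to the known ground truth of the artificial signals using Adjusted Mutual Information. As a simplified stand-in for that agreement measure (AMI itself adds an expected-value correction and is available, for example, as scikit-learn's adjusted_mutual_info_score), a pairwise Rand index conveys the same idea; the label sequences below are invented:

```python
from itertools import combinations

def rand_index(a, b):
    """Fraction of item pairs on which two labelings agree
    (both in the same cluster, or both in different clusters)."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

truth = [0, 0, 1, 1, 2, 2]    # ground-truth units of artificial spikes
sorted_ = [0, 0, 1, 1, 1, 2]  # hypothetical output of a spike sorter

score = rand_index(truth, sorted_)  # 12 of 15 pairs agree -> 0.8
```

A parameter search like the one in the study would rerun the sorter over a parameter grid and keep the setting that maximizes this agreement score against the ground truth.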
Clare, Linda; Whitaker, Christopher J; Roberts, Judith L; Nelis, Sharon M; Martyr, Anthony; Marková, Ivana S; Roth, Ilona; Woods, Robert T; Morris, Robin G
2013-01-01
Measures of memory awareness based on evaluative judgement and performance monitoring are often regarded as equivalent, but the Levels of Awareness Framework suggests they reflect different awareness phenomena. Examination of memory awareness among groups with differing degrees of impairment provides a test of this proposition. Ninety-nine people with dementia (PwD), 30 people with mild cognitive impairment (PwMCI), and their relatives completed isomorphic performance monitoring and evaluative judgement measures of memory awareness and were followed up at 12 and (PwD only) 20 months. In addition to the resulting awareness indices, comparative accuracy scores were calculated using the relatives' data to establish whether any inaccuracy was specific to self-ratings. When making evaluative judgements about their memory in general, both PwD and PwMCI tended to overestimate their own functioning relative to informant ratings made by relatives. When monitoring performance on memory tests, PwD again overestimated performance relative to test scores, but PwMCI were much more accurate. Comparative accuracy scores indicated that, unlike PwD, PwMCI do not show a specific inaccuracy in self-related appraisals. The results support the proposition that awareness indices at the levels of evaluative judgement and performance monitoring should be regarded as reflecting distinct awareness phenomena. Copyright © 2013 S. Karger AG, Basel.
Whetsell, M S; Rayburn, E B; Osborne, P I
2006-05-01
This study was conducted to evaluate the accuracy of the National Research Council's (2000) Nutrient Requirements of Beef Cattle computer model when used to predict calf performance during on-farm pasture or dry-lot weaning and backgrounding. Calf performance was measured on 22 farms in 2002 and 8 farms in 2003 that participated in West Virginia Beef Quality Assurance Sale marketing pools. Calves were weaned on pasture (25 farms) or in dry-lot (5 farms) and fed supplemental hay, haylage, ground shell corn, soybean hulls, or a commercial concentrate. Concentrates were fed at a rate of 0.0 to 1.5% of BW. The National Research Council (2000) model was used to predict ADG of each group of calves observed on each farm. The model error was measured by calculating residuals (predicted ADG minus observed ADG). Predicted animal performance was determined using level 1 of the model. Results show that, when using normal on-farm pasture sampling and forage analysis methods, the model error for ADG is high, and the model did not accurately predict the performance of steers or heifers fed high-forage pasture-based diets; the predicted ADG was lower (P < 0.05) than the observed ADG. The estimated intake of low-producing animals was similar to the expected DMI, but for the greater-producing animals it was not. The NRC (2000) beef model may more accurately predict on-farm animal performance in pastured situations if feed analysis values reflect the energy value of the feed, account for selective grazing, and relate empty BW and shrunk BW to NDF.
Press, Michael F; Slamon, Dennis J; Flom, Kerry J; Park, Jinha; Zhou, Jian-Yuan; Bernstein, Leslie
2002-07-15
To compare and evaluate HER-2/neu clinical assay methods. One hundred seventeen breast cancer specimens with known HER-2/neu amplification and overexpression status were assayed with four different immunohistochemical assays and two different fluorescence in situ hybridization (FISH) assays. The accuracy of the FISH assays for HER-2/neu gene amplification was high: 97.4% for the Vysis PathVision assay (Vysis, Inc, Downers Grove, IL) and 95.7% for the Ventana INFORM assay (Ventana Medical Systems, Inc, Tucson, AZ). The immunohistochemical assay with the highest accuracy for HER-2/neu overexpression was obtained with the R60 polyclonal antibody (96.6%), followed by immunohistochemical assays performed with the 10H8 monoclonal antibody (95.7%), the Ventana CB11 monoclonal antibody (89.7%), and the DAKO HercepTest (88.9%; DAKO Corp, Carpinteria, CA). Only the sensitivities, and therefore the overall accuracy, of the DAKO HercepTest and Ventana CB11 immunohistochemical assays were significantly different from the more sensitive FISH assay. Based on these findings, the FISH assays were highly accurate, with the immunohistochemical assays performed with R60 and 10H8 nearly as accurate. The DAKO HercepTest and the Ventana CB11 immunohistochemical assay were statistically significantly different from the Vysis FISH assay in evaluating these previously molecularly characterized breast cancer specimens.
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid
2015-01-01
Hippocampus segmentation is a key step in the evaluation of mesial Temporal Lobe Epilepsy (mTLE) by MR images. Several automated segmentation methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method whose results most closely resemble the manual benchmark for the hippocampus. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of automatic segmentation, the Dice coefficient, Hausdorff distance, precision, and Root Mean Square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is most similar to expert manual outlining by statistical validation. At p < 0.05, the performance measurements for ABSS reveal that Dice is 4%, 13%, and 17% higher, Hausdorff is 23%, 87%, and 70% lower, precision is 5%, -5%, and 12% higher, and RMS is 19%, 62%, and 65% lower compared to LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043
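The Dice coefficient used for performance evaluation measures volume overlap between an automatic and a manual segmentation. A minimal sketch follows, with 1-D toy masks standing in for 3-D label volumes:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks:
    twice the intersection divided by the sum of the two volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "masks" standing in for 3-D hippocampus segmentations.
manual = np.array([0, 1, 1, 1, 0, 0])
auto = np.array([0, 1, 1, 0, 0, 0])

score = dice(manual, auto)  # 2*2 / (3+2) = 0.8
```

A Dice of 1.0 means perfect overlap with the manual outline; the study's percentage comparisons between methods are differences in exactly this kind of score.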
Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin
2016-01-25
To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods, including multivariate linear regression, least squares support vector machine, and plain partial least squares regression. Both R² and the root-mean-square error of cross-validation (RMSECV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model constructed on the dataset including outliers. Meanwhile, the RMSECV calculated for the models constructed by the other methods was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with an RPLS model constructed using e-tongue data.
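RMSECV summarizes prediction error under cross-validation. A hedged sketch of the idea follows, using leave-one-out cross-validation with an ordinary least-squares line as a stand-in for the study's robust PLS model; the single-feature sensor data are synthetic:

```python
import numpy as np

def rmse_cv(x, y):
    """Leave-one-out cross-validated RMSE of a simple least-squares
    line (a stand-in for the study's robust PLS model)."""
    n = len(y)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i                      # hold out sample i
        A = np.column_stack([np.ones(n - 1), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        errors.append(coef[0] + coef[1] * x[i] - y[i])  # out-of-fold error
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic single-feature "e-tongue response" versus bitterness,
# spanning the bitterness range (0.63 to 4.78) reported in the study.
rng = np.random.default_rng(1)
feature = np.linspace(0.0, 1.0, 30)
bitterness = 0.63 + 4.15 * feature + rng.normal(0.0, 0.1, 30)

score = rmse_cv(feature, bitterness)
```

Because each prediction is made on a held-out sample, this score, like the study's RMSECV, estimates error on unseen decoctions rather than fit to the training data.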
In vitro evaluation of the Medtronic cardioplegia safety system.
Trowbridge, C C; Woods, K R; Muhle, M L; Niimi, K S; Tremain, K D; Jiang, J; Stammers, A H
2000-03-01
Myocardial preservation demands the precise and accurate delivery of cardioplegic solutions to provide nutritive delivery and metabolic waste removal. The purpose of this study was to evaluate the performance characteristics of the Medtronic CSS Cardioplegia Safety System in an in vitro setting. The CSS was evaluated under the following conditions: blood to crystalloid ratios of 1:0, 1:1, 4:1, 8:1, and 0:1; potassium concentrations of 10, 20, and 40 mEq L-1; volumetric delivery collection at 100, 250, 500, 750, and 990 mL/min; pressure accuracy at 100 and 300 mmHg; and system safety mechanisms. Measured and predicted values from the CSS were compared using one-way ANOVA, with statistical significance accepted at p ≤ 0.05. The measured values for the tested ratios and volume collections were all within the manufacturer's technical parameters. Potassium concentration results were all within expected values except at 100 mL/min, where the measured value of 17.1 +/- 2.1 mmol was lower than the expected 20.0 +/- 0.2 mmol (p < .034). As flow rates changed, the CSS line pressure error was constant (0.5 to 3.7%), and the only significant difference was observed at 100 mmHg, 500 mL/min (102.3 +/- 1.7 vs. 100.0 +/- 0.0 mmHg, p < .003). The device performed accurately and reliably under all simulated safety conditions, including bubble detection, overpressurization, and battery backup. In conclusion, the performance of the CSS was within the manufacturer's specifications for the majority of the tested conditions, and the system operated safely when challenged under varying conditions.
Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo
2016-01-01
Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, accurate, and readily available are desirable. Such approaches may decrease time to diagnosis and better utilize limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy of gastroenterologist-performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold standard ileocolonoscopy. Methods. A prospective, single-center study was conducted on a convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. Results. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can accurately be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve allocation of endoscopic resources, and facilitate initiation of appropriate medical therapy. PMID:27446838
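The reported measures all derive from a 2x2 table of POCUS results against ileocolonoscopy. A small sketch of the standard formulas follows; the counts below are hypothetical, chosen only to be consistent with the reported n of 58, since the abstract gives only the derived percentages:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table of
    true/false positives and negatives."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "lr+": sens / (1.0 - spec),       # positive likelihood ratio
        "lr-": (1.0 - sens) / spec,       # negative likelihood ratio
    }

# Hypothetical counts summing to the study's 58 patients.
m = diagnostic_metrics(tp=8, fp=1, fn=2, tn=47)
```

Note that predictive values, unlike sensitivity and specificity, depend on disease prevalence in the sampled population.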
Fall risk screening protocol for older hearing clinic patients.
Criter, Robin E; Honaker, Julie A
2017-10-01
The primary purposes of this study were (1) to describe measures that may distinguish audiology patients who fall from those who do not fall and (2) to evaluate the clinical performance of measures that could be easily used for fall risk screening in a mainstream audiology hearing clinic. Cross-sectional study. Study sample: Thirty-six community-dwelling audiology patient participants and 27 community-dwelling non-audiology patients over 60 years of age. The Hearing Handicap Inventory for the Elderly (HHIE) most accurately identified patients with a recent fall (sensitivity: 76.0%), while the Dizziness Handicap Inventory (DHI) most accurately identified patients without a recent fall (specificity: 90.9%). A combination of measures used in a protocol, including the HHIE, DHI, number of medications, and the Timed Up and Go test, resulted in accurate identification of patients with or without a recent history of falls (92.0% sensitivity, 100% specificity). This study reports good sensitivity and excellent specificity for identifying patients with and without a recent history of falls when measures were combined into a screening protocol. Despite previously reported barriers, effective fall risk screenings may be performed in hearing clinic settings with measures often readily accessible to audiologists.
Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.
Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna
2016-11-01
Simulcheck evaluates Colour Simulation Tools (CSTs, which transform colours to mimic those seen by colour vision deficients). Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) not requiring sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when lacking a colorimeter (an expensive and unusual apparatus). In contrast, the inaccuracy of the L_R estimations provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and easy-to-find apparatus.
Zheng, Zhi; Warren, Zachary; Weitlauf, Amy; Fu, Qiang; Zhao, Huan; Swanson, Amy; Sarkar, Nilanjan
2016-11-01
Researchers are increasingly attempting to develop and apply innovative technological platforms for early detection and intervention of autism spectrum disorder (ASD). This pilot study designed and evaluated a novel technologically mediated intelligent learning environment with relevance to early social orienting skills. The environment was endowed with the capacity to administer social orienting cues and adaptively respond to autonomous real-time measurement of performance (i.e., non-contact gaze measurement). We evaluated the system with both toddlers with ASD (n = 8) and typically developing infants (n = 8). Children in both groups were ultimately able to respond accurately to social prompts delivered by the technological system. Results also indicated that the system was capable of attracting children's attention and guiding them toward correct performance autonomously, without user intervention.
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1981-01-01
The crop identification procedures used were developed for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinement prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data were not available for the high-density barley of primary importance in foreign regions, and the low-density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis to a variety of agricultural and meteorological conditions representative of the global environment.
Improving Seasonal Crop Monitoring and Forecasting for Soybean and Corn in Iowa
NASA Astrophysics Data System (ADS)
Togliatti, K.; Archontoulis, S.; Dietzel, R.; VanLoocke, A.
2016-12-01
Accurately forecasting crop yield in advance of harvest could greatly benefit farmers; however, few evaluations have been conducted to determine the effectiveness of forecasting methods. We tested one such method that used short-term weather forecasts from the Weather Research and Forecasting (WRF) model to predict in-season weather variables, such as maximum and minimum temperature, precipitation, and radiation, at four different forecast lengths (2 weeks, 1 week, 3 days, and 0 days). These forecasted weather data were combined with current and historical (previous 35 years) data from the Iowa Environmental Mesonet to drive Agricultural Production Systems sIMulator (APSIM) simulations forecasting soybean and corn yields in 2015 and 2016. The goal of this study is to find the forecast length that reduces the variability of simulated yield predictions while also increasing their accuracy. APSIM simulations of crop variables were evaluated against bi-weekly field measurements of phenology, biomass, and leaf area index from early- and late-planted soybean plots located at the Agricultural Engineering and Agronomy Research Farm in central Iowa as well as the Northwest Research Farm in northwestern Iowa. WRF model predictions were evaluated against observed weather data collected at the experimental fields. Maximum temperature was the most accurately predicted variable, followed by minimum temperature and radiation; precipitation was least accurate according to RMSE values and the number of days forecasted within 20% error of the observed weather. Our analysis indicated that for the majority of months in the growing season the 3-day forecast performed best, the 1-week forecast was second, and the 2-week forecast was least accurate. Preliminary results for yield indicate that the 2-week forecast is the least variable of the forecast lengths but is also the least accurate, while the 3-day and 1-week forecasts are more accurate but more variable.
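The two accuracy metrics used above, RMSE and the fraction of days forecasted within 20% of the observed value, can be sketched in a few lines of Python; the daily temperature values below are made up for illustration and are not taken from the study:

```python
import math

def rmse(forecast, observed):
    """Root-mean-square error between paired forecast and observed values."""
    n = len(forecast)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)

def fraction_within_tolerance(forecast, observed, tol=0.20):
    """Fraction of days whose forecast falls within +/- tol of the observed value."""
    hits = sum(1 for f, o in zip(forecast, observed)
               if o != 0 and abs(f - o) <= tol * abs(o))
    return hits / len(forecast)

# Toy daily maximum-temperature series (degrees C), purely illustrative
obs = [28.0, 30.5, 27.2, 31.0, 29.4]
fcst = [27.5, 31.0, 25.0, 30.2, 29.9]
print(round(rmse(fcst, obs), 2))
print(fraction_within_tolerance(fcst, obs))
```

Applying both metrics per variable and per forecast length, as the study does, is then a matter of looping over the weather variables.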
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, and industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
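As a minimal illustration of the pure-standard (external calibration) approach mentioned above, the sketch below fits a least-squares calibration line and back-calculates the concentration of an unknown; the standard concentrations and count rates are invented for the example:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b (the calibration curve)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical ICP-MS calibration: standard concentrations (ng/mL) vs. counts/s
conc = [0.0, 1.0, 5.0, 10.0, 50.0]
cps = [12.0, 1050.0, 5020.0, 10110.0, 50040.0]
slope, intercept = fit_line(conc, cps)

# Quantify an unknown sample from its measured signal
sample_cps = 25000.0
sample_conc = (sample_cps - intercept) / slope
print(round(sample_conc, 2))
```

Matrix-matched standards and isotope-dilution approaches differ in how the standards are prepared, but the back-calculation step is the same in spirit.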
Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won
2013-09-27
Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., systems in which the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or a long data window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and that an accurate AR model can in this case be built using a low SR (20 Hz) and a small DW (3 s). The high within-class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, a true subject-independent evaluation was performed, in which 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
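The AR-modelling step can be illustrated with a pure-Python sketch that estimates AR coefficients from one acceleration window via the Levinson-Durbin recursion; this is a generic textbook implementation for illustration, not the authors' code, and the synthetic AR(1) signal stands in for a real 3 s accelerometer window:

```python
import random

def autocorr(x, maxlag):
    """Biased, normalized autocorrelation estimates r[0..maxlag]."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    c0 = sum(v * v for v in xc) / n
    return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n / c0
            for k in range(maxlag + 1)]

def ar_coefficients(x, order):
    """Levinson-Durbin recursion fitting x[t] = sum_j a[j]*x[t-j] + e[t];
    the returned a[0..order-1] serve as the feature vector."""
    r = autocorr(x, order)
    a = [0.0] * (order + 1)
    e = r[0]
    for k in range(1, order + 1):
        lam = (r[k] - sum(a[j] * r[k - j] for j in range(1, k))) / e
        new_a = a[:]
        new_a[k] = lam
        for j in range(1, k):
            new_a[j] = a[j] - lam * a[k - j]
        a = new_a
        e *= 1.0 - lam * lam  # prediction-error variance shrinks each step
    return a[1:]

# Synthetic AR(1) signal with known coefficient 0.8
rng = random.Random(7)
x = [0.0]
for _ in range(1999):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
coeffs = ar_coefficients(x, 3)
```

For a real system the coefficients from each axis and window would feed the kernel discriminant analysis stage described above.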
A program to evaluate a control system based on feedback of aerodynamic pressure differentials
NASA Technical Reports Server (NTRS)
Levy, D. W.; Finn, P.; Roskam, J.
1981-01-01
The use of aerodynamic pressure differentials to position a control surface is evaluated. The system is a differential-pressure command loop, analogous to a position command loop, in which the surface is commanded to move until a desired differential pressure across the surface is achieved. This type of control is more direct and accurate because it is the differential pressure that causes the control forces and moments. A frequency response test was performed in a low-speed wind tunnel to measure the performance of the system. Both pressure and position feedback were tested. The pressure feedback performed as well as the position feedback, implying that the actuator, with a break frequency on the order of 10 rad/s, was the limiting component. Theoretical considerations indicate that aerodynamic lags will not appear below frequencies on the order of 50 rad/s.
Radiosonde pressure sensor performance - Evaluation using tracking radars
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Norcross, G. A.; Brooks, R. L.
1984-01-01
The standard balloon-borne radiosonde employed for synoptic meteorology provides vertical profiles of temperature, pressure, and humidity as a function of elapsed time. These parameters are used in the hypsometric equation to calculate the geopotential altitude at each sampling point during the balloon's flight. It is important that the vertical location information be accurate. The present investigation was conducted with the objective of evaluating the altitude determination accuracy of the standard radiosonde throughout the entire balloon profile. The tests included two other commercially available pressure sensors to see if they could provide improved accuracy in the stratosphere. The pressure-measuring performance of standard baroswitches, premium baroswitches, and hypsometers in balloon-borne sondes was correlated with tracking radars. It was found that the standard and premium baroswitches perform well up to about 25 km altitude, while hypsometers provide more reliable data above 25 km.
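The hypsometric calculation referred to above can be sketched as follows; the sounding values are illustrative, the layer-mean temperature is taken as a simple average of the level temperatures, and virtual-temperature (humidity) corrections are omitted:

```python
import math

R_D = 287.05   # specific gas constant for dry air, J/(kg*K)
G0 = 9.80665   # standard gravity, m/s^2

def thickness(p_bottom, p_top, t_mean_kelvin):
    """Hypsometric equation: geopotential thickness (m) of the layer
    between two pressure levels, given the layer-mean temperature."""
    return (R_D * t_mean_kelvin / G0) * math.log(p_bottom / p_top)

def altitude_profile(pressures_hpa, temps_kelvin, z0=0.0):
    """Integrate altitude layer by layer, as along a radiosonde ascent."""
    z = [z0]
    for i in range(1, len(pressures_hpa)):
        t_mean = 0.5 * (temps_kelvin[i - 1] + temps_kelvin[i])
        z.append(z[-1] + thickness(pressures_hpa[i - 1], pressures_hpa[i], t_mean))
    return z

# Illustrative sounding: pressure levels (hPa) and temperatures (K)
p = [1000.0, 850.0, 700.0, 500.0]
t = [288.0, 280.0, 270.0, 252.0]
print([round(zi) for zi in altitude_profile(p, t)])
```

A pressure-sensor bias therefore maps directly into an altitude error through the logarithm of the pressure ratio, which is why sensor performance degrades the altitude assignment most at low stratospheric pressures.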
Prediction of induced vibrations for a passenger - car ferry
NASA Astrophysics Data System (ADS)
Crudu, L.; Neculet, O.; Marcu, O.
2016-08-01
In order to evaluate global vibrations of a ship hull, propeller excitation must be properly considered, making it mandatory to know the magnitude of the induced hull pressure impulses with sufficient accuracy. During the preliminary design stages, the pressures induced on the aft part of the ship by the operating propeller can be evaluated based on the guidelines given by international standards or by the provisions of the Classification Societies. These approximate formulas take into account the wake field which, unfortunately, can only be estimated unless experimental towing tank tests are carried out. Another possibility is numerical evaluation with Computational Fluid Dynamics (CFD) codes. However, CFD methods are not always easy to use, requiring an accurate description of the hull forms in the aft part of the ship. The present research underlines these aspects during the preliminary prediction of propeller-induced vibrations for a double-ended passenger-car ferry propelled by two azimuth fixed-pitch thrusters placed at both ends of the ship. The evaluation of the global forced vibration is performed on a 3D global Finite Element (FE) model, using NX Nastran for Windows. Based on the presented results, the paper provides reliable information to be used during the preliminary design stages.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method that extends the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
Lung magnetic resonance imaging for pneumonia in children.
Liszewski, Mark C; Görkem, Süreyya; Sodhi, Kushaljit S; Lee, Edward Y
2017-10-01
Technical factors have historically limited the role of MRI in the evaluation of pneumonia in children in routine clinical practice. As imaging technology has advanced, recent studies utilizing practical MR imaging protocols have shown MRI to be an accurate potential alternative to CT for the evaluation of pneumonia and its complications. This article provides up-to-date MR imaging techniques that can be implemented in most radiology departments to evaluate pneumonia in children. Imaging findings in pneumonia on MRI are also reviewed. In addition, the current literature describing the diagnostic performance of MRI for pneumonia is discussed. Furthermore, potential risks and limitations of MRI for the evaluation of pneumonia in children are described.
Matsui, Yuko; Murayama, Ryoko; Tanabe, Hidenori; Oe, Makoto; Motoo, Yoshiharu; Wagatsuma, Takanori; Michibuchi, Michiko; Kinoshita, Sachiko; Sakai, Keiko; Konya, Chizuko; Sugama, Junko; Sanada, Hiromi
Early detection of extravasation is important, but conventional methods of detection lack objectivity and reliability. This study evaluated the predictive validity of thermography for identifying extravasation during intravenous antineoplastic therapy. Of 257 patients who received chemotherapy through peripheral veins, extravasation was identified in 26. Thermography was performed every 15 to 30 minutes during the infusions. Sensitivity, specificity, positive predictive value, and negative predictive value using thermography were 84.6%, 94.8%, 64.7%, and 98.2%, respectively. This study showed that thermography offers an accurate prediction of extravasation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, M. B.; Capote, R.; Trkov, A.
The CIELO collaboration has studied neutron cross sections on nuclides that significantly impact criticality in nuclear technologies - 16O, 56Fe, 235,238U and 239Pu - with the aim of improving the accuracy of the data and resolving previous discrepancies in our understanding. This multi-laboratory pilot project, coordinated via the OECD/NEA Working Party on Evaluation Cooperation (WPEC) Subgroup 40 with support also from the IAEA, has motivated experimental and theoretical work and led to suites of new evaluated libraries that accurately reflect measured data and also perform well in integral simulations of criticality.
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
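The DID point estimate and a label-permutation test of the kind compared in the study can be sketched minimally as follows; the data are toy numbers, not the Hospital Compare dataset, and the permutation is over unit-level pre-to-post changes:

```python
import random

def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences: the (post - pre) change for treated
    units minus the same change for comparison units."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))

def permutation_p_value(deltas_treat, deltas_ctrl, n_perm=2000, seed=0):
    """Two-sided p-value from randomly reassigning treatment labels
    over the pooled unit-level changes."""
    rng = random.Random(seed)
    obs = sum(deltas_treat) / len(deltas_treat) - sum(deltas_ctrl) / len(deltas_ctrl)
    pooled = list(deltas_treat) + list(deltas_ctrl)
    k = len(deltas_treat)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        if abs(stat) >= abs(obs):
            hits += 1
    return hits / n_perm
```

The permutation test rejects less often for biased estimators because extreme placebo assignments reproduce the spurious effect, which matches the false-rejection pattern reported above.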
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places greater demands on quantification methods based on mass spectrometry. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and builds a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with better dynamic range.
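Spectral-count quantification with length normalization, the family of measures freeQuant builds on, can be illustrated with the standard normalized spectral abundance factor (NSAF); this is a generic textbook measure shown for orientation, not freeQuant's exact algorithm:

```python
def nsaf(spectral_counts, lengths):
    """Normalized Spectral Abundance Factor: each protein's spectral count
    is divided by its sequence length, then normalized over all proteins so
    the values sum to 1."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Two hypothetical proteins of equal length; the second yields twice the counts
abundances = nsaf([10, 20], [100, 100])
```

Handling shared peptides, as freeQuant does for isoforms, requires additionally deciding how counts from peptides mapping to several proteins are apportioned before this normalization.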
Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan
2013-04-01
Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.
Using contrasting cases to improve self-assessment in physics learning
NASA Astrophysics Data System (ADS)
Jax, Jared Michael
Accurate self-assessment (SA) is widely regarded as a valuable tool for conducting scientific work, although there is growing concern that students have difficulty accurately assessing their own learning. For students, the challenge of accurately self-assessing their work prevents them from effectively critiquing their own knowledge and skills and making corrections when necessary to improve their performance. An overwhelming majority of researchers have acknowledged the importance of developing and practicing the reflective skills necessary for SA in science, yet SA is rarely a focus of daily instruction, and students typically overestimate their abilities. In an effort to provide a pragmatic approach to overcoming these deficiencies, this study demonstrates the effect of using positive and negative examples of solutions (contrasting cases) on performance and accuracy of SA, compared with students who are shown only positive examples of solutions. The work described here sought, first, to establish the areas of flawed SA that introductory high school physics students experience when studying circuitry, and, second, to examine how giving students Content Knowledge in addition to Positive and Negative Examples focused on helping them self-assess might help overcome these deficiencies. In doing so, this work highlights the positive impact that these types of support have in significantly increasing student performance, SA accuracy, and the ability to evaluate solutions in physics education.
Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation
Lee, Jisun; Kwon, Jay Hyoun; Yu, Myeongjong
2015-01-01
In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E to 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available. PMID:26184212
Lung vessel segmentation in CT images using graph-cuts
NASA Astrophysics Data System (ADS)
Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.
2016-03-01
Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as this incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
McMullen, Allison R; Wallace, Meghan A; Pincus, David H; Wilkey, Kathy; Burnham, C A
2016-08-01
Invasive fungal infections have a high rate of morbidity and mortality, and accurate identification is necessary to guide appropriate antifungal therapy. With the increasing incidence of invasive disease attributed to filamentous fungi, rapid and accurate species-level identification of these pathogens is necessary. Traditional methods for identification of filamentous fungi can be slow and may lack resolution. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has emerged as a rapid and accurate method for identification of bacteria and yeasts, but a paucity of data exists on the performance characteristics of this method for identification of filamentous fungi. The objective of our study was to evaluate the accuracy of the Vitek MS for mold identification. A total of 319 mold isolates representing 43 genera recovered from clinical specimens were evaluated. Of these isolates, 213 (66.8%) were correctly identified using the Vitek MS Knowledge Base, version 3.0 database. When a modified SARAMIS (Spectral Archive and Microbial Identification System) database was used to augment the version 3.0 Knowledge Base, 245 (76.8%) isolates were correctly identified. Unidentified isolates were subcultured for repeat testing; 71/319 (22.3%) remained unidentified. Of the unidentified isolates, 69 were not in the database. Only 3 (0.9%) isolates were misidentified by MALDI-TOF MS (including Aspergillus amoenus [n = 2] and Aspergillus calidoustus [n = 1]) although 10 (3.1%) of the original phenotypic identifications were not correct. In addition, this methodology was able to accurately identify 133/144 (93.6%) Aspergillus sp. isolates to the species level. MALDI-TOF MS has the potential to expedite mold identification, and misidentifications are rare. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Optimization and evaluation of a proportional derivative controller for planar arm movement.
Jagodnik, Kathleen M; van den Bogert, Antonie J
2010-04-19
In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. Copyright 2009 Elsevier Ltd. All rights reserved.
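The PD feedback law evaluated above can be sketched for a single joint; the inertia, damping, and gain values below are illustrative placeholders, not the authors' six-muscle musculoskeletal model:

```python
def simulate_pd(kp, kd, target, t_end=2.0, dt=0.001):
    """Simulate PD position control of a 1-DOF joint modeled as an
    inertia with viscous damping, using forward-Euler integration.
    Returns the joint angle (rad) at the end of the simulation."""
    inertia, damping = 0.05, 0.1   # kg*m^2, N*m*s/rad (illustrative)
    theta, omega = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        torque = kp * (target - theta) - kd * omega   # PD feedback law
        alpha = (torque - damping * omega) / inertia
        omega += alpha * dt
        theta += omega * dt
    return theta

# Well-damped gains settle on the target angle within the simulated window
final_angle = simulate_pd(5.0, 0.8, 1.0)
```

Improper tuning, e.g. a derivative gain too small for the proportional gain, produces the oscillatory behavior mentioned in the abstract; in the real controllers the torque command is further mapped onto muscle stimulation levels.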
Optimization and evaluation of a proportional derivative controller for planar arm movement
Jagodnik, Kathleen M.; van den Bogert, Antonie J.
2013-01-01
In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. PMID:20097345
Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E
2015-09-29
In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework that uses various datasets highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambon, Paul H.; Deter, Dean D.
2016-07-01
The goal of this project is to develop and evaluate powertrain test procedures that can accurately simulate real-world operating conditions and determine greenhouse gas (GHG) emissions of advanced medium- and heavy-duty engine and vehicle technologies. ORNL used its Vehicle System Integration Laboratory to evaluate test procedures on a stand-alone engine as well as two powertrains. Those components were subjected to various drive cycles and vehicle conditions to evaluate the validity of the results over a broad range of test conditions. Overall, more than 1000 tests were performed. The data are compiled and analyzed in this report.
Chadwick, M. B.; Capote, R.; Trkov, A.; ...
2017-01-01
The CIELO collaboration has studied neutron cross sections on nuclides that significantly impact criticality in nuclear technologies - 16O, 56Fe, 235,238U and 239Pu - with the aim of improving the accuracy of the data and resolving previous discrepancies in our understanding. This multi-laboratory pilot project, coordinated via the OECD/NEA Working Party on Evaluation Cooperation (WPEC) Subgroup 40 with support also from the IAEA, has motivated experimental and theoretical work and led to suites of new evaluated libraries that accurately reflect measured data and also perform well in integral simulations of criticality.
NASA Astrophysics Data System (ADS)
Chadwick, M. B.; Capote, R.; Trkov, A.; Kahler, A. C.; Herman, M. W.; Brown, D. A.; Hale, G. M.; Pigni, M.; Dunn, M.; Leal, L.; Plompen, A.; Schillebeeck, P.; Hambsch, F.-J.; Kawano, T.; Talou, P.; Jandel, M.; Mosby, S.; Lestone, J.; Neudecker, D.; Rising, M.; Paris, M.; Nobre, G. P. A.; Arcilla, R.; Kopecky, S.; Giorginis, G.; Cabellos, O.; Hill, I.; Dupont, E.; Danon, Y.; Jing, Q.; Zhigang, G.; Tingjin, L.; Hanlin, L.; Xichao, R.; Haicheng, W.; Sin, M.; Bauge, E.; Romain, P.; Morillon, B.; Noguere, G.; Jacqmin, R.; Bouland, O.; De Saint Jean, C.; Pronyaev, V. G.; Ignatyuk, A.; Yokoyama, K.; Ishikawa, M.; Fukahori, T.; Iwamoto, N.; Iwamoto, O.; Kuneada, S.; Lubitz, C. R.; Palmiotti, G.; Salvatores, M.; Kodeli, I.; Kiedrowski, B.; Roubtsov, D.; Thompson, I.; Quaglioni, S.; Kim, H. I.; Lee, Y. O.; Koning, A. J.; Carlson, A.; Fischer, U.; Sirakov, I.
2017-09-01
The CIELO collaboration has studied neutron cross sections on nuclides that significantly impact criticality in nuclear technologies - 16O, 56Fe, 235,238U and 239Pu - with the aim of improving the accuracy of the data and resolving previous discrepancies in our understanding. This multi-laboratory pilot project, coordinated via the OECD/NEA Working Party on Evaluation Cooperation (WPEC) Subgroup 40 with support also from the IAEA, has motivated experimental and theoretical work and led to suites of new evaluated libraries that accurately reflect measured data and also perform well in integral simulations of criticality.
Esophageal manometry in gastroesophageal reflux disease.
Mello, Michael; Gyawali, C Prakash
2014-03-01
High-resolution manometry (HRM) allows nuanced evaluation of esophageal motor function, and more accurate evaluation of lower esophageal sphincter (LES) function, in comparison with conventional manometry. Pathophysiologic correlates of gastroesophageal reflux disease (GERD) and esophageal peristaltic performance are well addressed by this technique. HRM may alter the surgical decision by assessment of esophageal peristaltic function and exclusion of esophageal outflow obstruction before antireflux surgery. Provocative testing during HRM may assess esophageal smooth muscle peristaltic reserve and help predict the likelihood of transit symptoms following antireflux surgery. HRM represents a continuously evolving new technology that complements the evaluation and management of GERD. Copyright © 2014 Elsevier Inc. All rights reserved.
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether the chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
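The kind of design comparison performed with SAMPLE can be mimicked in miniature with a Monte Carlo sketch that estimates the sampling variance of a mean-density estimate under simple random and grid-based systematic designs; the patchy toy population below is invented for the example and the sketch ignores cost and distance traveled:

```python
import random

def estimate_variance(population, n, design, trials=2000, seed=42):
    """Monte Carlo variance of the mean-density estimate for a design.
    population: per-plot counts; n: number of plots sampled per survey."""
    rng = random.Random(seed)
    N = len(population)
    estimates = []
    for _ in range(trials):
        if design == "random":              # simple random sampling
            idx = rng.sample(range(N), n)
        else:                               # systematic: fixed stride, random start
            stride = N // n
            start = rng.randrange(stride)
            idx = [(start + i * stride) % N for i in range(n)]
        estimates.append(sum(population[i] for i in idx) / n)
    m = sum(estimates) / trials
    return sum((e - m) ** 2 for e in estimates) / trials

# Aggregated population: dense patches separated by runs of empty plots
pop = ([0] * 20 + [30, 40, 25, 35, 20] + [0] * 20) * 4
var_srs = estimate_variance(pop, 10, "random")
var_sys = estimate_variance(pop, 10, "systematic")
```

Which design wins depends on how the stride interacts with the spatial pattern, which is exactly why simulation against realistic field data, as in the study, is more informative than rules of thumb.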
Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans
2015-03-01
Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulation of a target area deep inside the brain. Very accurate placement of the electrode is a prerequisite for a positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be compared with expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
Performance evaluation of an agent-based occupancy simulation model
Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...
2017-01-17
Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
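Stochastic occupancy models of this kind are often built on Markov chains. The sketch below is not the Occupancy Simulator; it is a minimal two-state (absent/present) Markov-chain sketch for a single occupant, with invented hourly transition probabilities, showing how stochastic daily schedules can be simulated and averaged into an expected occupancy profile.

```python
import random

random.seed(1)

# Hypothetical two-state Markov chain for one occupant, with
# time-varying transition probabilities: arrivals likely in the
# morning, departures likely in the evening.  Time step = 1 hour.
def simulate_day(p_arrive, p_leave):
    """Return a 24-element 0/1 presence schedule."""
    state = 0  # start absent at midnight
    schedule = []
    for hour in range(24):
        if state == 0 and random.random() < p_arrive[hour]:
            state = 1
        elif state == 1 and random.random() < p_leave[hour]:
            state = 0
        schedule.append(state)
    return schedule

# Illustrative transition probabilities (assumed, not from the paper):
p_arrive = [0.0]*7 + [0.6, 0.8, 0.3] + [0.1]*14
p_leave  = [0.0]*12 + [0.2] + [0.05]*4 + [0.6, 0.8] + [0.9]*5

# Average many simulated days into an expected occupancy profile.
days = 1000
profile = [0.0]*24
for _ in range(days):
    day = simulate_day(p_arrive, p_leave)
    for h in range(24):
        profile[h] += day[h] / days
print([round(p, 2) for p in profile])
```

Comparing such simulated profiles against profiles derived from measured schedules is essentially the verification step the study describes.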
ERIC Educational Resources Information Center
Archer, Robert P.; Handel, Richard W.; Couvadelli, Barbara
2004-01-01
The MMPI-2 Superlative (S) scale was developed by Butcher and Han (1995) to assess individuals' tendencies to present themselves in an unrealistically positive light. The current study examined the performance of the L, K, and S scales in accurately distinguishing the MMPI-2 profiles of 379 psychiatric inpatients who produced one or more elevations…
SWOT Oceanography and Hydrology Data Product Simulators
NASA Technical Reports Server (NTRS)
Peral, Eva; Rodriguez, Ernesto; Fernandez, Daniel Esteban; Johnson, Michael P.; Blumstein, Denis
2013-01-01
The proposed Surface Water and Ocean Topography (SWOT) mission would demonstrate a new measurement technique using radar interferometry to obtain wide-swath measurements of water elevation at high resolution over ocean and land, addressing the needs of both the hydrology and oceanography science communities. To accurately evaluate the performance of the proposed SWOT mission, we have developed several data product simulators at different levels of fidelity and complexity.
ERIC Educational Resources Information Center
Chen, Ruey-Shin; Liu, I-Fan
2017-01-01
Currently, e-learning systems are being widely used in all stages of education. However, it is difficult for school administrators to accurately assess the actual usage performance of a new system, especially when an organization wishes to update the system for users from different backgrounds using new devices such as smartphones. To allow school…
Various Effects of Embedded Intrapulse Communications on Pulsed Radar
2017-06-01
… specific type of interference that may be encountered by radar; however, this introductory information should suffice to illustrate to the reader why … chapter we seek to not merely understand the overall statistical performance of the radar with embedded intrapulse communications but rather to evaluate … Theory. Probability of detection, discussed in Chapter 4, assesses the statistical probability of a radar accurately identifying a target given a …
Evaluation of the thin deformable active optics mirror concept
NASA Technical Reports Server (NTRS)
Robertson, H. J.
1972-01-01
The active optics concept using a thin deformable mirror has been successfully demonstrated using a 30 in. diameter, 1/2 in. thick mirror and a 61-point matrix of forces for alignment. Many of the problems associated with the design, fabrication, and launch of large aperture diffraction-limited astronomical telescopes have been resolved, and experimental data were generated that can provide accurate predictions of performance in orbit.
Impact of Trust on Security and Performance in Tactical Networks
2013-06-01
… and reliability. On the other hand, in organizational theory, trust management has viewed trust as a key factor to manage relationships that flourish … environments' challenges, these dynamics can hinder accurate and reliable trust evaluation of entities in the network [10], [11]. • Information Network Domain: … trustworthy entities. • Social/Cognitive Network Domain: Social scientists, physiologists, and neuroscientists have studied social trust, interpersonal …
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analyses and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach in comparison to high-fidelity FSI simulations to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered, a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.
Evaluation of Pollen Apps Forecasts: The Need for Quality Control in an eHealth Service.
Bastl, Katharina; Berger, Uwe; Kmenta, Maximilian
2017-05-08
Pollen forecasts are highly valuable for allergen avoidance and thus raising the quality of life of persons concerned by pollen allergies. They are considered as valuable free services for the public. Careful scientific evaluation of pollen forecasts in terms of accuracy and reliability has not been available to date. The aim of this study was to analyze 9 mobile apps, which deliver pollen information and pollen forecasts, with a focus on their accuracy regarding the prediction of the pollen load in the grass pollen season 2016, to assess their usefulness for pollen allergy sufferers. The following number of apps was evaluated for each location: 3 apps for Vienna (Austria), 4 apps for Berlin (Germany), and 1 app each for Basel (Switzerland) and London (United Kingdom). All mobile apps were freely available. Today's grass pollen forecast was compared throughout the defined grass pollen season at each respective location with measured grass pollen concentrations. Hit rates were calculated for the exact performance and for a tolerance in a range of ±2 and ±4 pollen per cubic meter. In general, for most apps, hit rates score around 50% (6 apps). It was found that 1 app showed better results, whereas 3 apps performed less well. Hit rates increased when calculated with tolerances for most apps. In contrast, the forecast for the "readiness to flower" for grasses was sufficiently accurate, although only two apps provided such a forecast. The last of those forecasts coincided with the first moderate grass pollen load on the predicted day or within 3 days after, and even when issued about a month in advance it performed well within the range of 3 days. Advertisement was present in 3 of the 9 analyzed apps, whereas an imprint mentioning institutions with experience in pollen forecasting was present in only 3 other apps. 
The quality of pollen forecasts is in need of improvement, and quality control for pollen forecasts is recommended to avoid potential harm to pollen allergy sufferers due to inadequate forecasts. The inclusion of information on reliability of provided forecasts and a similar handling regarding probabilistic weather forecasts should be considered. ©Katharina Bastl, Uwe Berger, Maximilian Kmenta. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.05.2017.
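The hit-rate evaluation described above reduces to simple arithmetic. A minimal sketch, with invented forecast and measurement series: a day counts as a hit when the forecast matches the measured concentration within a tolerance, here the paper's ±2 and ±4 pollen per cubic meter bands.

```python
# Hypothetical forecast vs. measured grass pollen concentrations
# (pollen grains per cubic metre); values are illustrative only.
forecast = [3, 10, 25, 40, 12, 0, 5]
measured = [5, 11, 20, 45, 12, 2, 30]

def hit_rate(forecast, measured, tol=0):
    """Fraction of days on which |forecast - measured| <= tol."""
    hits = sum(1 for f, m in zip(forecast, measured) if abs(f - m) <= tol)
    return hits / len(forecast)

print(hit_rate(forecast, measured, tol=0))  # exact match only
print(hit_rate(forecast, measured, tol=2))  # +/- 2 pollen per m^3
print(hit_rate(forecast, measured, tol=4))  # +/- 4 pollen per m^3
```

Widening the tolerance can only increase the hit rate, which matches the study's observation that hit rates rose when tolerances were applied.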
Saito, Takaya; Rehmsmeier, Marc
2015-01-01
Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets.
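The core of the argument is arithmetic: sensitivity and specificity are insensitive to class imbalance, while PPV (precision) is not. A minimal sketch with invented confusion-matrix counts shows the same classifier behaviour (90% sensitivity, 90% specificity) yielding drastically different PPV on a balanced versus a 1:99 imbalanced test set.

```python
def metrics(tp, fn, fp, tn):
    """Sensitivity (recall), specificity and PPV (precision) from a
    confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sens, spec, ppv

# Same classifier behaviour applied to a balanced and to a strongly
# imbalanced test set (counts are illustrative):
balanced = metrics(tp=90, fn=10, fp=10, tn=90)        # 100 pos / 100 neg
imbalanced = metrics(tp=90, fn=10, fp=990, tn=8910)   # 100 pos / 9900 neg

print(balanced)    # sensitivity 0.9, specificity 0.9, PPV 0.9
print(imbalanced)  # sensitivity 0.9, specificity 0.9, PPV = 90/1080 ~ 0.083
```

An ROC plot, built from sensitivity and specificity, looks identical in both scenarios; a PRC plot, built from precision and recall, exposes the collapse in PPV.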
Accurate clinical detection of exon copy number variants in a targeted NGS panel using DECoN.
Fowler, Anna; Mahamdallie, Shazia; Ruark, Elise; Seal, Sheila; Ramsay, Emma; Clarke, Matthew; Uddin, Imran; Wylie, Harriet; Strydom, Ann; Lunter, Gerton; Rahman, Nazneen
2016-11-25
Background: Targeted next generation sequencing (NGS) panels are increasingly being used in clinical genomics to increase capacity, throughput and affordability of gene testing. Identifying whole exon deletions or duplications (termed exon copy number variants, 'exon CNVs') in exon-targeted NGS panels has proved challenging, particularly for single exon CNVs. Methods: We developed a tool for the Detection of Exon Copy Number variants (DECoN), which is optimised for analysis of exon-targeted NGS panels in the clinical setting. We evaluated DECoN performance using 96 samples with independently validated exon CNV data. We performed simulations to evaluate DECoN detection performance of single exon CNVs and to evaluate performance using different coverage levels and sample numbers. Finally, we implemented DECoN in a clinical laboratory that tests BRCA1 and BRCA2 with the TruSight Cancer Panel (TSCP). We used DECoN to analyse 1,919 samples, validating exon CNV detections by multiplex ligation-dependent probe amplification (MLPA). Results: In the evaluation set, DECoN achieved 100% sensitivity and 99% specificity for BRCA exon CNVs, including identification of 8 single exon CNVs. DECoN also identified 14/15 exon CNVs in 8 other genes. Simulations of all possible BRCA single exon CNVs gave a mean sensitivity of 98% for deletions and 95% for duplications. DECoN performance remained excellent with different levels of coverage and sample numbers; sensitivity and specificity were >98% with the typical NGS run parameters. In the clinical pipeline, DECoN automatically analyses pools of 48 samples at a time, taking 24 minutes per pool, on average. DECoN detected 24 BRCA exon CNVs, of which 23 were confirmed by MLPA, giving a false discovery rate of 4%. Specificity was 99.7%. Conclusions: DECoN is a fast, accurate, exon CNV detection tool readily implementable in research and clinical NGS pipelines. It has high sensitivity and specificity and an acceptable false discovery rate. 
DECoN is freely available at www.icr.ac.uk/decon.
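The intuition behind depth-based exon CNV calling can be sketched simply. DECoN itself fits a statistical model to normalized read counts; the toy below only illustrates the underlying idea with invented depths and naive ratio thresholds: a test/reference depth ratio near 0.5 suggests a heterozygous deletion, near 1.5 a duplication.

```python
# Hypothetical per-exon read depths for a test sample and a reference
# baseline (exon names and numbers are invented for illustration).
test_depth = {"GENE_ex1": 210, "GENE_ex2": 98, "GENE_ex3": 205}
ref_depth  = {"GENE_ex1": 200, "GENE_ex2": 195, "GENE_ex3": 198}

def call_exon_cnvs(test, ref, del_cut=0.7, dup_cut=1.3):
    """Naive ratio-threshold exon CNV caller (illustration only)."""
    calls = {}
    for exon, depth in test.items():
        ratio = depth / ref[exon]
        if ratio < del_cut:
            calls[exon] = "deletion"
        elif ratio > dup_cut:
            calls[exon] = "duplication"
        else:
            calls[exon] = "normal"
    return calls

print(call_exon_cnvs(test_depth, ref_depth))
```

A real caller such as DECoN replaces the fixed thresholds with a likelihood model over many reference samples, which is what makes single-exon events detectable at clinical sensitivity.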
Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling
Tiğdemir, Mesut
2014-01-01
Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values from the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load data than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance. PMID:25397962
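Fitting a data-based model by gradient descent, as a neural network does, can be shown in miniature. The sketch below is not the paper's network: it trains a single linear neuron on invented (input, load) pairs purely to illustrate the training loop; the real model's architecture, inputs and coefficients are not reproduced here.

```python
# Invented training data: x is a design input, y the "actual" load.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

# Single linear neuron y ~ w*x + b trained with mean-squared-error
# gradient descent.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # fitted slope and intercept
```

A real pavement-performance network adds hidden layers and nonlinear activations, but the parameter-update loop is the same idea.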
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.
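The "iterative algorithm with a shrinkage/thresholding scheme" mentioned above belongs to a standard family of solvers. The sketch below is not PD-STV (whose penalty is the structure-tensor total variation and whose data term is a deconvolution): it shows the generic iterative shrinkage/thresholding (ISTA) step on a toy 1-D denoising problem, minimize 0.5*||x - b||^2 + lam*||x||_1, where the proximal step is elementwise soft shrinkage.

```python
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft shrinkage)."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in v]

b = [0.1, 2.0, -1.5, 0.05, 3.0]   # noisy observation (invented)
lam = 0.5
x = [0.0] * len(b)
for _ in range(100):
    grad = [xi - bi for xi, bi in zip(x, b)]      # gradient of data term
    step = [xi - g for xi, g in zip(x, grad)]     # gradient step (size 1)
    x = soft_threshold(step, lam)                 # shrinkage step

print(x)  # small entries shrunk to 0, large entries shrunk by lam
```

For this separable problem the iteration lands on the closed-form minimizer soft_threshold(b, lam); PD-STV applies the same gradient-then-shrink pattern to a far richer regularizer.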
Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method, taking into account the spatial and spectral characteristics of the hyperspectral images, to help neurosurgeons accurately determine the tumor boundaries in surgical-time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of brains affected by glioblastoma, from five different patients, were used. The final classification maps obtained have been analyzed and validated by specialists. 
These preliminary results are promising, obtaining an accurate delineation of the tumor area.
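The majority-voting fusion step described above is simple to state in code. A minimal sketch, with invented pixel labels (0 = normal tissue, 1 = tumor): every unsupervised cluster is relabelled with the most frequent supervised class among its pixels.

```python
from collections import Counter

# Invented per-pixel labels for illustration:
class_map   = [0, 0, 1, 1, 1, 0, 1, 0]   # from the supervised (SVM) stage
cluster_map = [0, 0, 0, 1, 1, 1, 1, 2]   # from the unsupervised clustering

def majority_vote_fusion(class_map, cluster_map):
    """Assign each cluster the majority class of its pixels, then
    relabel every pixel with its cluster's class."""
    votes = {}
    for cls, clu in zip(class_map, cluster_map):
        votes.setdefault(clu, Counter())[cls] += 1
    cluster_class = {clu: c.most_common(1)[0][0] for clu, c in votes.items()}
    return [cluster_class[clu] for clu in cluster_map]

print(majority_vote_fusion(class_map, cluster_map))
```

The effect is spatial regularization: isolated pixel-level misclassifications inside a coherent cluster are overridden by the cluster's majority label.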
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Setford, Steven; Smith, Antony; McColl, David; Grady, Mike; Koria, Krisna; Cameron, Hilary
2015-01-01
Assess laboratory and in-clinic performance of the OneTouch Select® Plus test system against the ISO 15197:2013 standard for measurement of blood glucose. System performance assessed in laboratory against key patient, environmental and pharmacologic factors. User performance was assessed in clinic by system-naïve lay-users. Healthcare professionals assessed system accuracy on diabetes subjects in clinic. The system demonstrated high levels of performance, meeting ISO 15197:2013 requirements in laboratory testing (precision, linearity, hematocrit, temperature, humidity and altitude). System performance was tested against 28 interferents, with an adverse interfering effect only being recorded for pralidoxime iodide. Clinic user performance results fulfilled ISO 15197:2013 accuracy criteria. Subjects agreed that the color range indicator clearly showed if they were low, in-range or high and helped them better understand glucose results. The system evaluated is accurate and meets all ISO 15197:2013 requirements as per the tests described. The color range indicator helped subjects understand glucose results and supports patients in following healthcare professional recommendations on glucose targets.
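The ISO 15197:2013 accuracy criterion referenced above can be expressed as a short check. The limits used below are the commonly cited ones and are an assumption here (consult the standard itself for the authoritative text): at least 95% of meter readings must fall within ±15 mg/dL of the laboratory reference for reference values below 100 mg/dL, and within ±15% for reference values of 100 mg/dL or above.

```python
def within_iso_limits(meter, reference):
    """Commonly cited ISO 15197:2013 per-reading limit (assumed)."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def passes_iso(pairs):
    """pairs: list of (meter, reference) glucose values in mg/dL."""
    ok = sum(1 for m, r in pairs if within_iso_limits(m, r))
    return ok / len(pairs) >= 0.95

# Illustrative (invented) paired readings:
pairs = [(92, 90), (110, 100), (260, 240), (70, 88), (150, 180)]
print([within_iso_limits(m, r) for m, r in pairs])
```

The absolute band below 100 mg/dL reflects that a fixed percentage band would be too tight to be clinically meaningful at low glucose concentrations.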
Accelerated Test Method for Corrosion Protective Coatings Project
NASA Technical Reports Server (NTRS)
Falker, John; Zeitlin, Nancy; Calle, Luz
2015-01-01
This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.
Cost-Consciousness of Anesthesia Physicians: An awareness survey.
Hakimoglu, Sedat; Hancı, Volkan; Karcıoglu, Murat; Tuzcu, Kasım; Davarcı, Isıl; Kiraz, Hasan Ali; Turhanoglu, Selim
2015-01-01
Increasing competitive pressure and the health performance system in hospitals result in pressure to reduce the resources allocated. The aim of this study was to evaluate anesthesiology and intensive care physicians' awareness of the cost of the materials they use and to determine the factors that influence it. This survey was conducted between September 2012 and September 2013 after the approval of the local ethics committee. Overall, 149 anesthetists were included in the study. Participants were asked to estimate the cost of 30 products used by anesthesiology and intensive care units. One hundred forty-nine doctors, 45% female and 55% male, participated in this study. Across the 30 questions, on average 5.8% of estimates were accurate, 35.13% were underestimates, and 59.16% were overestimates. When the participants were grouped by institution, duration in the profession, and sex, there were no statistically significant differences regarding accurate estimation. However, there was a statistically significant difference in underestimation: there was no significant difference between the 16-20 year group and the >20 year group, but these two groups overestimated prices more than the other groups (p=0.031). Furthermore, when all the participants were evaluated, there was no significant association between age and accurate cost estimation or between professional experience and accurate cost estimation. The anesthesiology and intensive care physicians in this survey have insufficient awareness of the cost of the drugs and materials that they use. Institution and experience are not effective factors for accurate estimation. Programs to improve health workers' knowledge and awareness of costs should be planned in order to use resources more efficiently and cost-effectively.
NASA Astrophysics Data System (ADS)
Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (the neutron sublibrary grows from 393 to 423 files, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced.
Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for 233U fueled systems as a function of Above-Thermal Fission Fraction remain. The comprehensive nature of this critical benchmark suite and the generally accurate calculated eigenvalues obtained with ENDF/B-VII.1 neutron cross sections support the conclusion that this is the most accurate general purpose ENDF/B cross section library yet released to the technical community.
A statistical evaluation and comparison of VISSR Atmospheric Sounder (VAS) data
NASA Technical Reports Server (NTRS)
Jedlovec, G. J.
1984-01-01
In order to account for the temporal and spatial discrepancies between the VAS and rawinsonde soundings, the rawinsonde data were adjusted to a common hour of release so that the new observation time corresponded to the satellite scan time. Both the satellite and rawinsonde observations of the basic atmospheric parameters (T, Td, and Z) were objectively analyzed to a uniform grid, maintaining the same mesoscale structure in each data set. The performance of each retrieval algorithm in producing accurate and representative soundings was evaluated using statistical parameters such as the mean, standard deviation, and root mean square of the difference fields for each parameter and grid level. Horizontal structure was also qualitatively evaluated by examining atmospheric features on constant pressure surfaces. An analysis of the vertical structure of the atmosphere was also performed by examining colocated and grid-mean vertical profiles of both the satellite and rawinsonde data sets. Highlights of these results are presented.
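The mean, standard deviation, and RMS of a satellite-minus-rawinsonde difference field can be sketched as below. The grid values are invented for illustration and are not from the study; only the three statistics match the abstract's description.

```python
import math

def difference_stats(sat, raob):
    """Mean (bias), sample standard deviation (scatter), and root-mean-square
    (total magnitude) of a satellite-minus-rawinsonde difference field."""
    d = [s - r for s, r in zip(sat, raob)]
    n = len(d)
    mean = sum(d) / n                                           # retrieval bias
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))  # scatter about bias
    rms = math.sqrt(sum(x * x for x in d) / n)                  # total difference
    return mean, std, rms

# Illustrative 700-hPa temperatures (deg C) at four grid points (made-up values)
sat = [10.2, 9.8, 11.1, 10.5]
raob = [10.0, 10.0, 11.0, 10.0]
mean, std, rms = difference_stats(sat, raob)
```

Note that RMS combines bias and scatter (rms² = mean² + population variance), which is why all three are reported per parameter and grid level.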
Evaluation of the SeedCounter, A Mobile Application for Grain Phenotyping.
Komyshev, Evgenii; Genaev, Mikhail; Afonnikov, Dmitry
2016-01-01
Grain morphometry in cereals is an important step in selecting new high-yielding plants. Manual assessment of parameters such as the number of grains per ear and grain size is laborious. One solution to this problem is image-based analysis that can be performed using a desktop PC. Furthermore, the effectiveness of analysis performed in the field can be improved through the use of mobile devices. In this paper, we propose a method for the automated evaluation of phenotypic parameters of grains using mobile devices running the Android operating system. The experimental results show that this approach is efficient and sufficiently accurate for the large-scale analysis of phenotypic characteristics in wheat grains. Evaluation of our application under six different lighting conditions and on three mobile devices demonstrated that the lighting of the paper, unlike the smartphone model, has a significant influence on the accuracy of our method.
Faculty development for the evaluation system: a dual agenda
Oller, Kellee L; Mai, Cuc T; Ledford, Robert J; O’Brien, Kevin E
2017-01-01
Faculty development for the evaluation process serves two distinct goals. The first goal is to improve the quality of the evaluations submitted by the faculty. Providing an accurate assessment of a learner’s capabilities is a skill and, like other skills, can be developed with training. Frame-of-reference training serves to calibrate the faculty’s standards of performance and to build a uniform language for evaluation. The second goal is to identify areas for faculty professional growth from the data generated by learners’ evaluations of the faculty, using narrative comments, item-level comparison reports, and comparative rank list information. This paper presents an innovative model, grounded in institutional experience and a review of the literature, to provide feedback to faculty evaluators, thereby improving the reliability of the evaluation process and motivating the professional growth of faculty as educators. PMID:28331382
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. 
We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne
2016-01-01
Research into the effects of transcranial direct current stimulation of the dorsolateral prefrontal cortex on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in technical stimulation parameters that might influence tDCS effects. To systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects, we performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first available data to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after a single session of anodal tDCS (a-tDCS), but not cathodal tDCS (c-tDCS), participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters modulated a-tDCS effects, but not c-tDCS effects, on accuracy: in healthy samples, increased current density and charge density resulted in improved accuracy, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates than task performance following a-tDCS. In sum, healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS. However, increasing the current density and/or charge might enhance response accuracy, particularly in females. In contrast, in neuropsychiatric patients, online task performance leads to greater increases in response accuracy than offline task performance. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Switching performance of OBS network model under prefetched real traffic
NASA Astrophysics Data System (ADS)
Huang, Zhenhua; Xu, Du; Lei, Wen
2005-11-01
Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to evaluate the performance of the OBS network model precisely. That performance varies with conditions, but the most important question is how the model behaves under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated on that basis. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model runs at the same clock rate as a real OBS system. It is therefore reasonable to conclude that this model is closer to a real OBS system than the traditional ones. The simulation results also indicate that this model evaluates the performance of the OBS network system more accurately and that its results are closer to the actual situation.
Evaluation of peristaltic micromixers for highly integrated microfluidic systems
Kim, Duckjong; Rho, Hoon Suk; Jambovane, Sachin; Shin, Soojeong; Hong, Jong Wook
2016-01-01
Microfluidic devices based on the multilayer soft lithography allow accurate manipulation of liquids, handling reagents at the sub-nanoliter level, and performing multiple reactions in parallel processors by adapting micromixers. Here, we have experimentally evaluated and compared several designs of micromixers and operating conditions to find design guidelines for the micromixers. We tested circular, triangular, and rectangular mixing loops and measured mixing performance according to the position and the width of the valves that drive nanoliters of fluids in the micrometer scale mixing loop. We found that the rectangular mixer is best for the applications of highly integrated microfluidic platforms in terms of the mixing performance and the space utilization. This study provides an improved understanding of the flow behaviors inside micromixers and design guidelines for micromixers that are critical to build higher order fluidic systems for the complicated parallel bio/chemical processes on a chip. PMID:27036809
The role of flight planning in aircrew decision performance
NASA Technical Reports Server (NTRS)
Pepitone, Dave; King, Teresa; Murphy, Miles
1989-01-01
The role of flight planning in increasing the safety and decision-making performance of the air transport crews was investigated in a study that involved 48 rated airline crewmembers on a B720 simulator with a model-board-based visual scene and motion cues with three degrees of freedom. The safety performance of the crews was evaluated using videotaped replays of the flight. Based on these evaluations, the crews could be divided into high- and low-safety groups. It was found that, while collecting information before flights, the high-safety crews were more concerned with information about alternative airports, especially the fuel required to get there, and were characterized by making rapid and appropriate decisions during the emergency part of the flight scenario, allowing these crews to make an early diversion to other airports. These results suggest that contingency planning that takes into account alternative courses of action enhances rapid and accurate decision-making under time pressure.
Evaluation of Raman spectroscopy in comparison to commonly performed dengue diagnostic tests
NASA Astrophysics Data System (ADS)
Khan, Saranjam; Ullah, Rahat; Khurram, Muhammad; Ali, Hina; Mahmood, Arshad; Khan, Ajmal; Ahmed, Mushtaq
2016-09-01
This study demonstrates the evaluation of Raman spectroscopy as a rapid diagnostic test, in comparison to commonly performed tests, for the accurate detection of dengue fever in human blood sera. Blood samples of 104 suspected dengue patients collected from Holy Family Hospital, Rawalpindi, Pakistan, were used in this study. Of the 104 samples, 52 (50%) were positive based on immunoglobulin G (IgG) and 54 (52%) were positive based on immunoglobulin M (IgM) antibody tests. To determine the diagnostic capabilities of Raman spectroscopy, its accuracy, precision, specificity, and sensitivity were calculated in comparison with the routinely performed IgM and IgG capture enzyme-linked immunosorbent assay tests. Accuracy, precision, specificity, and sensitivity for Raman spectroscopy in comparison to IgM were found to be 66%, 70%, 72%, and 61%, whereas based on IgG they were 47%, 46%, 52%, and 43%, respectively.
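The reported figures follow from a standard 2×2 confusion matrix against the reference assay. A minimal sketch, where the counts are a hypothetical split chosen to be consistent with the reported 104 samples and 54 IgM positives, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics for a candidate test scored
    against a reference test (e.g., Raman vs. IgM capture ELISA)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision   = tp / (tp + fp)   # positive predictive value
    fpr         = fp / (fp + tn)   # false positive rate = 1 - specificity
    return accuracy, sensitivity, specificity, precision, fpr

# Hypothetical counts: 54 reference-positive, 50 reference-negative samples
acc, sens, spec, prec, fpr = diagnostic_metrics(tp=33, fp=14, tn=36, fn=21)
```

With this split the sketch reproduces roughly the IgM-referenced values (accuracy 66%, precision 70%, specificity 72%, sensitivity 61%).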
Prediction of performance on the RCMP physical ability requirement evaluation.
Stanish, H I; Wood, T M; Campagna, P
1999-08-01
The Royal Canadian Mounted Police use the Physical Ability Requirement Evaluation (PARE) for screening applicants. The purposes of this investigation were to identify those field tests of physical fitness that were associated with PARE performance and determine which most accurately classified successful and unsuccessful PARE performers. The participants were 27 female and 21 male volunteers. Testing included measures of aerobic power, anaerobic power, agility, muscular strength, muscular endurance, and body composition. Multiple regression analysis revealed a three-variable model for males (70-lb bench press, standing long jump, and agility) explaining 79% of the variability in PARE time, whereas a one-variable model (agility) explained 43% of the variability for females. Analysis of the classification accuracy of the males' data was prohibited because 91% of the males passed the PARE. Classification accuracy of the females' data, using logistic regression, produced a two-variable model (agility, 1.5-mile endurance run) with 93% overall classification accuracy.
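A two-variable logistic model of the kind described can be sketched as follows. The coefficients, predictor values, and 0.5 cutoff are purely illustrative assumptions, not the fitted model from the study:

```python
import math

def pass_probability(agility_s, run_min, b0=30.0, b1=-1.0, b2=-0.8):
    """Hypothetical two-variable logistic model of PARE pass probability
    from agility time (s) and 1.5-mile run time (min); illustrative
    coefficients only (slower times lower the predicted probability)."""
    z = b0 + b1 * agility_s + b2 * run_min
    return 1.0 / (1.0 + math.exp(-z))

def classification_accuracy(records, cutoff=0.5):
    """Fraction of observed pass/fail labels reproduced by the model."""
    correct = sum(
        (pass_probability(a, r) >= cutoff) == passed
        for a, r, passed in records
    )
    return correct / len(records)

# (agility seconds, run minutes, observed pass) -- made-up sample
data = [(16.0, 12.0, True), (17.5, 13.0, True),
        (21.0, 16.5, False), (20.0, 15.5, False)]
acc = classification_accuracy(data)
```

The same accuracy-against-labels comparison underlies the 93% overall classification accuracy reported for the females' two-variable model.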
Predicting biomedical metadata in CEDAR: A study of Gene Expression Omnibus (GEO).
Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier
2017-08-01
A crucial and limiting factor in data reuse is the lack of accurate, structured, and complete descriptions of data, known as metadata. Towards improving the quantity and quality of metadata, we propose a novel metadata prediction framework to learn associations from existing metadata that can be used to predict metadata values. We evaluate our framework in the context of experimental metadata from the Gene Expression Omnibus (GEO). We applied four rule mining algorithms to the most common structured metadata elements (sample type, molecular type, platform, label type and organism) from over 1.3 million GEO records. We examined the quality of well-supported rules from each algorithm and visualized the dependencies among metadata elements. Finally, we evaluated the performance of the algorithms in terms of accuracy, precision, recall, and F-measure. We found that PART is the best algorithm, outperforming Apriori, Predictive Apriori, and Decision Table. All algorithms perform significantly better in predicting class values than the majority vote classifier. We found that the performance of the algorithms is related to the dimensionality of the GEO elements: the average performance of all algorithms increases as the number of unique values of these elements decreases (2697 platforms, 537 organisms, 454 labels, 9 molecules, and 5 types). Our work suggests that experimental metadata such as that present in GEO can be accurately predicted using rule mining algorithms. Our work has implications for both prospective and retrospective augmentation of metadata quality, which are geared towards making data easier to find and reuse. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
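The association rules mined here are scored by support and confidence over the record set. A minimal sketch of those two measures, with hypothetical GEO-style field names and values (not actual GEO records):

```python
def rule_metrics(records, antecedent, consequent):
    """Support and confidence of an association rule over metadata records:
    support = P(antecedent and consequent), confidence = P(consequent | antecedent)."""
    n = len(records)
    matches_a = [r for r in records
                 if all(r.get(k) == v for k, v in antecedent.items())]
    matches_both = [r for r in matches_a
                    if all(r.get(k) == v for k, v in consequent.items())]
    support = len(matches_both) / n
    confidence = len(matches_both) / len(matches_a) if matches_a else 0.0
    return support, confidence

# Hypothetical metadata records in the spirit of GEO sample descriptions
records = [
    {"organism": "Homo sapiens", "molecule": "total RNA", "type": "RNA"},
    {"organism": "Homo sapiens", "molecule": "total RNA", "type": "RNA"},
    {"organism": "Homo sapiens", "molecule": "genomic DNA", "type": "SRA"},
    {"organism": "Mus musculus", "molecule": "total RNA", "type": "RNA"},
]
sup, conf = rule_metrics(records, {"molecule": "total RNA"}, {"type": "RNA"})
```

Well-supported, high-confidence rules of this form are what allow a missing metadata value to be predicted from the values already present.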
Impaired information sampling in mild dementia of Alzheimer's type but not in healthy aging.
Zamarian, Laura; Benke, Thomas; Brand, Matthias; Djamshidian, Atbin; Delazer, Margarete
2015-05-01
It is unknown whether aging affects predecisional processing, that is, gathering information and evaluating options before making a decision. Here, we investigated information sampling in mild Dementia of Alzheimer's type (DAT) and in healthy aging by using the Information Sampling Task (IST). In a first investigation, we compared patients with mild DAT (n = 20) with healthy controls (n = 20) on the IST and several neuropsychological background tests. In a second investigation, healthy older adults (n = 30) were compared with younger adults (n = 30) on the IST and executive-function tasks. Results of the first investigation demonstrated that, in the IST, patients gathered significantly less information, made riskier and less accurate decisions, and showed less reward sensitivity relative to controls. We found a significant correlation between performance on the IST and performance on tests of verbal fluency, working memory, and recognition in patients but not in controls. Results of the second investigation indicated a largely similar performance pattern between healthy older adults and younger adults. There were no significant correlations in either group between the IST and executive-function tasks. Thus, there are no relevant changes with healthy aging in predecisional processing. In contrast, mild DAT significantly affects predecisional information sampling, and the problems shown in patients with mild DAT in decision making might be related to the patients' difficulties in predecisional processing. Decision-making performance in mild DAT might be improved by helping the patients at a predecisional stage to gather sufficient information and evaluate options more accurately. (c) 2015 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, S.J.; Fischer, D.D.; Crawford, R.C.
1982-06-01
Rockwell Hanford Operations is currently involved in an extensive effort to perform interim ground surface stabilization activities at retired low-level waste burial grounds located at the Hanford Site, Richland, Washington. The principal objective of these activities is to promote increased occupational and radiological safety at burial grounds. Interim stabilization activities include: (1) load testing (traversing burial ground surfaces with heavy equipment to promote incipient collapse of void spaces within the disposal structure and overburden), (2) barrier placement (placement of a ≥0.6 m soil barrier over the existing overburden), and (3) revegetation (establishment of shallow-rooted vegetation on the barrier to mitigate deep-rooted plant growth and to reduce erosion). Low-level waste disposal caissons were used in the 300 Area Burial Grounds as interment structures for containerized liquid wastes. These caissons, by virtue of their contents, design, and methods of closure, require long-term performance evaluation. As an initial step in evaluating long-term performance, the accurate location of these structures is required. This topical report summarizes the engineering activities used to locate caissons in the subsurface environment at the burial ground. Activities were conducted to locate caissons during surface stabilization activities. The surface locations were marked, photographed, and recorded on an as-built engineering drawing. The recorded locations of these caissons will augment long-term observations of confinement structure and engineered surface barrier performance. In addition, accurate caisson location will minimize occupational risk during monitoring and observation activities periodically conducted at the burial ground.
Hitzfeld, Kristina L; Gehre, Matthias; Richnow, Hans-Hermann
2017-05-01
In this study, conversion conditions for oxygen gas chromatography high temperature conversion (HTC) isotope ratio mass spectrometry (IRMS) are characterised using qualitative mass spectrometry (IonTrap). It is shown that the physical and chemical properties of a given reactor design impact HTC and thus the ability to accurately measure oxygen isotope ratios. Commercially available and custom-built tube-in-tube reactors were used to elucidate (i) by-product formation (carbon dioxide, water, small organic molecules), (ii) secondary sources of oxygen (leakage, metal oxides, ceramic material), and (iii) required reactor conditions (conditioning, reduction, stability). The suitability of the available HTC approach for compound-specific isotope analysis of oxygen in volatile organic molecules like methyl tert-butyl ether is assessed. The main problems impeding accurate analysis are non-quantitative HTC and significant carbon dioxide by-product formation. An evaluation strategy combining mass spectrometric analysis of HTC products and IRMS 18O/16O monitoring for future method development is proposed.
Using single leg standing time to predict the fall risk in elderly.
Chang, Chun-Ju; Chang, Yu-Shin; Yang, Sai-Wei
2013-01-01
In clinical evaluation, fall risk in the elderly is usually assessed from falling history or with a balance assessment tool. Because of tool limitations, prediction is sometimes inaccurate. In this study, we first analyzed the balance performance, from previous research, of 15 healthy elderly subjects (no falling experience) and 15 falling elderly subjects (1-3 falls). After a 1-year follow-up, only 1 subject fell during this period, suggesting that falling experience has a ceiling effect on fall prediction. We also found that single leg standing time can predict fall risk more accurately, especially for falling elderly who cannot stand on a single leg for more than 10 seconds, with a significant correlation between falling experience and single leg standing time (r = -0.474, p = 0.026). The results also showed significant body sway just before falling, and the COP may be an important characteristic of the falling elderly group.
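The reported r = -0.474 is a Pearson correlation between standing time and falling experience. A minimal sketch with made-up pairs (the study's raw measurements are not given), expected to show the same negative direction:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# (single-leg standing time in s, number of falls) -- hypothetical pairs:
# shorter standing times paired with more falls
stand = [30, 25, 12, 8, 5, 20]
falls = [0, 0, 2, 3, 3, 1]
r = pearson_r(stand, falls)
```

A negative r, as here and in the study, means shorter single-leg standing times go with more falling experience.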
NASA Astrophysics Data System (ADS)
Seo, Jeongmin; Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Kim, Chan Hyeong; Jeong, Jong Hwi; Kim, SeongHoon
2017-04-01
In proton therapy, the spot scanning method is known to suffer from the interplay effect induced by the independent movements of the proton beam and the organs in the patient during the treatment. To study the interplay effect, several investigators have performed four-dimensional (4D) dose calculations with limited temporal resolutions (4 or 10 phases per respiratory cycle) by using 4D computed tomography (CT) images of the patient; however, the validity of these limited temporal resolutions has not been confirmed. The aim of the present study is to determine whether the previous temporal resolutions (4 or 10 phases per respiratory cycle) are really high enough for adequate study of the interplay effect in spot scanning proton therapy. For this study, a series of 4D dose calculations were performed with a virtual water phantom moving in the vertical direction during dose delivery. The dose distributions were calculated for different temporal resolutions (4, 10, 25, 50, and 100 phases per respiratory cycle), and the calculated dose distributions were compared with the reference dose distribution, which was calculated using an almost continuously moving water phantom (i.e., 1000 phases per respiratory cycle). The results of the present study show that temporal resolutions of 4 and 10 phases per respiratory cycle are not high enough for an accurate evaluation of the interplay effect in spot scanning proton therapy. The temporal resolution should be at least 14 and 17 phases per respiratory cycle for 10-mm and 20-mm movement amplitudes, respectively, even for rigid movement (i.e., without deformation) of the homogeneous water phantom considered in the present study. We believe that even higher temporal resolutions are needed for an accurate evaluation of the interplay effect in the human body, in which the organs are inhomogeneous and deform during movement.
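The underlying convergence question, how coarsely a continuous motion can be discretized into phases before the result degrades, can be illustrated with a much simplified 1-D stand-in (not a dose calculation): hold a sinusoidally moving target fixed within each phase bin and measure the RMS position error against a quasi-continuous reference, analogous to the 1000-phase reference above.

```python
import math

def phase_sampling_error(n_phases, n_fine=1000, amp_mm=10.0):
    """RMS error (mm) of approximating sinusoidal target motion by the
    bin-midpoint position within each of n_phases phase bins, evaluated
    against a quasi-continuous reference of n_fine time points.
    Simplified 1-D illustration of temporal-resolution effects only."""
    pos = lambda t: amp_mm * math.sin(2 * math.pi * t)  # t in fractions of a cycle
    err2 = 0.0
    for i in range(n_fine):
        t = (i + 0.5) / n_fine
        t_bin = (int(t * n_phases) + 0.5) / n_phases    # midpoint of the phase bin
        err2 += (pos(t) - pos(t_bin)) ** 2
    return math.sqrt(err2 / n_fine)

coarse = phase_sampling_error(4)   # 4 phases per cycle
fine = phase_sampling_error(50)    # 50 phases per cycle
```

The error shrinks as the number of phases grows, mirroring why 4 or 10 phases per cycle proved insufficient while finer resolutions approach the continuous reference.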
Jia, Lang; Chen, Jinyun; Wang, Yan; Liu, Yingjiang; Zhang, Yu; Chen, Wenzhi
2014-01-01
This study aimed to assess changes in osteophytic, chondral, and subchondral structures in a surgically-induced osteoarthritis (OA) rabbit model in order to correlate MRI findings with the macroscopic progress of OA and to define the timepoint for disease status in this OA model. The OA model was constructed by surgery in thirty rabbits with ten normal rabbits serving as controls (baseline). High-resolution three-dimensional MRI using a 1.5-T coil was performed at baseline, two, four, and eight weeks post-surgery. MRIs of cartilage lesions, subchondral bone lesions, and osteophyte formations were independently assessed by two blinded radiologists. Ten rabbits were sacrificed at baseline, two, four, and eight weeks post-surgery, and macroscopic evaluation was independently performed by two blinded orthopedic surgeons. The signal intensities and morphologies of chondral and subchondral structures by MRI accurately reflected the degree of OA. Cartilage defects progressed from a grade of 0.05-0.15 to 1.15-1.30 to 1.90-1.97 to 3.00-3.35 at each successive time point, respectively (p<0.05). Subchondral bone lesions progressed from a grade of 0.00 to 0.78-0.90 to 1.27-1.58 to 1.95-2.23 at each successive time point, respectively (p = 0.000). Osteophytes progressed from a size (mm) of 0.00 to 0.87-1.06 to 1.24-1.87 to 2.21-3.21 at each successive time point, respectively (p = 0.000). Serial observations revealed that MRI can accurately detect the progression of cartilage lesions and subchondral bone edema over an eight-week period but may not be accurate in detecting osteophyte sizes. Week four post-surgery was considered the timepoint between OA-negative and OA-positive status in this OA model. The combination of this OA model with MRI evaluation should provide a promising tool for the pre-clinical evaluation of new disease-modifying osteoarthritis drugs.
Atashi, Alireza; Amini, Shahram; Tashnizi, Mohammad Abbasi; Moeinipour, Ali Asghar; Aazami, Mathias Hossain; Tohidnezhad, Fariba; Ghasemi, Erfan; Eslami, Saeid
2018-01-01
Introduction The European System for Cardiac Operative Risk Evaluation II (EuroSCORE II) is a prediction model which maps 18 predictors to a 30-day post-operative risk of death, concentrating on accurate stratification of candidate patients for cardiac surgery. Objective The objective of this study was to determine the performance of the EuroSCORE II risk-analysis predictions among patients who underwent heart surgeries in one area of Iran. Methods A retrospective cohort study was conducted to collect the required variables for all consecutive patients who underwent heart surgeries at Emam Reza hospital, Northeast Iran, between 2014 and 2015. Univariate and multivariate analyses were performed to identify covariates which significantly contribute to higher EuroSCORE II in our population. External validation was performed by comparing the observed and expected mortality using the area under the receiver operating characteristic curve (AUC) for discrimination assessment. The Brier score and the Hosmer-Lemeshow goodness-of-fit test were used to assess overall performance and calibration, respectively. Results A total of 2,581 patients (59.6% male) were included. The observed mortality rate was 3.3%, whereas EuroSCORE II predicted 4.7%. Although the overall performance was acceptable (Brier score=0.047), the model showed poor discriminatory power, with AUC=0.667 (sensitivity=61.90, specificity=66.24), and poor calibration (Hosmer-Lemeshow test, P<0.01). Conclusion Our study showed that the discriminatory power of EuroSCORE II is less than optimal for outcome prediction and less accurate for resource allocation programs. It highlights the need for recalibration of this risk stratification tool, aiming to improve post cardiac surgery outcome predictions in Iran. PMID:29617500
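The two headline validation statistics, the Brier score (overall performance) and the AUC (discrimination), can be sketched directly from predicted risks and observed outcomes. The risk values and outcomes below are invented for illustration, not patient data:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted risk and observed outcome
    (0 = survived, 1 = died); lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def auc(probs, outcomes):
    """Probability that a randomly chosen death received a higher predicted
    risk than a randomly chosen survivor (ties count one half)."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical EuroSCORE-II-style risks and 30-day outcomes (1 = died)
probs = [0.02, 0.05, 0.10, 0.03, 0.20, 0.04]
outcomes = [0, 0, 1, 0, 1, 0]
b = brier_score(probs, outcomes)
a = auc(probs, outcomes)
```

An AUC near 0.5 indicates no discrimination and 1.0 perfect discrimination, which is why the study's 0.667 is read as poor discriminatory power despite the acceptable Brier score.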
SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Stojadinovic, S; Jiang, S
Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set based contour evolving methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained through clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of the above two steps generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing them to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079mm.
Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0.078, SSIM 0.999±0.001, and HD 5.926±6.141mm. Conclusion: The developed automatic segmentation strategy, which yields accurate brain tumor delineation in evaluation cases, is promising for application in SRS treatment planning.
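The volume-overlap metric reported above, the Dice coefficient, is simple to compute from binary masks. A minimal sketch with toy masks (not the study's images):

```python
# Sketch: Dice coefficient (DC) between a binary auto-segmentation mask
# and a ground-truth mask, the volume-overlap metric reported above.

def dice(mask_a, mask_b):
    """DC = 2|A ∩ B| / (|A| + |B|) for flattened binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

auto  = [0, 1, 1, 1, 0, 0]   # toy auto-segmentation, flattened
truth = [0, 1, 1, 0, 0, 0]   # toy ground truth
print(dice(auto, truth))     # 2*2/(3+2) = 0.8
```

DC ranges from 0 (no overlap) to 1 (perfect agreement), which is why values of 0.87-0.89 indicate close but imperfect delineation.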
Quantitative aspects of inductively coupled plasma mass spectrometry
Wagner, Barbara
2016-01-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
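The external calibration approach mentioned above, fitting instrument response against pure standards and inverting the fit for unknowns, can be sketched briefly. The count rates and concentrations below are illustrative assumptions, not real ICP-MS data:

```python
# Sketch: external calibration for ICP-MS quantification. A straight
# line is fit to intensity vs. concentration for pure standards, then
# inverted for an unknown sample. Values are illustrative assumptions.

def calibration_line(concs, counts):
    """Least-squares slope and intercept of counts vs. concentration."""
    n = len(concs)
    mx, my = sum(concs) / n, sum(counts) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, counts)) \
            / sum((x - mx) ** 2 for x in concs)
    return slope, my - slope * mx

def quantify(count, slope, intercept):
    """Concentration of an unknown from its measured intensity."""
    return (count - intercept) / slope

stds_ppb   = [0.0, 1.0, 5.0, 10.0]             # standard concentrations
counts_cps = [120.0, 5120.0, 25120.0, 50120.0] # ~5000 cps/ppb + blank
slope, intercept = calibration_line(stds_ppb, counts_cps)
print(quantify(20120.0, slope, intercept))     # unknown, in ppb
```

Matrix-matched standards or internal standards extend this same idea to correct for matrix effects and drift.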
Experimental evaluation of radiosity for room sound-field prediction.
Hodgson, Murray; Nosal, Eva-Marie
2006-08-01
An acoustical radiosity model was evaluated for how it performs in predicting real room sound fields. This was done by comparing radiosity predictions with experimental results for three existing rooms--a squash court, a classroom, and an office. Radiosity predictions were also compared with those by ray tracing--a "reference" prediction model--for both specular and diffuse surface reflection. Comparisons were made for detailed and discretized echograms, sound-decay curves, sound-propagation curves, and the variations with frequency of four room-acoustical parameters--EDT, RT, D50, and C80. In general, radiosity and diffuse ray tracing gave very similar predictions. Predictions by specular ray tracing were often very different. Radiosity agreed well with experiment in some cases, less well in others. Definitive conclusions regarding the accuracy with which the rooms were modeled, or the accuracy of the radiosity approach, were difficult to draw. The results suggest that radiosity predicts room sound fields with some accuracy, at least as well as diffuse ray tracing and, in general, better than specular ray tracing. The predictions of detailed echograms are less accurate; those of derived room-acoustical parameters are more accurate. The results underline the need to develop experimental methods for accurately characterizing the absorptive and reflective characteristics of room surfaces, possibly including phase.
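The derived parameters compared above (RT, EDT) are obtained from sound-decay curves, typically via Schroeder backward integration of an echogram. A minimal sketch on a synthetic exponential echogram (the decay shape and bin width are assumptions):

```python
# Sketch: reverberation time from an energy echogram via Schroeder
# backward integration, the kind of decay-curve processing used to
# derive RT above. The exponential echogram is synthetic.
import math

def schroeder_db(echogram):
    """Backward-integrated decay curve in dB, normalised to 0 dB."""
    total, acc, curve = sum(echogram), 0.0, []
    for e in reversed(echogram):
        acc += e
        curve.append(10.0 * math.log10(acc / total))
    return curve[::-1]

def rt60_s(curve_db, dt):
    """RT60 extrapolated from the -5 to -25 dB portion (T20 * 3)."""
    t5 = next(i for i, d in enumerate(curve_db) if d <= -5.0) * dt
    t25 = next(i for i, d in enumerate(curve_db) if d <= -25.0) * dt
    return 3.0 * (t25 - t5)

dt = 0.001                                          # 1 ms energy bins
true_rt = 0.5                                       # synthetic RT, s
decay = [10 ** (-6 * (i * dt) / true_rt) for i in range(1000)]
rt = rt60_s(schroeder_db(decay), dt)
print(round(rt, 3))                                 # recovers ~0.5 s
```

On measured echograms the same processing is applied per octave band, which is how the frequency variation of RT and EDT reported above is obtained.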
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harpool, K; De La Fuente Herman, T; Ahmad, S
Purpose: To evaluate the performance of a two-dimensional (2D) array diode detector for geometric and dosimetric quality assurance (QA) tests of high-dose-rate (HDR) brachytherapy with an Ir-192 source. Methods: A phantom setup was designed that encapsulated a 2D array diode detector (MapCheck2) and a catheter for the HDR brachytherapy Ir-192 source. This setup was used to perform both geometric and dosimetric quality assurance for the HDR Ir-192 source. The geometric tests included: (a) measurement of the position of the source and (b) spacing between different dwell positions. The dosimetric tests included: (a) linearity of output with time, (b) end effect and (c) relative dose verification. The 2D dose distribution measured with MapCheck2 was used to perform these tests. The results from MapCheck2 were compared with the corresponding quality assurance tests performed with Gafchromic film and a well ionization chamber. Results: The position of the source and the spacing between different dwell positions were reproducible within 1 mm accuracy by measuring the position of maximal dose using MapCheck2, in contrast to the film, which showed a blurred image of the dwell positions due to limited film sensitivity to irradiation. The linearity of the dose with dwell times measured with MapCheck2 was superior to the linearity measured with the ionization chamber due to the higher signal-to-noise ratio of the diode readings. MapCheck2 provided a more accurate measurement of the end effect, with uncertainty < 1.5% in comparison with the ionization chamber uncertainty of 3%. Although MapCheck2 did not provide absolute calibration of the source activity, it provided an accurate tool for relative dose verification in HDR brachytherapy. Conclusion: The 2D array diode detector provides a practical, compact and accurate tool to perform quality assurance for HDR brachytherapy with an Ir-192 source.
The diodes in MapCheck2 have high radiation sensitivity and linearity that are superior to those of Gafchromic film and the ionization chamber used for geometric and dosimetric QA in HDR brachytherapy, respectively.
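The linearity and end-effect tests above amount to fitting a straight line of dose versus programmed dwell time: the slope is the dose rate and the intercept, expressed in time units, is the end effect (extra dose delivered during source transit). A sketch with hypothetical readings, not the study's measurements:

```python
# Sketch: estimating the HDR end effect from a dose-vs-dwell-time
# linearity test. The measured doses below are hypothetical; the end
# effect appears as intercept/slope of a straight-line fit.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

dwell_times = [5.0, 10.0, 20.0, 40.0]     # programmed dwell times, s
doses       = [0.52, 1.02, 2.02, 4.02]    # detector readings, arb. units
slope, intercept = linear_fit(dwell_times, doses)
end_effect = intercept / slope            # extra effective dwell time, s
print(round(end_effect, 3))
```

A near-zero end effect with small fit residuals is what a passing linearity QA test looks like; a large intercept flags transit-dose problems.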
Tree Alignment Based on Needleman-Wunsch Algorithm for Sensor Selection in Smart Homes.
Chua, Sook-Ling; Foo, Lee Kien
2017-08-18
Activity recognition in smart homes aims to infer the inhabitant's activities in order to monitor them and identify any abnormalities, especially for those living alone. In order for a smart home to support its inhabitant, the recognition system needs to learn from observations acquired through sensors. One question that often arises is which sensors are useful and how many are required to accurately recognise the inhabitant's activities. Many wrapper methods have been proposed and remain among the most popular evaluators for sensor selection due to their superior accuracy. However, they are prohibitively slow during the evaluation process and run the risk of overfitting due to the extent of the search. Motivated by these characteristics, this paper attempts to reduce the cost of the evaluation process and the risk of overfitting through tree alignment. The performance of our method is evaluated on two public datasets obtained in two distinct smart home environments.
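The title's Needleman-Wunsch algorithm is the classic dynamic-programming global alignment; a minimal scoring-only sketch follows (the match/mismatch/gap values are illustrative assumptions, and the paper applies the idea to trees rather than plain strings):

```python
# Sketch of classic Needleman-Wunsch global alignment (score only).
# Scoring parameters are illustrative; the paper adapts this DP idea
# to align trees for sensor selection.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix against gaps
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```

The appeal for sensor selection is that the alignment score gives a cheap similarity measure between observation sequences, avoiding repeated classifier training inside a wrapper loop.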
An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation
NASA Astrophysics Data System (ADS)
Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi
A novel adaptive reputation-based virtual organization formation is proposed. It restrains bad performers effectively based on the global experience of the evaluator, and evaluates the direct trust relation between two grid nodes accurately by rationally consulting previous trust values. It also improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake transaction attacks and badmouth attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
The 1980 US/Canada wheat and barley exploratory experiment, volume 1
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Prior, H. L.; Payne, R. W.; Disler, J. M.
1983-01-01
The results from the U.S./Canada Wheat and Barley Exploratory Experiment which was completed during FY 1980 are presented. The results indicate that the new crop identification procedures performed well for spring small grains and that they are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology. However, the crop calendars will require additional development and refinement prior to integration into automated area estimation technology. The evaluation showed the integrated technology to be capable of producing accurate and consistent spring small grains proportion estimates. However, barley proportion estimation technology was not satisfactorily evaluated. The low-density segments examined were judged not to give indicative or unequivocal results. It is concluded that, generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analyses to a variety of agricultural and meteorological conditions representative of the global environment. It is further concluded that a strong potential exists for establishing a highly efficient technology for spring small grains.
Yin, Xianghui; Wang, Rui; Wang, Shaoxin; Wang, Yukun; Jin, Chengbin; Cao, Zhaoliang; Xuan, Li
2018-02-01
Atmospheric turbulence seriously affects the quality of free-space laser communication. The Strehl ratio (SR) is used to evaluate the effect of atmospheric turbulence on the receiving energy of free-space laser communication systems. However, the SR method does not consider the area of the laser-receiving end face. In this study, the power-in-the-bucket (PIB) method is demonstrated to accurately evaluate the effect of turbulence on the receiving energy. A theoretical equation is first obtained to calculate PIB. Simulated and experimental validations are then performed to verify the effectiveness of the theoretical equation. This work may provide effective guidance for the design and evaluation of free-space laser communication systems.
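The power-in-the-bucket metric discussed above is simply the fraction of total beam power falling inside the receiver aperture. A sketch on a sampled far-field intensity pattern (the grid, Gaussian spot, and aperture radius are assumptions for illustration):

```python
# Sketch: power-in-the-bucket (PIB) for a sampled intensity pattern,
# i.e. the fraction of total power inside the receiver aperture.
# Grid spacing, spot shape, and aperture radius are assumptions.
import math

def pib(intensity, radius, grid):
    """Fraction of total power inside a circle of `radius` about (0,0)."""
    total = bucket = 0.0
    for (x, y), inten in zip(grid, intensity):
        total += inten
        if math.hypot(x, y) <= radius:
            bucket += inten
    return bucket / total

# Gaussian spot I(r) = exp(-r^2) sampled on a coarse grid
grid = [(x * 0.1, y * 0.1) for x in range(-30, 31) for y in range(-30, 31)]
intensity = [math.exp(-(x * x + y * y)) for x, y in grid]
p = pib(intensity, radius=1.0, grid=grid)
print(round(p, 3))
```

Unlike the Strehl ratio, which only compares on-axis peak intensity, PIB changes with the aperture radius, which is exactly the end-face dependence the paper exploits.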
Machine learning-based dual-energy CT parametric mapping
NASA Astrophysics Data System (ADS)
Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W.; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Helo, Rose Al; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C.; Rassouli, Negin; Gilkeson, Robert C.; Traughber, Bryan J.; Cheng, Chee-Wai; Muzic, Raymond F., Jr.
2018-06-01
The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρ e), mean excitation energy (I x ), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
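Of the three learners compared above, the historical centroid (HC) method is the simplest to sketch: classify each dual-energy voxel by its nearest tissue centroid in (low-kVp HU, high-kVp HU) space and report that tissue's reference parameters. The tissue centroids and parameter values below are rough illustrative assumptions, not the paper's calibration data:

```python
# Sketch of the "historical centroid" (HC) idea: nearest-centroid
# lookup from a dual-energy HU pair to tissue parameters. The
# centroids and (Zeff, RSP) values are illustrative assumptions.
import math

# (HU_low_kVp, HU_high_kVp) centroid -> assumed (Zeff, RSP) per tissue
TISSUES = {
    "lung":  ((-700.0, -690.0), (7.6, 0.30)),
    "water": ((0.0, 0.0),       (7.4, 1.00)),
    "bone":  ((1000.0, 700.0),  (13.0, 1.70)),
}

def predict(hu_low, hu_high):
    """Return (tissue, (Zeff, RSP)) for the nearest dual-energy centroid."""
    name = min(TISSUES,
               key=lambda t: math.hypot(hu_low - TISSUES[t][0][0],
                                        hu_high - TISSUES[t][0][1]))
    return name, TISSUES[name][1]

print(predict(20.0, 15.0))     # lands near the water centroid
print(predict(950.0, 680.0))   # lands near the bone centroid
```

RF and ANN replace this hard nearest-neighbor lookup with smooth learned regressions over the same input space, which is why they degrade more gracefully at low dose.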
Dynamic evaluation of airflow rates for a variable air volume system serving an open-plan office.
Mai, Horace K W; Chan, Daniel W T; Burnett, John
2003-09-01
In a typical air-conditioned office, thermal comfort and indoor air quality are sustained by delivering the required amount of supply air, with the correct proportion of outdoor air, to the breathing zone. However, in a real office it is not easy to measure these airflow rates, especially when the space is served by a variable air volume (VAV) system. The most accurate method depends on what is being measured, the details of the building, and the type of ventilation system. The constant-concentration tracer gas method can be used to determine ventilation system performance; however, it becomes more complicated when the air, including the tracer gas, is allowed to recirculate. An accurate measurement requires significant resource support in terms of instrumentation set-up and professional interpretation. This deters regular monitoring of the performance of airside systems by building managers, and hence the indoor environmental quality, in terms of thermal comfort and indoor air quality, may never be satisfactory. This paper proposes a space zone model for the calculation of all the airflow parameters based on tracer gas measurements, including the flow rates of outdoor air, VAV supply, space return, return air, and exfiltration. Sulphur hexafluoride (SF6) and carbon dioxide (CO2) were used as tracer gases. The corresponding results from both SF6 and CO2 provide a reference to justify the acceptability of using CO2 as the tracer gas. The validity of using CO2 is significant in that metabolic carbon dioxide can be used to evaluate real-time airflow rates. This approach provides a practical protocol for building managers to evaluate the performance of airside systems.
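The simplest single-zone version of the CO2 mass balance above is the steady-state relation G = Q (C_in - C_out): metabolic generation equals outdoor-air dilution. A sketch with illustrative occupancy and concentration values (the full paper solves a multi-zone model with recirculation):

```python
# Sketch: steady-state CO2 mass balance for estimating outdoor airflow
# into a zone from metabolic CO2. This is a single-zone simplification
# of the space zone model above; all numbers are illustrative.

def outdoor_airflow(gen_rate_ls, c_indoor_ppm, c_outdoor_ppm):
    """Outdoor air supply Q (L/s) from G = Q * (C_in - C_out),
    with concentrations in ppm (1 ppm = 1e-6 volume fraction)."""
    return gen_rate_ls / ((c_indoor_ppm - c_outdoor_ppm) * 1e-6)

# 20 occupants at ~0.005 L/s CO2 each; indoor 800 ppm, outdoor 400 ppm
q = outdoor_airflow(20 * 0.005, 800.0, 400.0)
print(round(q))   # outdoor air supply in L/s
```

Dividing the result by occupancy gives the per-person outdoor air rate, the quantity ventilation standards actually prescribe.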
Pulmonary Thromboembolism: Evaluation By Intravenous Angiography
NASA Astrophysics Data System (ADS)
Pond, Gerald D.; Cook, Glenn C.; Woolfenden, James M.; Dodge, Russell R.
1981-11-01
Using perfusion lung scans as a guide, digital video subtraction angiography of the pulmonary arteries was performed in human subjects suspected of having pulmonary embolism. Dogs were employed as a pulmonary embolism model and both routine pulmonary angiography and intravenous pulmonary angiograms were obtained for comparison purposes. We have shown by our preliminary results that the technique is extremely promising as a safe and accurate alternative to routine pulmonary angiography in selected patients.
Low-speed airspeed calibration data for a single-engine research-support aircraft
NASA Technical Reports Server (NTRS)
Holmes, B. J.
1980-01-01
A standard service airspeed system on a single engine research support airplane was calibrated by the trailing anemometer method. The effects of flaps, power, sideslip, and lag were evaluated. The factory supplied airspeed calibrations were not sufficiently accurate for high accuracy flight research applications. The trailing anemometer airspeed calibration was conducted to provide the capability to use the research support airplane to perform pace aircraft airspeed calibrations.
Assessing the Robustness of Graph Statistics for Network Analysis Under Incomplete Information
strategy for dismantling these networks based on their network structure. However, these strategies typically assume complete information about the... combat them with missing information. This thesis analyzes the performance of a variety of network statistics in the context of incomplete information by... leveraging simulation to remove nodes and edges from networks and evaluating the effect this missing information has on our ability to accurately
1976-02-01
Providing Knowledgeable and Accurate Information About the Navy; G. Administrative Skills; H. Supporting Other Recruiters and the Command; and... groups knowledgeable about Navy recruiting. Special thanks go to CDR Peebles, LT McGann, LCDR Sigmund, and CAPT Hollingworth for coordinating these... Salesmanship Skills E. Establishing and Maintaining Good Relationships in the Community F. Providing Knowledgeable and Accurate Information About the Navy
DOT National Transportation Integrated Search
2008-09-01
A total of 49 dynamic sled tests were performed with the Hybrid III 10YO to examine issues relating to child belt fit. The goals of these tests were to evaluate ATD response to realistic belt geometries and belt fit, develop methods for accurate, rep...
Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia
Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos
2015-01-01
Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches as cognitive models in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules embodied in cognitive architectures that can detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort for enhancing the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, evaluated their ability to multitask, and monitored performance on infotainment-related tasks, providing more accurate simulation results within existing conceptual frameworks and enhancing predictive validity in interface design; increased task complexity captured a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282
Identification of bacteria isolated from veterinary clinical specimens using MALDI-TOF MS.
Pavlovic, Melanie; Wudy, Corinna; Zeller-Peronnet, Veronique; Maggipinto, Marzena; Zimmermann, Pia; Straubinger, Alix; Iwobi, Azuka; Märtlbauer, Erwin; Busch, Ulrich; Huber, Ingrid
2015-01-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently emerged as a rapid and accurate identification method for bacterial species. Although it has been successfully applied for the identification of human pathogens, it has so far not been well evaluated for routine identification of veterinary bacterial isolates. This study was performed to compare and evaluate the performance of MALDI-TOF MS based identification of veterinary bacterial isolates with commercially available conventional test systems. Discrepancies of both methods were resolved by sequencing 16S rDNA and, if necessary, the infB gene for Actinobacillus isolates. A total of 375 consecutively isolated veterinary samples were collected. Among the 357 isolates (95.2%) correctly identified at the genus level by MALDI-TOF MS, 338 of them (90.1% of the total isolates) were also correctly identified at the species level. Conventional methods offered correct species identification for 319 isolates (85.1%). MALDI-TOF identification therefore offered more accurate identification of veterinary bacterial isolates. An update of the in-house mass spectra database with additional reference spectra clearly improved the identification results. In conclusion, the presented data suggest that MALDI-TOF MS is an appropriate platform for classification and identification of veterinary bacterial isolates.
Performance of a Heating Block System Designed for Studying the Heat Resistance of Bacteria in Foods
NASA Astrophysics Data System (ADS)
Kou, Xiao-Xi; Li, Rui; Hou, Li-Xia; Huang, Zhi; Ling, Bo; Wang, Shao-Jin
2016-07-01
Knowledge of bacteria’s heat resistance is essential for developing effective thermal treatments. Choosing an appropriate test method is important to accurately determine bacteria’s heat resistances. Although being a major factor to influence the thermo-tolerance of bacteria, the heating rate in samples cannot be controlled in water or oil bath methods due to main dependence on sample’s thermal properties. A heating block system (HBS) was designed to regulate the heating rates in liquid, semi-solid and solid foods using a temperature controller. Distilled water, apple juice, mashed potato, almond powder and beef were selected to evaluate the HBS’s performance by experiment and computer simulation. The results showed that the heating rates of 1, 5 and 10 °C/min with final set-point temperatures and holding times could be easily and precisely achieved in five selected food materials. A good agreement in sample central temperature profiles was obtained under various heating rates between experiment and simulation. The experimental and simulated results showed that the HBS could provide a sufficiently uniform heating environment in food samples. The effect of heating rate on bacterial thermal resistance was evaluated with the HBS. The system may hold potential applications for rapid and accurate assessments of bacteria’s thermo-tolerances.
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for the current position and speed. Since conventional estimation methodologies can lose accuracy due to nonlinearities, advanced filtering methodologies such as particle filters can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
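For constant relative velocity, the Distance at Closest Point of Approach defined above has a closed form: project the relative position onto the relative velocity to find the time of closest approach, then evaluate the separation there. A sketch with an illustrative encounter geometry:

```python
# Sketch: Distance at Closest Point of Approach (DCPA) under
# straight-line relative motion, the collision-risk parameter the
# tracking filter estimates. The encounter geometry is illustrative.
import math

def dcpa(rel_pos, rel_vel):
    """Minimum future distance given relative position and velocity.
    Returns the current distance if the targets are diverging."""
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return math.hypot(px, py)
    t_cpa = max(0.0, -(px * vx + py * vy) / v2)  # time of closest approach
    return math.hypot(px + vx * t_cpa, py + vy * t_cpa)

# Intruder 1000 m ahead, closing at 50 m/s, with a 100 m lateral offset
print(dcpa((1000.0, 100.0), (-50.0, 0.0)))  # 100.0
```

In the tracking system, this computation is applied to each particle's state estimate, so the spread of DCPA values across particles also quantifies the uncertainty of the collision risk.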
Importance of curvature evaluation scale for predictive simulations of dynamic gas-liquid interfaces
NASA Astrophysics Data System (ADS)
Owkes, Mark; Cauble, Eric; Senecal, Jacob; Currie, Robert A.
2018-07-01
The effect of the scale used to compute the interfacial curvature on the prediction of dynamic gas-liquid interfaces is investigated. A new interface curvature calculation methodology referred to herein as the Adjustable Curvature Evaluation Scale (ACES) is proposed. ACES leverages a weighted least squares regression to fit a polynomial through points computed on the volume-of-fluid representation of the gas-liquid interface. The interface curvature is evaluated from this polynomial. Varying the least squares weight with distance from the location where the curvature is being computed, adjusts the scale the curvature is evaluated on. ACES is verified using canonical static test cases and compared against second- and fourth-order height function methods. Simulations of dynamic interfaces, including a standing wave and oscillating droplet, are performed to assess the impact of the curvature evaluation scale for predicting interface motions. ACES and the height function methods are combined with two different unsplit geometric volume-of-fluid (VoF) schemes that define the interface on meshes with different levels of refinement. We find that the results depend significantly on curvature evaluation scale. Particularly, the ACES scheme with a properly chosen weight function is accurate, but fails when the scale is too small or large. Surprisingly, the second-order height function method is more accurate than the fourth-order variant for the dynamic tests even though the fourth-order method performs better for static interfaces. Comparing the curvature evaluation scale of the second- and fourth-order height function methods, we find the second-order method is closer to the optimum scale identified with ACES. This result suggests that the curvature scale is driving the accuracy of the dynamics. 
This work highlights the importance of studying numerical methods with realistic (dynamic) test cases and shows that the interactions of the various discretizations are as important as the accuracy of any one part of the discretization.
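The core of the ACES idea, a Gaussian-weighted least-squares polynomial fit through interface points whose weight width sets the evaluation scale, can be sketched in 2D. The points below are sampled from a circle of radius 2 (curvature magnitude 1/2), so the recovered value can be checked; the weight form and scale are assumptions:

```python
# Sketch of the ACES idea in 2D: fit a Gaussian-weighted least-squares
# parabola y(x) through interface points and evaluate curvature from
# the fit. The weight length `scale` sets the evaluation scale.
import math

def curvature_wls(points, x0, scale):
    """Curvature at x = x0 from a weighted quadratic fit y = ax^2+bx+c."""
    # Accumulate weighted normal equations (3x3 system, augmented column)
    s = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        w = math.exp(-((x - x0) / scale) ** 2)
        basis = [x * x, x, 1.0]
        for r in range(3):
            for c in range(3):
                s[r][c] += w * basis[r] * basis[c]
            s[r][3] += w * basis[r] * y
    # Solve by Gauss-Jordan elimination
    for i in range(3):
        piv = s[i][i]
        s[i] = [v / piv for v in s[i]]
        for r in range(3):
            if r != i:
                s[r] = [vr - s[r][i] * vi for vr, vi in zip(s[r], s[i])]
    a, b = s[0][3], s[1][3]
    yp, ypp = b + 2 * a * x0, 2 * a
    return ypp / (1 + yp * yp) ** 1.5

R = 2.0   # circle of radius 2; expected curvature magnitude 1/R = 0.5
pts = [(x / 10.0, math.sqrt(R * R - (x / 10.0) ** 2))
       for x in range(-10, 11)]
k = curvature_wls(pts, 0.0, scale=0.5)
print(k)   # close to -1/R = -0.5 (negative: surface curves downward)
```

Shrinking or growing `scale` reproduces the paper's central experiment: too small a scale amplifies point noise, too large a scale smooths away real curvature.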
An experimental method for the assessment of color simulation tools.
Lillo, Julio; Alvaro, Leticia; Moreira, Humberto
2014-07-22
The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method. © 2014 ARVO.
Numerical Evaluation of Storm Surge Indices for Public Advisory Purposes
NASA Astrophysics Data System (ADS)
Bass, B.; Bedient, P. B.; Dawson, C.; Proft, J.
2016-12-01
After the devastating hurricane season of 2005, shortcomings in the Saffir-Simpson Hurricane Scale's (SSHS) ability to characterize a tropical cyclone's potential to generate storm surge became widely apparent. As a result, several alternative surge indices were proposed to replace the SSHS, including Powell and Reinhold's Integrated Kinetic Energy (IKE) factor, Kantha's Hurricane Surge Index (HSI), and Irish and Resio's Surge Scale (SS). Of these, the IKE factor is the only surge index to date that truly captures a tropical cyclone's integrated intensity, size, and wind field distribution. However, since the IKE factor was proposed in 2007, an accurate assessment of this surge index has not been performed. This study provides the first quantitative evaluation of IKE's ability to serve as a predictor of a tropical cyclone's potential surge impacts as compared to other alternative surge indices. Using the tightly coupled ADvanced CIRCulation and Simulating WAves Nearshore models, the surge and wave responses of Hurricane Ike (2008) and 78 synthetic tropical cyclones were evaluated against the SSHS, IKE, HSI and SS. Results along the upper Texas coast of the Gulf of Mexico demonstrate that the HSI performs best in capturing the peak surge response of a tropical cyclone, while the IKE accounting for winds greater than tropical storm intensity (IKETS) provides the most accurate estimate of a tropical cyclone's regional surge impacts. These results demonstrate that the appropriate selection of a surge index ultimately depends on what information is of interest to be conveyed to the public and/or scientific community.
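Powell and Reinhold's IKE integrates ½ρU² over the storm's surface wind field (a 1 m deep layer of air), and the IKETS variant counts only cells at or above tropical-storm strength. A sketch over a toy gridded wind field (the wind values and cell size are assumptions):

```python
# Sketch: integrated kinetic energy (IKE) over a gridded surface wind
# field, IKE = sum of 0.5 * rho * U^2 * dV, counting only cells at or
# above a wind threshold (IKETS uses tropical-storm strength, ~18 m/s).
# The toy wind field and cell size are assumptions.

RHO = 1.15     # near-surface air density, kg/m^3
DEPTH = 1.0    # 1 m deep layer of air, per Powell and Reinhold (2007)

def ike_tj(wind_speeds, cell_area_m2, threshold=18.0):
    """IKE in terajoules over cells with U >= threshold."""
    joules = sum(0.5 * RHO * u * u * cell_area_m2 * DEPTH
                 for u in wind_speeds if u >= threshold)
    return joules / 1e12

# Hypothetical 10 km x 10 km cells with mixed wind speeds (m/s)
winds = [15.0, 20.0, 25.0, 30.0, 35.0]
print(ike_tj(winds, cell_area_m2=1e8))   # TJ; the 15 m/s cell is excluded
```

Because the sum runs over the whole wind field, a large weak storm can carry more IKE than a compact intense one, which is exactly the size sensitivity the SSHS lacks.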
Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin
2016-01-01
To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to that of other state-of-the-art methods, including multivariate linear regression, least squares support vector machine, and plain partial least squares regression. Both R2 and the root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model constructed on the dataset including outliers. Meanwhile, the RMSECV calculated for the models constructed by the other methods was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with an RPLS model constructed using e-tongue data. PMID:26821026
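The screen-then-cross-validate workflow above can be sketched in a few lines of Python. The RPLS method itself is not reproduced here; as a hedged illustration, an ordinary least-squares fit stands in for the calibration model, with standardized-residual outlier flagging and leave-one-out RMSECV. The threshold z = 2.5 is an assumed value for illustration, not one taken from the study.

```python
import numpy as np

def loo_rmsecv(X, y):
    """Leave-one-out cross-validated RMSE for a least-squares fit."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        # fit on all samples except i (intercept via an augmented column)
        A = np.c_[X[mask], np.ones(mask.sum())]
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[X[i], 1.0] @ coef
        errs.append((y[i] - pred) ** 2)
    return float(np.sqrt(np.mean(errs)))

def flag_outliers(X, y, z=2.5):
    """Flag samples whose standardized residual exceeds z."""
    A = np.c_[X, np.ones(len(y))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    std = resid.std(ddof=A.shape[1])
    return np.abs(resid / std) > z
```

Flagged samples would then be inspected (as in the study's combined residual/score-distance check) before the final model is cross-validated on the cleaned set.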
NASA Astrophysics Data System (ADS)
Madugundu, Rangaswamy; Al-Gaadi, Khalid A.; Tola, ElKamil; Hassaballa, Abdalhaleem A.; Patil, Virupakshagouda C.
2017-12-01
Accurate estimation of evapotranspiration (ET) is essential for hydrological modeling and efficient crop water management in hyper-arid climates. In this study, we applied the METRIC algorithm on Landsat-8 images, acquired from June to October 2013, for the mapping of ET of a 50 ha center-pivot irrigated alfalfa field in the eastern region of Saudi Arabia. The METRIC-estimated energy balance components and ET were evaluated against the data provided by an eddy covariance (EC) flux tower installed in the field. Results indicated that the METRIC algorithm provided accurate ET estimates over the study area, with RMSE values of 0.13 and 4.15 mm d-1. The METRIC algorithm was observed to perform better in full canopy conditions compared to partial canopy conditions. On average, the METRIC algorithm overestimated the hourly ET by 6.6 % in comparison to the EC measurements; however, the daily ET was underestimated by 4.2 %.
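The evaluation statistics quoted above, RMSE and percent over/underestimation of ET relative to the eddy-covariance measurements, can be computed directly. This is a generic sketch, not the METRIC code; the function names are illustrative.

```python
import numpy as np

def rmse(est, obs):
    """Root-mean-square error between estimates and observations."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def percent_bias(est, obs):
    """Signed percent bias: positive = overestimation vs. observations."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    return float(100.0 * (est - obs).sum() / obs.sum())
```

Applied to hourly and daily ET pairs, these two statistics reproduce the kind of comparison reported against the EC tower (e.g. a +6.6% hourly bias).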
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Normalization of RNA-seq data using factor analysis of control genes or samples
Risso, Davide; Ngai, John; Speed, Terence P.; Dudoit, Sandrine
2015-01-01
Normalization of RNA-seq data has proven essential to ensure accurate inference of expression levels. Here we show that usual normalization approaches mostly account for sequencing depth and fail to correct for library preparation and other more-complex unwanted effects. We evaluate the performance of the External RNA Control Consortium (ERCC) spike-in controls and investigate the possibility of using them directly for normalization. We show that the spike-ins are not reliable enough to be used in standard global-scaling or regression-based normalization procedures. We propose a normalization strategy, remove unwanted variation (RUV), that adjusts for nuisance technical effects by performing factor analysis on suitable sets of control genes (e.g., ERCC spike-ins) or samples (e.g., replicate libraries). Our approach leads to more-accurate estimates of expression fold-changes and tests of differential expression compared to state-of-the-art normalization methods. In particular, RUV promises to be valuable for large collaborative projects involving multiple labs, technicians, and/or platforms. PMID:25150836
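The factor-analysis idea behind RUV can be sketched with an SVD-based stand-in: estimate the unwanted factors from control genes (assumed unaffected by the biology of interest) and regress them out of every gene. This is a simplified illustration of the strategy, not the published RUV implementation.

```python
import numpy as np

def ruv_like_adjust(logY, ctrl_idx, k=1):
    """Remove k unwanted factors estimated from control genes.

    logY: genes x samples matrix of log-expression (rows = genes).
    ctrl_idx: indices of control genes (e.g. spike-ins) assumed to
    carry only unwanted technical variation.
    """
    Yc = logY[ctrl_idx] - logY[ctrl_idx].mean(axis=1, keepdims=True)
    # right singular vectors of the centered controls span the
    # unwanted-variation factors W (samples x k)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    W = Vt[:k].T
    # regress each gene on W and subtract the fitted nuisance component
    centered = logY - logY.mean(axis=1, keepdims=True)
    alpha = centered @ W @ np.linalg.inv(W.T @ W)   # genes x k loadings
    return logY - alpha @ W.T
```

On data where the controls carry a shared batch effect, the adjusted control genes become nearly constant across samples, which is the behavior the normalization aims for.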
Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.
Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A
2017-01-01
In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.
Detection of nitrogen dioxide by CW cavity-enhanced spectroscopy
NASA Astrophysics Data System (ADS)
Jie, Guo; Han, Ye-Xing; Yu, Zhi-Wei; Tang, Huai-Wu
2016-11-01
In this paper, an accurate and sensitive system for monitoring ambient atmospheric NO2 concentrations is described. The system utilizes cavity attenuated phase shift spectroscopy (CAPS), a technique related to cavity ring-down spectroscopy (CRDS). Advantages of the CAPS system include (1) an inexpensive, easily controlled light source, (2) high accuracy, and (3) a low detection limit. The performance of the CAPS system was evaluated by measuring its stability and response. The minima (0.08 ppb NO2) in the Allan plots indicate the optimum averaging time (100 s) for best detection performance of the CAPS system. Over a 20-day period of ambient atmospheric NO2 monitoring, a comparison of the CAPS system with a highly accurate and precise chemiluminescence-based NOx analyzer showed that the CAPS system was able to reliably and quantitatively measure both large and small fluctuations in the ambient nitrogen dioxide concentration. The correlation between the two instruments' measurements was 0.95.
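The Allan-plot analysis used to locate the optimum averaging time can be sketched as follows. The minimum of the Allan deviation over averaging factors m marks the averaging time beyond which drift outweighs noise reduction. This is a generic non-overlapping estimator for illustration, not the instrument's analysis code.

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation for averaging factor m,
    given a regularly sampled concentration time series x."""
    n = len(x) // m
    means = np.asarray(x[: n * m], float).reshape(n, m).mean(axis=1)
    diffs = np.diff(means)
    return float(np.sqrt(0.5 * np.mean(diffs ** 2)))
```

Evaluating `allan_deviation` over a range of m and plotting it against averaging time yields the Allan plot; for white noise the curve falls as 1/sqrt(m) until drift causes it to turn back up.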
Sancho-García, J C
2011-09-13
Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two forms, cage-like and planar. We also considered the effect of other minor contributions: core correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are then used to evaluate the performance of the main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT) in a systematic fashion. Some commonly used functionals, including the B3LYP model, are affected by large errors, and only those having reduced self-interaction error (SIE), which includes the most recent family of expressions (double hybrids), are able to achieve reasonably low deviations of 1-2 kcal/mol, especially when an estimate for dispersion interactions is also added.
Tsao, Mei-Fen; Chang, Hui-Wen; Chang, Chien-Hsi; Cheng, Chi-Hsuan; Lin, Hsiu-Chen
2017-05-01
Neonatal hypoglycemia may cause severe neurological damage; therefore, tight glycemic control is crucial to identify neonates at risk. Previous blood glucose monitoring systems (BGMSs) have failed to perform well in neonates, and there are calls for tighter accuracy requirements. There remains a need for an accurate BGMS for effective bedside diabetes management in neonatal care within a hospital population. A total of 300 neonates were recruited from local hospitals. The accuracy performance of a commercially available BGMS in screening for neonatal hypoglycemia was evaluated against a reference instrument, with assessment based on ISO 15197:2013 and a tighter standard. At blood glucose levels < 47 mg/dl, the BGMS assessed met the minimal accuracy requirements of ISO 15197:2013 and the tighter standard at 100% and 97.2%, respectively.
Allenspach, K; Vaden, S L; Harris, T S; Gröne, A; Doherr, M G; Griot-Wenk, M E; Bischoff, S C; Gaschen, F
2006-01-01
To evaluate the colonoscopic allergen provocation (COLAP) test as a new tool for the diagnosis of IgE-mediated food allergy, oral food challenges as well as COLAP testing were performed in a colony of nine research dogs with proven immediate-type food allergic reactions. In addition, COLAP was performed in five healthy dogs. When compared with the oral challenge test, COLAP accurately determined 18 of 23 (73 per cent) positive oral challenge reactions in dogs with food allergies and was negative in the healthy dogs. The accuracy of this new test may be higher than that of gastric sensitivity testing. Therefore, COLAP holds promise as a new test to confirm the diagnosis of suspected IgE-mediated food allergy in dogs.
Paksoy, Nadir; Ozbek, Busra
2018-01-01
Over the last few decades, fine needle aspiration cytology (FNA) has emerged as a SAFE (Simple, Accurate, Fast, Economical) diagnostic tool based on the morphologic evaluation of cells. The first and most important step in obtaining accurate results from FNA is to procure sufficient and representative material from the lesion and to appropriately transfer this material to the laboratory. Unfortunately, the most important aspect of this task occurs beyond the control of the cytopathologist, a key reason for obtaining unsatisfactory results with FNA. There is growing interest in the field of cytology in "cytopathologist-performed ultrasound (US)-guided FNA," which has been reported to yield accurate results. The first author has been applying FNA in his own private cytopathology practice with a radiologist and under the guidance of US for more than 20 years. This study retrospectively reviews the utility of this practice. We present a selection of didactic examples under different headings that highlight the application of FNA by a cytopathologist, accompanied by US, under the guidance of a radiologist, in the form of an "outpatient FNA clinic." The use of this technique enhances diagnostic accuracy and prevents pitfalls. The highlights of each case are also outlined as "take-home messages."
NASA Astrophysics Data System (ADS)
Liu, Xin; Lu, Hongbing; Chen, Hanyong; Zhao, Li; Shi, Zhengxing; Liang, Zhengrong
2009-02-01
Developmental dysplasia of the hip is a congenital hip joint malformation affecting the proximal femur and acetabulum, in which the joint may be subluxatable, dislocatable, or dislocated. Conventionally, physicians have made diagnoses and planned treatments based only on findings from two-dimensional (2D) images, manually calculating clinical parameters. However, the anatomical complexity of the disease and the limitations of current standard procedures make accurate diagnosis quite difficult. In this study, we developed a system that provides quantitative measurement of 3D clinical indexes based on computed tomography (CT) images. To extract bone structure from surrounding tissues more accurately, the system first segments the bone using a knowledge-based fuzzy clustering method, formulated by modifying the objective function of the standard fuzzy c-means algorithm with an additive adaptation penalty. The second part of the system automatically calculates the clinical indexes, which are extended from 2D to 3D for accurate description of the spatial relationship between femur and acetabulum. To evaluate the system performance, an experimental study based on 22 patients with unilaterally or bilaterally affected hips was performed. The 3D acetabular index (AI) results automatically provided by the system were validated by comparison with 2D results measured manually by surgeons. The correlation between the two results was found to be 0.622 (p<0.01).
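The segmentation step builds on fuzzy c-means; the study's knowledge-based additive adaptation penalty is not reproduced here. A minimal sketch of the unmodified fuzzy c-means baseline that the modified objective extends:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns (centers, membership matrix U).

    X: samples x features, c: number of clusters, m: fuzzifier (> 1).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        # centers are membership-weighted means
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # membership update from the standard objective
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

The knowledge-based variant would add a penalty term to the objective before deriving the update equations; the alternating-update structure stays the same.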
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
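The initial-rates idea, fitting the early, effectively linear portion of degradant formation and extrapolating to a specification limit, can be sketched as below. The zero-order model, the 0.5% limit, and the normal-approximation confidence interval are all simplifying assumptions for illustration, not details from the study.

```python
import numpy as np

def shelf_life_from_initial_rate(t, degradant_pct, spec_limit=0.5):
    """Fit a zero-order initial rate d = k*t (through the origin) and
    estimate the time to reach the specification limit, with a crude
    normal-approximation 95% interval derived from the slope's SE."""
    t = np.asarray(t, float)
    d = np.asarray(degradant_pct, float)
    k = (t @ d) / (t @ t)                       # through-origin slope
    resid = d - k * t
    se = np.sqrt(resid @ resid / (len(t) - 1) / (t @ t))
    k_lo, k_hi = k - 1.96 * se, k + 1.96 * se
    return spec_limit / k, (spec_limit / k_hi, spec_limit / k_lo)
```

Because degradant formation starts from zero, even small measured amounts pin down k tightly, which is why this route gives narrower shelf-life intervals than extrapolating parent-drug loss.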
Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2
NASA Technical Reports Server (NTRS)
Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.
1988-01-01
The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.
Murayama, Ryoko; Tanabe, Hidenori; Oe, Makoto; Motoo, Yoshiharu; Wagatsuma, Takanori; Michibuchi, Michiko; Kinoshita, Sachiko; Sakai, Keiko; Konya, Chizuko; Sugama, Junko; Sanada, Hiromi
2017-01-01
Early detection of extravasation is important, but conventional methods of detection lack objectivity and reliability. This study evaluated the predictive validity of thermography for identifying extravasation during intravenous antineoplastic therapy. Of 257 patients who received chemotherapy through peripheral veins, extravasation was identified in 26. Thermography was performed every 15 to 30 minutes during the infusions. Sensitivity, specificity, positive predictive value, and negative predictive value using thermography were 84.6%, 94.8%, 64.7%, and 98.2%, respectively. This study showed that thermography offers an accurate prediction of extravasation. PMID:29112585
PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations
Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri
2014-01-01
Single nucleotide variants represent a prevalent form of genetic variation. Mutations in coding regions are frequently associated with the development of various genetic diseases. Computational tools for predicting the effects of mutations on protein function are very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement are hindered by large overlaps between training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset, containing over 43,000 mutations, was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best-performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance; at the same time, the classifier returned results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
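The consensus idea, combining the calls of individual tools by vote so that a result is returned even when some tools fail on a given mutation, can be sketched minimally. PredictSNP itself uses confidence-weighted combination, so this unweighted majority vote is an illustrative simplification.

```python
def consensus_predict(predictions):
    """Majority-vote consensus over per-tool predictions.

    predictions: dict mapping tool name to 'deleterious', 'neutral',
    or None (tool returned no result for this mutation).
    Returns (call, fraction of reporting tools that agree)."""
    votes = [p for p in predictions.values() if p is not None]
    if not votes:
        return None, 0.0
    call = max(set(votes), key=votes.count)
    return call, votes.count(call) / len(votes)
```

Because tools returning None are simply excluded from the vote, the consensus still produces a call whenever at least one component tool does, matching the "results for all mutations" property described above.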
Sex differences in left/right confusion.
Jordan, Kirsten; Wüstenberg, Torsten; Jaspers-Feyer, Fern; Fellbrich, Anja; Peters, Michael
2006-01-01
In agreement with the literature, females (n=269) gave themselves significantly poorer ratings than males (n=164) in evaluating their ability to make fast and accurate left/right judgments. In order to evaluate the ecological validity of the self-ratings, subjects were tested on a task that required fast and accurate left/right judgments, on a mental rotation task, and on a task that required navigation of a virtual maze. The correlations between the performances and self-ratings were computed. Both males and females who gave themselves very poor LRC (left/right confusion) ratings had significantly lower accuracy scores on the left/right judgement task than males and females with average ratings, but there was no sex-specific relation between LRC ratings and left/right judgements that would explain why females give themselves lower LRC ratings. For females only, a weak correlation between LRC scores and the learning of the virtual maze was observed, but no significant correlations were observed between LRC scores and mental rotation performance. We conclude that self-ratings on left/right confusion questions, although they yield reliable sex differences, are poor predictors of actual performance on spatial tasks that involve left/right judgements. Thus, and in support of earlier speculations (Sholl and Egeth, 1981; Teng and Lee, 1982; Williams et al., 1993), the principal cause of the marked sex differences in LRC self-ratings likely lies in a greater willingness of females to rate themselves more poorly on questions of this type than is the case for men.
Evaluation of topographical and seasonal feature using GPM IMERG and TRMM 3B42 over Far-East Asia
NASA Astrophysics Data System (ADS)
Kim, Kiyoung; Park, Jongmin; Baik, Jongjin; Choi, Minha
2017-05-01
The acquisition of accurate precipitation data is essential for analyzing various hydrological phenomena and climate change. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing global precipitation characteristics. The main objective of this study is to assess precipitation products from GPM, especially the Integrated Multi-satellitE Retrievals (GPM-3IMERGHH), and the Tropical Rainfall Measurement Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), using gauge-based precipitation data from Far-East Asia during the pre-monsoon and monsoon seasons. The evaluation focused on three different factors: geographical aspects, seasonal factors, and spatial distributions. In both mountainous and coastal regions, the GPM-3IMERGHH product showed better performance than TRMM 3B42 V7, although both rainfall products showed uncertainties caused by orographic convection and the land-ocean classification algorithm. GPM-3IMERGHH performed about 8% better than TRMM 3B42 V7 during the pre-monsoon and monsoon seasons, owing to its improved onboard sensors and its reinforced ability to capture convective rainfall, respectively. In depicting the spatial distribution of precipitation, GPM-3IMERGHH was more accurate than TRMM 3B42 V7 because of its enhanced spatial and temporal resolutions of 10 km and 30 min, respectively. Based on these results, GPM-3IMERGHH should be helpful not only for understanding the characteristics of precipitation at high spatial and temporal resolution, but also for estimating near-real-time runoff patterns.
Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin
2009-09-14
Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD) method, have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. Additionally, the two algorithms, combined with standard addition procedures, have been applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. These second-order calibrations all adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. Both algorithms gave accurate results, with the performance of APTLD slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL) and limit of detection (LOD), were also calculated to compare the performance of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) by the PARAFAC and APTLD algorithms, respectively; the accuracy was evaluated by t-test, and both algorithms again gave accurate results.
A probabilistic and adaptive approach to modeling performance of pavement infrastructure
DOT National Transportation Integrated Search
2007-08-01
Accurate prediction of pavement performance is critical to pavement management agencies. Reliable and accurate predictions of pavement infrastructure performance can save significant amounts of money for pavement infrastructure management agencies th...
A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high resolution cryo-electron tomography. PMID:26455556
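The cross-validation idea rests on the Beer-Lambert law: the mean transmitted intensity at tilt angle θ should fall off exponentially with relative specimen thickness 1/cos θ, so log-intensity is linear in 1/cos θ, and the goodness of that fit scores the whole tilt series at once. A minimal sketch of this check (not the tomoAlignEval implementation):

```python
import numpy as np

def loglinear_quality(tilt_deg, mean_intensity):
    """R^2 of log(mean intensity) vs relative thickness 1/cos(tilt).

    Per Beer-Lambert, a well-behaved tilt series is log-linear in
    1/cos(tilt); deviations lower R^2 and flag problems globally."""
    x = 1.0 / np.cos(np.radians(np.asarray(tilt_deg, float)))
    y = np.log(np.asarray(mean_intensity, float))
    slope, intercept = np.polyfit(x, y, 1)
    fit = slope * x + intercept
    ss_res = ((y - fit) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

A series whose per-image mean intensities follow the expected attenuation scores near 1, while images with anomalous exposure or misbehaving intensity scaling pull the score down; the published method builds its alignment-quality measure on this same log-linear relationship.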
Lew, Henry L; Poole, John H; Lee, Eun Ha; Jaffe, David L; Huang, Hsiu-Chen; Brodd, Edward
2005-03-01
To evaluate whether driving simulator and road test evaluations can predict long-term driving performance, we conducted a prospective study on 11 patients with moderate to severe traumatic brain injury. Sixteen healthy subjects were also tested to provide normative values on the simulator at baseline. At their initial evaluation (time-1), subjects' driving skills were measured during a 30-minute simulator trial using an automated 12-measure Simulator Performance Index (SPI), while a trained observer also rated their performance using a Driving Performance Inventory (DPI). In addition, patients were evaluated on the road by a certified driving evaluator. Ten months later (time-2), family members observed patients driving for at least 3 hours over 4 weeks and rated their driving performance using the DPI. At time-1, patients were significantly impaired on automated SPI measures of driving skill, including: speed and steering control, accidents, and vigilance to a divided-attention task. These simulator indices significantly predicted the following aspects of observed driving performance at time-2: handling of automobile controls, regulation of vehicle speed and direction, higher-order judgment and self-control, as well as a trend-level association with car accidents. Automated measures of simulator skill (SPI) were more sensitive and accurate than observational measures of simulator skill (DPI) in predicting actual driving performance. To our surprise, the road test results at time-1 showed no significant relation to driving performance at time-2. Simulator-based assessment of patients with brain injuries can provide ecologically valid measures that, in some cases, may be more sensitive than a traditional road test as predictors of long-term driving performance in the community.
Learning, memory, and the role of neural network architecture.
Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M
2011-06-01
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the most simple linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz
2012-01-01
The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.
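The "20 watt zone" statistic above is simply the fraction of time spent within 10 W either side of the ride's average power. A minimal sketch, using a short hypothetical power trace (the values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical 1 Hz power readings (W) from a time trial; values are illustrative
power = np.array([240, 255, 248, 262, 251, 249, 244, 258, 253, 250], dtype=float)

mean_power = power.mean()                      # 251.0 W for this trace
in_zone = np.abs(power - mean_power) <= 10.0   # inside mean ± 10 W ("20 W zone")
pct_in_zone = 100.0 * in_zone.mean()           # → 80.0 for this trace
```

Comparing this percentage across feedback conditions quantifies pacing variability in the practically meaningful unit the authors propose.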
ORSphere: Physics Measurements for a Bare HEU (93.2)-Metal Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.; Bess, John D.; Briggs, J. Blair
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files” (Reference 1). While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 inches), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the proximity of the sphere. Two correlated spheres were evaluated and judged to be acceptable as criticality benchmark experiments. This evaluation is given in HEU-MET-FAST-100. The second, smaller sphere was used for additional reactor physics measurements. Worth measurements (References 1, 2, 3 and 4), the delayed neutron fraction (References 3, 4 and 5), and the surface material worth coefficient (References 1 and 2) were all measured and judged to be acceptable as benchmark data. The prompt neutron decay (Reference 6), relative fission density (Reference 7), and relative neutron importance (Reference 7) were measured, but are not evaluated. Information for the evaluation was compiled from References 1 through 7; the experimental logbooks (8 and 9); additional drawings and notes provided by the experimenter; and communication with the lead experimenter, John T. Mihalczo.
Cacho, J; Sevillano, J; de Castro, J; Herrera, E; Ramos, M P
2008-11-01
Insulin resistance plays a role in the pathogenesis of diabetes, including gestational diabetes. The glucose clamp is considered the gold standard for determining in vivo insulin sensitivity, both in human and in animal models. However, the clamp is laborious, time consuming and, in animals, requires anesthesia and collection of multiple blood samples. In human studies, a number of simple indexes, derived from fasting glucose and insulin levels, have been obtained and validated against the glucose clamp. However, these indexes have not been validated in rats and their accuracy in predicting altered insulin sensitivity remains to be established. In the present study, we have evaluated whether indirect estimates based on fasting glucose and insulin levels are valid predictors of insulin sensitivity in nonpregnant and 20-day-pregnant Wistar and Sprague-Dawley rats. We have analyzed the homeostasis model assessment of insulin resistance (HOMA-IR), the quantitative insulin sensitivity check index (QUICKI), and the fasting glucose-to-insulin ratio (FGIR) by comparing them with the insulin sensitivity (SI(Clamp)) values obtained during the hyperinsulinemic-isoglycemic clamp. We have performed a calibration analysis to evaluate the ability of these indexes to accurately predict insulin sensitivity as determined by the reference glucose clamp. Finally, to assess the reliability of these indexes for the identification of animals with impaired insulin sensitivity, performance of the indexes was analyzed by receiver operating characteristic (ROC) curves in Wistar and Sprague-Dawley rats. We found that HOMA-IR, QUICKI, and FGIR correlated significantly with SI(Clamp), exhibited good sensitivity and specificity, accurately predicted SI(Clamp), and yielded lower insulin sensitivity in pregnant than in nonpregnant rats. Together, our data demonstrate that these indexes provide an easy and accurate measure of insulin sensitivity during pregnancy in the rat.
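The three surrogate indexes compared above have simple closed forms built from fasting glucose and insulin. The sketch below uses the standard human-study formulas (HOMA-IR with the 22.5 denominator, QUICKI as a reciprocal log sum, FGIR as a plain ratio); constants and units may be rescaled in rodent work, and the fasting values shown are illustrative, not the study's data.

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    # Homeostasis model assessment of insulin resistance (standard human formula)
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    # Quantitative insulin sensitivity check index
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def fgir(glucose_mg_dl, insulin_uU_ml):
    # Fasting glucose-to-insulin ratio
    return glucose_mg_dl / insulin_uU_ml

# Illustrative fasting values (not from the study)
g_mg, ins = 90.0, 10.0
g_mmol = g_mg / 18.0  # mg/dL → mmol/L conversion for HOMA-IR
indexes = (homa_ir(g_mmol, ins), quicki(g_mg, ins), fgir(g_mg, ins))
```

Higher HOMA-IR and lower QUICKI/FGIR indicate reduced insulin sensitivity, which is the direction of change the authors report in pregnant versus nonpregnant rats.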
Miszalski-Jamka, Tomasz; Kuntz-Hehner, Stefanie; Schmidt, Harald; Hammerstingl, Christoph; Tiemann, Klaus; Ghanem, Alexander; Troatz, Clemens; Lüderitz, Berndt; Omran, Heyder
2007-07-01
Myocardial contrast echocardiography (MCE) is a new imaging modality for diagnosing coronary artery disease (CAD). The aim of our study was to evaluate the feasibility of qualitative myocardial contrast replenishment (RP) assessment during supine bicycle stress MCE and to determine cutoff values for such analysis that could allow accurate detection of CAD. Forty-four consecutive patients scheduled for coronary angiography (CA) underwent supine bicycle stress two-dimensional echocardiography (2DE). During the same session, MCE was performed at peak stress and post stress. An ultrasound contrast agent (SonoVue) was administered in continuous mode using an infusion pump (BR-INF 100, Bracco Research). A seventeen-segment model of the left ventricle was used in the analysis. MCE was assessed off-line in terms of myocardial contrast opacification and RP. RP was evaluated on the basis of the number of cardiac cycles required to refill a segment with contrast after its prior destruction with high-power frames. Cutoff values for RP assessment were determined by means of reference intervals and receiver operating characteristic analysis. Quantitative CA was carried out using the CAAS system. MCE could be assessed in 42 patients. CA revealed CAD in 25 patients. The calculated cutoff values for RP analysis (peak-stress RP >3 cardiac cycles and a difference between peak-stress and post-stress RP >0 cardiac cycles) provided sensitive (88%) and accurate (88%) detection of CAD. Sensitivity and accuracy of 2DE were 76% and 79%, respectively. Qualitative RP analysis based on the number of cardiac cycles required to refill the myocardium with contrast is feasible during supine bicycle stress MCE and enables accurate detection of CAD.
Suomi NPP OMPS limb profiler initial sensor performance assessment
NASA Astrophysics Data System (ADS)
Jaross, Glen; Chen, Grace; Kowitt, Mark; Warner, Jeremy; Xu, Philippe; Kelly, Thomas; Linda, Michael; Flittner, David
2012-11-01
Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft, the NASA OMPS Limb team began an evaluation of sensor and data product performance in relation to the original goals for this instrument. Does the sensor design work as well as expected, and can limb scatter measurements by NPP OMPS and successor instruments form the basis for accurate long-term monitoring of ozone vertical profiles? While this paper does not address the latter question, the answer to the former is a qualified Yes given this early stage of the mission.
A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors
NASA Astrophysics Data System (ADS)
Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia
This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the Wideband CDMA strategy employed in UMTS, and the model permits the performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, which allows the various kinds of interference to be modeled accurately. Some numerical examples are given with reference to the ITU-R M.1225 Recommendation, but the analysis could be extended to different channel descriptions.
Research on Multi-Temporal PolInSAR Modeling and Applications
NASA Astrophysics Data System (ADS)
Hong, Wen; Pottier, Eric; Chen, Erxue
2014-11-01
In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated with multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in reducing range ambiguities.
Generation of calibrated tungsten target x-ray spectra: modified TBC model.
Costa, Paulo R; Nersissian, Denise Y; Salvador, Fernanda C; Rio, Patrícia B; Caldas, Linda V E
2007-01-01
In spite of the recent advances in the experimental detection of x-ray spectra, theoretical or semi-empirical approaches for determining realistic x-ray spectra in the range of diagnostic energies are important tools for planning experiments, estimating radiation doses in patients, and formulating radiation shielding models. The TBC model is one of the most useful approaches since it allows for straightforward computer implementation, and it is able to accurately reproduce the spectra generated by tungsten target x-ray tubes. However, as originally presented, the TBC model fails in situations where the determination of x-ray spectra produced by an arbitrary waveform or the calculation of realistic values of air kerma for a specific x-ray system is desired. In the present work, the authors revisited the assumptions used in the original TBC paper and proposed a complementary formulation that takes the waveform into account and expresses the calculated spectra in a dosimetric quantity. The performance of the proposed model was evaluated by comparing values of air kerma and first and second half-value layers from calculated and measured spectra using different voltages and filtrations. For the output, the difference between experimental and calculated data was better than 5.2%. First and second half-value layers presented differences of 23.8% and 25.5% in the worst case. The model calculated these data more accurately at lower voltage values. Comparisons were also performed with spectral data measured using a CZT detector. A further test evaluated the model with a waveform distinct from a constant potential. In all cases the model results can be considered a good representation of the measured data.
The results from the modifications to the TBC model introduced in the present work reinforce the value of the TBC model for application of quantitative evaluations in radiation physics.
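The first and second half-value layers used as figures of merit above can be read off a measured or calculated transmission curve by interpolation. A minimal sketch, assuming a hypothetical single-exponential air-kerma curve for simplicity (a real diagnostic beam hardens as it is filtered, so its second HVL exceeds its first):

```python
import numpy as np

# Hypothetical transmission curve: air kerma vs added aluminum filtration (mm).
# A single exponential is assumed here purely for illustration.
thickness = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
kerma = 100.0 * np.exp(-0.28 * thickness)

def half_value_layers(t, k):
    """First and second HVLs by log-linear interpolation of the kerma curve."""
    logk = np.log(k)
    # np.interp needs ascending x, and log-kerma decreases with thickness
    t50 = np.interp(np.log(0.50 * k[0]), logk[::-1], t[::-1])
    t25 = np.interp(np.log(0.25 * k[0]), logk[::-1], t[::-1])
    return t50, t25 - t50

hvl1, hvl2 = half_value_layers(thickness, kerma)  # both ≈ ln(2)/0.28 mm here
```

Comparing HVLs computed this way from calculated versus measured spectra is the kind of dosimetric check the evaluation above performs.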
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Deschamps, Frederic; Garcia Marques de Carvalho, Enio
2011-04-15
Purpose: This study was designed to evaluate the spatial accuracy of matching volumetric computed tomography (CT) data of hepatic metastases with real-time ultrasound (US) using a fusion imaging system (VNav) under different clinical settings. Methods: Twenty-four patients with one hepatic tumor identified on enhanced CT and US were prospectively enrolled. A set of three landmark markers was chosen on CT and US for image registration. US and CT images were then superimposed using the fusion imaging display mode. The difference in spatial location between the tumor visible on CT and on US in the overlay images was measured along the lateral, anterior-posterior, and vertical axes. The maximum difference (Dmax) was evaluated for different predictive factors: CT performed 1-30 days before registration versus immediately before; use of general anesthesia for CT and US versus no anesthesia; and anatomic landmarks versus landmarks that included at least one nonanatomic structure, such as a cyst or a calcification. Results: Overall, Dmax was 11.53 ± 8.38 mm. Dmax was 6.55 ± 7.31 mm with CT performed immediately before VNav versus 17.4 ± 5.18 mm with CT performed 1-30 days before (p < 0.0001). Dmax was 7.05 ± 6.95 mm under general anesthesia and 16.81 ± 6.77 mm without anesthesia (p < 0.0015). Landmarks including at least one nonanatomic structure increased Dmax by 5.2 mm (p < 0.0001). The lowest Dmax (1.9 ± 1.4 mm) was obtained when CT and VNav were performed under general anesthesia, one immediately after the other. Conclusions: VNav is accurate when an adequate clinical setup is carefully selected. Only under these conditions can liver tumors not identified on US be accurately targeted for biopsy or radiofrequency ablation using fusion imaging.
Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank
2018-02-01
The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series, whereas Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71), whereas Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic.
An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
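The series/parallel combination of test characteristics described above follows the usual textbook rules under an assumption of conditional independence: a series rule (all tests must be positive) raises specificity, while a parallel rule (any positive counts) raises sensitivity. A minimal sketch; the single-test numbers below are placeholders, not the paper's pooled values:

```python
def combine(tests, mode):
    """Combine (sensitivity, specificity) pairs of independent tests.

    'series': call positive only if every test is positive.
    'parallel': call positive if any test is positive.
    Assumes conditional independence between tests.
    """
    if mode == "series":
        sens, fpr = 1.0, 1.0
        for sn, sp in tests:
            sens *= sn           # all tests must detect the disease
            fpr *= (1.0 - sp)    # a false positive needs every test to err
        return sens, 1.0 - fpr
    miss, spec = 1.0, 1.0
    for sn, sp in tests:
        miss *= (1.0 - sn)       # a miss needs every test to miss
        spec *= sp               # a true negative needs every test to agree
    return 1.0 - miss, spec

# Illustrative single-test characteristics (placeholders, not the paper's data)
paxinos, obrien = (0.79, 0.50), (0.41, 0.95)

sens_s, spec_s = combine([paxinos, obrien], "series")    # specificity rises
sens_p, spec_p = combine([paxinos, obrien], "parallel")  # sensitivity rises
lr_pos = sens_s / (1.0 - spec_s)  # positive likelihood ratio of the series rule
```

Likelihood ratios derived this way (LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity) are what drive the post-test probabilities in the decision tree.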
Multifaceted bench comparative evaluation of latest intensive care unit ventilators.
Garnier, M; Quesnel, C; Fulgencio, J-P; Degrain, M; Carteaux, G; Bonnet, F; Similowski, T; Demoule, A
2015-07-01
Independent bench studies using specific ventilation scenarios allow testing of the performance of ventilators in conditions similar to clinical settings. The aims of this study were to determine the accuracy of the latest generation ventilators to deliver chosen parameters in various typical conditions and to provide clinicians with a comprehensive report on their performance. Thirteen modern intensive care unit ventilators were evaluated on the ASL5000 test lung with and without leakage for: (i) accuracy to deliver exact tidal volume (VT) and PEEP in assist-control ventilation (ACV); (ii) performance of trigger and pressurization in pressure support ventilation (PSV); and (iii) quality of non-invasive ventilation algorithms. In ACV, only six ventilators delivered an accurate VT and nine an accurate PEEP. Eleven devices failed to compensate VT and four the PEEP in leakage conditions. Inspiratory delays differed significantly among ventilators in invasive PSV (range 75-149 ms, P=0.03) and non-invasive PSV (range 78-165 ms, P<0.001). The percentage of the ideal curve (concomitantly evaluating the pressurization speed and the levels of pressure reached) also differed significantly (range 57-86% for invasive PSV, P=0.04; and 60-90% for non-invasive PSV, P<0.001). Non-invasive ventilation algorithms efficiently prevented the decrease in pressurization capacities and PEEP levels induced by leaks in, respectively, 10 and 12 out of the 13 ventilators. We observed real heterogeneity of performance amongst the latest generation of intensive care unit ventilators. Although non-invasive ventilation algorithms appear to maintain adequate pressurization efficiently in the case of leakage, basic functions, such as delivered VT in ACV and pressurization in PSV, are often less reliable than the values displayed by the device suggest. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. 
Wagner, Karoline; Springer, Burkard; Imkamp, Frank; Opota, Onya; Greub, Gilbert; Keller, Peter M
2018-04-01
Pneumonia is a severe infectious disease. In addition to common viruses and bacterial pathogens (e.g. Streptococcus pneumoniae), fastidious respiratory pathogens like Chlamydia pneumoniae, Mycoplasma pneumoniae and Legionella spp. can cause severe atypical pneumonia. They do not respond to penicillin derivatives, which may cause failure of empirical antibiotic therapy. The same applies for infections with B. pertussis and B. parapertussis, the cause of pertussis disease, which may present atypically and need to be treated with macrolides. Moreover, these fastidious bacteria are difficult to identify by culture or serology, and therefore often remain undetected. Thus, rapid and accurate identification of bacterial pathogens causing atypical pneumonia is crucial. We performed a retrospective method evaluation study to evaluate the diagnostic performance of the new, commercially available Lightmix® multiplex RT-PCR assay that detects these fastidious bacterial pathogens causing atypical pneumonia. In this retrospective study, 368 clinical respiratory specimens, obtained from patients suffering from atypical pneumonia that had tested negative for the presence of common agents of pneumonia by culture and viral PCR, were investigated. These clinical specimens had been previously characterized by singleplex RT-PCR assays in our diagnostic laboratory and were used to evaluate the diagnostic performance of the respiratory multiplex Lightmix® RT-PCR. The multiplex RT-PCR displayed a limit of detection between 5 and 10 DNA copies for different in-panel organisms and showed identical performance characteristics with respect to specificity and sensitivity as in-house singleplex RT-PCRs for pathogen detection. The Lightmix® multiplex RT-PCR assay represents a low-cost, time-saving and accurate diagnostic tool with high-throughput potential.
The time-to-result using an automated DNA extraction device for respiratory specimens followed by multiplex RT-PCR detection was below 4 h, which is expected to significantly improve diagnostics for atypical pneumonia-associated bacterial pathogens. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.
van Leersum, M; Schweitzer, M E; Gannon, F; Finkel, G; Vinitski, S; Mitchell, D G
1996-11-01
To develop MR criteria for grades of chondromalacia patellae and to assess the accuracy of these grades. Fat-suppressed T2-weighted double-echo, fat-suppressed T2-weighted fast spin echo, fat-suppressed T1-weighted, and gradient echo sequences were performed at 1.5 T for the evaluation of chondromalacia. A total of 1000 MR, 200 histologic, and 200 surface locations were graded for chondromalacia and statistically compared. Compared with both gross inspection and histology, the most accurate sequences were fat-suppressed T2-weighted conventional spin echo and fat-suppressed T2-weighted fast spin echo, although the T1-weighted and proton density images also correlated well. The most accurate MR criteria applied to the severe grades of chondromalacia, with less accurate results for lesser grades. This study demonstrates that fat-suppressed routine T2-weighted and fast spin echo T2-weighted sequences seem to be more accurate than proton density, T1-weighted, and gradient echo sequences in grading chondromalacia. Good histologic and macroscopic correlation was seen in the more severe grades of chondromalacia, but problems remain for the early grades in all sequences studied.
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
2005-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
1999-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25% of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
Controller evaluations of the descent advisor automation aid
NASA Technical Reports Server (NTRS)
Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz
1989-01-01
An automation aid to assist air traffic controllers in efficiently spacing traffic and meeting arrival times at a fix has been developed at NASA Ames Research Center. The automation aid, referred to as the descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent point and speed profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is interfaced with a mouse-based, menu-driven controller display that allows the air traffic controller to interactively use its accurate predictive capability to resolve conflicts and issue advisories to arrival aircraft. This paper focuses on operational issues concerning the utilization of the DA, specifically, how the DA can be used for prediction, in-trail spacing, and metering. In order to evaluate the DA, a real-time simulation was conducted using both current and retired controller subjects. Controllers operated in teams of two, as they do in the present environment; issues of training and team interaction are discussed. Evaluations by controllers indicated considerable enthusiasm for the DA aid and provided specific recommendations for using the tool effectively.
Efficient field testing for load rating railroad bridges
NASA Astrophysics Data System (ADS)
Schulz, Jeffrey L.; Commander, Brett C.
1995-06-01
As the condition of our infrastructure continues to deteriorate, and the loads carried by our bridges continue to increase, an ever-growing number of railroad and highway bridges require load limits. With safety and transportation costs at both ends of the spectrum, the need for accurate load rating is paramount. This paper describes a method that has been developed for efficient load testing and evaluation of short- and medium-span bridges. Through the use of a specially designed structural testing system and efficient load test procedures, a typical bridge can be instrumented and tested at 64 points in less than one working day and with minimum impact on rail traffic. Various techniques are available to evaluate structural properties and obtain a realistic model. With field data, a simple finite element model is 'calibrated' and its accuracy is verified. Appropriate design and rating loads are applied to the resulting model and stress predictions are made. This technique has been performed on numerous structures to address specific problems and to provide accurate load ratings. The merits and limitations of this approach are discussed in the context of actual examples of both rail and highway bridges that were tested and evaluated.
Impact of study design on development and evaluation of an activity-type classifier.
van Hees, Vincent T; Golubic, Rajna; Ekelund, Ulf; Brage, Søren
2013-04-01
Methods to classify activity types are often evaluated with an experimental protocol involving prescribed physical activities under confined (laboratory) conditions, which may not reflect real-life conditions. The present study aims to evaluate how study design may affect classifier performance in real life. Twenty-eight healthy participants (21-53 yr) were asked to wear nine triaxial accelerometers while performing 58 activity types selected to simulate activities in real life. For each sensor location, logistic classifiers were trained in subsets of up to 8 activities to distinguish between walking and nonwalking activities and were then evaluated in all 58 activities. Different weighting factors were used to convert the resulting confusion matrices into an estimation of the confusion matrix as it would apply in the real-life setting, by creating four different real-life scenarios as well as one traditional laboratory scenario. The sensitivity of a classifier estimated with a traditional laboratory protocol is within the range of estimates derived from real-life scenarios for any body location. The specificity, however, was systematically overestimated by the traditional laboratory scenario. Walking time was systematically overestimated, except for lower back sensor data (range: 7-757%). In conclusion, classifier performance under confined conditions may not accurately reflect classifier performance in real life. Future studies that aim to evaluate activity classification methods should pay special attention to the representativeness of experimental conditions for real-life conditions.
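The reweighting step described above can be sketched in a few lines: a laboratory confusion matrix (per-activity rates of being classified as walking) is converted into a real-life estimate by weighting each activity with an assumed time-use prevalence. The function and the example rates and prevalences below are illustrative assumptions, not the study's data.

```python
# Sketch (not the authors' code): reweight laboratory classification rates
# by assumed real-life activity prevalences to estimate real-life
# sensitivity and specificity of a walking / non-walking classifier.

def reweight_confusion(lab_rates, prevalence):
    """lab_rates: {activity: (p_classified_walking, is_walking)};
    prevalence: {activity: assumed real-life time fraction}."""
    tp = fn = fp = tn = 0.0
    for act, (p_walk, is_walking) in lab_rates.items():
        w = prevalence[act]
        if is_walking:
            tp += w * p_walk
            fn += w * (1.0 - p_walk)
        else:
            fp += w * p_walk
            tn += w * (1.0 - p_walk)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical lab results: walking recognized 90% of the time; sitting
# misclassified as walking 5% of the time, standing 20%.
lab = {"walk": (0.90, True), "sit": (0.05, False), "stand": (0.20, False)}
office_day = {"walk": 0.1, "sit": 0.7, "stand": 0.2}  # assumed prevalences
sens, spec = reweight_confusion(lab, office_day)
```

Because non-walking activities dominate real-life time use, misclassification of common sedentary activities drags the reweighted specificity below the laboratory estimate, which is the effect the study reports.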
Space Station Freedom electrical performance model
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Green, Robert D.; Kerslake, Thomas W.; Mckissock, David B.; Trudell, Jeffrey J.
1993-01-01
The baseline Space Station Freedom electric power system (EPS) employs photovoltaic (PV) arrays and nickel hydrogen (NiH2) batteries to supply power to housekeeping and user electrical loads via a direct current (dc) distribution system. The EPS was originally designed for an operating life of 30 years through orbital replacement of components. As the design and development of the EPS continues, accurate EPS performance predictions are needed to assess design options, operating scenarios, and resource allocations. To meet these needs, NASA Lewis Research Center (LeRC) has, over a 10 year period, developed SPACE (Station Power Analysis for Capability Evaluation), a computer code designed to predict EPS performance. This paper describes SPACE, its functionality, and its capabilities.
Zhang, Li; Liu, Haiyu; Qin, Lingling; Zhang, Zhixin; Wang, Qing; Zhang, Qingqing; Lu, Zhiwei; Wei, Shengli; Gao, Xiaoyan; Tu, Pengfei
2015-02-01
A global chemical profiling based quality evaluation approach using ultra performance liquid chromatography with tandem quadrupole time-of-flight mass spectrometry was developed for the quality evaluation of three rhubarb species, including Rheum palmatum L., Rheum tanguticum Maxim. ex Balf., and Rheum officinale Baill. Considering that comprehensive detection of chemical components is crucial for the global profile, a systemic column performance evaluation method was developed. Based on this, a Cortecs column was used to acquire the chemical profile, and Chempattern software was employed to conduct similarity evaluation and hierarchical cluster analysis. The results showed R. tanguticum could be differentiated from R. palmatum and R. officinale at the similarity value 0.65, but R. palmatum and R. officinale could not be distinguished effectively. Therefore, a common pattern based on three rhubarb species was developed to conduct the quality evaluation, and the similarity value 0.50 was set as an appropriate threshold to control the quality of rhubarb. A total of 88 common peaks were identified by their accurate mass and fragmentation, and partially verified by reference standards. Through the verification, the newly developed method could be successfully used for evaluating the holistic quality of rhubarb. It would provide a reference for the quality control of other herbal medicines. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
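Similarity evaluation of chromatographic profiles against a common pattern, as described above, typically reduces to a congruence (cosine) coefficient between peak-area vectors. The study used ChemPattern software; the function and peak areas below are a generic, made-up stand-in, with the paper's 0.50 threshold applied only for illustration.

```python
# Sketch (not the ChemPattern implementation): cosine similarity between a
# sample's peak-area vector and a common pattern; peak areas are made up.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

common_pattern = [1.0, 0.8, 0.5, 0.2]   # hypothetical common-peak areas
sample = [0.9, 0.85, 0.4, 0.25]         # hypothetical sample areas
s = cosine_similarity(common_pattern, sample)
passes_quality = s >= 0.50              # threshold proposed in the study
```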
Chen, Shuonan; Mar, Jessica C
2018-06-19
A fundamental fact in biology states that genes do not operate in isolation, and yet, methods that infer regulatory networks for single cell gene expression data have been slow to emerge. With single cell sequencing methods now becoming accessible, general network inference algorithms that were initially developed for data collected from bulk samples may not be suitable for single cells. Meanwhile, although methods that are specific for single cell data are now emerging, whether they have improved performance over general methods is unknown. In this study, we evaluate the applicability of five general methods and three single cell methods for inferring gene regulatory networks from both experimental single cell gene expression data and in silico simulated data. Standard evaluation metrics using ROC curves and precision-recall curves against reference sets sourced from the literature demonstrated that most of the methods performed poorly when applied to either experimental or simulated single cell data. Using default settings, network methods were applied to the same datasets. Comparisons of the learned networks highlighted the uniqueness of some predicted edges for each method. The fact that different methods infer networks that vary substantially reflects the underlying mathematical rationale and assumptions that distinguish network methods from each other. This study provides a comprehensive evaluation of network modeling algorithms applied to experimental single cell gene expression data and in silico simulated datasets where the network structure is known. Comparisons demonstrate that most of these assessed network methods are not able to predict network structures from single cell expression data accurately, even if they were specifically developed for single cell data.
Also, single cell methods, which usually depend on more elaborate algorithms, in general show less similarity to each other in the sets of edges detected. The results from this study emphasize the importance of developing more accurate, optimized network modeling methods that are compatible with single cell data. Newly developed single cell methods may uniquely capture particular features of potential gene-gene relationships, and caution should be taken when interpreting these results.
Evaluation of dental pulp sensibility tests in a clinical setting.
Jespersen, James J; Hellstein, John; Williamson, Anne; Johnson, William T; Qian, Fang
2014-03-01
The goal of this project was to evaluate the performance of dental pulp sensibility testing with Endo Ice (1,1,1,2-tetrafluoroethane) and an electric pulp tester (EPT) and to determine the effect of several variables on the reliability of these tests. Data were collected from 656 patients seen in the University of Iowa College of Dentistry graduate endodontic clinic. The results of pulpal sensibility tests, along with the tooth number, age, sex, number of restored surfaces, presence or absence of clinical or radiographic caries, and reported recent use of analgesic medications, were recorded. The presence of vital tissue within the pulp chamber was used to verify the diagnosis. The Endo Ice results showed accuracy, 0.904; sensitivity, 0.916; specificity, 0.896; positive predictive value, 0.862; and negative predictive value, 0.937. The EPT results showed accuracy, 0.75; sensitivity, 0.84; specificity, 0.74; positive predictive value, 0.58; and negative predictive value, 0.90. Patients aged 21-50 years exhibited a more accurate response to cold testing (P = .0043). Vital teeth with caries responded more accurately to cold testing (P = .0077). No statistically significant difference was noted for any other variable examined. Pulpal sensibility testing with Endo Ice and EPT is an accurate and reliable method of determining pulpal vitality. Patients aged 21-50 exhibited a more accurate response to cold. Sex, tooth type, number of restored surfaces, presence of caries, and recent analgesic use did not significantly alter the results of pulpal sensibility testing in this study. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
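The five reported statistics derive from a standard 2x2 contingency table of test result versus verified pulp status. A minimal sketch with hypothetical counts (the paper reports only the derived values, not the underlying table):

```python
# Sketch: standard diagnostic-test metrics from a 2x2 contingency table.
# The counts below are hypothetical, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=80, fp=10, fn=5, tn=105)
```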
Hodges, P W; Kippers, V; Richardson, C A
1997-01-01
Fine-wire electromyography is primarily utilised for recording the activity of deep musculature; however, due to the location of these muscles, accurate electrode placement is difficult. Real-time ultrasound imaging (RTUI) of muscle tissue has been used to guide needle insertion for the placement of electrodes into the muscles of the abdominal wall. The validity of RTUI guidance of needle insertion into the deep muscles has not been determined. A cadaveric study was conducted to evaluate the accuracy with which RTUI can be used to guide fine-wire electrode placement, using the posterior fibres of gluteus medius (PGM) as an example. Pilot studies revealed that the ultrasound resolution of cadaveric tissue is markedly reduced, making it impossible to evaluate the technique directly; therefore, three studies were conducted. An initial study involved the demarcation of the anatomical boundaries of PGM using RTUI to define a technique based on an anatomical landmark that was consistent with the in vivo RTUI-guided needle placement technique. This anatomical landmark was then used as the guide for the cadaveric needle insertion. Once the needle was positioned, 0.05 ml of dye was introduced and the specimen dissected. The dye was accurately placed in PGM in 100% of the specimens. Finally, fine-wire electrodes were inserted into the PGM of five volunteers and manoeuvres performed indicating the accuracy of placement. This study supports the use of ultrasound imaging for the accurate guidance of needle insertion for fine-wire and needle EMG electrodes.
Evaluation of downscaled, gridded climate data for the conterminous United States
Behnke, Robert J.; Vavrus, Stephen J.; Allstadt, Andrew; Albright, Thomas P.; Thogmartin, Wayne E.; Radeloff, Volker C.
2016-01-01
Weather and climate affect many ecological processes, making spatially continuous yet fine-resolution weather data desirable for ecological research and predictions. Numerous downscaled weather data sets exist, but little attempt has been made to evaluate them systematically. Here we address this shortcoming by focusing on four major questions: (1) How accurate are downscaled, gridded climate data sets in terms of temperature and precipitation estimates?, (2) Are there significant regional differences in accuracy among data sets?, (3) How accurate are their mean values compared with extremes?, and (4) Does their accuracy depend on spatial resolution? We compared eight widely used downscaled data sets that provide gridded daily weather data for recent decades across the United States. We found considerable differences among data sets and between downscaled and weather station data. Temperature is represented more accurately than precipitation, and climate averages are more accurate than weather extremes. The data set exhibiting the best agreement with station data varies among ecoregions. Surprisingly, the accuracy of the data sets does not depend on spatial resolution. Although some inherent differences among data sets and weather station data are to be expected, our findings highlight how much different interpolation methods affect downscaled weather data, even for local comparisons with nearby weather stations located inside a grid cell. More broadly, our results highlight the need for careful consideration among different available data sets in terms of which variables they describe best, where they perform best, and their resolution, when selecting a downscaled weather data set for a given ecological application.
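The station-versus-grid accuracy assessments described above boil down to simple agreement statistics between the gridded estimate and the nearby station record: bias, root-mean-square error, and mean absolute error. A minimal sketch with synthetic daily temperatures (not the study's data):

```python
# Sketch: agreement statistics between a downscaled gridded product and a
# co-located weather station; all values below are synthetic.
import math

def compare_to_station(gridded, station):
    diffs = [g - s for g, s in zip(gridded, station)]
    n = len(diffs)
    bias = sum(diffs) / n                                  # mean error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)        # RMS error
    mae = sum(abs(d) for d in diffs) / n                   # mean abs error
    return bias, rmse, mae

station_t = [10.0, 12.5, 8.0, 15.0]   # synthetic station temperatures (C)
grid_t = [10.5, 12.0, 9.0, 15.5]      # synthetic gridded estimates (C)
bias, rmse, mae = compare_to_station(grid_t, station_t)
```

Computing these statistics separately for means and for extremes (e.g. the warmest and coldest days) reproduces the kind of mean-versus-extreme comparison the study makes.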
Evaluation of on-board hydrogen storage methods for high-speed aircraft
NASA Technical Reports Server (NTRS)
Akyurtlu, Ates; Akyurtlu, Jale F.
1991-01-01
Hydrogen is the fuel of choice for hypersonic vehicles. Its main disadvantage is its low liquid and solid density. This increases the vehicle volume and hence the drag losses during atmospheric flight. In addition, the dry mass of the vehicle is larger due to larger vehicle structure and fuel tankage. Therefore it is very desirable to find a fuel system with smaller fuel storage requirements that does not substantially degrade vehicle performance. The candidate fuel systems were first screened thermodynamically with respect to their energy content and cooling capacities. To evaluate the vehicle performance with different fuel systems, a simple computer model was developed to compute vehicle parameters such as the vehicle volume, dry mass, effective specific impulse, and payload capacity. The results indicate that if the payload capacity (or the gross lift-off mass) is the most important criterion, only slush hydrogen and liquid hydrogen-liquid methane gel show better performance than the liquid hydrogen vehicle. If all the advantages of a smaller vehicle are considered and a more accurate mass analysis can be performed, other systems using endothermic fuels such as cyclohexane and some boranes may prove to be worthy of further consideration.
Kakinuma, Ryutaro; Kodama, Ken; Yamada, Kouzo; Yokoyama, Akira; Adachi, Shuji; Mori, Kiyoshi; Fukuyama, Yasuro; Fukuda, Yasuro; Kuriyama, Keiko; Oda, Junichi; Oda, Junji; Noguchi, Masayuki; Matsuno, Yoshihiro; Yokose, Tomoyuki; Ohmatsu, Hironobu; Nishiwaki, Yutaka
2008-01-01
To evaluate the performance of 4 methods of measuring the extent of ground-glass opacities as a means of predicting the 5-year relapse-free survival of patients with peripheral non-small cell lung cancer (NSCLC). Ground-glass opacities on thin-section computed tomographic images of 120 peripheral NSCLCs were measured at 7 medical institutions by the length, area, modified length, and vanishing ratio (VR) methods. The performance (Az) of each method in predicting the 5-year relapse-free survival was evaluated using receiver operating characteristic (ROC) analysis. The mean Az values obtained by the length, area, modified length, and VR methods were 0.683, 0.702, 0.728, and 0.784, respectively. The differences between the mean Az value obtained by the VR method and those obtained by the other 3 methods were significant. The VR method was the most accurate predictor of the 5-year relapse-free survival of patients with peripheral NSCLC.
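The Az statistic in the ROC analysis above is the area under the ROC curve, which for a continuous predictor equals the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with made-up scores (not the study's measurements):

```python
# Sketch: area under the ROC curve (Az) via the Mann-Whitney U formulation.
# The scores below are made up for illustration.

def auc(scores_pos, scores_neg):
    """Probability that a positive case outscores a negative one
    (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

az = auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.4])
```

An Az of 0.5 corresponds to chance-level prediction and 1.0 to perfect separation, which frames the study's comparison of 0.683 through 0.784.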
Quantitative evaluation of pairs and RS steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2004-06-01
We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
Kondoh, Shun; Chiba, Hirofumi; Nishikiori, Hirotaka; Umeda, Yasuaki; Kuronuma, Koji; Otsuka, Mitsuo; Yamada, Gen; Ohnishi, Hirofumi; Mori, Mitsuru; Kondoh, Yasuhiro; Taniguchi, Hiroyuki; Homma, Sakae; Takahashi, Hiroki
2016-09-01
The clinical course of idiopathic pulmonary fibrosis (IPF) shows great inter-individual differences. It is important to standardize the severity classification to accurately evaluate each patient's prognosis. In Japan, an original severity classification (the Japanese disease severity classification, JSC) is used. In the United States, a new multidimensional index and staging system (the GAP model) has been proposed. The objective of this study was to evaluate the performance of the JSC and GAP models in predicting mortality risk, using a large cohort of Japanese patients with IPF. This is a retrospective cohort study including 326 patients with IPF in the Hokkaido prefecture from 2003 to 2007. We obtained the survival curves of each stage of the GAP and JSC models to perform a comparison. For the GAP model, the prognostic value for the mortality risk of Japanese patients was also evaluated. In the JSC, patient prognoses were roughly divided into two groups, mild cases (Stages I and II) and severe cases (Stages III and IV). In the GAP model, there was no significant difference in survival between Stages II and III, and the mortality rates of the patients classified into GAP Stages I and II were underestimated. It is difficult to predict an accurate prognosis of IPF using the JSC and GAP models. A re-examination of the variables from the two models is required, as well as an evaluation of their prognostic value, to revise the severity classification for Japanese patients with IPF. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
Formaldehyde: a comparative evaluation of four monitoring methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coyne, L.B.; Cook, R.E.; Mann, J.R.
1985-10-01
The performances of four formaldehyde monitoring devices were compared in a series of laboratory and field experiments. The devices evaluated included the DuPont C-60 formaldehyde badge, the SKC impregnated charcoal tube, an impinger/polarographic method, and the MDA Lion formaldemeter. The major evaluation parameters included concentration range, effects of humidity, sample storage, air velocity, accuracy, precision, and interferences from methanol, styrene, 1,3-butadiene, sulfur dioxide, and dimethylamine. Based on favorable performances in the laboratory and field, each device was useful for monitoring formaldehyde in the industrial work environment; however, these devices were not evaluated for residential exposure assessment. The impinger/polarographic method had a sensitivity of 0.06 ppm, based on a 20-liter air sample volume, and accurately determined the short-term excursion limit (STEL). It was useful for area monitoring but was not very practical for time-weighted average (TWA) personal monitoring measurements. The DuPont badge had a sensitivity of 2.8 ppm-hr and accurately and simply determined TWA exposures. It was not sensitive enough to measure STEL exposures, however, and positive interferences resulted if 1,3-butadiene was present. The SKC impregnated charcoal tube measured both TWA and STEL concentrations and had a sensitivity of 0.06 ppm based on a 25-liter air sample volume. Lightweight and simple to use, the MDA Lion formaldemeter had a sensitivity of 0.2 ppm. It had the advantage of giving an instantaneous reading in the field; however, it must be used with caution because it responded to many interferences. The method of choice depended on the type of sampling required, field conditions encountered during sampling, and an understanding of the limitations of each monitoring device.
Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers
NASA Astrophysics Data System (ADS)
Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.
2013-09-01
Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.
Gong, Xuepeng; Lu, Qipeng
2015-01-01
A new monochromator has been designed to develop a high-performance soft X-ray microscopy beamline at the Shanghai Synchrotron Radiation Facility (SSRF). However, its high resolving power and highly accurate spectral output present many technical difficulties. In this paper, the theoretical energy resolution and photon flux of the beamline, the two primary design targets for the monochromator, are calculated. For the wavelength scanning mechanism, the primary factors affecting the rotary angle errors are presented; the measured errors are 0.15'' and 0.17'' for the plane mirror and plane grating, which means that sufficient scanning precision can be provided at a specific wavelength. For the plane grating switching mechanism, the repeatabilities of the roll, yaw, and pitch angles are 0.08'', 0.12'', and 0.05'', which can effectively guarantee highly accurate switching of the plane grating. After debugging, the repeatability of the light spot drift reaches 0.7'', which further improves the performance of the monochromator. The commissioning results show that the energy resolving power is higher than 10000 at the Ar L-edge, the photon flux is higher than 1 × 10^8 photons/sec/200 mA, and the spatial resolution is better than 30 nm, demonstrating that the monochromator performs very well and reaches theoretical predictions.
NASA Astrophysics Data System (ADS)
El Hattab, M. H.; Vernon, D.; Mijic, A.
2017-12-01
Low impact development (LID) practices are deemed to have a synergetic effect in mitigating urban storm water flooding. Designing and implementing effective LID practices require reliable real-life data about their performance in different applications; however, there are limited studies providing such data. In this study, an innovative micro-monitoring system to assess the performance of porous pavement and rain gardens as retrofitting technologies was developed. Three pilot streets in London, UK were selected as part of Thames Water Utilities Limited's Counters Creek scheme. The system includes a V-notch weir installed at the outlet of each LID device to provide an accurate and reliable quantification over a wide range of discharges, together with a low-flow sensor installed downstream of the V-notch to cross-check the readings. With a flow survey time-series of the pre-retrofitting conditions from the study streets available, extensive laboratory calibrations under different flow conditions depicting the exact site conditions were performed prior to installing the devices in the field. The micro-monitoring system is well suited for high-resolution temporal monitoring and enables accurate long-term evaluation of LID components' performance. Initial results from the field validated the robustness of the system in fulfilling its requirements.
ERIC Educational Resources Information Center
Daigneault, Pierre-Marc; Jacob, Steve
2009-01-01
While participatory evaluation (PE) constitutes an important trend in the field of evaluation, its ontology has not been systematically analyzed. As a result, the concept of PE is ambiguous and inadequately theorized. Furthermore, no existing instrument accurately measures stakeholder participation. First, this article attempts to overcome these…
Abuhamad, Alfred; Zhao, Yili; Abuhamad, Sharon; Sinkovskaya, Elena; Rao, Rashmi; Kanaan, Camille; Platt, Lawrence
2016-01-01
This study aims to validate the feasibility and accuracy of a new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination, and compare the new approach to the regular approach performed in the scheduled obstetric ultrasound examination. A new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination, to evaluate fetal presentation, fetal cardiac activity, presence of multiple pregnancy, placental localization, amniotic fluid volume evaluation, and biometric measurements, was prospectively performed on 100 pregnant women between 18(+0) and 27(+6) weeks of gestation and another 100 pregnant women between 28(+0) and 36(+6) weeks of gestation. The agreement of findings for each of the six steps of the standardized six-step approach was evaluated against the regular approach. In all ultrasound examinations performed, substantial to perfect agreement (Kappa value between 0.64 and 1.00) was observed between the new standardized six-step approach and the regular approach. The new standardized six-step approach to the focused basic obstetric ultrasound examination can be performed successfully and accurately between 18(+0) and 36(+6) weeks of gestation. This standardized approach can be of significant benefit to limited resource settings and in point of care obstetric ultrasound applications.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.
2016-01-01
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
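In miniature, the statistical-learning idea above reads like this: fit a model mapping elemental descriptors to bandgaps and predict gaps for new compositions without further quantum mechanical computation. The sketch below uses a single hypothetical descriptor and an exactly linear synthetic "bandgap", not the paper's 1.2-million-feature pipeline.

```python
# Sketch of the informatics idea: ordinary least squares mapping one
# elemental descriptor (e.g. an electronegativity difference) to a bandgap.
# Descriptor values and "bandgaps" below are synthetic stand-ins.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Synthetic training set: gap = 0.8 * descriptor + 1.5 (exactly linear)
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0.8 * x + 1.5 for x in xs]
a, b = fit_line(xs, ys)
predicted_gap = a * 1.2 + b   # prediction for an unseen composition
```

The paper's actual models are nonlinear and use multiple descriptors, but the workflow of training on known gaps and predicting unseen ones is the same.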
Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong
2007-04-01
Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential values of ABC methodology in health care are derived from the more accurate cost calculation compared to the traditional step-down costing, and the potentials to evaluate quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify the missing or inappropriate clinical procedures. We found that ABC methodology was able to accurately calculate costs and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
NASA Technical Reports Server (NTRS)
Feedback, Daniel L.; Cibuzar, Branelle R.
2009-01-01
The Urine Monitoring System (UMS) is a system designed to collect an individual crewmember's void, gently separate urine from air, accurately measure void volume, allow for void sample acquisition, and discharge remaining urine into the Waste Collector Subsystem (WCS) onboard the International Space Station. The UMS is a successor design to the existing Space Shuttle system and will resolve anomalies such as liquid carry-over, inaccurate void volume measurements, and cross contamination in void samples. The crew will perform an evaluation of airflow at the ISS UMS urinal hose interface, a calibration evaluation, and a full user interface evaluation. The UMS can be used to facilitate non-invasive methods for monitoring crew health, evaluation of countermeasures, and implementation of a variety of biomedical research protocols on future exploration missions.
Evaluation of several non-reflecting computational boundary conditions for duct acoustics
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Zorumski, William E.; Hodge, Steve L.
1994-01-01
Several non-reflecting computational boundary conditions that meet certain criteria and have potential applications to duct acoustics are evaluated for their effectiveness. The same interior solution scheme, grid, and order of approximation are used to evaluate each condition. Sparse matrix solution techniques are applied to solve the matrix equation resulting from the discretization. Modal series solutions for the sound attenuation in an infinite duct are used to evaluate the accuracy of each non-reflecting boundary condition. The evaluations are performed for sound propagation in a softwall duct, for several sources, sound frequencies, and duct lengths. It is shown that a recently developed nonlocal boundary condition leads to sound attenuation predictions that are considerably more accurate for short ducts. This leads to a substantial reduction in the number of grid points when compared to other non-reflecting conditions.
Needle Steering in Biological Tissue using Ultrasound-based Online Curvature Estimation
Moreira, Pedro; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak
2014-01-01
Percutaneous needle insertions are commonly performed for diagnostic and therapeutic purposes. Accurate placement of the needle tip is important to the success of many needle procedures. Current needle steering systems depend on needle-tissue-specific data, such as maximum curvature, that are unavailable prior to an interventional procedure. In this paper, we present a novel three-dimensional adaptive steering method for flexible bevel-tipped needles that is capable of performing accurate tip placement without prior knowledge of the needle curvature. The method steers the needle by integrating duty-cycled needle steering, online curvature estimation, ultrasound-based needle tracking, and sampling-based motion planning. The needle curvature estimation is performed online and used to adapt the path and duty cycling. We evaluated the method using experiments in a homogeneous gelatin phantom, a two-layer gelatin phantom, and a biological tissue phantom composed of a gelatin layer and in vitro chicken tissue. In all experiments, virtual obstacles and targets moved in order to represent the disturbances that might occur due to tissue deformation and physiological processes. The average targeting error using our new adaptive method is 40% lower than using the conventional non-adaptive duty-cycled needle steering method. PMID:26229729
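Duty-cycled steering rests on a simple relation: spinning the needle for a fraction DC of each cycle averages out its natural curvature, so the effective path curvature is approximately k_max * (1 - DC). Below is a minimal sketch of that relation, plus a three-point circumscribed-circle curvature estimate of the kind an online estimator might use. Both are generic textbook constructions, not the authors' implementation.

```python
# Sketch: duty-cycle selection and a rough online curvature estimate.
# Effective curvature model: k_eff ~= k_max * (1 - DC), a standard
# approximation for duty-cycled bevel-tip needle steering.

def duty_cycle_for_curvature(k_desired, k_max):
    if not 0.0 <= k_desired <= k_max:
        raise ValueError("desired curvature must lie in [0, k_max]")
    return 1.0 - k_desired / k_max

def estimate_kmax(p0, p1, p2):
    """Curvature (1/R) of the circle through three 2D tip positions."""
    ax, ay = p0; bx, by = p1; cx, cy = p2
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return 1.0 / r
```

Re-estimating k_max from recent tracked tip positions and recomputing the duty cycle each planning step is the kind of adaptation the paper describes, though its estimator and planner are more elaborate.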
Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures.
Nakao, Megumi; Endo, Shota; Nakao, Shinichi; Yoshida, Munehito; Matsuda, Tetsuya
2016-01-01
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing only a small portion of the target structures. Although optical tracking systems can measure the tip of the surgical tool during surgery, the poor shape information available during surgery makes accurate cutting difficult, even if preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting, but time-varying shape changes during cutting procedures remain a challenging issue for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. The framework records the history of the measured drill-tip locations as a volume label and visualizes the region remaining to be cut, overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to shape similarity, total distance moved by the cutting tool, and required cutting time. The experiments showed that cutting performance was significantly improved by the proposed framework.
Single-pass memory system evaluation for multiprogramming workloads
NASA Technical Reports Server (NTRS)
Conte, Thomas M.; Hwu, Wen-Mei W.
1990-01-01
Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy in performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the method predicts the performance of realistic cache designs, such as direct-mapped caches. The method also includes a new approach to the effects of multiprogramming, separating the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
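As background, the classic single-pass stack simulation the abstract builds on (Mattson's LRU stack-distance algorithm) can be sketched in a few lines: one pass over the address trace records each reference's depth in an LRU stack, and the miss ratio of every fully-associative LRU cache size follows from that one histogram. The recurrence/conflict model described above generalizes this idea to direct-mapped caches; the function names below are ours, not the paper's.

```python
def stack_distances(trace):
    """One pass over an address trace, recording for each reference the
    depth of the referenced block in an LRU stack (inf on first touch).
    This is Mattson's classic stack algorithm for fully-associative LRU."""
    stack = []          # stack[0] is the most recently used block
    distances = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)          # 0-based stack depth
            stack.pop(d)
            distances.append(d)
        else:
            distances.append(float("inf"))  # cold (compulsory) miss
        stack.insert(0, addr)               # move/insert at top
    return distances

def miss_ratio(distances, cache_size):
    """A fully-associative LRU cache of `cache_size` blocks misses
    exactly on references whose stack distance is >= cache_size,
    so one distance histogram yields miss ratios for ALL sizes."""
    misses = sum(1 for d in distances if d >= cache_size)
    return misses / len(distances)
```

A trace such as `a b a c b a` gives distances `[inf, inf, 1, inf, 2, 2]`, from which the miss ratio of any LRU cache size can be read off without re-simulating.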
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform-gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
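The basic pipeline described above can be illustrated with a simplified sketch: average a stack of uniform-exposure frames to estimate the fixed pattern, remove it from each frame, then average the squared DFT magnitudes of the residual noise. This is an illustrative simplification and not the paper's algorithm (it omits the (1 + 1/n) compensation constant, ROI handling, and detector-specific scaling); the function name and pixel-pitch value are ours.

```python
import numpy as np

def nps_2d(flats, px=0.1):
    """Ensemble NPS estimate from a stack of uniformly illuminated frames.
    The stack mean serves as a crude gain-map / fixed-pattern estimate;
    each frame is detrended by it before the |DFT|^2 average.
    `px` is the pixel pitch, so the result has units signal^2 * length^2."""
    flats = np.asarray(flats, dtype=float)
    n, h, w = flats.shape
    fixed_pattern = flats.mean(axis=0)       # finite-n gain/pattern estimate
    nps = np.zeros((h, w))
    for f in flats:
        noise = f - fixed_pattern            # residual stochastic noise
        nps += np.abs(np.fft.fft2(noise)) ** 2
    # NOTE: the paper's (1 + 1/n)-style compensation for photon noise
    # remaining in the finite-n pattern estimate is omitted here.
    return nps * (px * px) / (n * h * w)
```

For two constant frames differing only in offset, all noise power lands in the DC bin, which makes the normalization easy to check by hand.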
Telemedicine Consultations in Oral and Maxillofacial Surgery: A Follow-Up Study.
Wood, Eric W; Strauss, Robert A; Janus, Charles; Carrico, Caroline K
2016-02-01
The purpose of this study was to follow up on the previous study in evaluating the efficiency and reliability of telemedicine consultations for preoperative assessment of patients. A retrospective study of 335 patients over a 6-year period was performed to evaluate success rates of telemedicine consultations in adequately assessing patients for surgical treatment under anesthesia. Success or failure of the telemedicine consultation was measured by the ability to triage patients appropriately for the hospital operating room versus the clinic, to provide an accurate diagnosis and treatment plan, and to provide a sufficient medical and physical assessment for planned anesthesia. Data gathered from the average distance traveled and data from a previous telemedicine study performed by the National Institute of Justice were used to estimate the cost savings of using telemedicine consultations over the 6-year period. Practitioners performing the consultation were successful 92.2% of the time in using the data collected to make a diagnosis and treatment plan. Patients were triaged correctly 99.6% of the time for the clinic or hospital operating room. Most patients (98.0%) were given sufficient medical and physical assessment and were able to undergo surgery with anesthesia as planned at the clinic appointment immediately after telemedicine consultation. Most patients (95.9%) were given an accurate diagnosis and treatment plan. The estimated amount saved by providing consultation by telemedicine and eliminating in-office consultation was substantial at $134,640. This study confirms the findings from previous studies that telemedicine consultations are as reliable as those performed by traditional methods. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne
2012-01-01
We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. 
Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time- and cost-prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat, and did so at a far lower cost than QuickBird imagery. The degree of map accuracy required, the costs of image acquisition, software, operator and computation time, and the tradeoff between spatial extent and resolution should all be weighed when deciding which combination of imagery and information-extraction method best serves a given land-use mapping project. When resources permit, acquiring imagery that supports the highest classification and measurement accuracy possible is recommended.
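The pixel-by-pixel comparisons described above reduce to a simple agreement measure: the fraction of co-located pixels assigned the same class label in the candidate map and the reference map (here, the hand-digitized QuickBird data). A minimal sketch with hypothetical class labels and function name:

```python
import numpy as np

def overall_accuracy(candidate, reference):
    """Pixel-by-pixel overall accuracy: the fraction of pixels whose
    class label in the candidate classification map matches the label
    at the same location in the reference map."""
    candidate = np.asarray(candidate)
    reference = np.asarray(reference)
    return float((candidate == reference).mean())
```

In practice this is usually reported alongside a full confusion matrix and per-class accuracies, since overall accuracy alone can hide poor performance on rare classes.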
The Automated Assessment of Postural Stability: Balance Detection Algorithm.
Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad
2017-12-01
Impaired balance is a common indicator of mild traumatic brain injury, concussion, and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely the Balance Error Scoring System (BESS). The AAPS's main innovation is its balance error detection algorithm, designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores using the same detection criteria defined by the original BESS test. To assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) performed the BESS test while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High-definition videos of the BESS trials were scored off-line by three experienced observers to obtain reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm can accurately and precisely detect balance deficits with performance comparable to that of experienced medical personnel. Specifically, agreement between the AAPS algorithm and the average human BESS scores ranged from 87.9% (single-leg on foam) to 99.8% (double-leg on firm ground).
Moreover, no statistically significant differences in balance scores were detected by an ANOVA test with α = 0.05. Despite some disagreement between human- and AAPS-generated scores, an automated system yields important advantages over currently available human-based alternatives. These results underscore the value of the AAPS, which can be quickly deployed in the field and/or in outdoor settings with minimal set-up time. Finally, the AAPS can record multiple error types and their time course with extremely high temporal resolution, a capability not achievable by human scorers, who cannot track multiple balance errors at such resolution. Together, these results suggest that computerized BESS calculation may provide more accurate and consistent measures of balance than those derived from human experts.
Evaluation of Fear Using Nonintrusive Measurement of Multimodal Sensors
Choi, Jong-Suk; Bang, Jae Won; Heo, Hwan; Park, Kang Ryoung
2015-01-01
Most previous research into emotion recognition used either a single modality or multiple modalities of physiological signal. However, the former approach allows only limited enhancement of accuracy, while the latter has the disadvantage that its performance can be affected by head or body movements, and the sensors attached to the body inconvenience the user. Among the various emotions, accurate evaluation of fear is crucial in many applications, such as criminal psychology, intelligent surveillance systems, and the objective evaluation of horror movies. Therefore, we propose a new method for evaluating fear based on nonintrusive measurements obtained using multiple sensors. Experimental results based on the t-test, the effect size, and the sum of all correlation values with other modalities showed that facial temperature and subjective evaluation are more reliable than electroencephalogram (EEG) and eye blinking rate for the evaluation of fear. PMID:26205268
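The modality comparison above rests on standard statistics. For instance, an effect size between two measurement conditions is commonly computed as Cohen's d, the difference of means over the pooled standard deviation; this is a plausible choice here, though the abstract does not specify which effect-size formula the authors used.

```python
import math

def cohens_d(a, b):
    """Cohen's d effect size between two samples (e.g. a physiological
    signal measured at baseline vs. during a fear-inducing stimulus):
    difference of sample means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / pooled
```

Values around 0.2, 0.5, and 0.8 are conventionally read as small, medium, and large effects, respectively.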
Active learning for noisy oracle via density power divergence.
Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi
2013-10-01
The accuracy of active learning is critically influenced by noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework built on robust measures based on density power divergence. By minimizing a density power divergence, such as the β-divergence or γ-divergence, one can estimate the model accurately even in the presence of noisy labels in the data. Accordingly, we develop query-selection measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by directly evaluating an estimation error. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
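The pool-based loop this work builds on selects, at each round, the unlabeled points scored highest by some query measure and sends them to the oracle. The sketch below uses plain predictive-entropy (uncertainty) sampling as a generic stand-in for the query measure; the paper's contribution is to replace such a score with measures derived from β- and γ-divergences that remain reliable when the oracle's labels are noisy. Function names are ours.

```python
import numpy as np

def uncertainty_sampling(predict_proba, pool, k=1):
    """Generic pool-based query selection: return the indices of the k
    pool points whose predicted class posterior has the highest entropy
    (i.e. is closest to uniform), most uncertain first.  A baseline
    stand-in for the divergence-based measures in the paper."""
    proba = predict_proba(pool)                      # shape (n, n_classes)
    eps = 1e-12                                      # guard against log(0)
    entropy = -(proba * np.log(proba + eps)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]            # top-k by entropy
```

In a full loop, the selected points would be labeled by the oracle, added to the training set, and the model refit before the next query round.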
Comparison of custom to standard TKA instrumentation with computed tomography.
Ng, Vincent Y; Arnott, Lindsay; Li, Jia; Hopkins, Ronald; Lewis, Jamie; Sutphen, Sean; Nicholson, Lisa; Reader, Douglas; McShane, Michael A
2014-08-01
There is conflicting evidence on whether custom instrumentation (CI) for total knee arthroplasty (TKA) improves component position compared to standard instrumentation. Prior studies have relied on long-limb radiographs, which are limited to two-dimensional (2D) analysis and subject to rotational inaccuracy. We used postoperative computed tomography (CT) to evaluate preoperative three-dimensional templating and CI for accurate and efficient implantation of TKA femoral and tibial components. We prospectively evaluated a single-surgeon cohort of 78 TKA patients (51 custom, 27 standard) with postoperative CT scans, using 3D reconstruction and contour-matching technology against preoperative imaging. Component alignment was measured in the coronal, sagittal, and axial planes. Preoperative templating for custom instrumentation was 87 and 79% accurate for femoral and tibial component size, respectively. All custom components were within one size except for the tibial component in one patient (two sizes). Tourniquet time was 5 min longer for custom (30 min) than standard (25 min). In no case was custom instrumentation aborted in favour of standard instrumentation, nor did the original alignment of custom instrumentation require intraoperative adjustment. There were more outliers greater than 2° from intended alignment with standard instrumentation than with custom for both components in all three planes. Custom instrumentation was more accurate in component position for tibial coronal alignment (custom: 1.5° ± 1.2°; standard: 3° ± 1.9°; p = 0.0001) and both tibial (custom: 1.4° ± 1.1°; standard: 16.9° ± 6.8°; p < 0.0001) and femoral (custom: 1.2° ± 0.9°; standard: 3.1° ± 2.1°; p < 0.0001) rotational alignment, and was similar to standard instrumentation in other measurements. When evaluated with CT, custom instrumentation performs similarly to or better than standard instrumentation in component alignment and accurately templates component size. Tourniquet time was mildly increased for custom compared to standard.
Kaneyama, Shuichi; Sugawara, Taku; Sumi, Masatoshi
2015-03-15
A clinical trial of midcervical pedicle screw insertion using a novel patient-specific intraoperative screw-guiding device, conducted to evaluate the availability of the "Screw Guide Template" (SGT) system for insertion of midcervical pedicle screws. Despite many efforts toward accurate midcervical pedicle screw insertion, there still remains an unacceptable rate of screw malpositioning that can cause neurovascular injuries. We developed the patient-specific SGT system as a safe and accurate intraoperative screw navigation tool and have reported its availability for screw insertion into the C2 vertebra and thoracic spine. Preoperatively, the bone image on computed tomography was analyzed and the trajectories of the screws were designed in 3-dimensional format. Three types of templates were created for each lamina: a location template, a drill guide template, and a screw guide template. During the operations, after engaging the templates directly with the laminae, drilling, tapping, and screwing were performed with each template. We placed 80 midcervical pedicle screws in 20 patients. The accuracy and safety of screw insertion with the SGT system were evaluated on postoperative computed tomographic scans by calculating screw deviation from the preplanned trajectory and assessing screw breach of the pedicle wall. All templates fitted the laminae and the screw navigation procedures proceeded uneventfully. All screws were inserted accurately, with a mean deviation from the planned trajectory of 0.29 ± 0.31 mm, and no neurovascular complication was experienced. We demonstrated that our SGT system supports precise screw insertion in the midcervical pedicle. SGT prescribes a safe screw trajectory in a 3-dimensional manner, and the templates fit and lock directly onto the target laminae, which prevents screwing error caused by changes in spinal alignment during surgery.
These advantages of the SGT system guarantee high accuracy in screw insertion, allowing surgeons to insert cervical pedicle screws safely.
Optical Coherence Tomography Evaluation in the Multicenter Uveitis Steroid Treatment (MUST) Trial
Domalpally, Amitha; Altaweel, Michael M.; Kempen, John H.; Myers, Dawn; Davis, Janet L; Foster, C Stephen; Latkany, Paul; Srivastava, Sunil K.; Stawell, Richard J.; Holbrook, Janet T.
2013-01-01
Purpose: To describe the evaluation of optical coherence tomography (OCT) scans in the Multicenter Uveitis Steroid Treatment (MUST) trial and report baseline OCT features of enrolled participants. Methods: Time-domain OCTs acquired by certified photographers using a standardized scan protocol were evaluated at a reading center. The accuracy of retinal thickness data was confirmed by quality evaluation, and caliper measurement of centerpoint thickness (CPT) was performed when the automated value was unreliable. Morphological evaluation included cysts, subretinal fluid, epiretinal membranes (ERMs), and vitreomacular traction. Results: Of the 453 OCTs evaluated, automated retinal thickness was accurate in 69.5% of scans, caliper measurement was performed in 26%, and 4% were ungradable. The intraclass correlation was 0.98 for reproducibility of caliper measurement. Macular edema (centerpoint thickness ≥240 µm) was present in 36%. Cysts were present in 36.6% of scans and ERMs in 27.8%, predominantly central. Intergrader agreement ranged from 78% to 82% for morphological features. Conclusion: Retinal thickness data can be retrieved in a majority of OCT scans submitted in clinical trials for uveitis studies. Small cysts and ERMs involving the center are common in intermediate and posterior/panuveitis requiring systemic corticosteroid therapy. PMID:23163490
Ono, Yohei; Kashihara, Rina; Yasojima, Nobutoshi; Kasahara, Hideki; Shimizu, Yuka; Tamura, Kenichi; Tsutsumi, Kaori; Sutherland, Kenneth; Koike, Takao; Kamishima, Tamotsu
2016-06-01
Accurate evaluation of joint space width (JSW) is important in the assessment of rheumatoid arthritis (RA). In clinical radiography of bilateral hands, oblique incidence of X-rays is unavoidable, which may cause perceptional or measurement error in JSW. The objective of this study was to examine whether tomosynthesis, a recently developed modality, can facilitate more accurate evaluation of JSW than radiography under oblique incidence of X-rays. We investigated quantitative errors derived from the oblique incidence of X-rays by imaging phantoms simulating various finger joint spaces with radiographs and tomosynthesis images. We then compared the qualitative results of the modified total Sharp score for a total of 320 joints from 20 patients with RA between these modalities. A quantitative error was prominent when the location of the phantom was shifted along the JSW direction. Modified total Sharp scores of tomosynthesis images were significantly higher than those of radiography; that is, JSW was regarded as narrower in tomosynthesis than in radiography when finger joints were located where oblique incidence of X-rays is expected in the JSW direction. Accurate evaluation of JSW is necessary for the management of patients with RA; through phantom and clinical studies, we demonstrate that tomosynthesis can facilitate such evaluation in the finger joints of patients with RA, even with oblique incidence of X-rays.
An accurate segmentation method for volumetry of brain tumor in 3D MRI
NASA Astrophysics Data System (ADS)
Wang, Jiahui; Li, Qiang; Hirai, Toshinori; Katsuragawa, Shigehiko; Li, Feng; Doi, Kunio
2008-03-01
Accurate volumetry of brain tumors in magnetic resonance imaging (MRI) is important for evaluating the interval changes in tumor volumes during and after treatment, and also for planning of radiation therapy. In this study, an automated volumetry method for brain tumors in MRI was developed by use of a new three-dimensional (3-D) image segmentation technique. First, the central location of a tumor was identified by a radiologist, and then a volume of interest (VOI) was determined automatically. To substantially simplify tumor segmentation, we transformed the 3-D image of the tumor into a two-dimensional (2-D) image by use of a "spiral-scanning" technique, in which a radial line originating from the center of the tumor scanned the 3-D image spirally from the "north pole" to the "south pole". The voxels scanned by the radial line provided a transformed 2-D image. We employed dynamic programming to delineate an "optimal" outline of the tumor in the transformed 2-D image. We then transformed the optimal outline back into 3-D image space to determine the volume of the tumor. The volumetry method was trained and evaluated by use of 16 cases with 35 brain tumors. The agreement between tumor volumes provided by computer and a radiologist was employed as a performance metric. Our method provided relatively accurate results with a mean agreement value of 88%.
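The "spiral-scanning" transform described above can be illustrated with a simplified sketch: a radial line anchored at the tumor center sweeps from the north pole to the south pole while its azimuth spirals, and the voxels sampled along each line form one column of a 2-D image. This is a nearest-neighbour toy version; the sampling density, number of turns, and function name are illustrative assumptions, not the paper's values.

```python
import numpy as np

def spiral_scan(volume, center, radius, n_lines=64, n_samples=32, turns=8):
    """Unwrap a spherical VOI around `center` into a 2-D image.
    Column j corresponds to one position of the radial line on the
    pole-to-pole spiral; row i samples that line at radius fraction
    i/(n_samples-1).  Nearest-neighbour voxel lookup for simplicity."""
    cz, cy, cx = center
    out = np.zeros((n_samples, n_lines))
    for j in range(n_lines):
        theta = np.pi * j / (n_lines - 1)              # polar angle: 0..pi
        phi = 2 * np.pi * turns * j / (n_lines - 1)    # spiraling azimuth
        for i in range(n_samples):
            r = radius * i / (n_samples - 1)
            z = cz + r * np.cos(theta)
            y = cy + r * np.sin(theta) * np.sin(phi)
            x = cx + r * np.sin(theta) * np.cos(phi)
            out[i, j] = volume[int(round(z)), int(round(y)), int(round(x))]
    return out
```

In the transformed image, the tumor boundary becomes a roughly horizontal contour, which is what makes a dynamic-programming delineation of the "optimal" outline tractable before mapping it back to 3-D.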
Mitrović, Uroš; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2018-02-01
Image guidance for minimally invasive surgery is based on spatial co-registration and fusion of 3D pre-interventional images and treatment plans with the 2D live intra-interventional images. The spatial co-registration, or 3D-2D registration, is the key enabling technology; however, the performance of state-of-the-art automated methods is rather unclear as they have not been assessed under the same test conditions. Herein we perform a quantitative and comparative evaluation of ten state-of-the-art methods for 3D-2D registration on a public dataset of clinical angiograms. The image database consisted of 3D and 2D angiograms of 25 patients undergoing treatment for cerebral aneurysms or arteriovenous malformations. On each of the datasets, highly accurate "gold-standard" registrations of the 3D and 2D images were established based on patient-attached fiducial markers. The database was used to rigorously evaluate ten state-of-the-art 3D-2D registration methods, namely two intensity-, two gradient-, three feature-based and three hybrid methods, for registration of the 3D pre-interventional image to either monoplane or biplane 2D images. Intensity-based methods were the most accurate in all tests (0.3 mm). One of the hybrid methods was the most robust, with 98.75% successful registrations (SR) and a capture range of 18 mm for registration of 3D to biplane 2D angiograms. In general, registration accuracy was similar whether the 3D image was registered onto monoplane or biplane 2D images; however, the SR was substantially lower for 3D to monoplane 2D registration. Two feature-based and two hybrid methods had clinically feasible execution times on the order of a second.
Performance of methods seems to fall below expectations in terms of robustness in case of registration of 3D to monoplane 2D images, while translation into clinical image guidance systems seems readily feasible for methods that perform registration of the 3D pre-interventional image onto biplanar intra-interventional 2D images.
Mell, Matthew; Tefera, Girma; Thornton, Frank; Siepman, David; Turnipseed, William
2007-03-01
The diagnostic accuracy of magnetic resonance angiography (MRA) in the infrapopliteal arterial segment is not well defined. This study evaluated the clinical utility and diagnostic accuracy of time-resolved imaging of contrast kinetics (TRICKS) MRA compared with digital subtraction contrast angiography (DSA) in planning percutaneous interventions for popliteal and infrapopliteal arterial occlusive disease. Patients who underwent percutaneous lower extremity interventions for popliteal or tibial occlusive disease were identified for this study. Preprocedural TRICKS MRA was performed on 1.5-Tesla (GE Healthcare, Waukesha, Wis) magnetic resonance imaging scanners with a flexible peripheral vascular coil, using the TRICKS technique with gadodiamide injection. DSA was performed using standard techniques in an angiography suite with a 15-inch image intensifier. DSA was considered the gold standard. The MRA and DSA studies were then evaluated in a blinded fashion by a radiologist and a vascular surgeon. The popliteal artery and tibioperoneal trunk were evaluated separately, and the tibial arteries were divided into proximal, mid, and distal segments. Each segment was interpreted as normal (0% to 49% stenosis), stenotic (50% to 99% stenosis), or occluded (100%). Lesion morphology was classified according to the TransAtlantic Inter-Society Consensus (TASC). We calculated concordance between the imaging studies and the sensitivity and specificity of MRA. The clinical utility of MRA was also assessed in terms of identifying the arterial access site and predicting technical success of the percutaneous treatment. Comparisons were done on 150 arterial segments in 30 limbs of 27 patients. When evaluated by TASC classification, TRICKS MRA correlated with DSA in 83% of the popliteal and 88% of the infrapopliteal segments.
MRA correctly identified significant disease of the popliteal artery with a sensitivity of 94% and a specificity of 92%, and of the tibial arteries with a sensitivity of 100% and a specificity of 84%. When used to evaluate for stenosis versus occlusion, MRA interpretation agreed with DSA 90% of the time. Disagreement occurred in 15 arterial segments, most commonly in the distal tibioperoneal arteries. MRA misdiagnosed occlusion for stenosis in 11 of 15 segments, and stenosis for occlusion in 4 of 15 segments. Arterial access was accurately planned based on preprocedural MRA findings in 29 of 30 patients. MRA predicted technical success 83% of the time. Five technical failures were due to inability to cross arterial occlusions, all accurately identified by MRA. TRICKS MRA is an accurate method of evaluating patients for popliteal and infrapopliteal arterial occlusive disease and can be used for planning percutaneous interventions.
42 CFR 493.1254 - Standard: Maintenance and function checks.
Code of Federal Regulations, 2011 CFR
2011-10-01
...ensures equipment, instrument, and test system performance that is necessary for accurate and reliable test results...
Milne, Marjorie E; Steward, Christopher; Firestone, Simon M; Long, Sam N; O'Brien, Terrence J; Moffat, Bradford A
2016-04-01
To develop representative MRI atlases of the canine brain and to evaluate 3 methods of atlas-based segmentation (ABS). 62 dogs without clinical signs of epilepsy and without MRI evidence of structural brain disease. The MRI scans from 44 dogs were used to develop 4 templates on the basis of brain shape (brachycephalic, mesaticephalic, dolichocephalic, and combined mesaticephalic and dolichocephalic). Atlas labels were generated by segmenting the brain, ventricular system, hippocampal formation, and caudate nuclei. The MRI scans from the remaining 18 dogs were used to evaluate 3 methods of ABS (manual brain extraction and application of a brain shape-specific template [A], automatic brain extraction and application of a brain shape-specific template [B], and manual brain extraction and application of a combined template [C]). The performance of each ABS method was compared by calculation of the Dice and Jaccard coefficients, with manual segmentation used as the gold standard. Method A had the highest mean Jaccard coefficient and was the most accurate ABS method assessed. Measures of overlap for ABS methods that used manual brain extraction (A and C) ranged from 0.75 to 0.95 and compared favorably with repeated measures of overlap for manual extraction, which ranged from 0.88 to 0.97. Atlas-based segmentation was an accurate and repeatable method for segmentation of canine brain structures. It could be performed more rapidly than manual segmentation, which should allow the application of computer-assisted volumetry to large data sets and clinical cases and facilitate neuroimaging research and disease diagnosis.
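The Dice and Jaccard coefficients used above to score each atlas-based segmentation method are standard overlap measures between binary masks, related by Dice = 2J/(1 + J) for the same mask pair. A minimal sketch:

```python
import numpy as np

def dice_jaccard(a, b):
    """Overlap between two binary segmentation masks (e.g. an
    atlas-based segmentation vs. the manual gold standard).
    Dice = 2|A∩B| / (|A|+|B|);  Jaccard = |A∩B| / |A∪B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    dice = 2 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return float(dice), float(jaccard)
```

Both range from 0 (disjoint masks) to 1 (identical masks); Jaccard penalizes disagreement more strongly, which is why the two are often reported together.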
de Bruin, Elza C.; Whiteley, Jessica L.; Corcoran, Claire; Kirk, Pauline M.; Fox, Jayne C.; Armisen, Javier; Lindemann, Justin P. O.; Schiavon, Gaia; Ambrose, Helen J.; Kohlmann, Alexander
2017-01-01
Personalized healthcare relies on accurate companion diagnostic assays that enable the most appropriate treatment decision for cancer patients. Extensive assay validation prior to use in a clinical setting is essential for providing a reliable test result. This poses a challenge for low-prevalence mutations with limited availability of appropriate clinical samples harboring the mutation. To enable prospective screening for the low-prevalence AKT1 E17K mutation, we have developed and validated a competitive allele-specific TaqMan® PCR (castPCR™) assay for mutation detection in formalin-fixed paraffin-embedded (FFPE) tumor tissue. Analysis parameters of the castPCR™ assay were established using an FFPE DNA reference standard, and its analytical performance was assessed using 338 breast cancer and gynecological cancer FFPE samples. With recent technical advances for minimally invasive mutation detection in circulating tumor DNA (ctDNA), we subsequently also evaluated the OncoBEAM™ assay to enable plasma specimens as an additional diagnostic opportunity for AKT1 E17K mutation testing. The analytical performance of the OncoBEAM™ test was evaluated using a novel AKT1 E17K ctDNA reference standard consisting of sheared genomic DNA spiked into human plasma. Both assays are employed at centralized testing laboratories operating according to quality standards for prospective identification of the AKT1 E17K mutation in ER+ breast cancer patients in the context of a clinical trial evaluating the AKT inhibitor AZD5363 in combination with endocrine (fulvestrant) therapy. PMID:28472036
NASA Technical Reports Server (NTRS)
Rohatgi, Naresh K.; Ingham, John D.
1992-01-01
An assessment approach for accurate evaluation of bioprocesses for large-scale production of industrial chemicals is presented. Detailed energy-economic assessments of a potential esterification process were performed, in which ethanol vapor in the presence of water from a bioreactor is catalytically converted to ethyl acetate. Results show that such processes are likely to become more competitive as the cost of substrates decreases relative to petroleum costs. A commercial ASPEN process simulation provided a reasonably consistent comparison with energy economics calculated using JPL-developed software. Detailed evaluations of the sensitivity of production cost to material costs and annual production rates are discussed.
Active depth-guiding handheld micro-forceps for membranectomy based on CP-SSOCT
NASA Astrophysics Data System (ADS)
Cheon, Gyeong Woo; Lee, Phillip; Gonenc, Berk; Gehlbach, Peter L.; Kang, Jin U.
2016-03-01
In this study, we demonstrate a handheld motion-compensated micro-forceps system using common-path swept-source optical coherence tomography with highly accurate depth-targeting and depth-locking for epiretinal membrane peeling. Two motors and a touch sensor were used to separate the two independent motions: motion compensation and tool-tip manipulation. A smart motion monitoring and guiding algorithm was devised for precise and intuitive freehand control. Ex vivo experiments were performed to evaluate accuracy in a bovine retinal membrane peeling model. The evaluation demonstrates system capability of 40 µm accuracy when peeling the epithelial layer of the bovine retina.
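The depth-locking behavior can be illustrated with a toy proportional control loop; the gain, noise level, and first-order dynamics are assumptions for illustration, not parameters of the actual CP-SSOCT system:

```python
# Minimal proportional depth-locking loop: an OCT-style distance sensor reads
# the tool-tip depth and a motor command drives the error toward zero.
# Gain, noise level, and dynamics are illustrative assumptions only.
import random

def depth_lock(target_um, start_um, gain=0.5, steps=200, noise_um=2.0):
    depth = start_um
    random.seed(0)
    for _ in range(steps):
        measured = depth + random.gauss(0, noise_um)  # noisy depth reading
        depth += gain * (target_um - measured)        # motor correction
    return depth

final = depth_lock(target_um=100.0, start_um=300.0)
print(abs(final - 100.0) < 10.0)  # True: locked to within tens of microns
```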
Holzhauser, Thomas; Ree, Ronald van; Poulsen, Lars K; Bannon, Gary A
2008-10-01
There is detailed guidance on how to perform bioinformatic analyses and enzymatic degradation studies for genetically modified crops under consideration for approval by regulatory agencies; however, there is no consensus in the scientific community on the details of how to perform IgE serum studies. IgE serum studies are an important safety component to acceptance of genetically modified crops when the introduced protein is novel, the introduced protein is similar to known allergens, or the crop is allergenic. In this manuscript, we describe the characteristics of the reagents, validation of assay performance, and data analysis necessary to optimize the information obtained from serum testing of novel proteins and genetically modified (GM) crops and to make results more accurate and comparable between different investigations.
CF6 High Pressure Compressor and Turbine Clearance Evaluations
NASA Technical Reports Server (NTRS)
Radomski, M. A.; Cline, L. D.
1981-01-01
In the CF6 Jet Engine Diagnostics Program the causes of performance degradation were determined for each component of revenue service engines. It was found that a significant contribution to performance degradation was caused by increased airfoil tip radial clearances in the high pressure compressor and turbine areas. Since the influence of these clearances on engine performance and fuel consumption is significant, it is important to accurately establish these relationships. It is equally important to understand the causes of clearance deterioration so that they can be reduced or eliminated. The results of factory engine tests run to enhance the understanding of the high pressure compressor and turbine clearance effects on performance are described. The causes of clearance deterioration are indicated and potential improvements in clearance control are discussed.
NASA Astrophysics Data System (ADS)
Abou-Khousa, M. A.; Zoughi, R.
2007-03-01
Non-invasive monitoring of dielectric slab thickness is of great interest in various industrial applications. This paper focuses on estimating the thickness of dielectric slabs, and consequently monitoring their variations, utilizing wideband microwave signals and the MUltiple SIgnal Classification (MUSIC) algorithm. The performance of the proposed approach is assessed by validating simulation results with laboratory experiments. The results clearly indicate the utility of this overall approach for accurate dielectric slab thickness evaluation.
Optical analysis of thermal induced structural distortions
NASA Technical Reports Server (NTRS)
Weinswig, Shepard; Hookman, Robert A.
1991-01-01
The techniques used for the analysis of thermally induced structural distortions of optical components such as scanning mirrors and telescope optics are outlined. Particular attention is given to the methodology used in the thermal and structural analysis of the GOES scan mirror, the optical analysis using Zernike coefficients, and the optical system performance evaluation. It is pointed out that the use of Zernike coefficients allows an accurate, effective, and simple linkage between thermal/mechanical effects and the optical design.
Expanded Prediction Equations of Human Sweat Loss and Water Needs
2009-01-01
[Extraction residue; only reference-list fragments of this record are recoverable, e.g.: Evaluation of the limits to accurate sweat loss prediction during prolonged exercise. Eur J Appl Physiol 101: 215–224, 2007; Miller RG. Simultaneous Statistical Inference (2nd ed.). New York: Springer, 1981; modeling of physiological responses and human performance in the heat. Comput Biol Med 16: 319–329, 1986.]
1985-09-01
[Garbled OCR fragment; recoverable gist: students develop a more accurate concept of human behavior and learn to improve their abilities to lead and follow; the course block contains four volumes with 36 lessons and defines the arena where professional Air Force officers operate; building on Unit A, students learn to perform limited position classification casework and to write evaluation reports.]
Tian, Qijie; Chang, Songtao; He, Fengyun; Li, Zhou; Qiao, Yanfeng
2017-06-10
Internal stray radiation is a key factor that influences infrared imaging systems, and its suppression level is an important criterion to evaluate system performance, especially for cryogenic infrared imaging systems, which are highly sensitive to thermal sources. In order to achieve accurate measurement for internal stray radiation, an approach is proposed, which is based on radiometric calibration using a spherical mirror. First of all, the theory of spherical mirror design is introduced. Then, the calibration formula considering the integration time is presented. Following this, the details regarding the measurement method are presented. By placing a spherical mirror in front of the infrared detector, the influence of internal factors of the detector on system output can be obtained. According to the calibration results of the infrared imaging system, the output caused by internal stray radiation can be acquired. Finally, several experiments are performed in a chamber with controllable inside temperatures to validate the theory proposed in this paper. Experimental results show that the measurement results are in good accordance with the theoretical analysis, and demonstrate that the proposed theories are valid and can be employed in practical applications. The proposed method can achieve accurate measurement for internal stray radiation at arbitrary integration time and ambient temperatures. The measurement result can be used to evaluate whether the suppression level meets the system requirement.
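The role of integration time in the calibration can be illustrated with a toy linear detector model, where output counts grow linearly with integration time and fitting a line against integration time isolates the radiance-rate term. The responsivity, offset, and stray-radiance values are invented for illustration and are not from the paper:

```python
# Toy calibration model: DN = R * (L_scene + L_stray) * t + b.
# With the scene blocked, fitting DN against integration time t recovers the
# stray-radiation rate. All numbers below are illustrative assumptions.
def fit_line(ts, dns):
    n = len(ts)
    mt = sum(ts) / n
    md = sum(dns) / n
    slope = sum((t - mt) * (d - md) for t, d in zip(ts, dns)) / \
            sum((t - mt) ** 2 for t in ts)
    return slope, md - slope * mt  # (counts per ms, fixed offset)

R = 2000.0       # assumed responsivity, counts per (radiance unit * ms)
L_stray = 0.013  # hypothetical internal stray radiance to recover
ts = [1.0, 2.0, 4.0, 8.0]                  # integration times, ms
dns = [R * L_stray * t + 50.0 for t in ts]  # dark-scene outputs, offset b = 50
rate, offset = fit_line(ts, dns)
print(rate / R)  # recovered stray radiance, ~0.013
```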
Development of Methods to Evaluate Safer Flight Characteristics
NASA Technical Reports Server (NTRS)
Basciano, Thomas E., Jr.; Erickson, Jon D.
1997-01-01
The goal of the proposed research is to begin development of a simulation that models the flight characteristics of the Simplified Aid For EVA Rescue (SAFER) pack. Development of such a simulation was initiated to ultimately study the effect an Orbital Replacement Unit (ORU) has on SAFER dynamics. A major function of this program will be to calculate fuel consumption for many ORUs with different masses and locations. This will ultimately determine the maximum ORU mass an astronaut can carry and still perform a self-rescue without jettisoning the unit. A second primary goal is to eventually simulate relative motion (vibration) between the ORU and astronaut. After relative motion is accurately modeled it will be possible to evaluate the robustness of the control system and optimize performance as needed. The first stage in developing the simulation is the ability to model a standardized, total, self-rescue scenario, making it possible to accurately compare different program runs. In orbit an astronaut has only limited data and will not be able to follow the most fuel efficient trajectory; therefore, it is important to correctly model the procedures an astronaut would use in orbit so that good fuel consumption data can be obtained. Once this part of the program is well tested and verified, the vibration (relative motion) of the ORU with respect to the astronaut can be studied.
A standardization model based on image recognition for performance evaluation of an oral scanner.
Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok
2017-12-01
Accurate information is essential in dentistry. The image information of missing teeth is used in optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was designed from cases of image recognition errors identified by linear discriminant analysis (LDA), combining the variables with reference to ISO 12836:2015. The basic model was fabricated by applying 4 factors to the tooth profile (chamfer, groove, curve, and square) and the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. Scans were repeated in a prescribed sequence to distinguish model variants whose 3D images formed from those that did not, and the best-performing variant was identified. The initial basic model could not yield a 3D shape by scanning, even after several shots. Subsequently, the image recognition rate improved with each variable factor, with differences depending on the tooth profile and the pattern of the floor surface. Based on the LDA recognition error, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, distinct differences between classes need to be provided when developing a standardized model.
Langenderfer, Joseph E; Rullkoetter, Paul J; Mell, Amy G; Laz, Peter J
2009-04-01
An accurate assessment of shoulder kinematics is useful for understanding healthy normal and pathological mechanics. Small variability in identifying and locating anatomical landmarks (ALs) has potential to affect reported shoulder kinematics. The objectives of this study were to quantify the effect of landmark location variability on scapular and humeral kinematic descriptions for multiple subjects using probabilistic analysis methods, and to evaluate the consistency in results across multiple subjects. Data from 11 healthy subjects performing humeral elevation in the scapular plane were used to calculate Euler angles describing humeral and scapular kinematics. Probabilistic analyses were performed for each subject to simulate uncertainty in the locations of 13 upper-extremity ALs. For standard deviations of 4 mm in landmark location, the analysis predicted Euler angle envelopes between the 1st and 99th percentile bounds of up to 16.6 degrees. While absolute kinematics varied with the subject, the average 1-99% kinematic ranges for the motion were consistent across subjects and sensitivity factors showed no statistically significant differences between subjects. The description of humeral kinematics was most sensitive to the location of landmarks on the thorax, while landmarks on the scapula had the greatest effect on the description of scapular elevation. The findings of this study can provide a better understanding of kinematic variability, which can aid in making accurate clinical diagnoses and refining kinematic measurement techniques.
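The probabilistic approach can be sketched in miniature: perturb landmark locations with 4 mm Gaussian noise and read off a percentile envelope of the resulting angle. The two-landmark geometry and sample count below are illustrative assumptions, not the study's 13-landmark shoulder model:

```python
# Monte Carlo sketch of landmark-location uncertainty: perturb two landmarks
# defining a segment by Gaussian noise (SD = 4 mm) and collect the 1st-99th
# percentile envelope of the resulting elevation angle. Geometry is invented.
import math
import random

random.seed(1)
origin, tip = (0.0, 0.0, 0.0), (0.0, 100.0, 250.0)  # mm, hypothetical
angles = []
for _ in range(5000):
    p = [c + random.gauss(0, 4.0) for c in origin]
    q = [c + random.gauss(0, 4.0) for c in tip]
    dx, dy, dz = (q[i] - p[i] for i in range(3))
    angles.append(math.degrees(math.atan2(math.hypot(dx, dy), dz)))
angles.sort()
lo, hi = angles[49], angles[4949]  # ~1st and ~99th percentiles
print(hi - lo, "degree envelope width")
```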
NASA Astrophysics Data System (ADS)
Testi, D.; Schito, E.; Menchetti, E.; Grassi, W.
2014-11-01
Constructions built in Italy before 1945 (about 30% of the total built stock) feature low energy efficiency. Retrofit actions in this field can lead to valuable energetic and economic savings. In this work, we ran a dynamic simulation of a historical building of the University of Pisa during the heating season. We firstly evaluated the energy requirements of the building and the performance of the existing natural gas boiler, validated with past billings of natural gas. We also verified the energetic savings obtainable by the substitution of the boiler with an air-to-water electrically-driven modulating heat pump, simulated through a cycle-based model, evaluating the main economic metrics. The cycle-based model of the heat pump, validated with manufacturers' data available only at specified temperature and load conditions, can provide more accurate results than the simplified models adopted by current technical standards, thus increasing the effectiveness of energy audits.
Children's reasoning about evaluative feedback.
Heyman, Gail D; Fu, Genyue; Sweet, Monica A; Lee, Kang
2009-11-01
Children's reasoning about the willingness of peers to convey accurate positive and negative performance feedback to others was investigated among a total of 179 6- to 11-year-olds from the USA and China. In Study 1, which was conducted in the USA only, participants responded that peers would be more likely to provide positive feedback than negative feedback, and this tendency was strongest among the younger children. In Study 2, the expectation that peers would preferentially disclose positive feedback was replicated among children from the USA, and was also seen among younger but not older children from China. Participants in all groups took the relationship between communication partners into account when predicting whether peers would express evaluative feedback. Results of open-ended responses suggested cross-cultural differences, including a greater emphasis by Chinese children on the implications of evaluative feedback for future performance, and reference by some older Chinese children to the possibility that positive feedback might make the recipient 'too proud'.
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. At the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to calculate the linear minimum mean square error of the received pilots. Finally, the block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique enhances the performance of the car detection in a traffic image considerably.
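The greedy atom-selection idea underlying (block) orthogonal matching pursuit can be shown in a toy form; this sketch does the plain matching-pursuit update on an identity dictionary, omitting the least-squares and block-structure refinements the abstract's algorithms add:

```python
# Toy matching pursuit: at each iteration pick the dictionary atom most
# correlated with the residual, then subtract its contribution. A simplified
# sketch of the greedy step in OMP/CoSaMP-style sparse recovery.
def matching_pursuit(atoms, signal, n_iter):
    residual = list(signal)
    support = []
    for _ in range(n_iter):
        # correlations with each unit-norm atom
        corrs = [sum(a[i] * residual[i] for i in range(len(residual)))
                 for a in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(corrs[j]))
        support.append(k)
        residual = [residual[i] - corrs[k] * atoms[k][i]
                    for i in range(len(residual))]
    return support

# orthonormal (identity) dictionary: trivially recovers the sparse support
atoms = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
signal = [0.0, 3.0, 0.0, -1.5]  # 2-sparse signal
print(sorted(matching_pursuit(atoms, signal, 2)))  # [1, 3]
```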
Predicting successful tactile mapping of virtual objects.
Brayda, Luca; Campus, Claudio; Gori, Monica
2013-01-01
Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.
Predicting survival across chronic interstitial lung disease: the ILD-GAP model.
Ryerson, Christopher J; Vittinghoff, Eric; Ley, Brett; Lee, Joyce S; Mooney, Joshua J; Jones, Kirk D; Elicker, Brett M; Wolters, Paul J; Koth, Laura L; King, Talmadge E; Collard, Harold R
2014-04-01
Risk prediction is challenging in chronic interstitial lung disease (ILD) because of heterogeneity in disease-specific and patient-specific variables. Our objective was to determine whether mortality is accurately predicted in patients with chronic ILD using the GAP model, a clinical prediction model based on sex, age, and lung physiology, that was previously validated in patients with idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis (n=307), chronic hypersensitivity pneumonitis (n=206), connective tissue disease-associated ILD (n=281), idiopathic nonspecific interstitial pneumonia (n=45), or unclassifiable ILD (n=173) were selected from an ongoing database (N=1,012). Performance of the previously validated GAP model was compared with novel prediction models in each ILD subtype and the combined cohort. Patients with follow-up pulmonary function data were used for longitudinal model validation. The GAP model had good performance in all ILD subtypes (c-index, 74.6 in the combined cohort), which was maintained at all stages of disease severity and during follow-up evaluation. The GAP model had similar performance compared with alternative prediction models. A modified ILD-GAP Index was developed for application across all ILD subtypes to provide disease-specific survival estimates using a single risk prediction model. This was done by adding a disease subtype variable that accounted for better adjusted survival in connective tissue disease-associated ILD, chronic hypersensitivity pneumonitis, and idiopathic nonspecific interstitial pneumonia. The GAP model accurately predicts risk of death in chronic ILD. The ILD-GAP model accurately predicts mortality in major chronic ILD subtypes and at all stages of disease.
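The c-index reported above measures pairwise concordance between predicted risk and observed survival. A minimal sketch for uncensored times (real survival data requires handling censoring, which this toy version omits):

```python
# Concordance index (c-index) for uncensored survival times: the fraction of
# usable pairs where the patient with higher predicted risk dies earlier.
# Risk scores and times below are invented for illustration.
def c_index(risk, time):
    concordant = usable = 0
    n = len(risk)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            usable += 1
            shorter, longer = (i, j) if time[i] < time[j] else (j, i)
            if risk[shorter] > risk[longer]:
                concordant += 1
            elif risk[shorter] == risk[longer]:
                concordant += 0.5  # ties in predicted risk count half
    return concordant / usable

print(c_index([3, 2, 1], [12, 24, 60]))  # 1.0: risk perfectly ordered
```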
Szender, J Brian; Frederick, Peter J; Eng, Kevin H; Akers, Stacey N; Lele, Shashikant B; Odunsi, Kunle
2015-03-01
The National Surgical Quality Improvement Program is aimed at preventing perioperative complications. An online calculator was recently published, but the primary studies used limited gynecologic surgery data. The purpose of this study was to evaluate the performance of the National Surgical Quality Improvement Program Universal Surgical Risk Calculator (URC) on the patients of a gynecologic oncology service. We reviewed 628 consecutive surgeries performed by our gynecologic oncology service between July 2012 and June 2013. Demographic data including diagnosis and cancer stage, if applicable, were collected. Charts were reviewed to determine complication rates. Specific complications were as follows: death, pneumonia, cardiac complications, surgical site infection (SSI) or urinary tract infection, renal failure, or venous thromboembolic event. Data were compared with modeled outcomes using Brier scores and receiver operating characteristic curves. Significance was declared based on P < 0.05. The model accurately predicted death and venous thromboembolic event, with Brier scores of 0.004 and 0.003, respectively. Predicted risk was 50% greater than experienced for urinary tract infection; the experienced SSI and pneumonia rates were 43% and 36% greater than predicted. For any complication, a Brier score of 0.023 indicates poor performance of the model. In this study of gynecologic surgeries, we could not verify the predictive value of the URC for cardiac complications, SSI, and pneumonia. One disadvantage of applying a URC to multiple subspecialties is that with some categories, complications are not accurately estimated. Our data demonstrate that some predicted risks reported by the calculator need to be interpreted with reservation.
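The Brier score used here is the mean squared difference between predicted probability and observed 0/1 outcome; values near zero indicate well-calibrated predictions. A sketch with invented numbers:

```python
# Brier score: mean squared error between predicted probabilities and
# observed binary outcomes. Lower is better. Numbers are illustrative only.
def brier(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)

preds = [0.1, 0.2, 0.8, 0.9]   # hypothetical predicted complication risks
outcomes = [0, 0, 1, 1]        # observed outcomes
print(round(brier(preds, outcomes), 4))  # 0.025
```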
Development and Validation of the Appearance and Performance Enhancing Drug Use Schedule
Langenbucher, James W.; Lai, Justine Karmin; Loeb, Katharine L.; Hollander, Eric
2011-01-01
Appearance-and-performance enhancing drug (APED) use is a form of drug use that includes use of a wide range of substances such as anabolic-androgenic steroids (AASs) and associated behaviors including intense exercise and dietary control. To date, there are no reliable or valid measures of the core features of APED use. The present study describes the development and psychometric evaluation of the Appearance and Performance Enhancing Drug Use Schedule (APEDUS), a semi-structured interview designed to assess the spectrum of drug use and related features of APED use. Eighty-five current APED using men and women (having used an illicit APED in the past year and planning to use an illicit APED in the future) completed the APEDUS and measures of convergent and divergent validity. Inter-rater agreement, scale reliability, one-week test-retest reliability, convergent and divergent validity, and construct validity were evaluated for each of the APEDUS scales. The APEDUS is a modular interview with 10 sections designed to assess the core drug and non-drug phenomena associated with APED use. All scales and individual items demonstrated high inter-rater agreement and reliability. Individual scales significantly correlated with convergent measures (DSM-IV diagnoses, aggression, impulsivity, eating disorder pathology) and were uncorrelated with a measure of social desirability. APEDUS subscale scores were also accurate measures of AAS dependence. The APEDUS is a reliable and valid measure of APED phenomena and an accurate measure of the core pathology associated with APED use. Issues with assessing APED use are considered and directions for future research are discussed. PMID:21640487
Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe
2016-01-01
Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state of the art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data with a mean Se of 96.6%, Sp of 98.8% and BER 2.2% compared to a mean Se of 94.7%, Sp of 96.5% and BER 4.4% for OHCA data. OHCA data required two times more features than the data from public databases for an accurate detection (6 vs 3). No significant differences in performance were found for different segment lengths, the BER differences were below 0.5-points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4-s. PMID:27441719
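The three performance measures can be recovered from a confusion matrix; the counts below are chosen to reproduce the OHCA rates quoted above and are otherwise illustrative:

```python
# Sensitivity, specificity, and balanced error rate (BER) for a VF detector,
# computed from confusion-matrix counts. Counts are illustrative, chosen to
# match the OHCA rates reported in the abstract.
def se_sp_ber(tp, fn, tn, fp):
    se = tp / (tp + fn)       # VF segments correctly detected
    sp = tn / (tn + fp)       # non-VF segments correctly rejected
    ber = 1 - (se + sp) / 2   # balanced error rate
    return se, sp, ber

se, sp, ber = se_sp_ber(tp=947, fn=53, tn=965, fp=35)
print(round(se * 100, 1), round(sp * 100, 1), round(ber * 100, 1))  # 94.7 96.5 4.4
```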
Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F
2018-03-01
When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could accurately audit radiologist compliance with the use of standardized reports compared with performed manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2% with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that by use of natural language processing and machine learning algorithms, an automated analysis can accurately define whether reports are compliant with use of standardized report templates and language, compared with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
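The comparison logic can be sketched with a normal-approximation binomial confidence interval on the manually audited proportion; the audit counts below are invented to illustrate the method, not the study's tallies:

```python
# Normal-approximation confidence interval for a manually audited compliance
# proportion, used to check whether the automated rate falls inside it.
# Counts are illustrative assumptions.
import math

def proportion_ci(successes, n, z=1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = proportion_ci(958, 1050)   # e.g. 42 faculty x 25 audited reports
automated_rate = 0.920              # rate from the automated NLP audit
print(lo < automated_rate < hi)     # True: automated audit agrees with manual CI
```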
Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations
NASA Technical Reports Server (NTRS)
Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick
2017-01-01
Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.
Shokouhian, M; Morling, R C S; Kale, I
2012-01-01
The pulse oximeter is a well-known device for measuring the level of oxygen in blood. Since their invention, pulse oximeters have been under constant development in both hardware and software; however, there are still unsolved problems that limit their performance [6], [7]. Many new algorithms and design techniques are suggested every year by industry and academic researchers, claiming to improve measurement accuracy [8], [9]. In the absence of an accurate computer-based behavioural model for pulse oximeters, the only way to evaluate these newly developed systems and algorithms is through hardware implementation, which can be both expensive and time consuming. This paper presents an accurate Simulink-based behavioural model for a pulse oximeter that can be used by industry and academia alike working in this area, as an exploration as well as productivity enhancement tool during their research and development process. The aim of this paper is to introduce a new computer-based behavioural model which provides a simulation environment from which new ideas can be rapidly evaluated long before the real implementation.
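At the core of most behavioural pulse-oximeter models is the "ratio of ratios" computed from the red and infrared photoplethysmogram components; the linear calibration below is a commonly quoted textbook approximation, not the paper's Simulink model:

```python
# "Ratio of ratios" SpO2 estimate: R is formed from the AC/DC components of
# the red and infrared photoplethysmograms, then mapped to SpO2 through an
# empirical linear calibration (SpO2 ~ 110 - 25R, a textbook approximation).
def spo2_from_ratios(ac_red, dc_red, ac_ir, dc_ir):
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

print(spo2_from_ratios(0.01, 1.0, 0.02, 1.0))  # 97.5 (R = 0.5)
```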
Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.
Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot
2007-01-04
The proliferation of counterfeit and poor-quality drugs is a major public health problem; especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs; artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.
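Scoring a field assay against HPLC as the reference method reduces to a 2x2 agreement table from which sensitivity, specificity, and accuracy are read off; the sample verdicts below are invented for illustration:

```python
# Field-assay evaluation against an HPLC reference: each sample is classed
# pass/fail by both methods. Labels below are invented for illustration.
def assay_metrics(field, hplc):
    tp = sum(f and h for f, h in zip(field, hplc))          # both flag bad drug
    tn = sum(not f and not h for f, h in zip(field, hplc))  # both pass the drug
    fp = sum(f and not h for f, h in zip(field, hplc))
    fn = sum(not f and h for f, h in zip(field, hplc))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(field)

hplc = [1, 1, 1, 0, 0, 0, 0, 0]    # reference verdicts: 1 = substandard
field = [1, 1, 0, 0, 0, 0, 0, 0]   # refractometry/colorimetry verdicts
se, sp, acc = assay_metrics(field, hplc)
print(se, sp, acc)
```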
Comparison of segmentation algorithms for fluorescence microscopy images of cells.
Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L
2011-07-01
The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
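The comparison between threshold-based segmentation and k-means clustering can be illustrated on a synthetic image. The following sketch is not the study's pipeline: it uses a minimal 1-D k-means on pixel intensities (avoiding external dependencies) and a synthetic "cell" whose geometry and noise levels are assumptions made for the example.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means on pixel intensities (illustrative stand-in
    for a full clustering library)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic image: dim noisy background with one bright circular "cell"
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, (64, 64))
yy, xx = np.mgrid[:64, :64]
cell = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img[cell] = rng.normal(0.8, 0.05, cell.sum())

labels, centers = kmeans_1d(img.ravel(), k=2)
mask = (labels == np.argmax(centers)).reshape(img.shape)  # brightest cluster = cell
```

On real micrographs the edge sharpness varies with imaging conditions, which is exactly where, per the study, threshold-based methods lose accuracy relative to multi-cluster k-means.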
Lymph node micrometastasis in gastrointestinal tract cancer--a clinical aspect.
Natsugoe, Shoji; Arigami, Takaaki; Uenosono, Yoshikazu; Yanagita, Shigehiro; Nakajo, Akihiro; Matsumoto, Masataka; Okumura, Hiroshi; Kijima, Yuko; Sakoda, Masahiko; Mataki, Yuko; Uchikado, Yasuto; Mori, Shinichiro; Maemura, Kosei; Ishigami, Sumiya
2013-10-01
Lymph node micrometastasis (LNM) can now be detected thanks to the development of biological methods such as immunohistochemistry (IHC) and reverse transcription-polymerase chain reaction (RT-PCR). Although several reports have examined LNM in various carcinomas, including gastrointestinal (GI) cancer, the clinical significance of LNM remains controversial. Clinically, the presence of LNM is particularly important in patients without nodal metastasis on routine histological examination (pN0), because patients classified as pN0 who nevertheless harbor LNM already have metastatic potential. However, at present, several technical obstacles impede the detection of LNM by methods such as IHC or RT-PCR. Accurate evaluation should be carried out using the same antibody or primer and the same technique in a large number of patients. The clinical importance of the difference between LNM and isolated tumor cells (≤0.2 mm in diameter) will also be gradually clarified. It is important that the results of basic studies on LNM are prospectively introduced into the clinical field. Rapid diagnosis of LNM using IHC and RT-PCR during surgery would be clinically useful. Currently, minimally invasive treatments such as endoscopic submucosal dissection and laparoscopic surgery with individualized lymphadenectomy are increasingly being performed. Accurate diagnosis of LNM would clarify issues of curability and safety when performing such treatments. In the near future, individualized lymphadenectomy will be developed based on rapid, accurate diagnosis of LNM.
Yılmaz, Dilek; Sönmez, Ferah; Karakaş, Sacide; Yavaşcan, Önder; Aksu, Nejat; Ömürlü, İmran Kurt; Yenisey, Çiğdem
2016-06-01
Malnutrition is one of the major causes of morbidity and mortality in children with chronic kidney disease (CKD). The objective of this study was to evaluate the nutritional status of children with stage 3-4 CKD and those treated by peritoneal dialysis or hemodialysis, using anthropometric measurements, biochemical parameters and bioelectrical impedance analysis. The study included a total of 52 patients and 46 healthy children. In the anthropometric evaluation, the children with CKD had lower standard deviation scores for weight, height, body mass index, skinfold thickness and mid-arm circumference than healthy children (p < 0.05). The fat mass (%) and body cell mass (%) measured by bioelectrical impedance analysis were also lower than in the control group (p < 0.05). These findings suggest that bioelectrical impedance analysis should be used alongside anthropometric measurements, which are easy to perform, to achieve a more accurate nutritional evaluation in children. © The Author 2016. Published by Oxford University Press. All rights reserved.
OMPS Limb Profiler Instrument Performance Assessment
NASA Technical Reports Server (NTRS)
Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas
2014-01-01
Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray light supports software corrections that are accurate to within 1% in radiances up to 60 km for the wavelengths used in deriving ozone. Residual stray light errors at 1000 nm, which is useful in retrievals of stratospheric aerosols, currently exceed 10%. Height registration errors in the range of 1 km to 2 km have been observed that cannot be fully explained by known error sources. An unexpected thermal sensitivity of the sensor also causes wavelengths and pointing to shift each orbit in the northern hemisphere. Spectral shifts of as much as 0.5 nm in the ultraviolet and 5 nm in the visible, and up to 0.3 km shifts in registered height, must be corrected in ground processing.
Turbine Performance Optimization Task Status
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Turner, James E. (Technical Monitor)
2001-01-01
The capability to optimize turbine performance and accurately predict unsteady loads will allow for increased reliability, specific impulse (Isp), and thrust-to-weight ratio. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.
Development of a practical costing method for hospitals.
Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei
2006-03-01
To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. Among traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues or the number of patients). However, this method often produces rough and inaccurate results. The activity-based costing (ABC) method introduced in the mid-1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similar in accuracy to those of the ABC method (largest difference: 2.64%), while reducing the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to verify the effectiveness of this new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using few cost drivers after the second round of costing.
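The ABC allocation mechanism the abstract describes can be sketched concisely. The activities, drivers, and volumes below are hypothetical figures for illustration, not data from the study; the point is the structure: each activity's pooled cost is distributed to cost objects in proportion to their share of that activity's driver volume, and the S-ABC idea is simply to use fewer distinct drivers.

```python
# Hypothetical activity pools and their chosen cost drivers
activity_costs = {
    "specimen_reception": 12000.0,
    "analysis": 45000.0,
    "reporting": 8000.0,
}
drivers = {
    "specimen_reception": "n_specimens",
    "analysis": "machine_minutes",
    "reporting": "n_reports",
}
# Driver volumes consumed by each cost object (test type)
driver_volumes = {
    "blood_panel": {"n_specimens": 600, "machine_minutes": 3000, "n_reports": 600},
    "urinalysis":  {"n_specimens": 400, "machine_minutes": 1000, "n_reports": 400},
}

def abc_cost(test):
    """Allocate each activity's pooled cost to `test` in proportion to
    the test's share of that activity's cost-driver volume."""
    total = 0.0
    for activity, cost in activity_costs.items():
        d = drivers[activity]
        volume_all = sum(v[d] for v in driver_volumes.values())
        total += cost * driver_volumes[test][d] / volume_all
    return total
```

Summing `abc_cost` over all cost objects recovers the pooled total exactly, which is a useful sanity check; an S-ABC variant would merge drivers (for example, using one volume driver for both reception and reporting) and accept a small allocation error in exchange for cheaper data collection.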