Improving Student Performance through Computer-Based Assessment: Insights from Recent Research.
ERIC Educational Resources Information Center
Ricketts, C.; Wilks, S. J.
2002-01-01
Compared student performance on computer-based assessment to machine-graded multiple choice tests. Found that performance improved dramatically on the computer-based assessment when students were not required to scroll through the question paper. Concluded that students may be disadvantaged by the introduction of online assessment unless care is…
ERIC Educational Resources Information Center
Rosenquist, Anders; Shavelson, Richard J.; Ruiz-Primo, Maria Araceli
Inconsistencies in scores from computer-simulated and "hands-on" science performance assessments have led to questions about the exchangeability of these two methods in spite of the highly touted potential of computer-simulated performance assessment. This investigation considered possible explanations for students' inconsistent performances: (1)…
NASA Astrophysics Data System (ADS)
Bergey, Bradley W.; Ketelhut, Diane Jass; Liang, Senfeng; Natarajan, Uma; Karakus, Melissa
2015-10-01
The primary aim of the study was to examine whether performance on a science assessment in an immersive virtual environment was associated with changes in scientific inquiry self-efficacy. A secondary aim of the study was to examine whether performance on the science assessment was equitable for students with different levels of computer game self-efficacy, including whether gender differences were observed. We examined 407 middle school students' scientific inquiry self-efficacy and computer game self-efficacy before and after completing a computer game-like assessment about a science mystery. Results from path analyses indicated that prior scientific inquiry self-efficacy predicted achievement on end-of-module questions, which in turn predicted change in scientific inquiry self-efficacy. By contrast, computer game self-efficacy was neither predictive of nor predicted by performance on the science assessment. While boys had higher computer game self-efficacy compared to girls, multi-group analyses suggested only minor gender differences in how efficacy beliefs related to performance. Implications for assessments with virtual environments and future design and research are discussed.
ERIC Educational Resources Information Center
White, Sheida; Kim, Young Yee; Chen, Jing; Liu, Fei
2015-01-01
This study examined whether or not fourth-graders could fully demonstrate their writing skills on the computer and factors associated with their performance on the National Assessment of Educational Progress (NAEP) computer-based writing assessment. The results suggest that high-performing fourth-graders (those who scored in the upper 20 percent…
Embedded assessment algorithms within home-based cognitive computer game exercises for elders.
Jimison, Holly; Pavel, Misha
2006-01-01
With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and present cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
USDA-ARS?s Scientific Manuscript database
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
NASA Astrophysics Data System (ADS)
Ahn, Sul-Ah; Jung, Youngim
2016-10-01
The research activities of computational physicists utilizing high-performance computing are analyzed using bibliometric approaches. This study aims to provide computational physicists utilizing high-performance computing, and policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
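To make the bibliometric method above concrete, here is a minimal Python sketch (not the authors' code) that builds a co-authorship network with networkx and ranks authors by publication count; the paper records and author names are invented for illustration.

```python
# Minimal sketch, assuming networkx is available: rank authors by paper count
# and build a weighted co-authorship graph. Records below are invented.
from collections import Counter
from itertools import combinations

import networkx as nx

papers = [
    ["Kim, J.", "Lee, S.", "Park, H."],
    ["Kim, J.", "Park, H."],
    ["Lee, S.", "Chen, W."],
]

# Rank authors by number of papers (the criterion used to select top authors).
paper_counts = Counter(a for authors in papers for a in authors)
top_authors = [a for a, _ in paper_counts.most_common(2)]

# Co-authorship graph: nodes are authors, edge weights count joint papers.
G = nx.Graph()
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        w = G.get_edge_data(a, b, {}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)

print(top_authors)
print(list(G.edges(data=True)))
```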
Descriptive and Criterion-Referenced Self-Assessment with L2 Readers
ERIC Educational Resources Information Center
Brantmeier, Cindy; Vanderplank, Robert
2008-01-01
Brantmeier [Brantmeier, C., 2006. "Advanced L2 learners and reading placement: self-assessment, computer-based testing, and subsequent performance." System 34(1), 15-35] found that self-assessment (SA) of second language (L2) reading ability is not an accurate predictor for computer-based testing or subsequent classroom performance. With 359…
ERIC Educational Resources Information Center
Bergey, Bradley W.; Ketelhut, Diane Jass; Liang, Senfeng; Natarajan, Uma; Karakus, Melissa
2015-01-01
The primary aim of the study was to examine whether performance on a science assessment in an immersive virtual environment was associated with changes in scientific inquiry self-efficacy. A secondary aim of the study was to examine whether performance on the science assessment was equitable for students with different levels of computer game…
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
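The discrimination and calibration assessment described above is commonly summarized in the forensic likelihood-ratio literature with the log-likelihood-ratio cost (Cllr). The Python sketch below illustrates that metric on invented LR values; it is a generic illustration, not the authors' implementation.

```python
# Illustrative sketch: log-likelihood-ratio cost (Cllr), a standard summary of
# discrimination and calibration for LR methods. LR values are invented.
import numpy as np

def cllr(lr_same_source, lr_diff_source):
    """Cllr = 0.5 * (mean log2(1 + 1/LR_ss) + mean log2(1 + LR_ds))."""
    lr_ss = np.asarray(lr_same_source, dtype=float)
    lr_ds = np.asarray(lr_diff_source, dtype=float)
    return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_ss))
                  + np.mean(np.log2(1.0 + lr_ds)))

# Well-calibrated LRs: large for same-source pairs, small for different-source pairs.
print(cllr([50.0, 200.0, 8.0], [0.02, 0.5, 0.001]))  # well below 1.0
```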
Technology in Note Taking and Assessment: The Effects of Congruence on Student Performance
ERIC Educational Resources Information Center
Barrett, Matthew E.; Swan, Alexander B.; Mamikonian, Ani; Ghajoyan, Inna; Kramarova, Olga; Youmans, Robert J.
2014-01-01
This study examined the encoding specificity principle in relation to traditional and computer-based note taking and assessment formats in higher education. Students (N = 79) took lecture notes either by hand (n = 40) or by computer (n = 39) and then completed either a computer or a paper-based assessment. When note taking and assessment formats…
ERIC Educational Resources Information Center
Escudier, M. P.; Newton, T. J.; Cox, M. J.; Reynolds, P. A.; Odell, E. W.
2011-01-01
This study compared the performance of higher education dental undergraduates in online assessments with their performance in traditional paper-based tests, investigated students' perceptions of the fairness and acceptability of online tests, and found performance to be comparable. The project design involved two parallel cross-over trials, one in…
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
High Performance Computing and Networking for Science--Background Paper.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Office of Technology Assessment.
The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background: Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design: The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Elders (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits, and describe performance measures (completeness, intra-class correlations). Results: Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion: Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
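Two of the quantities reported above, the intra-class correlation across clinics and the variance explained by the first principal component, can be illustrated with the generic Python sketch below; the data are simulated and the one-way ICC formula is a simplification, not the trial's actual analysis.

```python
# Generic sketch with simulated data: one-way ICC(1) across clinics and the
# variance explained by the first principal component of a test battery.
import numpy as np

rng = np.random.default_rng(0)

# Invented scores: 5 clinics x 20 participants (balanced, for simplicity).
scores = rng.normal(loc=50, scale=10, size=(5, 20))

# One-way ICC(1) from a one-way ANOVA decomposition.
k = scores.shape[1]
grand_mean = scores.mean()
ms_between = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2) / (scores.shape[0] - 1)
ms_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (scores.size - scores.shape[0])
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Variance explained by the first principal component of a small test battery.
battery = rng.normal(size=(200, 6))
battery -= battery.mean(axis=0)
_, s, _ = np.linalg.svd(battery, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)

print(round(icc1, 3), round(explained, 3))
```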
A Computer-Assisted Test Design and Diagnosis System for Use by Classroom Teachers
ERIC Educational Resources Information Center
He, Q.; Tymms, P.
2005-01-01
Computer-assisted assessment (CAA) has become increasingly important in education in recent years. A variety of computer software systems have been developed to help assess the performance of students at various levels. However, such systems are primarily designed to provide objective assessment of students and analysis of test items, and focus…
ERIC Educational Resources Information Center
Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi
2010-01-01
Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…
Examining the Effect of Computer-Based Passage Presentation on Reading Test Performance
ERIC Educational Resources Information Center
Higgins, Jennifer; Russell, Michael; Hoffmann, Thomas
2005-01-01
To examine the impact of transitioning 4th grade reading comprehension assessments to the computer, 219 fourth graders were randomly assigned to take a one-hour reading comprehension assessment on paper, on a computer using scrolling text to navigate through passages, or on a computer using paging text to navigate through passages. This study…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare Western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computation; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA). The computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; this suite of computer codes performs many functions.
ERIC Educational Resources Information Center
Rosen, Yigal
2015-01-01
How can activities in which collaborative skills of an individual are measured be standardized? In order to understand how students perform on collaborative problem solving (CPS) computer-based assessment, it is necessary to examine empirically the multi-faceted performance that may be distributed across collaboration methods. The aim of this…
Toward Automated Computer-Based Visualization and Assessment of Team-Based Performance
ERIC Educational Resources Information Center
Ifenthaler, Dirk
2014-01-01
A considerable amount of research has been undertaken to provide insights into the valid assessment of team performance. However, in many settings, manual and therefore labor-intensive assessment instruments for team performance have limitations. Therefore, automated assessment instruments enable more flexible and detailed insights into the…
Computer program for assessing the theoretical performance of a three dimensional inlet
NASA Technical Reports Server (NTRS)
Agnone, A. M.; Kung, F.
1972-01-01
A computer program for determining the theoretical performance of a three dimensional inlet is presented. An analysis for determining the capture area, ram force, spillage force, and surface pressure force is presented, along with the necessary computer program. A sample calculation is also included.
Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.
The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…
ERIC Educational Resources Information Center
Rolka, Christine; Remshagen, Anja
2015-01-01
Contextualized learning is considered beneficial for student success. In this article, we assess the impact of context-based learning tools on student grade performance in an introductory computer science course. In particular, we investigate two central questions: (1) does the use of context-based learning tools, robots and animations, affect…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Heroux, Michael A.; Barrett, Richard F.
2015-07-30
The performance of a large-scale, production-quality science and engineering application (‘app’) is often dominated by a small subset of the code. Even within that subset, computational and data access patterns are often repeated, so that an even smaller portion can represent the performance-impacting features. If application developers, parallel computing experts, and computer architects can together identify this representative subset and then develop a small mini-application (‘miniapp’) that can capture these primary performance characteristics, then this miniapp can be used to both improve the performance of the app as well as provide a tool for co-design for the high-performance computing community. However, a critical question is whether a miniapp can effectively capture key performance behavior of an app. This study provides a comparison of an implicit finite element semiconductor device modeling app on unstructured meshes with an implicit finite element miniapp on unstructured meshes. The goal is to assess whether the miniapp is predictive of the performance of the app. Finally, single compute node performance will be compared, as well as scaling up to 16,000 cores. Results indicate that the miniapp can be reasonably predictive of the performance characteristics of the app for a single iteration of the solver on a single compute node.
ERIC Educational Resources Information Center
Worrell, Jamie; Duffy, Mary Lou; Brady, Michael P.; Dukes, Charles; Gonzalez-DeHass, Alyssa
2016-01-01
Many schools use computer-based testing to measure students' progress for end-of-the-year and statewide assessments. There is little research to support whether computer-based testing accurately reflects student progress, particularly among students with learning, performance, and generalization difficulties. This article summarizes an…
Comparability of Computer-Based and Paper-Based Science Assessments
ERIC Educational Resources Information Center
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E.
2018-01-01
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Computer System Performance Measurement Techniques for ARTS III Computer Systems
DOT National Transportation Integrated Search
1973-12-01
The potential contribution of direct system measurement in the evolving ARTS 3 Program is discussed and software performance measurement techniques are comparatively assessed in terms of credibility of results, ease of implementation, volume of data,...
ERIC Educational Resources Information Center
Hsu, Ting-Chia
2016-01-01
In this study, a peer assessment system using the grid-based knowledge classification approach was developed to improve students' performance during computer skills training. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a computer skills certification course. The participants were divided into three…
ERIC Educational Resources Information Center
Hawkes, Mark; Hategekimana, Claver
2010-01-01
This study focuses on the impact of wireless, mobile computing tools on student assessment outcomes. In a campus-wide wireless, mobile computing environment at an upper Midwest university, an empirical analysis is applied to understand the relationship between student performance and Tablet PC use. An experimental/control group comparison of…
Evaluation of computer-aided detection and diagnosis systems.
Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping
2013-08-01
Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in clinical practice.
Evaluation of computer-aided detection and diagnosis systems
Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G.; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T.; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M.; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping
2013-01-01
Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and “best practices” for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in clinical practice. PMID:23927365
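Standalone CAD performance assessment of the kind discussed above typically rests on ROC-style analysis of detection scores. The sketch below is a minimal, generic illustration using scikit-learn with invented scores and labels; it does not implement the CADSC recommendations.

```python
# Minimal, generic sketch of standalone detection performance: ROC AUC and
# sensitivity at a fixed false-positive rate. Scores and labels are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])   # 1 = abnormality present
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.35, 0.7, 0.1, 0.5, 0.95])

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)
sens_at_10pct_fpr = np.interp(0.10, fpr, tpr)  # sensitivity at 10% false-positive rate

print(round(auc, 3), round(float(sens_at_10pct_fpr), 3))
```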
The Influence of Test Mode and Visuospatial Ability on Mathematics Assessment Performance
ERIC Educational Resources Information Center
Logan, Tracy
2015-01-01
Mathematics assessment and testing are increasingly situated within digital environments, with international tests moving to computer-based testing in the near future. This paper reports on a secondary data analysis which explored the influence of the mode of assessment--computer-based (CBT) and pencil-and-paper based (PPT)--and visuospatial ability…
Biomechanical effects of mobile computer location in a vehicle cab.
Saginus, Kyle A; Marklin, Richard W; Seeley, Patricia; Simoneau, Guy G; Freier, Stephen
2011-10-01
The objective of this research is to determine the best location to place a conventional mobile computer supported by a commercially available mount in a light truck cab. U.S. and Canadian electric utility companies are in the process of integrating mobile computers into their fleet vehicle cabs. There are no publications on the effect of mobile computer location in a vehicle cab on biomechanical loading, performance, and subjective assessment. The authors tested four locations of mobile computers in a light truck cab in a laboratory study to determine how location affected muscle activity of the lower back and shoulders; joint angles of the shoulders, elbows, and wrist; user performance; and subjective assessment. A total of 22 participants were tested in this study. Placing the mobile computer closer to the steering wheel reduced low back and shoulder muscle activity. Joint angles of the shoulders, elbows, and wrists were also closer to neutral angle. Biomechanical modeling revealed substantially less spinal compression and trunk muscle force. In general, there were no practical differences in performance between the locations. Subjective assessment indicated that users preferred the mobile computer to be as close as possible to the steering wheel. Locating the mobile computer close to the steering wheel reduces risk of injuries, such as low back pain and shoulder tendonitis. Results from the study can guide electric utility companies in the installation of mobile computers into vehicle cabs. Results may also be generalized to other industries that use trucklike vehicles, such as construction.
Human behavior and human performance: Psychomotor demands
NASA Technical Reports Server (NTRS)
1992-01-01
The results of several experiments are presented in abstract form. These studies are critical for the interpretation and acceptance of flight-based science to be conducted by the Behavior and Performance project. Some representative titles are as follows: External audio for IBM/PC compatible computers; A comparative assessment of psychomotor performance (target prediction by humans and macaques); Response path (a dependent measure for computer maze solving and other tasks); Behavioral asymmetries of psychomotor performance in Rhesus monkey (a dissociation between hand preference and skill); Testing primates with joystick based automated apparatus; and Environmental enrichment and performance assessment for ground or flight based research with primates.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Role of HPC in Advancing Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2004-01-01
On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.
Mitra, Nilesh Kumar; Barua, Ankur
2015-03-03
The impact of web-based formative assessment practices on the performance of undergraduate medical students in summative assessments is not widely studied. This study was conducted among third-year undergraduate medical students of a designated university in Malaysia to compare the effect, on performance in summative assessment, of repeated computer-based formative assessment with automated feedback with that of a single paper-based formative assessment with face-to-face feedback. This quasi-randomized trial was conducted among two groups of undergraduate medical students who were selected by a stratified random technique from a cohort undertaking the Musculoskeletal module. The control group C (n = 102) was subjected to a paper-based formative MCQ test. The experimental group E (n = 65) was provided three online formative MCQ tests with automated feedback. The summative MCQ test scores for both groups were collected after the completion of the module. In this study, no significant difference was observed between the mean summative scores of the two groups. However, Band 1 students from group E with higher entry qualifications showed a higher mean score in the summative assessment. A trivial but significant positive correlation (r² = +0.328) was observed between the online formative test scores and summative assessment scores of group E. The proportionate increase in performance in group E was found to be almost double that of group C. The use of computer-based formative tests with automated feedback improved the performance of students with a better academic background in the summative assessment. Computer-based formative tests can be explored as an optional addition to the curriculum of a pre-clinical integrated medical program to improve the performance of students with higher academic ability.
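The reported association between formative and summative scores is a simple bivariate correlation; the sketch below shows how such an r² would be computed in Python, using invented score data rather than the study's data.

```python
# Sketch of the kind of correlation reported above (r^2 between online
# formative and summative scores), computed on invented data.
import numpy as np
from scipy import stats

formative = np.array([55, 62, 70, 48, 81, 66, 59, 74, 90, 52], dtype=float)
summative = np.array([60, 58, 72, 50, 78, 70, 61, 69, 85, 57], dtype=float)

r, p_value = stats.pearsonr(formative, summative)
print(round(r ** 2, 3), round(p_value, 4))  # r^2 comparable in spirit to the reported value
```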
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
...] Guidances for Industry and Food and Drug Administration Staff: Computer-Assisted Detection Devices Applied... Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to... guidance, entitled ``Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device...
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.
2018-01-01
In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.
Keeping Student Performance Central: The New York Assessment Collection. Studies on Exhibitions.
ERIC Educational Resources Information Center
Allen, David; McDonald, Joseph
This report describes a computer tool used by the state of New York to assess student performance in elementary and secondary grades. Based on the premise that every assessment is a system of interacting elements, the tool examines students on six dimensions: vision, prompt, coaching context, performance, standards, and reflection. Vision, which…
Nair, Sankaran N; Czaja, Sara J; Sharit, Joseph
2007-06-01
This article explores the role of age, cognitive abilities, prior experience, and knowledge in skill acquisition for a computer-based simulated customer service task. Fifty-two participants aged 50-80 performed the task over 4 consecutive days following training. They also completed a battery that assessed prior computer experience and cognitive abilities. The data indicated that overall quality and efficiency of performance improved with practice. The predictors of initial level of performance and rate of change in performance varied according to the performance parameter assessed. Age and fluid intelligence predicted initial level and rate of improvement in overall quality, whereas crystallized intelligence and age predicted initial e-mail processing time, and crystallized intelligence predicted rate of change in e-mail processing time over days. We discuss the implications of these findings for the design of intervention strategies.
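The predictor analysis described above is essentially a multiple regression of a performance measure on age and ability variables; the sketch below is a generic illustration with simulated data, not the authors' analysis.

```python
# Generic sketch: regress a performance measure on age, fluid, and
# crystallized ability predictors. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 52
age = rng.uniform(50, 80, n)
fluid = rng.normal(100, 15, n)
crystallized = rng.normal(100, 15, n)
quality = 40 - 0.3 * age + 0.2 * fluid + rng.normal(0, 5, n)  # simulated outcome

X = sm.add_constant(np.column_stack([age, fluid, crystallized]))
model = sm.OLS(quality, X).fit()
print(np.round(model.params, 3), round(float(model.rsquared), 3))
```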
ERIC Educational Resources Information Center
Deutsch, William
1992-01-01
Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programed instruction, the systems approach, needs assessment, branching versus linear program formats, programing languages, and computer-assisted instruction. (LRW)
Devitt, P; Cehic, D; Palmer, E
1998-06-01
Student teaching of surgery has been devolved from the university in an effort to increase and broaden undergraduate clinical experience. In order to ensure uniformity of learning we have defined learning objectives and provided a computer-based package to supplement clinical teaching. A study was undertaken to evaluate the place of computer-based learning in a clinical environment. Twelve modules were provided for study during a 6-week attachment. These covered clinical problems related to cardiology, neurosurgery and gastrointestinal haemorrhage. Eighty-four fourth-year students undertook a pre- and post-test assessment on these three topics as well as acute abdominal pain. No extra learning material on the latter topic was provided during the attachment. While all students showed significant improvement in performance in the post-test assessment, those who had access to the computer material performed significantly better than did the controls. Within the topics, students in both groups performed equally well on the post-test assessment of acute abdominal pain but the control group's performance was significantly lacking on the topic of gastrointestinal haemorrhage, suggesting that the bulk of learning on this subject came from the computer material and little from the clinical attachment. This type of learning resource can be used to supplement the student's clinical experience and at the same time monitor what they learn during clinical clerkships and identify areas of weakness.
Trends in computer applications in science assessment
NASA Astrophysics Data System (ADS)
Kumar, David D.; Helgeson, Stanley L.
1995-03-01
Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than selecting a response from a multiple choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.
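Adaptive testing, as summarized above, repeatedly re-estimates a student's ability and selects the next item to match it. The sketch below illustrates the idea with a minimal Rasch (one-parameter IRT) model; the item bank, grid-search ability estimate, and difficulty-matching rule are simplifications for illustration only.

```python
# Minimal sketch of adaptive item selection under a Rasch (1-parameter IRT)
# model: estimate ability from responses so far, then administer the unused
# item whose difficulty is closest to that estimate. Items are invented.
import math

items = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.8, "q5": 1.6}  # difficulty b

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Crude maximum-likelihood estimate over a coarse grid of abilities."""
    def loglik(theta):
        return sum(math.log(p_correct(theta, b) if y else 1 - p_correct(theta, b))
                   for b, y in responses)
    grid = [x / 10 for x in range(-40, 41)]
    return max(grid, key=loglik)

def next_item(theta, administered):
    remaining = {k: b for k, b in items.items() if k not in administered}
    return min(remaining, key=lambda k: abs(remaining[k] - theta))

responses = [(items["q3"], 1), (items["q4"], 0)]   # correct on q3, wrong on q4
theta = estimate_theta(responses)
print(theta, next_item(theta, {"q3", "q4"}))
```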
Assessing cognition following petrol sniffing for Indigenous Australians.
Dingwall, Kylie M; Lewis, Matthew S; Maruff, Paul; Cairney, Sheree
2010-07-01
Chronic petrol inhalation can be associated with significant cognitive impairment. While rehabilitation programs can rely on such skills to educate clients and achieve treatment outcomes, cognitive function is rarely assessed on admission. This is particularly true for Indigenous populations where standard assessments are not appropriate. This paper describes a process for assessing cognition in Indigenous Australians. Two studies investigate firstly the demographic factors impacting on cognition for healthy Indigenous Australians and secondly the utility of the assessment process for detecting petrol sniffing related cognitive impairments. Study One assessed a naturalistic sample of healthy Indigenous Australians from the Northern Territory (N = 206; mean age = 28.03) on computerised tests of psychomotor speed, visual attention, memory, learning, spatial awareness and executive functions. Multiple regression analyses determined the unique contributions of six factors (age, education, gender, familiarity with computers, regular long term cannabis use and locality) to the variance in performance for this group. Study Two examined group differences in cognitive performance on the same tests between healthy Indigenous Australians (N = 96) and Indigenous petrol sniffers (N = 50; both age restricted to < 26 years) while controlling those factors found to impact on performance from Study One. Age, computer familiarity, and education significantly contributed to the variance in performance measures. While controlling these factors, petrol abuse was associated with poorer performance on complex tasks of psychomotor, visual attention, memory, learning, spatial awareness and executive function. This assessment process is useful for detecting substance abuse related impairments in Indigenous Australians and when using this assessment process, age and computer familiarity in particular should be controlled for.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARPs) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARPs validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARPs via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mustain, Christopher J.
The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.
Design Rationale for a Complex Performance Assessment
ERIC Educational Resources Information Center
Williamson, David M.; Bauer, Malcolm; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.; DeMark, Sarah F.
2004-01-01
In computer-based interactive environments meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a computer system must additionally be able to evoke and interpret observable evidence…
Use of microcomputers for self-assessment and continuing education in anaesthesia.
Schmulian, C; Kenny, G N; Campbell, D
1982-01-01
The suitability of computer-assisted self-assessment was evaluated as a means of quality assurance in anaesthesia. Altogether 202 anaesthetists participated in four trials of the method. Analyses of their performance in a self-assessment programme dealing with obstetric anaesthesia showed significant differences in the levels of knowledge of anaesthetists practising for different numbers of years. The acceptability of computer-assisted self-assessment ranged from 91% to 100% for the four trials. Similar programs have been shown to be of value in medical education and thus computer-assisted self-assessment appears to fulfil simultaneously the functions of self-assessment and continuing education. The criteria for a method of quality assurance are met. PMID:6800476
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
Borgonovi, Francesca
2016-04-01
Video games are a favorite leisure-time activity among teenagers worldwide. This study examines cross-national gender differences in reading achievement and video gaming, and whether video gaming explains gender differences in reading achievement and differences in performance between paper-based and computer-based reading. We use data from a representative sample of 145,953 students from 26 countries who sat the PISA 2012 assessments and provided self-reports on use of video games. Although boys tend to have poorer results in both the computer-based and the paper-based reading assessments, boys' underachievement is smaller when the assessment is delivered on computer than when it is delivered on paper. Boys' underperformance compared to girls in the two reading assessments is particularly pronounced among low-achieving students. Among both boys and girls, moderate use of single-player games is associated with a performance advantage. However, frequent engagement with collaborative online games is generally associated with a steep reduction in achievement, particularly in the paper-based test and particularly among low-achieving students. Excessive gaming may hinder academic achievement, but moderate gaming can promote positive student outcomes. In many countries video gaming explains the difference in the gender gap in reading between the paper-based and the computer-based assessments. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Guimarães, Bruno; Ribeiro, José; Cruz, Bernardo; Ferreira, André; Alves, Hélio; Cruz-Correia, Ricardo; Madeira, Maria Dulce; Ferreira, Maria Amélia
2018-01-01
The time, material, and staff-consuming nature of anatomy's traditional pen-and-paper assessment system, the increase in the number of students enrolling in medical schools and the ever-escalating workload of academic staff have made the use of computer-based assessment (CBA) an attractive proposition. To understand the impact of such shift in the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
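Dose calculations for unit radionuclide concentrations, as compared by the subteam, reduce in their simplest form to multiplying a concentration by intake, exposure, and a dose coefficient. The sketch below shows that ingestion-pathway arithmetic with illustrative placeholder values; the numbers are not drawn from GENII, PATHRAE-EPA, or the report above.

```python
# Highly simplified, generic ingestion-pathway dose sketch for a unit
# radionuclide concentration in water: dose = C * intake * exposure * DCF.
# The dose coefficient and intake values are illustrative placeholders only.
water_conc_bq_per_l = 1.0        # unit concentration in water (Bq/L)
intake_l_per_day = 2.0           # assumed drinking-water intake
exposure_days_per_year = 365.0
dcf_sv_per_bq = 2.8e-8           # illustrative ingestion dose coefficient (Sv/Bq)

annual_intake_bq = water_conc_bq_per_l * intake_l_per_day * exposure_days_per_year
annual_dose_sv = annual_intake_bq * dcf_sv_per_bq
print(f"{annual_dose_sv * 1e3:.3f} mSv per year per unit concentration")
```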
Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks
ERIC Educational Resources Information Center
Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo
2017-01-01
Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…
Virtual reality computer simulation.
Grantcharov, T P; Rosenberg, J; Pahle, E; Funch-Jensen, P
2001-03-01
Objective assessment of psychomotor skills should be an essential component of a modern surgical training program. There are computer systems that can be used for this purpose, but their wide application is not yet generally accepted. The aim of this study was to validate the role of virtual reality computer simulation as a method for evaluating surgical laparoscopic skills. The study included 14 surgical residents. On day 1, they performed two runs of all six tasks on the Minimally Invasive Surgical Trainer, Virtual Reality (MIST VR). On day 2, they performed a laparoscopic cholecystectomy on living pigs; afterward, they were tested again on the MIST VR. A group of experienced surgeons evaluated the trainees' performance on the animal operation, giving scores for total performance error and economy of motion. During the tasks on the MIST VR, errors and noneconomy of movements for the left and right hand were also recorded. There were significant correlations between error scores in vivo and three of the six in vitro tasks (p < 0.05). In vivo economy scores correlated significantly with non-economy right-hand scores for five of the six tasks and with non-economy left-hand scores for one of the six tasks (p < 0.05). In this study, laparoscopic performance in the animal model correlated significantly with performance on the computer simulator. Thus, the computer model seems to be a promising objective method for the assessment of laparoscopic psychomotor skills.
Performativity, Fabrication and Trust: Exploring Computer-Mediated Moderation
ERIC Educational Resources Information Center
Clapham, Andrew
2013-01-01
Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, and triangulated with that from other actors in the setting, in order to…
Tablet computers in assessing performance in a high stakes exam: opinion matters.
Currie, G P; Sinha, S; Thomson, F; Cleland, J; Denison, A R
2017-06-01
Background Tablet computers have emerged as a tool to capture, process and store data in examinations, yet evidence relating to their acceptability and usefulness in assessment is limited. Methods We performed an observational study to explore opinions and attitudes relating to tablet computer use in recording performance in a final year objective structured clinical examination at a single UK medical school. Examiners completed a short questionnaire encompassing background, forced-choice and open questions. Forced-choice questions were analysed using descriptive statistics and open questions by framework analysis. Results Ninety-two examiners (97% response rate) completed the questionnaire, of whom 85% had previously used tablet computers. Ninety per cent felt checklist mark allocation was 'very/quite easy', while approximately half considered recording 'free-type' comments 'easy/very easy'. Greater overall efficiency of marking and resource savings were considered the main advantages of tablet computers, while concerns were raised about technological failure and the ability to record free-type comments. Discussion In a context where examiners were familiar with tablet computers, they were preferred to paper checklists, although concerns were raised. This study adds to the limited literature underpinning the use of electronic devices as acceptable tools in objective structured clinical examinations.
Students' Perceived Usefulness of Formative Feedback for a Computer-Adaptive Test
ERIC Educational Resources Information Center
Lilley, Mariana; Barker, Trevor
2007-01-01
In this paper we report on research related to the provision of automated feedback based on a computer adaptive test (CAT), used in formative assessment. A cohort of 76 second year university undergraduates took part in a formative assessment with a CAT and were provided with automated feedback on their performance. A sample of students responded…
ERIC Educational Resources Information Center
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng
2016-01-01
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Assess/Mitigate Risk through the Use of Computer-Aided Software Engineering (CASE) Tools
NASA Technical Reports Server (NTRS)
Aguilar, Michael L.
2013-01-01
The NASA Engineering and Safety Center (NESC) was requested to perform an independent assessment of the mitigation of the Constellation Program (CxP) Risk 4421 through the use of computer-aided software engineering (CASE) tools. With the cancellation of the CxP, the assessment goals were modified to capture lessons learned and best practices in the use of CASE tools. The assessment goal was to prepare the next program for the use of these CASE tools. The outcome of the assessment is contained in this document.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2001-01-01
Three-dimensional transonic flow over a delta wing is investigated using several turbulence models. The performance of linear eddy viscosity models and an explicit algebraic stress model is assessed at the start of vortex flow, and the results compared with experimental data. To assess the effect of transition location, computations that either fix transition aft of the leading edge or are fully turbulent are performed. These computations show that grid resolution, transition location and turbulence model significantly affect the 3D flowfield.
1983-12-30
Architecture, Design, and System Performance Assessment and Development Methodology (NSWC TR 83-324). Naval Surface Weapons Center, Silver Spring, MD. (AD-A146 577)
Performance assessment of small-package-class nonintrusive inspection systems
NASA Astrophysics Data System (ADS)
Spradling, Michael L.; Hyatt, Roger
1997-02-01
The DoD Counterdrug Technology Development Program has addressed the development and demonstration of technology to enhance nonintrusive inspection of small packages such as passenger baggage, commercially delivered parcels, and breakbulk cargo items. Within the past year they have supported several small package-class nonintrusive inspection system performance assessment activities. All performance assessment programs involved the use of a red/blue team concept and were conducted in accordance with approved assessment protocols. This paper presents a discussion related to the systematic performance assessment of small package-class nonintrusive inspection technologies, including transmission, backscatter and computed tomography x-ray imaging, and protocol-related considerations for the assessment of these systems.
ERIC Educational Resources Information Center
Wang, Xiao-Ming; Hwang, Gwo-Jen; Liang, Zi-Yun; Wang, Hsiu-Ying
2017-01-01
It has become an important and challenging issue to foster students' concepts and skills of computer programming. Scholars believe that programming training could promote students' higher order thinking performance; however, many school teachers have reported the difficulty of teaching programming courses. Although several previous studies have…
Sehlen, Susanne; Ott, Martin; Marten-Mittag, Birgitt; Haimerl, Wolfgang; Dinkel, Andreas; Duehmke, Eckhart; Klein, Christian; Schaefer, Christof; Herschbach, Peter
2012-07-01
This study investigated feasibility and acceptance of computer-based assessment for the identification of psychosocial distress in routine radiotherapy care. 155 cancer patients were assessed using QSC-R10, PO-Bado-SF and Mach-9. The congruence between computerized tablet PC and conventional paper assessment was analysed in 50 patients. The agreement between the 2 modes was high (ICC 0.869-0.980). Acceptance of computer-based assessment was very high (>95%). Sex, age, education, distress and Karnofsky performance status (KPS) did not influence acceptance. Computerized assessment was rated more difficult by older patients (p = 0.039) and patients with low KPS (p = 0.020). 75.5% of the respondents supported referral for psycho-social intervention for distressed patients. The prevalence of distress was 27.1% (QSC-R10). Computer-based assessment allows easy identification of distressed patients. Level of staff involvement is low, and the results are quickly available for care providers. © Georg Thieme Verlag KG Stuttgart · New York.
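The agreement between the tablet PC and paper modes reported above is an intraclass correlation. The sketch below shows how an ICC(2,1) (two-way random effects, absolute agreement, single measure) can be computed from paired mode scores; the synthetic data and the assumption that this is the ICC variant used in the study are illustrative only.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` is an (n_subjects, n_modes) array, e.g. column 0 = tablet PC,
    column 1 = paper questionnaire."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-mode means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: paper scores closely tracking tablet scores.
rng = np.random.default_rng(0)
tablet = rng.normal(20, 5, size=50)
paper = tablet + rng.normal(0, 1, size=50)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([tablet, paper])):.3f}")
```

With k = 2 modes this is the usual paired-measurement agreement index; values in the 0.87-0.98 range, as reported above, indicate near-interchangeable assessment modes.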
2010-09-01
…application of existing assessment tools that may be applicable to Marine Air Ground Task Force (MAGTF) Command, Control, Communications and Computers (C4)… assessment tools and analysis concepts that may be extended to the Marine Corps' C4 System of Systems assessment methodology as a means to obtain a…
Radiological performance assessment for the E-Area Vaults Disposal Facility. Appendices A through M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J.R.
1994-04-15
This document contains appendices A-M for the performance assessment. They are A: details of models and assumptions, B: computer codes, C: data tabulation, D: geochemical interactions, E: hydrogeology of the Savannah River Site, F: software QA plans, G: completeness review guide, H: performance assessment peer review panel recommendations, I: suspect soil performance analysis, J: sensitivity/uncertainty analysis, K: vault degradation study, L: description of naval reactor waste disposal, M: PORFLOW input file. (GHH)
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Hewett, M. D.
1991-01-01
The requirements are assessed for the use of remote computation to support HRV flight testing. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type. These are functions that either cannot be performed onboard in the time frame of initial HRV flight test programs, because the technology of airborne computers will not be sufficiently advanced to support the required computational loads, or that are not desirable to perform onboard in the flight test program for other reasons. Second, remote computational support either required or highly desirable for conducting the flight testing itself was addressed. The use of an Automated Flight Management System, described in conceptual detail, is proposed. Third, autonomous operations are discussed and, finally, unmanned operations.
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Chen, Cheng-Chuan; Chen, Yi-Hui
2012-01-01
This research attempted to categorize reflection in a Web-based portfolio assessment using the Chinese Word Segmenting System (CWSS). Another aim of this research was to explore reflective performance in which individual differences were further examined. Participants were 45 eight-grade students from a junior high school taking a computer course.…
Game-Based Assessment: Investigating the Impact on Test Anxiety and Exam Performance
ERIC Educational Resources Information Center
Mavridis, A.; Tsiatsos, T.
2017-01-01
The aim of this study is to assess the impact of a 3D educational computer game on students' test anxiety and exam performance when used in evaluative situations as compared to the traditional method of examination. The participants of the study were students in tertiary education who were examined using game-based assessment and traditional…
[The characteristics of computer simulation of traffic accidents].
Zou, Dong-Hua; Liu, Ning-Guo; Chen, Jian-Guo; Jin, Xian-Long; Zhang, Xiao-Yun; Zhang, Jian-Hua; Chen, Yi-Jiu
2008-12-01
To reconstruct the collision process of traffic accidents and the injury mode of the victims by computer simulation technology in the forensic assessment of traffic accidents. Forty actual accidents were reconstructed with simulation software and a high-performance computer based on analysis of the trace evidence at the scene, damage to the vehicles, and injury of the victims, with 2 cases discussed in detail. The reconstruction correlated very well in 28 cases, well in 9 cases, and suboptimally in 3 cases with the above parameters. Accurate reconstruction of the accident would be helpful for assessment of the injury mechanism of the victims. Reconstruction of the collision process of traffic accidents and the injury mechanism of the victims by computer simulation is useful in traffic accident assessment.
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to these criteria than the QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
ERIC Educational Resources Information Center
Denison, Aaron
2013-01-01
The purpose of this study was to examine the utility of a brief assessment for the selection of an effective instruction to increase fluency performance on computation math problems. Participants were four general education third-grade students who performed below the median score on a classwide administered multiple math skills probe. Students…
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
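The component reliability computation described above amounts to combining a structural response distribution with a resistance distribution and evaluating the probability that response exceeds resistance. NESSUS uses efficient probability-integration methods for this; the brute-force Monte Carlo sketch below is only a conceptual stand-in, and the distributions and parameter values are invented for illustration.

```python
import numpy as np

def failure_probability(response_samples, resistance_samples):
    """Probability that the structural response exceeds the resistance."""
    return np.mean(response_samples > resistance_samples)

rng = np.random.default_rng(1)
n = 1_000_000
# Hypothetical lognormal response (applied stress) and resistance (strength).
response = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)    # MPa
resistance = rng.lognormal(mean=np.log(420.0), sigma=0.08, size=n)  # MPa
pf = failure_probability(response, resistance)
print(f"Estimated component failure probability: {pf:.2e}")
print(f"Reliability: {1.0 - pf:.6f}")
```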
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ditmars, J.D.; Walbridge, E.W.; Rote, D.M.
1983-10-01
Repository performance assessment is analysis that identifies events and processes that might affect a repository system for isolation of radioactive waste, examines their effects on barriers to waste migration, and estimates the probabilities of their occurrence and their consequences. In 1983 Battelle Memorial Institute's Office of Nuclear Waste Isolation (ONWI) prepared two plans - one for performance assessment for a waste repository in salt and one for verification and validation of performance assessment technology. At the request of the US Department of Energy's Salt Repository Project Office (SRPO), Argonne National Laboratory reviewed those plans and prepared this report to advise SRPO of specific areas where ONWI's plans for performance assessment might be improved. This report presents a framework for repository performance assessment that clearly identifies the relationships among the disposal problems, the processes underlying the problems, the tools for assessment (computer codes), and the data. In particular, the relationships among important processes and 26 model codes available to ONWI are indicated. A common suggestion for computer code verification and validation is the need for specific and unambiguous documentation of the results of performance assessment activities. A major portion of this report consists of status summaries of 27 model codes indicated as potentially useful by ONWI. The code summaries focus on three main areas: (1) the code's purpose, capabilities, and limitations; (2) status of the elements of documentation and review essential for code verification and validation; and (3) proposed application of the code for performance assessment of salt repository systems. 15 references, 6 figures, 4 tables.
Computational Tools to Assess Turbine Biological Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.
2014-07-24
Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
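Conceptually, each BioPA indicator couples a simulated exposure-dose distribution with a laboratory dose-response relationship to yield an injury likelihood. The sketch below illustrates that coupling; the dose distribution, the logistic dose-response curve, and all parameter values are placeholders rather than values from the PRD study.

```python
import numpy as np

def injury_likelihood(dose_samples, dose_response):
    """Estimate the probability of fish injury for a turbine design.

    dose_samples  : simulated exposure doses (e.g., a shear or pressure metric)
                    for a population of particle traces through the CFD model.
    dose_response : callable mapping a dose to the probability of injury,
                    taken from laboratory dose-response studies.
    """
    return float(np.mean(dose_response(dose_samples)))

# Illustrative (not PRD-specific) logistic dose-response for a shear-like metric.
def logistic_dose_response(dose, d50=600.0, slope=0.01):
    return 1.0 / (1.0 + np.exp(-slope * (dose - d50)))

rng = np.random.default_rng(2)
simulated_doses = rng.gamma(shape=2.0, scale=150.0, size=100_000)
p_injury = injury_likelihood(simulated_doses, logistic_dose_response)
print(f"Likelihood of injury for this design: {p_injury:.3%}")
```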
Assessing Functional Performance Using Computer-Based Simulations of Everyday Activities
Czaja, Sara J.; Loewenstein, David A.; Lee, Chin Chin; Fu, Shih Hua; Harvey, Philip D.
2016-01-01
Current functional capacity (FC) measures for patients with schizophrenia typically involve informant assessments or are in paper and pencil format, requiring in-person administration by a skilled assessor. This approach presents logistic problems and limits the possibilities for remote assessment, an important issue for these patients. This study evaluated the feasibility of using a computer-based assessment battery, including simulations of everyday activities. The battery was compared to in-person standard assessments of cognition and FC with respect to baseline convergence and sensitivity to group differences. The battery, administered on a touch screen computer, included measures of critical everyday activities, including: ATM Banking/Financial Management, Prescriptions Refill via Telephone/Voice Menu System, and Forms Completion (simulating a clinic and patient history form). The sample included 77 older adult patients with schizophrenia and 24 older adult healthy controls that were administered the battery at two time points. The results indicated that the battery was sensitive to group differences in FC. Performance on the battery was also moderately correlated with standard measures of cognitive abilities and showed convergence with standard measures of FC, while demonstrating good test-retest reliability. Our results show that it is feasible to use technology-based assessment protocols with older adults and patients with schizophrenia. The battery overcomes logistic constraints associated with current FC assessment protocols as the battery is computer-based, can be delivered remotely and does not require a healthcare professional for administration. PMID:27913159
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia's nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor's response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia's sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model's ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia's HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
GASP- General Aviation Synthesis Program. Volume 1: Main program. Part 1: Theoretical development
NASA Technical Reports Server (NTRS)
Hague, D.
1978-01-01
The General Aviation Synthesis Program performs tasks generally associated with aircraft preliminary design and allows an analyst the capability of performing parametric studies in a rapid manner. GASP emphasizes small fixed-wing aircraft employing propulsion systems varying from a single piston engine with fixed pitch propeller through twin turboprop/turbofan powered business or transport type aircraft. The program, which may be operated from a computer terminal in either the batch or interactive graphic mode, is comprised of modules representing the various technical disciplines integrated into a computational flow which ensures that the interacting effects of design variables are continuously accounted for in the aircraft sizing procedure. The model is a useful tool for comparing configurations, assessing aircraft performance and economics, performing tradeoff and sensitivity studies, and assessing the impact of advanced technologies on aircraft performance and economics.
ERIC Educational Resources Information Center
Stamm, Meelis; Stamm, Raini; Koskel, Sade
2008-01-01
Study aim: Assessment of feasibility of using own computer software "Game" at competitions. Material and methods: The data were collected during Estonian championships in 2006 for male volleyball teams of the 13-15-years age group (n = 8). In all games, the performance of both teams was recorded in parallel with two computers. A total of…
NASA Technical Reports Server (NTRS)
Ghaffari, Farhad
1999-01-01
Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration which includes the wing, fuselage, empennage, diverters, and flow through nacelles at M∞ = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results, derived from the latter assumptions, is thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.
ERIC Educational Resources Information Center
Widdicombe, Richard Toby
Measuring the effects of computer-managed instruction (CMI) on the teaching of and student writing about literature involves more than having students write and then evaluating their performance. Measurement is made difficult by the fact that the computer technology used in instruction is in a state of flux. Variation of computer technology,…
ERIC Educational Resources Information Center
Ursavas, Omer Faruk; Karal, Hasan
2009-01-01
In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). There were several critical activities that follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. Here, we introduce the subject matter for general applications but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.
Academic physicians' assessment of the effects of computers on health care.
Detmer, W. M.; Friedman, C. P.
1994-01-01
We assessed the attitudes of academic physicians towards computers in health care at two academic medical centers that are in the early stages of clinical information-system deployment. We distributed a 4-page questionnaire to 470 subjects, and a total of 272 physicians (58%) responded. Our results show that respondents use computers frequently, primarily to perform academic-oriented tasks as opposed to clinical tasks. Overall, respondents viewed computers as being slightly beneficial to health care. They perceive self-education and access to up-to-date information as the most beneficial aspects of computers and are most concerned about privacy issues and the effect of computers on the doctor-patient relationship. Physicians with prior computer training and greater knowledge of informatics concepts had more favorable attitudes towards computers in health care. We suggest that negative attitudes towards computers can be addressed by careful system design as well as targeted educational activities. PMID:7949990
Perceptions and performance using computer-based testing: One institution's experience.
Bloom, Timothy J; Rich, Wesley D; Olson, Stephanie M; Adams, Michael L
2018-02-01
The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine if this testing format impacted student performance. Responses to Likert-type questions about perceived ease of use showed no difference between students with one and three semesters experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits to computer-based testing. Faculty-reported difficulties were most commonly related to problems with student computers during an exam (38% of responses) while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. Both faculty and students rapidly adapted to using computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Cognitive training in Parkinson disease: cognition-specific vs nonspecific computer training.
Zimmermann, Ronan; Gschwandtner, Ute; Benz, Nina; Hatz, Florian; Schindler, Christian; Taub, Ethan; Fuhr, Peter
2014-04-08
In this study, we compared a cognition-specific computer-based cognitive training program with a motion-controlled computer sports game that is not cognition-specific for their ability to enhance cognitive performance in various cognitive domains in patients with Parkinson disease (PD). Patients with PD were trained with either a computer program designed to enhance cognition (CogniPlus, 19 patients) or a computer sports game with motion-capturing controllers (Nintendo Wii, 20 patients). The effect of training in 5 cognitive domains was measured by neuropsychological testing at baseline and after training. Group differences over all variables were assessed with multivariate analysis of variance, and group differences in single variables were assessed with 95% confidence intervals of mean difference. The groups were similar regarding age, sex, and educational level. Patients with PD who were trained with Wii for 4 weeks performed better in attention (95% confidence interval: -1.49 to -0.11) than patients trained with CogniPlus. In our study, patients with PD derived at least the same degree of cognitive benefit from non-cognition-specific training involving movement as from cognition-specific computerized training. For patients with PD, game consoles may be a less expensive and more entertaining alternative to computer programs specifically designed for cognitive training. This study provides Class III evidence that, in patients with PD, cognition-specific computer-based training is not superior to a motion-controlled computer game in improving cognitive performance.
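The single-variable comparisons above rest on a 95% confidence interval for the difference in mean change scores between the two training groups. A minimal sketch of that calculation is shown below with synthetic data; the Welch-type interval is an assumption about the exact variant used in the study.

```python
import numpy as np
from scipy import stats

def mean_diff_ci(group_a, group_b, confidence=0.95):
    """Welch-type confidence interval for the difference in group means (a - b)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((a.var(ddof=1) / a.size) ** 2 / (a.size - 1)
                  + (b.var(ddof=1) / b.size) ** 2 / (b.size - 1))
    t_crit = stats.t.ppf(0.5 + confidence / 2, df)
    return diff - t_crit * se, diff + t_crit * se

# Synthetic attention change scores (post minus pre) for the two groups.
rng = np.random.default_rng(3)
wii_group = rng.normal(0.9, 1.0, size=20)
cogniplus_group = rng.normal(0.1, 1.0, size=19)
low, high = mean_diff_ci(cogniplus_group, wii_group)
print(f"95% CI of mean difference (CogniPlus - Wii): [{low:.2f}, {high:.2f}]")
```

An interval that excludes zero, like the -1.49 to -0.11 reported above, indicates a group difference unlikely to be explained by sampling variability alone.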
Regev, Sivan; Hadas-Lidor, Noami; Rosenberg, Limor
2016-08-01
In this study, the assessment tool "Internet and Computer User Profile" questionnaire (ICUP) is presented and validated. It was developed in order to gather information for setting intervention goals to meet current demands. Sixty-eight subjects aged 23-68 participated in the study. The study group (n = 28) was sampled from two vocational centers. The control group consisted of 40 participants from the general population that were sampled by convenience sampling based on the demographics of the study group. Subjects from both groups answered the ICUP questionnaire. Subjects of the study group answered the General Self-Efficacy (GSE) questionnaire and performed the Assessment of Computer Task Performance (ACTP) test in order to examine the convergent validity of the ICUP. Twenty subjects from both groups retook the ICUP questionnaire in order to obtain test-retest results. Differences between groups were tested using multivariate analysis of variance (MANOVA) tests. Pearson and Spearman's tests were used for calculating correlations. Cronbach's alpha coefficient and k equivalent were used to assess internal consistency. The results indicate that the questionnaire is valid and reliable. They emphasize that the layout of the ICUP items facilitates making a comprehensive examination of the client's perception regarding his participation in computer and internet activities. Implications for Rehabilitation: The assessment tool "Internet and Computer User Profile" (ICUP) questionnaire is a novel assessment tool that evaluates operative use and individual perception of computer activities. The questionnaire is valid and reliable for use with participants of vocational centers dealing with mental illness. It is essential to facilitate access to computers for people with mental illnesses, seeing that they express similar interest in computers and internet as people from the general population of the same age. Early intervention will be particularly effective for young adults dealing with mental illness, since the digital gap between them and young people in general is relatively small.
Peretz, Chava; Korczyn, Amos D; Shatil, Evelyn; Aharonson, Vered; Birnboim, Smadar; Giladi, Nir
2011-01-01
Many studies have suggested that cognitive training can result in cognitive gains in healthy older adults. We investigated whether personalized computerized cognitive training provides greater benefits than those obtained by playing conventional computer games. This was a randomized double-blind interventional study. Self-referred healthy older adults (n = 155, 68 ± 7 years old) were assigned to either a personalized, computerized cognitive training or to a computer games group. Cognitive performance was assessed at baseline and after 3 months by a neuropsychological assessment battery. Differences in cognitive performance scores between and within groups were evaluated using mixed effects models in 2 approaches: adherence only (AO; n = 121) and intention to treat (ITT; n = 155). Both groups improved in cognitive performance. The improvement in the personalized cognitive training group was significant (p < 0.03, AO and ITT approaches) in all 8 cognitive domains. However, in the computer games group it was significant (p < 0.05) in only 4 (AO) or 6 domains (ITT). In the AO analysis, personalized cognitive training was significantly more effective than playing games in improving visuospatial working memory (p = 0.0001), visuospatial learning (p = 0.0012) and focused attention (p = 0.0019). Personalized, computerized cognitive training appears to be more effective than computer games in improving cognitive performance in healthy older adults. Further studies are needed to evaluate the ecological validity of these findings. Copyright © 2011 S. Karger AG, Basel.
What We've Learned about Assessing Hands-On Science.
ERIC Educational Resources Information Center
Shavelson, Richard J.; Baxter, Gail P.
1992-01-01
A recent study compared hands-on scientific inquiry assessment to assessments involving lab notebooks, computer simulations, short-answer paper-and-pencil problems, and multiple-choice questions. Creating high quality performance assessments is a costly, time-consuming process requiring considerable scientific and technological know-how. Improved…
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six, were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
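The method characterizes noise and calibration from measurements in three automatically segmented homogeneous regions. The sketch below shows how such per-region noise and calibration metrics might be tabulated; the region names, expected HU values, and synthetic voxel samples are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def region_stats(hu_values):
    """Mean and standard deviation of HU values in one segmented homogeneous
    region (external air, trachea lumen air, or descending aorta blood)."""
    v = np.asarray(hu_values, float)
    return float(v.mean()), float(v.std(ddof=1))

def quality_profile(regions, expected_hu):
    """Per-region noise (SD) and calibration error (mean minus expected HU)."""
    profile = {}
    for name, voxels in regions.items():
        mean_hu, sd_hu = region_stats(voxels)
        profile[name] = {"noise_sd": sd_hu,
                         "calibration_error": mean_hu - expected_hu[name]}
    return profile

# Synthetic voxel samples standing in for automatically segmented regions.
rng = np.random.default_rng(4)
regions = {"external_air": rng.normal(-1000, 12, 5000),
           "trachea_air": rng.normal(-990, 20, 2000),
           "aorta_blood": rng.normal(45, 25, 3000)}
expected = {"external_air": -1000.0, "trachea_air": -1000.0, "aorta_blood": 40.0}
for name, m in quality_profile(regions, expected).items():
    print(f"{name}: noise SD = {m['noise_sd']:.1f} HU, "
          f"calibration error = {m['calibration_error']:+.1f} HU")
```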
Zhang, Xue; Xiao, Yang; Zeng, Jie; Qiu, Weibao; Qian, Ming; Wang, Congzhi; Zheng, Rongqin; Zheng, Hairong
2014-01-01
To develop and evaluate a computer-assisted method of quantifying a five-point elasticity scoring system based on ultrasound real-time elastography (RTE), for classifying benign and malignant breast lesions, with pathologic results as the reference standard. Conventional ultrasonography (US) and RTE imaging of 145 breast lesions (67 malignant, 78 benign) was performed in this study. Each lesion was automatically contoured on the B-mode image by the level set method and mapped on the RTE image. The relative elasticity value of each pixel was reconstructed and classified into hard or soft by the fuzzy c-means clustering method. According to the hardness degree inside the lesion and its surrounding tissue, the elasticity score of the RTE image was computed in an automatic way. Visual assessments by the radiologists were used for comparing the diagnostic performance. Histopathologic examination was used as the reference standard. The Student's t test and receiver operating characteristic (ROC) curve analysis were performed for statistical analysis. Considering score 4 or higher as test positive for malignancy, the diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 93.8% (136/145), 92.5% (62/67), 94.9% (74/78), 93.9% (62/66), and 93.7% (74/79) for the computer-assisted scheme, and 89.7% (130/145), 85.1% (57/67), 93.6% (73/78), 92.0% (57/62), and 88.0% (73/83) for manual assessment. The area under the ROC curve (Az value) for the proposed method was higher than the Az value for visual assessment (0.96 vs. 0.93). Computer-assisted quantification of the classical five-point scoring system can significantly eliminate interobserver variability and thereby improve the diagnostic confidence of classifying breast lesions to avoid unnecessary biopsy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
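The core of the automated scheme is a pixel-level hard/soft classification of the reconstructed relative elasticity values followed by a rule that maps the hardness pattern to a 1-5 score. The toy sketch below uses a two-cluster fuzzy c-means on the lesion pixels and a deliberately simplified score mapping; the membership threshold, the "lower elasticity = harder" convention, and the mapping rule are assumptions, since the published rule also grades the surrounding tissue.

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_iter=100, m=2.0, seed=0):
    """Two-cluster fuzzy c-means on relative elasticity values; returns each
    pixel's membership in the 'hard' (assumed lower-elasticity) cluster."""
    x = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, 2))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)) *
                   (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    hard_cluster = int(np.argmin(centers))   # assumption: lower value = harder
    return u[:, hard_cluster]

def elasticity_score(lesion_elasticity, hard_threshold=0.5):
    """Illustrative mapping of the hard-pixel fraction inside the lesion to a
    1-5 score (the published rule also uses the surrounding tissue)."""
    hard_fraction = np.mean(fuzzy_cmeans_1d(lesion_elasticity) > hard_threshold)
    return int(np.clip(np.ceil(hard_fraction * 5), 1, 5))

rng = np.random.default_rng(5)
lesion = np.concatenate([rng.normal(0.2, 0.05, 700),   # mostly hard pixels
                         rng.normal(0.8, 0.05, 300)])  # some soft pixels
print(f"Computed elasticity score: {elasticity_score(lesion)}")
```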
Assessing computer skills in Tanzanian medical students: an elective experience
Samuel, Miriam; Coombes, John C; Miranda, J Jaime; Melvin, Rob; Young, Eoin JW; Azarmina, Pejman
2004-01-01
Background One estimate suggests that by 2010 more than 30% of a physician's time will be spent using information technology tools. The aim of this study is to assess the information and communication technologies (ICT) skills of medical students in Tanzania. We also report a pilot intervention of peer mentoring training in ICT by medical students from the UK tutoring students in Tanzania. Methods Design: Cross sectional study and pilot intervention study. Participants: Fourth year medical students (n = 92) attending Muhimbili University College of Health Sciences, Dar es Salaam, Tanzania. Main outcome measures: Self-reported assessment of competence on ICT-related topics and ability to perform specific ICT tasks. Further information related to frequency of computer use (hours per week), years of computer use, reasons for use and access to computers. Skills at specific tasks were reassessed for 12 students following 4 to 6 hours of peer mentoring training. Results The highest levels of competence in generic ICT areas were for email, Internet and file management. For other skills such as word processing most respondents reported low levels of competence. The abilities to perform specific ICT skills were low – less than 60% of the participants were able to perform the core specific skills assessed. A period of approximately 5 hours of peer mentoring training produced an approximate doubling of competence scores for these skills. Conclusion Our study has found a low level of ability to use ICT facilities among medical students in a leading university in sub-Saharan Africa. A pilot scheme utilising UK elective students to tutor basic skills showed potential. Attention is required to develop interventions that can improve ICT skills, as well as computer access, in order to bridge the digital divide. PMID:15306029
Onboard Navigation Systems Characteristics
NASA Technical Reports Server (NTRS)
1979-01-01
The space shuttle onboard navigation systems characteristics are described. A standard source of equations and numerical data for use in error analyses and mission simulations related to space shuttle development is reported. The sensor characteristics described are used for shuttle onboard navigation performance assessment. The use of complete models in the studies depend on the analyses to be performed, the capabilities of the computer programs, and the availability of computer resources.
2014-01-01
Quantitative imaging biomarkers (QIBs) are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure QIBs have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for QIB studies. PMID:24919828
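The bias, repeatability, and agreement metrics discussed above have simple forms for the two-replicate case. The sketch below shows commonly used versions (mean bias against phantom truth, a repeatability coefficient of 2.77 times the within-subject SD, and Bland-Altman limits of agreement); the synthetic nodule volumes are illustrative, and the exact estimators in the paper may differ.

```python
import numpy as np

def bias(measured, truth):
    """Mean measurement error against phantom ground truth."""
    return float(np.mean(np.asarray(measured) - np.asarray(truth)))

def repeatability_coefficient(replicate1, replicate2):
    """RC = 2.77 * within-subject SD from paired repeat measurements; changes
    smaller than RC are within expected measurement noise."""
    d = np.asarray(replicate1, float) - np.asarray(replicate2, float)
    wsd = np.sqrt(np.mean(d ** 2) / 2.0)
    return 2.77 * wsd

def limits_of_agreement(algo_a, algo_b):
    """Bland-Altman 95% limits of agreement between two measurement methods."""
    d = np.asarray(algo_a, float) - np.asarray(algo_b, float)
    return float(d.mean() - 1.96 * d.std(ddof=1)), float(d.mean() + 1.96 * d.std(ddof=1))

# Synthetic nodule-volume measurements (mm^3) on a phantom with known truth.
rng = np.random.default_rng(6)
truth = rng.uniform(200, 2000, 40)
scan1 = truth * 1.03 + rng.normal(0, 25, 40)   # algorithm with ~+3% bias
scan2 = truth * 1.03 + rng.normal(0, 25, 40)   # repeat measurement
lo, hi = limits_of_agreement(scan1, truth)
print(f"Bias: {bias(scan1, truth):.1f} mm^3")
print(f"Repeatability coefficient: {repeatability_coefficient(scan1, scan2):.1f} mm^3")
print(f"Limits of agreement vs. truth: [{lo:.1f}, {hi:.1f}] mm^3")
```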
An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.
Undrill, P E; Frazer, S C
1979-01-01
A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
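The cumulative sum monitoring mentioned above can be illustrated with a standard tabular CUSUM on fixed-level control sera; the paper's exact formulation is not reproduced here, and the analyte, target, SD, and decision limits below are assumed values.

```python
import numpy as np

def tabular_cusum(control_values, target, sd, k=0.5, h=4.0):
    """One-sided tabular CUSUM on fixed-level control-serum results.
    k (reference value) and h (decision limit) are in multiples of the
    analytical SD; an alarm is raised when either cumulative sum exceeds h*sd."""
    c_pos = c_neg = 0.0
    alarms = []
    for i, x in enumerate(control_values):
        c_pos = max(0.0, c_pos + (x - target) - k * sd)
        c_neg = max(0.0, c_neg + (target - x) - k * sd)
        if c_pos > h * sd or c_neg > h * sd:
            alarms.append(i)
    return alarms

# Synthetic daily control-serum glucose results (mmol/L) with a late drift.
rng = np.random.default_rng(7)
values = np.concatenate([rng.normal(5.5, 0.15, 20),
                         rng.normal(5.8, 0.15, 10)])   # upward shift after run 20
print("CUSUM alarms at runs:", tabular_cusum(values, target=5.5, sd=0.15))
```

With k = 0.5 and h = 4 (in SD units), the scheme flags sustained drifts of about one SD fairly quickly while tolerating isolated outliers, which is what makes cusum methods useful alongside simple fixed-limit control rules.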
ERIC Educational Resources Information Center
Botello, Jennifer A.
2014-01-01
With increased dependence on computer-based standardized tests to assess academic achievement, technological literacy has become an essential skill. Yet, because students have unequal access to technology, they may not have equal opportunities to perform well on these computer-based tests. The researcher had observed students taking the STAR…
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.
The National Human Exposure Assessment Sur...
Assessing performance of flaw characterization methods through uncertainty propagation
NASA Astrophysics Data System (ADS)
Miorelli, R.; Le Bourdais, F.; Artusi, X.
2018-04-01
In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated to ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computational time and get rid of the computational burden often associated to iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fit on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficiently evaluating the impact due to the lack of knowledge on some parameters employed to describe the inspection scenarios, which is a situation commonly encountered in the industrial NDE context.
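The sketch below illustrates the two ideas in this abstract: replacing the expensive forward solver with a metamodel fitted on an offline database, and propagating measurement uncertainty by repeatedly re-running the now-cheap inversion. The analytic stand-in for the forward solver, the cubic surrogate, and the noise level are all illustrative assumptions rather than the ultrasonic/eddy-current models used in the work.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Offline database: the expensive forward solver sampled once (here a cheap
# analytic stand-in mapping crack depth (mm) to a peak signal amplitude).
def forward_solver(depth_mm):
    return 1.0 - np.exp(-0.8 * depth_mm)

depths_db = np.linspace(0.1, 5.0, 40)
signals_db = forward_solver(depths_db)

# Metamodel: a cubic polynomial fit on the database, used in place of the
# forward solver inside the iterative inversion loop.
surrogate = np.polynomial.Polynomial.fit(depths_db, signals_db, deg=3)

def invert(measured_signal):
    cost = lambda d: (surrogate(d) - measured_signal) ** 2
    return minimize_scalar(cost, bounds=(0.1, 5.0), method="bounded").x

# Uncertainty propagation: perturb the measurement within its assumed noise
# level and re-run the fast surrogate-based inversion many times.
rng = np.random.default_rng(8)
true_depth = 2.0
measurement = forward_solver(true_depth)
perturbed = measurement + rng.normal(0, 0.01, size=2000)
estimates = np.array([invert(m) for m in perturbed])
print(f"Estimated depth: {estimates.mean():.2f} +/- {estimates.std(ddof=1):.2f} mm")
```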
Safety analysis of interchanges
DOT National Transportation Integrated Search
2007-06-01
The objectives of this research are to synthesize the current state of knowledge concerning the safety assessment of new or modified interchanges; develop a spreadsheet-based computational tool for performing safety assessments of interchanges; and i...
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Wu, Bing-Hong
2012-01-01
This study explored the reliability and validity of teacher assessment under a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students performed portfolio creation, inspection, and self- and peer-assessment using the Web-based…
Update and review of accuracy assessment techniques for remotely sensed data
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.
1983-01-01
Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to complete either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.
Providing Formative Feedback From a Summative Computer-aided Assessment
Sewell, Robert D. E.
2007-01-01
Objectives To examine the effectiveness of providing formative feedback for summative computer-aided assessment. Design Two groups of first-year undergraduate life science students in pharmacy and neuroscience who were studying an e-learning package in a common pharmacology module were presented with a computer-based summative assessment. A sheet with individualized feedback derived from each of the 5 results sections of the assessment was provided to each student. Students were asked via a questionnaire to evaluate the form and method of feedback. Assessment The students were able to reflect on their performance and use the feedback provided to guide their future study or revision. There was no significant difference between the responses from pharmacy and neuroscience students. Students' responses on the questionnaire indicated a generally positive reaction to this form of feedback. Conclusions Findings suggest that additional formative assessment conveyed by this style and method would be appreciated and valued by students. PMID:17533442
Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Wells, B. Earl
2003-01-01
The focus of this project was to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels that are possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward facing step, with comparisons made against experimental data and solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Computer predictions of ground storage effects on performance of Galileo and ISPM generators
NASA Technical Reports Server (NTRS)
Chmielewski, A.
1983-01-01
Radioisotope Thermoelectric Generators (RTG) that will supply electrical power to the Galileo and International Solar Polar Mission (ISPM) spacecraft are exposed to several degradation mechanisms during the prolonged ground storage before launch. To assess the effect of storage on the RTG flight performance, a computer code has been developed which simulates all known degradation mechanisms that occur in an RTG during storage and flight. The modeling of these mechanisms and their impact on the RTG performance are discussed.
High Performance Computing at NASA
NASA Technical Reports Server (NTRS)
Bailey, David H.; Cooper, D. M. (Technical Monitor)
1994-01-01
The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.
Surgical skill acquisition with self-directed practice using computer-based video training.
Jowett, Nathan; LeBlanc, Vicki; Xeroulis, George; MacRae, Helen; Dubrowski, Adam
2007-02-01
Computer-based video training (CBVT) provides flexible opportunities for surgical trainees to learn fundamental technical skills, but may be ineffective in self-directed practice settings because of poor trainee self-assessment. This study examined whether CBVT is effective in a self-directed learning environment among novice trainees. Thirty novice trainees used CBVT to learn the 1-handed square knot while self-assessing their proficiency every 3 minutes. On reaching self-assessed skill proficiency, trainees were randomized to either cease practice or to complete additional practice. Performance was evaluated with computer and expert-based measures during practice and on pretests, posttests, and 1-week retention tests. Analyses revealed performance improvements for both groups (all P < .05), but no differences between the 2 groups (all P > .05) on all tests. CBVT for the 1-handed square knot is effective in a self-directed learning environment among novices. This lends support to the implementation of self-directed digital media-based learning within surgical curricula.
Data systems and computer science space data systems: Onboard networking and testbeds
NASA Technical Reports Server (NTRS)
Dalton, Dan
1991-01-01
The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.
A Text Analysis of the Marine Corps Fitness Report
2017-06-01
difficulty in quantitatively analyzing textual... The study pulls 835 anonymous and non-attributable surveys between 2005 and 2009 from the Center for... Corps provide word-picture guidance to distinguish talented Marines and promote conformity in issuing quantitative assessments of performance. Subject terms: natural language processing, fitness reports, computational linguistics, manpower.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
Kanumuri, Prathima; Ganai, Sabha; Wohaibi, Eyad M.; Bush, Ronald W.; Grow, Daniel R.
2008-01-01
Background: The study aim was to compare the effectiveness of virtual reality and computer-enhanced video-scopic training devices for training novice surgeons in complex laparoscopic skills. Methods: Third-year medical students received instruction on laparoscopic intracorporeal suturing and knot tying and then underwent a pretraining assessment of the task using a live porcine model. Students were then randomized to objectives-based training on either the virtual reality (n=8) or computer-enhanced (n=8) training devices for 4 weeks, after which the assessment was repeated. Results: Posttraining performance had improved compared with pretraining performance in both task completion rate (94% versus 18%; P<0.001*) and time [181±58 (SD) versus 292±24*]. Performance of the 2 groups was comparable before and after training. Of the subjects, 88% thought that haptic cues were important in simulators. Both groups agreed that their respective training systems were effective teaching tools, but computer-enhanced device trainees were more likely to rate their training as representative of reality (P<0.01). Conclusions: Training on virtual reality and computer-enhanced devices had equivalent effects on skills improvement in novices. Despite the perception that haptic feedback is important in laparoscopic simulation training, its absence in the virtual reality device did not impede acquisition of skill. PMID:18765042
Static aeroelastic analysis and tailoring of a single-element racing car wing
NASA Astrophysics Data System (ADS)
Sadd, Christopher James
This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce coefficient of C_L = -1.377 in comparison to C_L = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag coefficient of C_D = 0.115 in comparison to C_D = 0.143 for the original wing.
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2017-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
Internal aerodynamics of a generic three-dimensional scramjet inlet at Mach 10
NASA Technical Reports Server (NTRS)
Holland, Scott D.
1995-01-01
A combined computational and experimental parametric study of the internal aerodynamics of a generic three-dimensional sidewall compression scramjet inlet configuration at Mach 10 has been performed. The study was designed to demonstrate the utility of computational fluid dynamics as a design tool in hypersonic inlet flow fields, to provide a detailed account of the nature and structure of the internal flow interactions, and to provide a comprehensive surface property and flow field database to determine the effects of contraction ratio, cowl position, and Reynolds number on the performance of a hypersonic scramjet inlet configuration. The work proceeded in several phases: the initial inviscid assessment of the internal shock structure, the preliminary computational parametric study, the coupling of the optimized configuration with the physical limitations of the facility, the wind tunnel blockage assessment, and the computational and experimental parametric study of the final configuration. Good agreement between computation and experimentation was observed in the magnitude and location of the interactions, particularly for weakly interacting flow fields. Large-scale forward separations resulted when the interaction strength was increased by increasing the contraction ratio or decreasing the Reynolds number.
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.
Participatory Classification in a System for Assessing Multimodal Transportation Patterns
2015-02-17
Culler, Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2015-8. The report sketches the characteristics of the data that was collected and computes the accuracy of the automated inference algorithm.
Cooperation, Technology, and Performance: A Case Study.
ERIC Educational Resources Information Center
Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne
1999-01-01
Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
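The approach described above can be illustrated with a short, hedged sketch: regressing measured values on model predictions separates systematic error (a slope different from 1 or a non-zero intercept) from random scatter (the residual standard deviation). The data, units, and coefficients below are synthetic placeholders, not the paper's results.

```python
# Hedged sketch: regress measured rolling force on model predictions to
# separate systematic bias (slope != 1, intercept != 0) from random error
# (residual scatter). Data are synthetic; units are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
predicted = rng.uniform(8.0, 20.0, size=200)                        # model-predicted force (assumed MN)
measured = 1.05 * predicted - 0.4 + rng.normal(0, 0.3, size=200)    # synthetic measurements

res = stats.linregress(predicted, measured)
residual_sd = np.std(measured - (res.intercept + res.slope * predicted), ddof=2)

print(f"slope       = {res.slope:.3f}  (systematic gain error if != 1)")
print(f"intercept   = {res.intercept:.3f} (systematic offset error if != 0)")
print(f"r^2         = {res.rvalue**2:.3f}")
print(f"residual SD = {residual_sd:.3f} (random prediction + measurement error)")
```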
Investigating Computer-Based Formative Assessments in a Medical Terminology Course
ERIC Educational Resources Information Center
Wilbanks, Jammie T.
2012-01-01
Research has been conducted on the effectiveness of formative assessments and on effectively teaching medical terminology; however, research had not been conducted on the use of formative assessments in a medical terminology course. A quantitative study was performed which captured data from a pretest, self-assessment, four module exams, and a…
Piloting an outcome-based programme evaluation tool in undergraduate medical education.
Raupach, Tobias; Schiekirka, Sarah; Münscher, Christian; Beißbarth, Tim; Himmel, Wolfgang; Burckhardt, Gerhard; Pukrop, Tobias
2012-01-01
Different approaches to performance-oriented allocation of resources according to teaching quality are currently being discussed within German medical schools. The implementation of these programmes is impeded by a lack of valid criteria to measure teaching quality. An assessment of teaching quality should include structural and procedural aspects but focus on learning outcome itself. The aim of this study was to implement a novel, outcome-based evaluation tool within the clinical phase of a medical curriculum and address differences between the novel tool and traditional evaluation methods. Student self-assessments before and after completion of a teaching module were used to compute performance gains for specific learning objectives. Mean performance gains in each module were compared to student expectations before the module and data derived from a traditional evaluation tool using overall course ratings at the end of the module. A ranking of the 21 modules according to computed performance gains yielded entirely different results than module rankings based on overall course ratings. There was no significant correlation between performance gain and overall ratings. However, the latter were significantly correlated to student expectations before entering the module as well as structural and procedural parameters (Pearson's r 0.7-0.9). Performance gain computed from comparative self-assessments adds an important new dimension to course evaluation in medical education. In contrast to overall course ratings, the novel tool is less heavily confounded by construct-irrelevant factors. Thus, it appears to be more appropriate than overall course ratings in determining teaching quality and developing algorithms to guide performance-oriented resource allocation in medical education.
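A minimal sketch of the evaluation logic, assuming a simple numeric self-rating scale: per-objective performance gains are computed from pre/post self-assessments, averaged per module, and then correlated with the traditional overall rating. All values are synthetic; the study's scales, module count, and results are not reproduced.

```python
# Hedged sketch of outcome-based evaluation: performance gain per module from
# comparative self-assessments, compared against overall course ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_modules, n_objectives = 21, 10

pre = rng.uniform(1.0, 4.0, size=(n_modules, n_objectives))            # self-rating before module
post = pre + rng.uniform(0.0, 2.0, size=(n_modules, n_objectives))     # self-rating after module

gain = (post - pre).mean(axis=1)                    # mean performance gain per module
overall_rating = rng.uniform(1.0, 6.0, size=n_modules)  # traditional end-of-module rating

r, p = stats.pearsonr(gain, overall_rating)
print(f"Pearson r between performance gain and overall rating: {r:.2f} (p = {p:.3f})")

# Module rankings produced by the two instruments can then be compared directly.
rank_by_gain = np.argsort(-gain)
rank_by_rating = np.argsort(-overall_rating)
```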
Comparing Assessment Methods in Undergraduate Statistics Courses
ERIC Educational Resources Information Center
Baxter, Sarah E.
2017-01-01
The purpose of this study was to compare undergraduate students' academic performance and attitudes about statistics in the context of two different types of assessment structures for an introductory statistics course. One assessment structure used in-class quizzes that emphasized computation and procedural fluency as well as vocabulary…
Economizing Education: Assessment Algorithms and Calculative Agencies
ERIC Educational Resources Information Center
O'Keeffe, Cormac
2017-01-01
International Large Scale Assessments have been producing data about educational attainment for over 60 years. More recently however, these assessments as tests have become digitally and computationally complex and increasingly rely on the calculative work performed by algorithms. In this article I first consider the coordination of relations…
Computer assessment of interview data using latent semantic analysis.
Dam, Gregory; Kaufmann, Stefan
2008-02-01
Clinical interviews are a powerful method for assessing students' knowledge and conceptual development. However, the analysis of the resulting data is time-consuming and can create a "bottleneck" in large-scale studies. This article demonstrates the utility of computational methods in supporting such an analysis. Thirty-four 7th-grade student explanations of the causes of Earth's seasons were assessed using latent semantic analysis (LSA). Analyses were performed on transcriptions of student responses during interviews administered prior to (n = 21) and after (n = 13) receiving earth science instruction. An instrument that uses LSA technology was developed to identify misconceptions and assess conceptual change in students' thinking. Its accuracy, as determined by comparing its classifications to the independent coding performed by four human raters, reached 90%. Techniques for adapting LSA technology to support the analysis of interview data, as well as some limitations, are discussed.
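A minimal sketch of an LSA-style classifier for such transcripts, using scikit-learn (TF-IDF plus truncated SVD) as a stand-in for the LSA tooling actually used in the study: a response is assigned the label of the reference explanation it is most similar to in the reduced semantic space. The reference texts and responses below are illustrative only.

```python
# Hedged sketch of LSA-based classification of interview responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

references = {
    "scientific":    "the earth's tilted axis changes how directly sunlight strikes each hemisphere",
    "misconception": "the earth is closer to the sun in summer and farther away in winter",
}
responses = [
    "in summer we are nearer the sun so it gets hotter",
    "the tilt of the axis means the sun hits us more directly in summer",
]

corpus = list(references.values()) + responses
tfidf = TfidfVectorizer().fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # reduced semantic space

ref_vecs, resp_vecs = lsa[: len(references)], lsa[len(references):]
labels = list(references.keys())
for text, sims in zip(responses, cosine_similarity(resp_vecs, ref_vecs)):
    print(f"{labels[sims.argmax()]:>13}: {text}")   # label of most similar reference
```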
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
Valero, Germán; Cárdenas, Paula
The Faculty of Veterinary Medicine and Animal Science of the National Autonomous University of Mexico (UNAM) uses the Moodle learning management system for formative and summative computer assessment. The authors of this article - the teacher primarily responsible for Moodle implementation and a researcher who is a recent Moodle adopter - describe and discuss the students' and teachers' attitudes to summative and formative computer assessment in Moodle. Item analysis of quiz results helped us to identify and fix poorly performing questions, which greatly reduced student complaints and improved objective assessment. The use of certainty-based marking (CBM) in formative assessment in veterinary pathology was well received by the students and should be extended to more courses. The importance of having proficient computer support personnel should not be underestimated. A properly translated language pack is essential for the use of Moodle in a language other than English.
Benefits of music training in mandarin-speaking pediatric cochlear implant users.
Fu, Qian-Jie; Galvin, John J; Wang, Xiaosong; Wu, Jiunn-Liang
2015-02-01
The aims of this study were to assess young (5- to 10-year-old) Mandarin-speaking cochlear implant (CI) users' musical pitch perception and to assess the benefits of computer-based home training on performance. Melodic contour identification (MCI) was used to assess musical pitch perception in 14 Mandarin-speaking pediatric CI users; the instrument timbre and the contour length were varied as experimental parameters. Six subjects received subsequent MCI training on their home computer in which auditory and visual feedback were provided. MCI performance was generally poor (grand mean=33.3% correct) and highly variable, with scores ranging from 9.3% to 98.1% correct; there was no significant effect of instrument timbre or contour length on performance (p>.05). After 4 weeks of training, performance sharply improved. Follow-up measures that were conducted 8 weeks after training was stopped showed no significant decline in MCI performance. For the 6 trained subjects, there was a significant effect of contour length for the training and follow-up measures. These preliminary data suggest that although baseline MCI performance initially may be poor, training may greatly improve Mandarin-speaking pediatric CI users' melodic pitch perception.
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Tseng, Kuo-Hung; Lou, Shi-Jer
2012-01-01
This study explored the consistency and difference of teacher-, student self- and peer-assessment in the context of Web-based portfolio assessment. Participants were 72 senior high school students enrolled in a computer application course. Through the assessment system, the students performed portfolio creation, inspection, self- and…
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, and metamaterials. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools based mainly on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested on numerical benchmark results assessing accuracy and efficiency for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
High-performance computing with quantum processing units
Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...
2017-03-01
The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
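The counter-based scheme can be sketched as follows: each worker repeatedly claims the next unprocessed contingency index from a shared counter, so faster workers naturally take on more cases. This Python/multiprocessing version is only a conceptual stand-in for the MPI and hybrid implementations evaluated in the paper; solve_contingency is a hypothetical placeholder for the actual power-flow solution.

```python
# Hedged sketch of counter-based dynamic load balancing for contingency analysis.
import multiprocessing as mp
import random
import time

N_CONTINGENCIES = 100

def solve_contingency(case_id):
    time.sleep(random.uniform(0.001, 0.01))   # stand-in for an AC power-flow solve
    return case_id, "secure"

def worker(counter, lock, results):
    while True:
        with lock:                             # atomic "fetch-and-increment" of the shared counter
            case_id = counter.value
            counter.value += 1
        if case_id >= N_CONTINGENCIES:
            return
        results.append(solve_contingency(case_id))

if __name__ == "__main__":
    manager = mp.Manager()
    counter, lock, results = mp.Value("i", 0), mp.Lock(), manager.list()
    procs = [mp.Process(target=worker, args=(counter, lock, results)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"analyzed {len(results)} contingencies")
```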
On-Line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies
ERIC Educational Resources Information Center
Johnson, Martin; Green, Sylvia
2006-01-01
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Assessment of Computer Literacy of Nurses in Lesotho.
Mugomeri, Eltony; Chatanga, Peter; Maibvise, Charles; Masitha, Matseliso
2016-11-01
Health systems worldwide are moving toward use of information technology to improve healthcare delivery. However, this requires basic computer skills. This study assessed the computer literacy of nurses in Lesotho using a cross-sectional quantitative approach. A structured questionnaire with 32 standardized computer skills was distributed to 290 randomly selected nurses in Maseru District. Univariate and multivariate logistic regression analyses in Stata 13 were performed to identify factors associated with having inadequate computer skills. Overall, 177 (61%) nurses scored below 16 of the 32 skills assessed. Finding hyperlinks on Web pages (63%), use of advanced search parameters (60.2%), and downloading new software (60.1%) proved to be challenging to the highest proportions of nurses. Age, sex, year of obtaining latest qualification, computer experience, and work experience were significantly (P < .05) associated with inadequate computer skills in univariate analysis. However, in multivariate analyses, sex (P = .001), year of obtaining latest qualification (P = .011), and computer experience (P < .001) emerged as significant factors. The majority of nurses in Lesotho have inadequate computer skills, and this is significantly associated with having many years since obtaining their latest qualification, being female, and lack of exposure to computers. These factors should be considered during planning of training curriculum for nurses in Lesotho.
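A hedged sketch of the multivariate analysis (in Python rather than Stata, with synthetic data): inadequate computer skills regressed on sex, years since the latest qualification, and prior computer experience, with exponentiated coefficients read as odds ratios. The variable names and effect sizes are assumptions for illustration, not the study's data.

```python
# Hedged sketch of a multivariate logistic regression for predictors of
# inadequate computer skills, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 290
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "years_since_qual": rng.integers(0, 30, n),
    "computer_experience": rng.integers(0, 2, n),
})
logit_p = -0.5 + 0.8 * df.female + 0.05 * df.years_since_qual - 1.2 * df.computer_experience
df["inadequate_skills"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("inadequate_skills ~ female + years_since_qual + computer_experience",
                  data=df).fit(disp=False)
print(np.exp(model.params))   # exponentiated coefficients interpreted as odds ratios
```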
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
Multiple drivers shape the types of chemical assessments performed within many regulatory agencies including economic considerations, data availability, and the ultimate application of the assessment. The result is that chemical assessments are “fit-for-purpose” ranging from pri...
Computer assisted blast design and assessment tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, A.R.; Kleine, T.H.; Forsyth, W.W.
1995-12-31
In general the software required by a blast designer includes tools that graphically present blast designs (surface and underground), can analyze a design or predict its result, and can assess blasting results. As computers develop and computer literacy continues to rise, the development and use of such tools will spread. Examples of the tools that are becoming available include: automatic blast pattern generation and underground ring design; blast design evaluation in terms of explosive distribution and detonation simulation; fragmentation prediction; blast vibration prediction and minimization; blast monitoring for assessment of dynamic performance; vibration measurement, display and signal processing; evaluation of blast results in terms of fragmentation; and risk and reliability based blast assessment. The authors have identified a set of criteria that are essential in choosing appropriate software blasting tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
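A minimal sketch of a control-variate estimator, assuming a cheap auxiliary quantity X with known mean is available for each realization (for example, from a coarse-mesh or simplified model) and is correlated with the PQI. The synthetic data below only illustrate the variance-reduction mechanics, not the paper's elliptic model problem.

```python
# Hedged sketch of a control-variate estimator for E(PQI).
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(0.0, 1.0, n)                        # control variate with known mean mu_X = 0
pqi = 5.0 + 2.0 * x + rng.normal(0.0, 0.5, n)      # expensive PQI, correlated with x

mu_x = 0.0
c = -np.cov(pqi, x)[0, 1] / np.var(x, ddof=1)      # (near-)optimal control-variate coefficient
plain = pqi.mean()
cv = pqi.mean() + c * (x.mean() - mu_x)            # control-variate-corrected estimate

print(f"plain MC estimate : {plain:.4f}")
print(f"control-variate   : {cv:.4f}  (true mean = 5.0)")
```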
Assessment of Automatic Fare Collection Equipment at Three European Transit Properties
DOT National Transportation Integrated Search
1982-12-01
This report is an assessment of automatic fare collection (AFC) equipment performance conducted at three European properties in accordance with procedures defined in the Property Evaluation Plan (PEP) developed by Input Output Computer Services, Inc....
Computer Analysis Of High-Speed Roller Bearings
NASA Technical Reports Server (NTRS)
Coe, H.
1988-01-01
High-speed cylindrical roller-bearing analysis program (CYBEAN) developed to compute behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With program, accurate assessment of geometry-induced roller preload possible for variety of outer-ring and housing configurations and loading conditions. Enables detailed examination of bearing performance and permits exploration of causes and consequences of bearing skew. Provides general capability for assessment of designs of bearings supporting main shafts of engines. Written in FORTRAN IV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
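The analog ensemble step can be sketched briefly: for each new coarse-model estimate, the k most similar past coarse estimates are located, and the observations from those past times form an ensemble whose mean gives a deterministic estimate and whose spread gives probabilistic information. Everything below is synthetic and merely illustrates the idea; it is not the configuration used in the paper.

```python
# Hedged sketch of the analog-ensemble downscaling idea on synthetic wind speeds.
import numpy as np

rng = np.random.default_rng(4)
n_train, n_target, k = 5000, 10, 20

coarse_train = rng.weibull(2.0, n_train) * 8.0                  # historical coarse wind speeds (m/s)
obs_train = coarse_train * 1.1 + rng.normal(0, 1.0, n_train)    # matching site observations
coarse_target = rng.weibull(2.0, n_target) * 8.0                # new coarse estimates to downscale

for c in coarse_target:
    idx = np.argsort(np.abs(coarse_train - c))[:k]              # k best analogs of the coarse value
    ensemble = obs_train[idx]                                   # observations from those analog times
    print(f"coarse {c:5.2f} m/s -> analog mean {ensemble.mean():5.2f} m/s "
          f"+/- {ensemble.std():4.2f}")
```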
Rybacka, Anna; Goździk-Spychalska, Joanna; Rybacki, Adam; Piorunek, Tomasz; Batura-Gabryel, Halina; Karmelita-Katulska, Katarzyna
2018-05-04
In cystic fibrosis, pulmonary function tests (PFTs) and computed tomography are used to assess lung function and structure, respectively. Although both techniques of assessment are congruent, there are lingering doubts about which PFT variables show the best congruence with computed tomography scoring. In this study we addressed the issue by reinvestigating the association between PFT variables and the score of changes seen in computed tomography scans in patients with cystic fibrosis with and without pulmonary exacerbation. This retrospective study comprised 40 patients in whom PFTs and computed tomography were performed no longer than 3 weeks apart. Images (inspiratory: 0.625 mm slice thickness, 0.625 mm interval; expiratory: 1.250 mm slice thickness, 10 mm interval) were evaluated with the Bhalla scoring system. The most frequent structural abnormalities found in scans were bronchiectases and peribronchial thickening. The strongest relationship was found between the Bhalla score and forced expiratory volume in 1 s (FEV1). The Bhalla score also was related to forced vital capacity (FVC), the FEV1/FVC ratio, residual volume (RV), and the RV/total lung capacity (TLC) ratio. We conclude that lung structural data obtained from the computed tomography examination are highly congruent with lung function data. Thus, computed tomography imaging may supersede functional assessment in cases of poor compliance with spirometry procedures in the elderly or children. Computed tomography also seems more sensitive than PFTs in the assessment of cystic fibrosis progression. Moreover, in early phases of cystic fibrosis, computed tomography, due to its excellent resolution, may be irreplaceable in monitoring pulmonary damage.
Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P
2015-02-01
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies.
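Two of the performance metrics discussed in this literature can be sketched with synthetic phantom data: bias relative to a known true volume, and the repeatability coefficient computed from paired replicate measurements (RC = 2.77 x within-subject SD). The numbers below are placeholders, not results from the cited studies.

```python
# Hedged sketch: bias and repeatability coefficient for an imaging algorithm,
# from synthetic repeated phantom measurements.
import numpy as np

rng = np.random.default_rng(5)
true_volume = 500.0                                                   # mm^3, known phantom value
replicates = true_volume * 1.03 + rng.normal(0, 8.0, size=(10, 2))    # 10 nodules, 2 scans each

bias = replicates.mean() - true_volume
within_sd = np.sqrt(((replicates[:, 0] - replicates[:, 1]) ** 2 / 2).mean())  # within-subject SD
rc = 2.77 * within_sd                                                 # 95% repeatability coefficient

print(f"bias = {bias:.1f} mm^3, within-subject SD = {within_sd:.1f}, RC = {rc:.1f}")
```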
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
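One plausible way to perform the construct-validity screen described above is to retain a candidate metric only if it discriminates between novice, intermediate, and expert groups. The sketch below uses a Kruskal-Wallis test on synthetic task-completion times; the actual statistical tests and thresholds used in the study are not reproduced here.

```python
# Hedged sketch of a construct-validity check for a single simulator metric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
novice = rng.normal(120, 20, 12)        # seconds to complete a task (illustrative values)
intermediate = rng.normal(95, 15, 12)
expert = rng.normal(70, 10, 11)

h, p = stats.kruskal(novice, intermediate, expert)
print(f"H = {h:.2f}, p = {p:.4f} -> metric {'retained' if p < 0.05 else 'discarded'}")
```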
ERIC Educational Resources Information Center
Beale, Ivan L.
2005-01-01
Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…
Evaluation and Assessment of a Biomechanics Computer-Aided Instruction.
ERIC Educational Resources Information Center
Washington, N.; Parnianpour, M.; Fraser, J. M.
1999-01-01
Describes the Biomechanics Tutorial, a computer-aided instructional tool that was developed at Ohio State University to expedite the transition from lecture to application for undergraduate students. Reports evaluation results that used statistical analyses and student questionnaires to show improved performance on posttests as well as positive…
Computer Science Concept Inventories: Past and Future
ERIC Educational Resources Information Center
Taylor, C.; Zingaro, D.; Porter, L.; Webb, K. C.; Lee, C. B.; Clancy, M.
2014-01-01
Concept Inventories (CIs) are assessments designed to measure student learning of core concepts. CIs have become well known for their major impact on pedagogical techniques in other sciences, especially physics. Presently, there are no widely used, validated CIs for computer science. However, considerable groundwork has been performed in the form…
Divergence of Digital World of Teachers
ERIC Educational Resources Information Center
Uzunboylu, Huseyin; Tuncay, Nazime
2010-01-01
There exists great diversity in the teachers' digital world. Teachers are being discriminated based on numerous educational gaps. This paper seeks to assess the extent of the digital divide among the North Cyprus vocational teachers along the four axes: age, Internet access, computer access, and performance (computing knowledge/experience). A…
TEACHING ENGINEERING DESIGN, A STUDY OF JOBSHOP.
ERIC Educational Resources Information Center
ENTWISLE, DORIS R.; HUGGINS, W.H.
THE USE OF A COMPUTER PROGRAM BY ENGINEERING STUDENTS TO SIMULATE A JOB SHOP THAT MANUFACTURES ELECTRONIC DEVICES HAS INDICATED THAT SIMULATION METHODS OFFER REALISTIC ASSISTANCE IN TEACHING. EACH STUDENT IN THE STUDY SUBMITTED SPECIFICATIONS FOR A CIRCUIT DESIGN AND, FROM THE COMPUTER, RECEIVED PERFORMANCE ASSESSMENTS OF THE CIRCUIT WHICH…
Shalf, John M.; Leland, Robert
2015-12-01
Here, photolithography systems are on pace to reach atomic scale by the mid-2020s, necessitating alternatives to continue realizing faster, more predictable, and cheaper computing performance. If the end of Moore's law is real, a research agenda is needed to assess the viability of novel semiconductor technologies and navigate the ensuing challenges.
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui
2013-01-01
This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…
NASA Astrophysics Data System (ADS)
Lehman, Donald Clifford
Today's medical laboratories are dealing with cost containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading permits students to get results quicker and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.
Kontopantelis, Evangelos; Buchan, Iain; Reeves, David; Checkland, Kath; Doran, Tim
2013-08-02
To investigate the relationship between performance on the UK Quality and Outcomes Framework pay-for-performance scheme and choice of clinical computer system. Retrospective longitudinal study. Data for 2007-2008 to 2010-2011, extracted from the clinical computer systems of general practices in England. All English practices participating in the pay-for-performance scheme: average 8257 each year, covering over 99% of the English population registered with a general practice. Levels of achievement on 62 quality-of-care indicators, measured as: reported achievement (levels of care after excluding inappropriate patients); population achievement (levels of care for all patients with the relevant condition) and percentage of available quality points attained. Multilevel mixed effects multiple linear regression models were used to identify population, practice and clinical computing system predictors of achievement. Seven clinical computer systems were consistently active in the study period, collectively holding approximately 99% of the market share. Of all population and practice characteristics assessed, choice of clinical computing system was the strongest predictor of performance across all three outcome measures. Differences between systems were greatest for intermediate outcomes indicators (eg, control of cholesterol levels). Under the UK's pay-for-performance scheme, differences in practice performance were associated with the choice of clinical computing system. This raises the question of whether particular system characteristics facilitate higher quality of care, better data recording or both. Inconsistencies across systems need to be understood and addressed, and researchers need to be cautious when generalising findings from samples of providers using a single computing system.
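A minimal sketch of this kind of multilevel analysis with synthetic data: indicator achievement regressed on practice characteristics with a random intercept for clinical computing system, using statsmodels MixedLM. The predictors, system count, and effect sizes are illustrative assumptions, not the study's model specification.

```python
# Hedged sketch of a multilevel (mixed-effects) model with practices grouped by
# clinical computing system, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600
df = pd.DataFrame({
    "system": rng.integers(0, 7, n),                 # 7 clinical computing systems (assumed)
    "list_size_k": rng.normal(7.0, 2.0, n),          # practice list size in thousands (assumed)
    "deprivation": rng.normal(0.0, 1.0, n),          # standardized deprivation score (assumed)
})
system_effect = rng.normal(0, 2, 7)                  # system-level shifts in achievement
df["achievement"] = (85 + system_effect[df.system] - 1.5 * df.deprivation
                     + 0.2 * df.list_size_k + rng.normal(0, 3, n))

model = smf.mixedlm("achievement ~ deprivation + list_size_k", data=df,
                    groups=df["system"]).fit()
print(model.summary())
```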
NASA Technical Reports Server (NTRS)
Tanner, J. A.; Stubbs, S. M.; Dreher, R. C.; Smith, E. G.
1982-01-01
A computer study was performed to assess the accuracy of three brake pressure-torque mathematical models. The investigation utilized one main gear wheel, brake, and tire assembly of a McDonnell Douglas DC-9 series 10 airplane. The investigation indicates that the performance of aircraft antiskid braking systems is strongly influenced by tire characteristics, dynamic response of the antiskid control valve, and pressure-torque response of the brake. The computer study employed an average torque error criterion to assess the accuracy of the models. The results indicate that a variable nonlinear spring with hysteresis memory function models the pressure-torque response of the brake more accurately than currently used models.
Computation of Turbulent Wake Flows in Variable Pressure Gradient
NASA Technical Reports Server (NTRS)
Duquesne, N.; Carlson, J. R.; Rumsey, C. L.; Gatski, T. B.
1999-01-01
Transport aircraft performance is strongly influenced by the effectiveness of high-lift systems. Developing wakes generated by the airfoil elements are subjected to strong pressure gradients and can thicken very rapidly, limiting maximum lift. This paper focuses on the effects of various pressure gradients on developing symmetric wakes and on the ability of a linear eddy viscosity model and a non-linear explicit algebraic stress model to accurately predict their downstream evolution. In order to reduce the uncertainties arising from numerical issues when assessing the performance of turbulence models, three different numerical codes with the same turbulence models are used. Results are compared to available experimental data to assess the accuracy of the computational results.
Extreme Scale Computing to Secure the Nation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D L; McGraw, J R; Johnson, J R
2009-11-10
Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high-end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S. administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence of the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).
Results from a Prototype Proton-CT Head Scanner
NASA Astrophysics Data System (ADS)
Johnson, R. P.; Bashkirov, V. A.; Coutrakon, G.; Giacometti, V.; Karbasi, P.; Karonis, N. T.; Ordoñez, C. E.; Pankuch, M.; Sadrozinski, H. F.-W.; Schubert, K. E.; Schulte, R. W.
We are exploring low-dose proton radiography and computed tomography (pCT) as techniques to improve the accuracy of proton treatment planning and to provide artifact-free images for verification and adaptive therapy at the time of treatment. Here we report on comprehensive beam test results with our prototype pCT head scanner. The detector system and data acquisition attain a sustained rate of more than a million protons individually measured per second, allowing a full CT scan to be completed in six minutes or less of beam time. In order to assess the performance of the scanner for proton radiography as well as computed tomography, we have performed numerous scans of phantoms at the Northwestern Medicine Chicago Proton Center including a custom phantom designed to assess the spatial resolution, a phantom to assess the measurement of relative stopping power, and a dosimetry phantom. Some images, performance, and dosimetry results from those phantom scans are presented together with a description of the instrument, the data acquisition system, and the calibration methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, Richard D.; Hones, Holger E.
The RAJA Performance Suite is designed to evaluate performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. Actual computer instructions executed and how they run in parallel differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL/Sierra machine (initial LLNL delivery expected in late-2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.
Passenger rail vehicle safety assessment methodology. Volume I, Summary of safe performance limits.
DOT National Transportation Integrated Search
2000-04-01
This report presents a methodology based on computer simulation that assesses the safe dynamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical design parameters and characteristic properties of bo...
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
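The statistics listed above are ratios of raw hardware counter values sampled over the monitoring interval. The following is a minimal sketch of that derivation only; the counter names and sampling interface are hypothetical and are not the actual Perfmon or VxWorks API.

```python
# Minimal sketch (hypothetical counter names, not the Perfmon/VxWorks API):
# deriving per-task statistics from raw counter samples taken at the start
# and end of one monitoring interval.

def derived_metrics(start, end, interval_s):
    """start/end: dicts of hypothetical counter values for one task."""
    d = {k: end[k] - start[k] for k in start}            # counter deltas
    busy_frac = d["task_cycles"] / d["total_cycles"]      # fraction of CPU time
    ipc = d["instructions"] / max(d["task_cycles"], 1)    # dispatch-ratio proxy
    l1i_miss = d["l1i_misses"] / max(d["ifetches"], 1)
    l1d_miss = d["l1d_misses"] / max(d["loads_stores"], 1)
    return {
        "pct_time": 100.0 * busy_frac,
        "instructions_per_sec": d["instructions"] / interval_s,
        "dispatch_ratio": ipc,
        "l1_icache_miss_rate": l1i_miss,
        "l1_dcache_miss_rate": l1d_miss,
    }

start = {"task_cycles": 1.0e8, "total_cycles": 4.0e8, "instructions": 6.0e7,
         "l1i_misses": 2.0e5, "ifetches": 5.0e7, "l1d_misses": 3.0e5,
         "loads_stores": 2.0e7}
end = {k: v * 2 for k, v in start.items()}                # fake second sample
print(derived_metrics(start, end, interval_s=5.0))
```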
Reduced order models for assessing CO2 impacts in shallow unconfined aquifers
Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...
2016-01-28
Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to embed directly in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models (ROMs) which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced-order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
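The workflow above amounts to fitting a fast surrogate to ensemble simulation outputs and checking its predictive skill with cross-validation. The sketch below illustrates that workflow on synthetic data; a scikit-learn gradient-boosting regressor stands in for MARS (the MARS algorithm itself is available in third-party packages such as py-earth), and the input/output names are made up.

```python
# Sketch of the response-surface workflow described above, on synthetic data.
# A gradient-boosting regressor stands in for MARS here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Synthetic "aquifer/leak" inputs: e.g. permeability, leak rate, brine fraction, depth
X = rng.uniform(size=(n, 4))
# Synthetic output of interest, e.g. volume of the depressed-pH plume
y = 10 * X[:, 0] * X[:, 1] + np.sin(4 * X[:, 2]) + 0.1 * rng.normal(size=n)

rom = GradientBoostingRegressor(random_state=0)
# Cross-validation tests how well the ROM predicts runs not used in fitting
scores = cross_val_score(rom, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```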
Garcia, Elisângela Zacanti; Yamashita, Hélio Kiitiro; Garcia, Davi Sousa; Padovani, Marina Martins Pereira; Azevedo, Renata Rangel; Chiari, Brasília Maria
2016-01-01
Cone beam computed tomography (CBCT), which represents an alternative to traditional computed tomography and magnetic resonance imaging, may be a useful instrument to study vocal tract physiology related to vocal exercises. This study aims to evaluate the applicability of CBCT to the assessment of variations in the vocal tract of healthy individuals before and after vocal exercises. Voice recordings and CBCT images before and after vocal exercises performed by 3 speech-language pathologists without vocal complaints were collected and compared. Each participant performed 1 type of exercise, i.e., Finnish resonance tube technique, prolonged consonant "b" technique, or chewing technique. The analysis consisted of an acoustic analysis and tomographic imaging. Modifications of the vocal tract settings following vocal exercises were properly detected by CBCT, and changes in the acoustic parameters were, for the most part, compatible with the variations detected in image measurements. CBCT was shown to be capable of properly assessing the changes in vocal tract settings promoted by vocal exercises. © 2017 S. Karger AG, Basel.
Performance Assessment Institute-NV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardo, Joseph
2012-12-31
The National Supercomputing Center for Energy and the Environment's intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada, Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge generation, and knowledge sharing among users.
Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...
2014-09-28
Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full scale applications. Each miniapp is designed to represent a key performance characteristic that does or is expected to significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, "Under what conditions does a miniapp represent a key performance characteristic in a full app?"
[Software for performing a global phenotypic and genotypic nutritional assessment].
García de Diego, L; Cuervo, M; Martínez, J A
2013-01-01
The nutritional assessment of a patient requires the simultaneous management of extensive information and a great number of databases, as both aspects of the nutrition process and the clinical situation of the patient are analyzed. The introduction of computers in the nutritional area constitutes an extraordinary advance in the administration of nutrition information, providing a complete assessment of nutritional aspects in a quick and easy way. The aim was to develop a computer program that can be used as a tool for assessing the nutritional status of the patient, for the education of clinical staff, for epidemiological studies and for educational purposes. The work is based on a computer program which assists the health specialist in performing a full nutritional evaluation of the patient, through the registration and assessment of phenotypic and genotypic features. The application provides nutritional prognosis based on anthropometric and biochemical parameters, images of states of malnutrition, questionnaires to characterize diseases, diagnostic criteria, identification of alleles associated with the development of specific metabolic illnesses, and questionnaires on quality of life, for a customized intervention. The program includes, as part of the nutritional assessment of the patient, food intake analysis, design of diets and promotion of physical activity, introducing food frequency questionnaires, dietary recalls, healthy eating indexes, model diets, fitness tests, and recommendations, recalls and questionnaires of physical activity. The software was developed in Java Swing, using an SQLite database and external libraries such as JFreeChart for plotting graphs. This newly designed software is composed of five blocks categorized into ten modules named: Patients, Anthropometry, Clinical History, Biochemistry, Dietary History, Diagnostic (with genetic make-up), Quality of Life, Physical Activity, Energy Expenditure and Diets. Each module has a specific function which evaluates a different aspect of the nutritional status of the patient. UNyDIET is a global computer program, customizable and upgradeable, easy to use and versatile, aimed at health specialists, medical staff, dietitians, nutritionists, scientists and educators. This tool can be used as a working instrument in programs promoting health, in nutritional and clinical assessments, in the evaluation of health care quality, in epidemiological studies, in nutrition intervention programs and in teaching. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
On the role of minicomputers in structural design
NASA Technical Reports Server (NTRS)
Storaasli, O. O.
1977-01-01
Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.
Human-Computer Interaction and Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler)
1995-01-01
The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
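To make the "finite difference representations of conservation laws" concrete, the sketch below shows a minimal explicit finite-difference update for 2-D heat conduction, i.e. a stripped-down discretization of the energy equation alone. It is an illustration under simple assumptions, not the TEMPEST code or its coupled electrical/fluid solution.

```python
# Not TEMPEST: a minimal explicit finite-difference update for 2-D heat
# conduction, illustrating the discretized form of one conservation law
# (energy) of the kind such melter/ISV models solve on a grid.
import numpy as np

nx, ny = 50, 50
dx = dy = 0.01                 # grid spacing, m
alpha = 1.0e-6                 # thermal diffusivity, m^2/s
dt = 0.2 * dx**2 / alpha       # within the explicit stability limit

T = np.full((nx, ny), 300.0)   # initial temperature field, K
T[20:30, 20:30] = 1500.0       # hot "electrode/melt" region

for _ in range(1000):
    lap = ((np.roll(T, 1, 0) - 2 * T + np.roll(T, -1, 0)) / dx**2 +
           (np.roll(T, 1, 1) - 2 * T + np.roll(T, -1, 1)) / dy**2)
    T = T + alpha * dt * lap
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 300.0   # fixed-temperature walls

print("peak temperature after 1000 steps: %.1f K" % T.max())
```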
Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping
2009-01-01
Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiment and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.
A framework for cognitive monitoring using computer game interactions.
Jimison, Holly B; Pavel, Misha; Bissell, Payton; McKanna, James
2007-01-01
Many countries are faced with a rapidly increasing economic and social challenge of caring for their elderly population. Cognitive issues are at the forefront of the list of concerns. People over the age of 75 are at risk for medically related cognitive decline and confusion, and the early detection of cognitive problems would allow for more effective clinical intervention. However, standard cognitive assessments are not diagnostically sensitive and are performed infrequently. To address these issues, we have developed a set of adaptive computer games to monitor cognitive performance in a home environment. Assessment algorithms for various aspects of cognition are embedded in the games. The monitoring of these metrics allows us to detect within subject trends over time, providing a method for the early detection of cognitive decline. In addition, the real-time information on cognitive state is used to adapt the user interface to the needs of the individual user. In this paper we describe the software architecture and methodology for monitoring cognitive performance using data from natural computer interactions in a home setting.
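The core idea above is detecting within-subject trends in game-derived metrics over time. The sketch below is an illustrative stand-in, not the authors' embedded algorithms: it flags a decline by fitting a linear trend to synthetic session-level scores.

```python
# Illustrative sketch only (not the authors' embedded algorithms): flag a
# within-subject decline by fitting a linear trend to session-level scores
# from a game-derived cognitive metric.
import numpy as np
from scipy import stats

sessions = np.arange(30)                       # 30 play sessions
scores = 100 - 0.4 * sessions + np.random.default_rng(1).normal(0, 3, 30)

res = stats.linregress(sessions, scores)
declining = res.slope < 0 and res.pvalue < 0.05
print(f"slope={res.slope:.2f} per session, p={res.pvalue:.3f}, "
      f"flag for review: {declining}")
```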
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
DOT National Transportation Integrated Search
1980-10-01
The present study examined a variety of possible predictors of complex monitoring performance. The criterion task was designed to resemble that of a highly automated air traffic control radar system containing computer-generated alphanumeric displays...
Central Data Processing System (CDPS) user's manual: Solar heating and cooling program
NASA Technical Reports Server (NTRS)
1976-01-01
The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites is presented. The instrumentation data associated with these systems were collected, processed, and presented in a form which supported continuity of performance evaluation across all applications. The CDPS consisted of three major elements: a communication interface computer, a central data processing computer, and a performance evaluation data base. Users of the performance data base were identified, and procedures for operation and guidelines for software maintenance were outlined. The manual also defined the output capabilities of the CDPS in support of external users of the system.
Generic Hypersonic Inlet Module Analysis
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.
2004-01-01
A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul
2012-01-01
This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…
ERIC Educational Resources Information Center
Popov, Vitaliy; Biemans, Harm J. A.; Kuznetsov, Andrei N.; Mulder, Martin
2014-01-01
In this exploratory study, the authors introduced an interculturally enriched collaboration script (IECS) for working in culturally diverse groups within a computer-supported collaborative learning (CSCL) environment and then assessed student online collaborative behaviour, learning performance and experiences. The question was if and how these…
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
The potential of computer-aided process engineering (CAPE) tools to enable process engineers to improve the environmental performance of both their processes and across the life cycle (from cradle-to-grave) has long been proffered. However, this use of CAPE has not been fully ach...
Experimental Investigation of Project Orion Crew Exploration Vehicle Aeroheating in AEDC Tunnel 9
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Horvath, Thomas J.; Berger, Karen T.; Lillard, Randolph P.; Kirk, Benjamin S.; Coblish, Joseph J.; Norris, Joseph D.
2008-01-01
An investigation of the aeroheating environment of the Project Orion Crew Entry Vehicle has been performed in the Arnold Engineering Development Center Tunnel 9. The goals of this test were to measure turbulent heating augmentation levels on the heat shield and to obtain high-fidelity heating data for assessment of computational fluid dynamics methods. Laminar and turbulent predictions were generated for all wind tunnel test conditions and comparisons were performed with the data for the purpose of helping to define uncertainty margins for the computational method. Data from both the wind tunnel test and the computational study are presented herein.
Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease
Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker
2013-01-01
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667
Automatic and objective assessment of alternating tapping performance in Parkinson's disease.
Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker
2013-12-09
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
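The analysis pipeline described above (feature extraction, PCA dimension reduction, and a logistic-regression mapping to the visually assessed scores, evaluated by 10-fold stratified cross-validation) can be sketched as follows. The data here are synthetic and the component counts are illustrative; this is not the study's code or dataset.

```python
# Sketch of the described pipeline on synthetic data: PCA-reduced tapping
# parameters mapped to visually assessed severity scores with logistic
# regression, evaluated by 10-fold stratified cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 24))        # 24 quantitative tapping parameters
y = rng.integers(0, 3, size=105)      # visually assessed severity (0-2)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(model, X, y, cv=cv)
print("10-fold accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))
```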
Reliability of a computer-based system for measuring visual performance skills.
Erickson, Graham B; Citek, Karl; Cove, Michelle; Wilczek, Jennifer; Linster, Carolyn; Bjarnason, Brendon; Langemo, Nathan
2011-09-01
Athletes have demonstrated better visual abilities than nonathletes. A vision assessment for an athlete should include methods to evaluate the quality of visual performance skills in the most appropriate, accurate, and repeatable manner. This study determines the reliability of the visual performance measures assessed with a computer-based system, known as the Nike Sensory Station. One hundred twenty-five subjects (56 men, 69 women), age 18 to 30, completed Phase I of the study. Subjects attended 2 sessions, separated by at least 1 week, in which identical protocols were followed. Subjects completed the following assessments: Visual Clarity, Contrast Sensitivity, Depth Perception, Near-Far Quickness, Target Capture, Perception Span, Eye-Hand Coordination, Go/No Go, and Reaction Time. An additional 36 subjects (20 men, 16 women), age 22 to 35, completed Phase II of the study involving modifications to the equipment, instructions, and protocols from Phase I. Results show no significant change in performance over time on assessments of Visual Clarity, Contrast Sensitivity, Depth Perception, Target Capture, Perception Span, and Reaction Time. Performance did improve over time for Near-Far Quickness, Eye-Hand Coordination, and Go/No Go. The results of this study show that many of the Nike Sensory Station assessments show repeatability and no learning effect over time. The measures that did improve across sessions show an expected learning effect caused by the motor response characteristics being measured. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ali, Syed Firasat; Khan, M. Javed; Rossi, Marcia J.; Heath, Bruce e.; Crane, Peter; Ward, Marcus; Crier, Tomyka; Knighten, Tremaine; Culpepper, Christi
2007-01-01
One result of the relatively recent advances in computing technology has been the decreasing cost of computers and increasing computational power. This has allowed high fidelity airplane simulations to be run on personal computers (PC). Thus, simulators are now used routinely by pilots to substitute real flight hours for simulated flight hours for training for an aircraft type rating thereby reducing the cost of flight training. However, FAA regulations require that such substitution training must be supervised by Certified Flight Instructors (CFI). If the CFI presence could be reduced or eliminated for certain tasks this would mean a further cost savings to the pilot. This would require that the flight simulator have a certain level of 'intelligence' in order to provide feedback on pilot performance similar to that of a CFI. The 'intelligent' flight simulator would have at least the capability to use data gathered from the flight to create a measure for the performance of the student pilot. Also, to fully utilize the advances in computational power, the simulator would be capable of interacting with the student pilot using the best possible training interventions. This thesis reports on the two studies conducted at Tuskegee University investigating the effects of interventions on the learning of two flight maneuvers on a flight simulator and the robustness and accuracy of calculated performance indices as compared to CFI evaluations of performance. The intent of these studies is to take a step in the direction of creating an 'intelligent' flight simulator. The first study deals with the comparisons of novice pilot performance trained at different levels of above real-time to execute a level S-turn. The second study examined the effect of out-of-the-window (OTW) visual cues in the form of hoops on the performance of novice pilots learning to fly a landing approach on the flight simulator. The reliability/robustness of the computed performance metrics was assessed by comparing them with the evaluations of the landing approach maneuver by a number of CFIs.
Kontopantelis, Evangelos; Buchan, Iain; Reeves, David; Checkland, Kath; Doran, Tim
2013-01-01
Objectives: To investigate the relationship between performance on the UK Quality and Outcomes Framework pay-for-performance scheme and choice of clinical computer system. Design: Retrospective longitudinal study. Setting: Data for 2007–2008 to 2010–2011, extracted from the clinical computer systems of general practices in England. Participants: All English practices participating in the pay-for-performance scheme: on average 8257 each year, covering over 99% of the English population registered with a general practice. Main outcome measures: Levels of achievement on 62 quality-of-care indicators, measured as: reported achievement (levels of care after excluding inappropriate patients); population achievement (levels of care for all patients with the relevant condition); and percentage of available quality points attained. Multilevel mixed-effects multiple linear regression models were used to identify population, practice and clinical computing system predictors of achievement. Results: Seven clinical computer systems were consistently active in the study period, collectively holding approximately 99% of the market share. Of all population and practice characteristics assessed, choice of clinical computing system was the strongest predictor of performance across all three outcome measures. Differences between systems were greatest for intermediate outcomes indicators (e.g., control of cholesterol levels). Conclusions: Under the UK's pay-for-performance scheme, differences in practice performance were associated with the choice of clinical computing system. This raises the question of whether particular system characteristics facilitate higher quality of care, better data recording or both. Inconsistencies across systems need to be understood and addressed, and researchers need to be cautious when generalising findings from samples of providers using a single computing system. PMID:23913774
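A multilevel (mixed-effects) regression of this kind, with practices as the grouping level and computing system as a categorical predictor, can be sketched as below. The file name and column names are hypothetical, not the study's data or code.

```python
# Illustrative sketch (hypothetical file and column names): a mixed-effects
# regression of indicator achievement on clinical computing system, with
# practices as the grouping level and simple practice covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("qof_achievement.csv")   # hypothetical extract
model = smf.mixedlm("achievement ~ C(computer_system) + deprivation + list_size",
                    data=df, groups=df["practice_id"])
result = model.fit()
print(result.summary())
```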
Space Shuttle Underside Astronaut Communications Performance Evaluation
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Dobbins, Justin A.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.
2005-01-01
The Space Shuttle Ultra High Frequency (UHF) communications system is planned to provide Radio Frequency (RF) coverage for astronauts working on the underside of the Space Shuttle Orbiter (SSO) for thermal tile inspection and repair. This study assesses the Space Shuttle UHF communication performance for astronauts in the shadow region without line-of-sight (LOS) to the Space Shuttle and Space Station UHF antennas. To ensure the RF coverage performance at anticipated astronaut worksites, the link margin between the UHF antennas and Extravehicular Activity (EVA) astronauts with significant vehicle structure blockage was analyzed. A series of near-field measurements were performed using the NASA/JSC Anechoic Chamber antenna test facilities. Computational investigations were also performed using electromagnetic modeling techniques. The computer simulation tool based on the Geometrical Theory of Diffraction (GTD) was used to compute the signal strengths. The signal strength was obtained by computing the reflected and diffracted fields along the propagation paths between the transmitting and receiving antennas. Based on the results obtained in this study, RF coverage for UHF communication links was determined for the anticipated astronaut worksites in the shadow region underneath the Space Shuttle.
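A link-margin check of the kind described combines free-space path loss with an additional blockage/diffraction loss that a GTD computation or chamber measurement would supply for a shadowed worksite. The sketch below uses placeholder numbers, not the study's frequencies, powers, or loss values.

```python
# Simple link-budget sketch (placeholder numbers, not the study's values):
# free-space path loss plus an extra blockage/diffraction loss term that a
# GTD computation or chamber measurement would supply for shadowed worksites.
import math

def fspl_db(distance_m, freq_hz):
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_power_dbm = 30.0      # transmit power
tx_gain_dbi  = 3.0
rx_gain_dbi  = 0.0
blockage_db  = 18.0      # extra loss from vehicle structure (from GTD model)
rx_sens_dbm  = -95.0     # receiver sensitivity
d, f = 60.0, 414.2e6     # meters, Hz (example UHF channel)

rx_power = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
            - fspl_db(d, f) - blockage_db)
print("received power %.1f dBm, link margin %.1f dB"
      % (rx_power, rx_power - rx_sens_dbm))
```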
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potchen, E.J.
Questions regarding what imaging performance goals need to be met to produce effective biomedical research using positron emission computed tomography (PECT), how near those performance goals are to being realized by imaging systems, and the dependence of currently unachieved performance goals on design and operational factors have been addressed in the past year, along with refinement of economic estimates for the capital and operating costs of a PECT research facility. The two primary sources of information have been solicitations of expert opinion and review of current literature. (ACR)
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method for identifying and measuring school-aged children with poor auditory processing through a tablet computer. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in group 1, and 46 aged 5 to 14 years in group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.
Design and assessment of an interactive physics tutoring environment
NASA Astrophysics Data System (ADS)
Scott, Lisa Ann
2001-07-01
The application of scientific principles is an extremely important skill taught in undergraduate introductory science courses, yet many students emerge from such courses unable to reliably apply the scientific principles they have ostensibly learned. In an attempt to address this problem, the knowledge and thought processes needed to apply an important principle in introductory physics (Newton's law) were carefully analyzed. Reliable performance requires not only declarative knowledge but also corresponding procedural knowledge and the basic cognitive functions of deciding, implementing and assessing. Computer programs called guided-practice PALs (Personal Assistants for Learning) were developed to teach explicitly the knowledge and thought processes needed to apply Newton's law to solve problems. These programs employ a modified form of Palincsar and Brown's reciprocal-teaching strategy (1984) in which students and computers alternately coach each other, taking turns making decisions, implementing and assessing them. The computer programs make it practically feasible to provide students with individual guidance and feedback ordinarily unavailable in most courses. In a pilot study, the guided-practice PALs were found to be nearly as effective as individual tutoring by expert teachers and significantly more effective than the instruction provided in a well-taught physics course. This guided practice, however, is not sufficient to ensure that students develop the ability to perform independently. Accordingly, independent-performance PALs were developed which require students to work independently, receiving only the minimal feedback necessary to successfully complete the task. These independent-performance PALs are interspersed with guided-practice PALs to create an instructional environment which facilitates a gradual transition to independent performance. In a study designed to assess the efficacy of the PAL instruction, students in the PAL group used only guided-practice PALs and students in the PAL+ group used both guided-practice and independent-performance PALs. The performance of the PAL and PAL+ groups was compared to the performance of a Control group which received traditional instruction. The addition of the independent-performance PALs proved to be at least as effective as the guided-practice PALs alone, and both forms of PAL instruction were significantly more effective than traditional instruction.
CFD-Predicted Tile Heating Bump Factors Due to Tile Overlay Repairs
NASA Technical Reports Server (NTRS)
Lessard, Victor R.
2006-01-01
A Computational Fluid Dynamics investigation of the Orbiter's Tile Overlay Repair (TOR) is performed to assess the aeroheating Damage Assessment Team's (DAT) existing heating correlation method for protuberance interference heating on the surrounding thermal protection system. Aerothermodynamic heating analyses are performed for TORs at the design reference damage locations, body points 1800 and 1075, for a Mach 17.9, angle of attack 39 deg, STS-107 flight trajectory point with laminar flow. Six different cases are considered. The computed peak heating bump factors on the surrounding tiles are below the DAT's heating bump factor values for the smooth tile cases. However, for the uneven tile cases the peak interference heating is shown to be considerably higher than the existing correlation prediction.
Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim
2004-04-01
Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
The grout/glass performance assessment code system (GPACS) with verification and benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.
1994-12-01
GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.
Benefits of Music Training in Mandarin-Speaking Pediatric Cochlear Implant Users
ERIC Educational Resources Information Center
Fu, Qian-Jie; Galvin, John J., III; Wang, Xiaosong; Wu, Jiunn-Liang
2015-01-01
Purpose: The aims of this study were to assess young (5- to 10-year-old) Mandarin-speaking cochlear implant (CI) users' musical pitch perception and to assess the benefits of computer-based home training on performance. Method: Melodic contour identification (MCI) was used to assess musical pitch perception in 14 Mandarin-speaking pediatric CI…
What about Me?: Individual Self-Assessment by Skill and Level of Language Instruction
ERIC Educational Resources Information Center
Brantmeier, Cindy; Vanderplank, Robert; Strube, Michael
2012-01-01
In an investigation with advanced language learners, Brantmeier [Brantmeier, C., 2006. "Advanced L2 learners and reading placement: self-assessment, computer based testing, and subsequent performance." "System" 34 (1), 15-35.] reports that self-assessment (SA) of second language (L2) reading ability, when measured with self-rated scales, is not an…
Brammer, C; Dawson, D; Joseph, M; Tipper, J; Jemmet, T; Liew, L; Spinou, C; Grew, N; Pigadas, N; Rehman, K
2017-05-01
This study aimed to assess head and neck cancer patient satisfaction with the use of a touch-screen computer patient-completed questionnaire for assessing Adult Co-morbidity Evaluation 27 co-morbidity scores prior to treatment, along with its clinical reliability. A total of 96 head and neck cancer patients were included in the audit. An accurate Adult Co-morbidity Evaluation 27 co-morbidity score was achieved via patient-completed questionnaire assessment for 97 per cent of participants. In all, 96 per cent of patients found the use of a touch-screen computer acceptable and would be willing to use one again, and 62 per cent would be willing to do so without help. Patients were more likely to be willing to use the computer again without help if they were aged 65 years or younger (χ2 test; p = 0.0054) or had a performance status of 0 or 1 (χ2 test; p = 0.00034). Use of a touch-screen computer is an acceptable approach for assessing Adult Co-morbidity Evaluation 27 scores at pre-treatment assessment in a multidisciplinary joint surgical-oncology clinic.
Hjermstad, Marianne Jensen; Lie, Hanne C; Caraceni, Augusto; Currow, David C; Fainsinger, Robin L; Gundersen, Odd Erik; Haugen, Dagny Faksvaag; Heitzer, Ellen; Radbruch, Lukas; Stone, Patrick C; Strasser, Florian; Kaasa, Stein; Loge, Jon Håvard
2012-11-01
Symptom assessment by computers is only effective if it provides valid results and is perceived as useful for clinical use by the end users: patients and health care providers. To identify factors associated with discontinuation, time expenditure, and patient preferences of the computerized symptom assessment used in an international multicenter data collection project: the European Palliative Care Research Collaborative-Computerized Symptom Assessment. Cancer patients with incurable metastatic or locally advanced disease were recruited from 17 centers in eight countries, providing 1017 records for analyses. Observer-based registrations and patient-reported measures on pain, depression, and physical function were entered on touch screen laptop computers. The entire assessment was completed by 94.9% (n = 965), with median age 63 years (range 18-91 years) and median Karnofsky Performance Status (KPS) score of 70 (range 20-100). Predictive factors for noncompletion were higher age, lower KPS, and more pain (P ≤ 0.012). Time expenditure among completers increased with higher age, male gender, Norwegian nationality, number of comorbidities, and lower physical functioning (P ≤ 0.007) but was inversely related to pain levels and tiredness (P ≤ 0.03). Need for assistance was predicted by higher age, nationality other than Norwegian, lower KPS, and lower educational level (P < 0.001). More than 50% of patients preferred computerized assessment to a paper and pencil version. The high completion rate shows that symptom assessment by computers is feasible in patients with advanced cancer. However, reduced performance status reduces compliance and increases the need for assistance. Future work should aim at identifying the minimum set of valid screening questions and refine the software to optimize symptom assessment and reduce respondent burden in frail patients. Copyright © 2012 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Assessing the Amazon Cloud Suitability for CLARREO's Computational Needs
NASA Technical Reports Server (NTRS)
Goldin, Daniel; Vakhnin, Andrei A.; Currey, Jon C.
2015-01-01
In this document we compare the performance of Amazon Web Services (AWS), also known as the Amazon Cloud, with the CLARREO (Climate Absolute Radiance and Refractivity Observatory) cluster and assess its suitability for the computational needs of the CLARREO mission. A benchmark executable to process one month and one year of PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) data was used. With the optimal AWS configuration, adequate data-processing times, comparable to the CLARREO cluster, were found. The assessment of alternatives to the CLARREO cluster continues, and several options, such as a NASA-based cluster, are being considered.
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel
2016-03-01
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which the high contrast enhancement remained in the tumor regions after treatment. We performed studies to analyze the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83 ± 0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images without tumor segmentation can also generate a useful clinical marker in predicting the efficacy of chemotherapy.
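The prediction step (a small neural network on a handful of kinetic features, scored by ROC AUC) can be sketched as follows on synthetic data. This is not the study's images, features, or network configuration.

```python
# Sketch of the prediction step on synthetic data (not the study's data):
# a small neural network on four kinetic features, scored by
# cross-validated ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(151, 4))                    # 4 selected kinetic features
y = (rng.uniform(size=151) < 0.42).astype(int)   # 1 = complete response

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC: %.2f" % roc_auc_score(y, proba))
```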
Seismic performance assessment of base-isolated safety-related nuclear structures
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2010-01-01
Seismic or base isolation is a proven technology for reducing the effects of earthquake shaking on buildings, bridges and infrastructure. The benefit of base isolation has been presented in terms of reduced accelerations and drifts on superstructure components but never quantified in terms of either a percentage reduction in seismic loss (or percentage increase in safety) or the probability of an unacceptable performance. Herein, we quantify the benefits of base isolation in terms of increased safety (or smaller loss) by comparing the safety of a sample conventional and base-isolated nuclear power plant (NPP) located in the Eastern U.S. Scenario- and time-based assessments are performed using a new methodology. Three base isolation systems are considered, namely, (1) Friction Pendulum™ bearings, (2) lead-rubber bearings and (3) low-damping rubber bearings together with linear viscous dampers. Unacceptable performance is defined by the failure of key secondary systems because these systems represent much of the investment in a new build power plant and ensure the safe operation of the plant. For the scenario-based assessments, the probability of unacceptable performance is computed for an earthquake with a magnitude of 5.3 at a distance 7.5 km from the plant. For the time-based assessments, the annual frequency of unacceptable performance is computed considering all potential earthquakes that may occur. For both assessments, the implementation of base isolation reduces the probability of unacceptable performance by approximately four orders of magnitude for the same NPP superstructure and secondary systems. The increase in NPP construction cost associated with the installation of seismic isolators can be offset by substantially reducing the required seismic strength of secondary components and systems and potentially eliminating the need to seismically qualify many secondary components and systems. © 2010 John Wiley & Sons, Ltd.
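The time-based quantity described above, an annual frequency of unacceptable performance, is commonly obtained by integrating a fragility curve (probability of unacceptable performance given shaking intensity) against the slope of the seismic hazard curve. The sketch below illustrates that combination with made-up hazard and fragility curves; it is not the paper's methodology, data, or numbers.

```python
# Sketch of the time-based combination (made-up hazard and fragility curves):
# annual frequency of unacceptable performance = integral of the fragility
# over the absolute slope of the seismic hazard curve.
import numpy as np
from scipy import stats

im = np.linspace(0.01, 3.0, 300)                        # intensity measure, g
hazard = 1.0e-3 * im ** -2.5                            # mean annual rate of exceeding im
fragility = stats.lognorm(s=0.45, scale=1.2).cdf(im)    # P(unacceptable | im)

d_lambda = -np.gradient(hazard, im)                     # |d(lambda)/d(im)|
annual_freq = np.trapz(fragility * d_lambda, im)
print("annual frequency of unacceptable performance: %.2e" % annual_freq)
```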
Probabilistic Assessment of National Wind Tunnel
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M.; Chamis, C. C.
1996-01-01
A preliminary probabilistic structural assessment of the critical section of the National Wind Tunnel (NWT) is performed using the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) computer code. Thereby, the capabilities of the NESSUS code to address reliability issues of the NWT are demonstrated. Uncertainties in the geometry, material properties, loads and stiffener location of the NWT are considered in performing the reliability assessment. Probabilistic stress, frequency, buckling, fatigue and proof load analyses are performed. These analyses cover the major global and some local design requirements. Based on the assumed uncertainties, the results reveal a minimum reliability of 0.999 for the NWT. Preliminary life prediction analysis results show that the life of the NWT is governed by the fatigue of welds. Also, a reliability-based proof test assessment is performed.
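The essence of such a probabilistic assessment is propagating uncertain inputs (geometry, material properties, loads) through a structural model and estimating the probability that a limit state is exceeded. The sketch below is a crude Monte Carlo stand-in with made-up distributions, not NESSUS or the NWT model.

```python
# Simple Monte Carlo stand-in (not NESSUS or the NWT model): sample uncertain
# load, section area, and material strength, then estimate the probability
# that stress exceeds strength and the corresponding reliability.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
load_kN      = rng.normal(1800.0, 250.0, n)    # uncertain applied load
area_m2      = rng.normal(0.010, 0.0005, n)    # uncertain section area
strength_MPa = rng.normal(250.0, 20.0, n)      # uncertain material strength

stress_MPa = (load_kN * 1e3 / area_m2) / 1e6
p_fail = np.mean(stress_MPa > strength_MPa)
print("P(failure) = %.2e, reliability = %.6f" % (p_fail, 1.0 - p_fail))
```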
Jacova, Claudia; McGrenere, Joanna; Lee, Hyunsoo S; Wang, William W; Le Huray, Sarah; Corenblith, Emily F; Brehmer, Matthew; Tang, Charlotte; Hayden, Sherri; Beattie, B Lynn; Hsiung, Ging-Yuek R
2015-01-01
Cognitive Testing on Computer (C-TOC) is a novel computer-based test battery developed to improve both usability and validity in the computerized assessment of cognitive function in older adults. C-TOC's usability was evaluated concurrently with its iterative development to version 4 in subjects with and without cognitive impairment, and health professional advisors representing different ethnocultural groups. C-TOC version 4 was then validated against neuropsychological tests (NPTs), and by comparing performance scores of subjects with normal cognition, Cognitive Impairment Not Dementia (CIND) and Alzheimer disease. C-TOC's language tests were validated in subjects with aphasic disorders. The most important usability issue that emerged from consultations with 27 older adults and with 8 cultural advisors was the test-takers' understanding of the task, particularly executive function tasks. User interface features did not pose significant problems. C-TOC version 4 tests correlated with comparator NPT (r=0.4 to 0.7). C-TOC test scores were normal (n=16)>CIND (n=16)>Alzheimer disease (n=6). All normal/CIND NPT performance differences were detected on C-TOC. Low computer knowledge adversely affected test performance, particularly in CIND. C-TOC detected impairments in aphasic disorders (n=11). In general, C-TOC had good validity in detecting cognitive impairment. Ensuring test-takers' understanding of the tasks, and considering their computer knowledge appear important steps towards C-TOC's implementation.
The Relationship between Classworks and Georgia State Testing
ERIC Educational Resources Information Center
Baker, Amanda
2014-01-01
As teachers in Georgia utilize formative assessments to ensure successful performance on the state mandated Criterion Referenced Competency Test (CRCT), it is essential that the data they receive from these assessments are accurate. This study examined the correlation between the computer-assisted instructional program, Classworks, and student…
Real-time seismic monitoring and functionality assessment of a building
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
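The double-integration approach mentioned above recovers displacement from recorded acceleration, with detrending (or filtering) at each stage to control integration drift, and forms a drift ratio from relative displacement over height. The sketch below uses a synthetic record and placeholder dimensions; it is not the deployed monitoring system.

```python
# Sketch of the double-integration step (synthetic record, placeholder height):
# integrate floor acceleration twice to displacement, detrending at each
# stage, then form a drift ratio from relative displacement over height.
import numpy as np
from scipy.signal import detrend

dt = 0.01                                      # 100 samples/s
t = np.arange(0, 60, dt)
rng = np.random.default_rng(0)
acc_roof   = 0.8 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size)
acc_ground = 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size)

def displacement(acc):
    vel  = detrend(np.cumsum(acc) * dt)        # integrate and remove drift
    disp = detrend(np.cumsum(vel) * dt)
    return disp

rel_disp = displacement(acc_roof) - displacement(acc_ground)
height_m = 30.0                                # roof height above base (placeholder)
drift_ratio = np.max(np.abs(rel_disp)) / height_m
print("peak drift ratio: %.4f" % drift_ratio)
```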
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which reflects Mississippi's statutory requirement that instructional programs be based on core curricula and performance-based assessment, contains outlines of the instructional units required in local instructional management plans and daily lesson plans for secondary-level courses in business and computer technology I-II.…
ERIC Educational Resources Information Center
Quinn, Joseph G.; King, Karen; Roberts, David; Carey, Linda; Mousley, Angela
2009-01-01
It is compulsory for first year biological science students at Queen's University Belfast to complete a range of assessed, laboratory-based practicals in various scientific procedures including dissection. This study investigates student performance and attitudes when they have to complete a traditional dissection and a computer-based learning…
Computational Intelligence and Its Impact on Future High-Performance Engineering Systems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler)
1996-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Intelligence held at the Virginia Consortium of Engineering and Science Universities, Hampton, Virginia, June 27-28, 1995. The presentations addressed activities in the areas of fuzzy logic, neural networks, and evolutionary computations. Workshop attendees represented NASA, the National Science Foundation, the Department of Energy, the National Institute of Standards and Technology (NIST), the Jet Propulsion Laboratory, industry, and academia. The workshop objectives were to assess the state of technology in the computational intelligence area and to provide guidelines for future research.
Research on computer aided testing of pilot response to critical in-flight events
NASA Technical Reports Server (NTRS)
Giffin, W. C.; Rockwell, T. H.; Smith, P. J.
1984-01-01
Experiments on pilot decision making are described. The development of models of pilot decision making in critical in-flight events (CIFE) is emphasized. Work is reported on the development of: (1) a frame-system representation describing how pilots use their knowledge in a fault-diagnosis task; (2) an assessment of script norms, distance measures, and Markov models developed from computer-aided testing (CAT) data; and (3) performance ranking of subject data. It is demonstrated that interactive computer-aided testing, either by touch CRTs or personal computers, is a useful research and training device for measuring pilot information management in diagnosing system failures in simulated flight situations. Performance is dictated by knowledge of aircraft subsystems, initial pilot structuring of the failure symptoms, and efficient testing of plausible causal hypotheses.
The effect of magnification loupes on the performance of preclinical dental students.
Maggio, Margrit P; Villegas, Hilda; Blatz, Markus B
2011-01-01
Optical magnifying devices such as magnification loupes are increasingly used in clinical practice and educational settings. However, scientific evidence to validate their benefits is limited. This study assessed the effect of dental magnification loupes on psychomotor skill acquisition during a preclinical operative dentistry course. The performance of first-year dental students was assessed during an Advanced Simulation Course (AS) using virtual reality-based technology (VRBT) training. The test group consisted of 116 dental students using magnification loupes (+MAG), while students not using them (-MAG, n = 116) served as the control. The following parameters were evaluated: number of preparation procedures successfully passed per course rotation, amount of time per tooth preparation, number of times students needed computer assistance and evaluation, and amount of time spent in the computer assistance and evaluation mode per procedure. Data were collected on each student through VRBT during the preparation procedure and stored on a closed network server computer. Unpaired t-tests were used to analyze mean differences between the groups. In addition, student acceptance of magnification loupes was measured and evaluated through survey interpretation. +MAG students completed more preparations, worked faster per procedure, and used the computer-assisted evaluation less frequently and for shorter periods, therefore displaying greater overall performance. The survey revealed a high degree of student acceptance of using magnification. Dental magnification loupes significantly enhanced student performance during preclinical dental education and were considered an effective adjunct by the students who used them.
Scenarios for the Hanford immobilized Low-Activity waste (ILAW) performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANN, F.M.
The purpose of the next version of the Hanford Immobilized Low-Activity Tank Waste (ILAW) Performance Assessment (ILAW PA) is to provide an updated estimate of the long-term human health and environmental impact of the disposal of ILAW and to compare these estimates against performance objectives displayed in Tables 1, 2, and 3 (Mann 1999a). Such a radiological performance assessment is required by U.S. Department of Energy (DOE) Orders on radioactive waste management (DOE 1988a and DOE 1999a). This document defines the scenarios that will be used for the next update of the PA that is scheduled to be issued in 2001. Since the previous performance assessment (Mann 1998) was issued, considerable additional data on waste form behavior and site-specific soil geotechnical properties have been collected. In addition, the 2001 ILAW PA will benefit from improved computer models and the experience gained from the previous performance assessment. However, the scenarios (that is, the features, events, and processes analyzed in the performance assessment) for the next PA are very similar to the ones in the 1998 PA.
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing that community with key tools necessary to reduce design cycle times and increase fidelity in order to improve the safety, efficiency, and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community, to the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to perform full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, three-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
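The reliability measure described here is the probability that a specified demand can be carried over k disjoint minimal paths whose branch states are random. The paper derives this probability with an exact algorithm; the sketch below is only a hedged Monte Carlo illustration of the same notion, and the path structure, capacity distributions, and demand-splitting rule are invented. The packet-unreliability and time-threshold constraints are omitted for brevity.

```python
import random

# Hedged Monte Carlo illustration (not the paper's algorithm): estimate the
# probability that a demand of d data units can be split over k disjoint
# minimal paths with randomly distributed multi-state branch capacities.

# Each path is a list of branches; each branch has a discrete capacity
# distribution {capacity: probability}. Values below are invented.
PATHS = [
    [{0: 0.05, 1: 0.15, 2: 0.80}, {0: 0.10, 2: 0.90}],           # path 1
    [{0: 0.05, 1: 0.25, 3: 0.70}, {0: 0.05, 1: 0.15, 2: 0.80}],  # path 2
]

def sample_capacity(branch_dist):
    r, acc = random.random(), 0.0
    for cap, p in branch_dist.items():
        acc += p
        if r <= acc:
            return cap
    return max(branch_dist)

def path_capacity(path):
    # A path can carry no more than its weakest branch.
    return min(sample_capacity(b) for b in path)

def reliability(demand, trials=100_000):
    ok = sum(
        1 for _ in range(trials)
        if sum(path_capacity(p) for p in PATHS) >= demand
    )
    return ok / trials

if __name__ == "__main__":
    print(f"estimated reliability for demand 3: {reliability(3):.4f}")
```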
Role of cognitive assessment for high school graduates prior to choosing their college major.
AlAbdulwahab, Sami S; Kachanathu, Shaji John; AlSaeed, Abdullah Saad
2018-02-01
[Purpose] The academic performance of college students can be affected by students' abilities and by teaching methods. It is important to assess the progression of college students' cognitive abilities across different college majors and as students move from junior to senior levels. However, few studies have examined the role of cognitive ability tests as a tool to determine the aptitude of prospective students. Therefore, this study assessed the cognitive abilities of computer science and art students. [Subjects and Methods] Participants were 130 college students (70 computer science and 60 art students) in their first and final years of study at King Saud University. Cognitive ability was assessed using the Test of Nonverbal Intelligence, Third Edition. [Results] The cognitive ability of computer science students was statistically better than that of art students and improved from junior to senior levels, while that of art students did not. [Conclusion] The cognitive ability of computer science college students was superior compared to that of art students, indicating the importance of cognitive ability assessment for high school graduates prior to choosing a college major. Cognitive scales should be included as an aptitude assessment tool for decision-makers and prospective students to determine an appropriate career, which might reduce the rate of university drop-out.
Li, Hui; Jin, Dan; Qiao, Fang; Chen, Jianchang; Gong, Jianping
Computed tomography coronary angiography, a key method for obtaining coronary artery images, is widely used to screen for coronary artery diseases due to its noninvasive nature. In China, 64-slice computed tomography systems are now the most common models. As factors that directly affect computed tomography performance, heart rate and rhythm control are regulated by the autonomic nervous system and are highly related to the emotional state of the patient. The aim of this prospective study is to use a pre-computed tomography scan Self-Rating Anxiety Scale assessment to analyze the effects of tension and anxiety on computed tomography coronary angiography success. Subjects aged 18-85 years who were planned to undergo computed tomography coronary angiography were enrolled; 1 to 2 h before the computed tomography scan, basic patient data (gender, age, heart rate at rest, and family history) and Self-Rating Anxiety Scale score were obtained. The same group of imaging department doctors, technicians, and nurses performed computed tomography coronary angiography for all the enrolled subjects and observed whether those subjects could finish the computed tomography coronary angiography scan and provide clear, diagnostically valuable images. Participants were divided into successful (obtained diagnostically useful coronary images) and unsuccessful groups. Basic data and Self-Rating Anxiety Scale scores were compared between the groups. The Self-Rating Anxiety Scale standard score of the successful group was lower than that of the unsuccessful group (P = 0.001). As the Self-Rating Anxiety Scale standard score rose, the success rate of computed tomography coronary angiography decreased. The Self-Rating Anxiety Scale score has a negative relationship with computed tomography coronary angiography success. Anxiety can be a disadvantage in computed tomography coronary angiography examination. The pre-computed tomography coronary angiography scan Self-Rating Anxiety Scale score may be a useful tool for assessing whether a computed tomography coronary angiography scan will be successful or not. © The Author(s) 2015.
Assessment of a human computer interface prototyping environment
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1993-01-01
A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed which will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) a HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.
Effects of Inlet Distortion on Aeromechanical Stability of a Forward-Swept High-Speed Fan
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.
2011-01-01
Concerns regarding noise, propulsive efficiency, and fuel burn are inspiring aircraft designs wherein the propulsive turbomachines are partially (or fully) embedded within the airframe; such designs present serious concerns with regard to aerodynamic and aeromechanic performance of the compression system in response to inlet distortion. Separately, a forward-swept high-speed fan was developed to address noise concerns of modern podded turbofans; however this fan encounters aeroelastic instability (flutter) as it approaches stall. A three-dimensional, unsteady, Navier-Stokes computational fluid dynamics code is applied to analyze and corroborate fan performance with clean inlet flow. This code, already validated in its application to assess aerodynamic damping of vibrating blades at various flow conditions, is modified and then applied in a computational study to preliminarily assess the effects of inlet distortion on aeroelastic stability of the fan. Computational engineering application and implementation issues are discussed, followed by an investigation into the aeroelastic behavior of the fan with clean and distorted inlets.
Imai, Kazuhiro
2015-01-01
Finite element analysis (FEA) is an advanced computer technique of structural stress analysis developed in engineering mechanics. Because vertebral bone exhibits nonlinear compressive behavior, a nonlinear FEA should be used to analyze clinical vertebral fractures. In this article, a computed tomography-based nonlinear FEA (CT/FEA) to analyze vertebral bone strength, fracture pattern, and fracture location is introduced. The accuracy of the CT/FEA was validated by performing experimental mechanical testing with human cadaveric specimens. Vertebral bone strength and the minimum principal strain at the vertebral surface were accurately analyzed using the CT/FEA. The experimental fracture pattern and fracture location were also accurately simulated. Optimization of the element size was performed by assessing the accuracy of the CT/FEA, and the optimum element size was assumed to be 2 mm. It is expected that the CT/FEA will be valuable in analyzing vertebral fracture risk and assessing therapeutic effects on osteoporosis. PMID:26029476
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
This research presents InPRO, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs). InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected in digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
Analysis and methodology for aeronautical systems technology program planning
NASA Technical Reports Server (NTRS)
White, M. J.; Gershkoff, I.; Lamkin, S.
1983-01-01
A structured methodology was developed that allows the generation, analysis, and rank-ordering of system concepts by their benefits and costs, indicating the preferred order of implementation. The methodology is supported by a base of data on civil transport aircraft fleet growth projections and data on aircraft performance relating the contribution of each element of the aircraft to overall performance. The performance data are used to assess the benefits of proposed concepts. The methodology includes a computer program for performing the calculations needed to rank-order the concepts and compute their cumulative benefit-to-cost ratio. The use of the methodology and supporting data is illustrated through the analysis of actual system concepts from various sources.
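The rank-ordering step described above amounts to sorting concepts by their benefit-to-cost ratio and tracking the cumulative ratio as concepts are implemented in that order. A small hedged sketch of that calculation follows; the concept names, benefit values, and cost values are invented for illustration and are not from the report.

```python
# Hedged sketch of the rank-ordering calculation described in the abstract:
# sort candidate system concepts by benefit-to-cost ratio, then report the
# cumulative benefit-to-cost ratio in the preferred order of implementation.
# All numbers are invented.

concepts = [
    {"name": "Concept A", "benefit": 120.0, "cost": 40.0},
    {"name": "Concept B", "benefit": 60.0,  "cost": 10.0},
    {"name": "Concept C", "benefit": 45.0,  "cost": 30.0},
]

# Preferred order of implementation: highest benefit/cost first.
ranked = sorted(concepts, key=lambda c: c["benefit"] / c["cost"], reverse=True)

cum_benefit = cum_cost = 0.0
for c in ranked:
    cum_benefit += c["benefit"]
    cum_cost += c["cost"]
    print(f'{c["name"]}: ratio {c["benefit"] / c["cost"]:.2f}, '
          f"cumulative benefit/cost {cum_benefit / cum_cost:.2f}")
```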
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
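The influence-curve approach replaces the bootstrap with a closed-form variance estimate. Below is a hedged NumPy sketch for a single validation fold, using the standard first-order influence function for the empirical AUC; the paper's cross-validated estimator averages fold-specific AUCs and pools fold-specific influence curves, so this is an illustration under stated assumptions rather than the authors' exact implementation (tie handling and all names are assumptions).

```python
import numpy as np

# Hedged sketch: influence-curve-based variance estimate for the empirical AUC
# on one fold. For a positive case, the influence value is
# (fraction of negatives scored below it - AUC) / P(positive); for a negative
# case, (fraction of positives scored above it - AUC) / P(negative).
# Var(AUC_hat) is then approximated by mean(IC^2) / n.

def auc_and_ic_variance(y, scores):
    y = np.asarray(y, dtype=bool)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y], s[~y]
    n, n1, n0 = len(s), len(pos), len(neg)
    p1, p0 = n1 / n, n0 / n

    # Empirical AUC: P(score_pos > score_neg), ties counted as 1/2.
    diff = pos[:, None] - neg[None, :]
    auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    ic = np.empty(n)
    frac_neg_below = np.array([np.mean(neg < x) + 0.5 * np.mean(neg == x) for x in pos])
    ic[y] = (frac_neg_below - auc) / p1
    frac_pos_above = np.array([np.mean(pos > x) + 0.5 * np.mean(pos == x) for x in neg])
    ic[~y] = (frac_pos_above - auc) / p0

    variance = np.mean(ic ** 2) / n
    return auc, variance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 500)
    preds = labels + rng.normal(0, 1.2, 500)   # noisy scores
    auc, var = auc_and_ic_variance(labels, preds)
    print(f"AUC = {auc:.3f}, IC-based SE = {var ** 0.5:.4f}")
```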
A Computational Study of an Oscillating VR-12 Airfoil with a Gurney Flap
NASA Technical Reports Server (NTRS)
Rhee, Myung
2004-01-01
Computations of the flow over an oscillating airfoil with a Gurney flap are performed using a Reynolds-Averaged Navier-Stokes code and compared with recent experimental data. The experimental results were generated for different sizes of Gurney flaps; the computations focus mainly on one configuration. The baseline airfoil without a Gurney flap is computed and compared with the experiments in both steady and unsteady cases for the purpose of initial testing of the code performance. The computations are carried out with different turbulence models. Effects of grid refinement are also examined in steady and unsteady cases, in addition to the assessment of solver effects. The results of the comparisons of steady lift and drag computations indicate that the code is reasonably accurate in the attached flow of the steady condition but largely overpredicts the lift and underpredicts the drag in the higher-angle steady flow.
Computational Investigations in Rectangular Convergent and Divergent Ribbed Channels
NASA Astrophysics Data System (ADS)
Sivakumar, Karthikeyan; Kulasekharan, N.; Natarajan, E.
2018-05-01
Computational investigations were carried out on the rib-turbulated flow inside convergent and divergent rectangular channels with square ribs of different heights at different Reynolds numbers (Re = 20,000, 40,000, and 60,000). The ribs were arranged in a staggered fashion between the upper and lower surfaces of the test section. The investigations were performed using the computational fluid dynamics software ANSYS Fluent 14.0. Suitable solver settings, such as turbulence models and boundary conditions, were identified from the literature, and the simulations were carried out on a grid-independent solution. Computations were performed for both convergent and divergent channels with rib heights of 0 (smooth duct), 1.5, 3, 6, 9, and 12 mm to identify the ribbed channel with optimal performance, assessed using a thermo-hydraulic performance parameter. The convergent and divergent rectangular channels show higher Nu values than the standard correlation values.
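The thermo-hydraulic performance parameter is not defined in the abstract; a commonly used form, stated here as an assumption rather than the authors' exact definition, weighs the heat-transfer gain against the friction penalty at equal pumping power:

\[ \eta = \frac{Nu/Nu_{s}}{\left(f/f_{s}\right)^{1/3}} \]

where Nu and f are the Nusselt number and friction factor of the ribbed channel, and Nu_s and f_s are the corresponding smooth-channel values at the same Reynolds number.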
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter with a traffic flow simulation ranging from single-engine to commercial jet aircraft provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve the controller performance.
Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.
Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo
2016-05-04
Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.
Effects of Thai Dancing on Median Neurodynamic Response During 4-Hour Computer Use.
Mekhora, Keerin; Septham, Chatdao; Jalayondeja, Wattana
2015-06-01
To investigate the effects of Thai dancing on median neurodynamic response during 4-hour computer use. Twenty-four healthy participants aged 20-30 years performed 5 minutes of Thai dancing, including Prom See Na, Yoong Fon Hang, Sod Soy Mala, Lor Keaw and Cha Nee Rai Mai, during a 10-minute break of 4-hour computer use. All participants were assessed for nerve tension by elbow range of motion of upper limb neurodynamic test 1 (ULNT1) and components of the quick test. Discomfort was measured by a visual analogue discomfort scale (VADS). These measurements were assessed before and after computer work. The statistical analyses employed a paired t-test for continuous outcomes and Friedman's test. The median nerve tension (indicated by elbow range of motion) was significantly reduced from before to after work when 5 minutes of Thai dancing was introduced during the break, and components of the quick test showed that Thai dancing immediately helped reduce median nerve tension. The VADS in eight body areas increased over the period of 4 hours but decreased after performing Thai dancing (p<0.05). Thai dancing helped relieve median nerve tension and body discomfort. It may be recommended as an exercise during breaks for computer users who work continuously, to prevent WMSDs.
An assessment technique for computer-socket manufacturing
Sanders, Joan; Severance, Michael
2015-01-01
An assessment strategy is presented for testing the quality of carving and forming of individual computer aided manufacturing facilities. The strategy is potentially useful to facilities making sockets and companies marketing manufacturing equipment. To execute the strategy, an evaluator fabricates a collection of test models and sockets using the manufacturing suite under evaluation, and then measures their shapes using scanning equipment. Overall socket quality is assessed by comparing socket shapes with electronic file shapes. Then model shapes are compared with electronic file shapes to characterize carving performance. Socket shapes are compared with model shapes to characterize forming performance. The mean radial error (MRE), which is the average difference in radii between the two shapes being compared, provides insight into sizing quality. Inter-quartile range (IQR), the range of radial error for the best matched half of the points on the surfaces being compared, provides insight into shape quality. By determining MRE and IQR for carving and forming separately, the source(s) of socket shape error may be pinpointed. The developed strategy may provide a useful tool to the prosthetics community and industry to help identify problems and limitations in computer aided manufacturing and insight into appropriate modifications to overcome them. PMID:21938663
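The two metrics named in this abstract are simple to compute once the fabricated and reference surfaces are registered and radii are sampled at matched points. The sketch below illustrates them under that assumption; the array names and data are invented, and the reading of "range of radial error for the best matched half of the points" as the interquartile range of signed radial error is an interpretation of the abstract's wording.

```python
import numpy as np

# Hedged sketch of the two shape-quality metrics named in the abstract,
# applied to radii measured at matched surface points of two shapes
# (e.g., fabricated socket vs. electronic file). Registration of the two
# surfaces is assumed to have been done already; data are invented.

def mean_radial_error(r_test, r_ref):
    """MRE: average difference in radii between the two shapes (signed,
    so a positive value means the test shape is oversized on average)."""
    return float(np.mean(np.asarray(r_test) - np.asarray(r_ref)))

def radial_error_iqr(r_test, r_ref):
    """IQR: spread of radial error over the best-matched half of the points,
    taken here as the middle 50% of signed errors (75th - 25th percentile)."""
    err = np.asarray(r_test) - np.asarray(r_ref)
    q1, q3 = np.percentile(err, [25, 75])
    return float(q3 - q1)

if __name__ == "__main__":
    ref = np.full(360, 60.0)                               # reference radii (mm)
    test = ref + np.random.default_rng(1).normal(0.3, 0.2, 360)
    print(f"MRE = {mean_radial_error(test, ref):.3f} mm")
    print(f"IQR = {radial_error_iqr(test, ref):.3f} mm")
```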
Giacopuzzi, Edoardo; Laffranchi, Mattia; Berardelli, Romina; Ravasio, Viola; Ferrarotti, Ilaria; Gooptu, Bibek; Borsani, Giuseppe; Fra, Annamaria
2018-06-07
The growth of publicly available data informing upon genetic variations, mechanisms of disease and disease sub-phenotypes offers great potential for personalised medicine. Computational approaches are likely required to assess large numbers of novel genetic variants. However, the integration of genetic, structural and pathophysiological data still represents a challenge for computational predictions and their clinical use. We addressed these issues for alpha-1-antitrypsin deficiency, a disease mediated by mutations in the SERPINA1 gene encoding alpha-1-antitrypsin. We compiled a comprehensive database of SERPINA1 coding mutations and assigned them apparent pathological relevance based upon available data. 'Benign' and 'Pathogenic' mutations were used to assess performance of 31 pathogenicity predictors. Well-performing algorithms clustered the subset of variants known to be severely pathogenic with high scores. Eight new mutations identified in the ExAC database and achieving high scores were selected for characterisation in cell models and showed secretory deficiency and polymer formation, supporting the predictive power of our computational approach. The behaviour of the pathogenic new variants and consistent outliers were rationalised by considering the protein structural context and residue conservation. These findings highlight the potential of computational methods to provide meaningful predictions of the pathogenic significance of novel mutations and identify areas for further investigation. This article is protected by copyright. All rights reserved.
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.
Directly Comparing Computer and Human Performance in Language Understanding and Visual Reasoning.
ERIC Educational Resources Information Center
Baker, Eva L.; And Others
Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…
This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...
Saliency image of feature building for image quality assessment
NASA Astrophysics Data System (ADS)
Ju, Xinuo; Sun, Jiyin; Wang, Peng
2011-11-01
The purpose and methods of image quality assessment are quite different for automatic target recognition (ATR) and traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors, and region detectors, are widely applied for ATR. A feature saliency model was proposed in this paper to evaluate the feasibility of ATR. The first step consisted of computing the first-order derivatives in the horizontal and vertical orientations and computing DoG maps at different scales. Next, saliency images of the feature were built from the auto-correlation matrix at each scale. Then, the saliency images of the different scales were amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
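The pipeline outlined here (first-order derivatives, multi-scale DoG maps, auto-correlation-matrix saliency, and scale fusion) can be sketched with OpenCV and NumPy as below. The sigmas, window sizes, Harris constant, and the max-based fusion rule are assumptions for illustration, since the abstract does not specify them.

```python
import cv2
import numpy as np

# Hedged sketch of the feature-saliency pipeline described in the abstract:
# (1) DoG band-pass per scale, (2) first-order derivatives, (3) a saliency map
# from the local auto-correlation (structure-tensor) matrix, (4) fusion across
# scales. Parameter values and the fusion rule are illustrative assumptions.

def saliency(image_gray, sigmas=(1.0, 2.0, 4.0)):
    img = image_gray.astype(np.float32) / 255.0
    maps = []
    for sigma in sigmas:
        # Difference-of-Gaussians band-pass at this scale.
        dog = cv2.GaussianBlur(img, (0, 0), sigma) - cv2.GaussianBlur(img, (0, 0), 1.6 * sigma)
        # First-order derivatives (horizontal / vertical).
        ix = cv2.Sobel(dog, cv2.CV_32F, 1, 0, ksize=3)
        iy = cv2.Sobel(dog, cv2.CV_32F, 0, 1, ksize=3)
        # Auto-correlation (structure-tensor) entries, locally averaged.
        ixx = cv2.GaussianBlur(ix * ix, (0, 0), sigma)
        iyy = cv2.GaussianBlur(iy * iy, (0, 0), sigma)
        ixy = cv2.GaussianBlur(ix * iy, (0, 0), sigma)
        # Harris-style saliency from the matrix determinant and trace.
        det = ixx * iyy - ixy * ixy
        trace = ixx + iyy
        maps.append(det - 0.04 * trace * trace)
    fused = np.maximum.reduce(maps)              # amalgamate scales
    return cv2.normalize(fused, None, 0.0, 1.0, cv2.NORM_MINMAX)

if __name__ == "__main__":
    gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    if gray is not None:
        cv2.imwrite("saliency.png", (saliency(gray) * 255).astype(np.uint8))
```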
The influence of test mode and visuospatial ability on mathematics assessment performance
NASA Astrophysics Data System (ADS)
Logan, Tracy
2015-12-01
Mathematics assessment and testing are increasingly situated within digital environments, with international tests moving to computer-based testing in the near future. This paper reports on a secondary data analysis which explored the influence that the mode of assessment (computer-based, CBT, versus pencil-and-paper, PPT) and visuospatial ability had on students' mathematics test performance. Data from 804 grade 6 Singaporean students were analysed using the knowledge discovery in data design. The results revealed statistically significant differences between performance on the CBT and PPT test modes in the content areas of whole number, algebraic patterns, and data and chance. However, there were no performance differences for the content areas of spatial arrangements, geometric measurement, or other number. There were also statistically significant differences in performance between students with higher levels of visuospatial ability and those with lower levels across all six content areas. Implications include careful consideration of the comparability of CBT and PPT testing and the need for increased attention to the role of visuospatial reasoning in students' mathematics reasoning.
ERIC Educational Resources Information Center
Newhouse, C. Paul
2015-01-01
This paper reports on the outcomes of a three-year study investigating the use of digital technologies to increase the authenticity of high-stakes summative assessment in four Western Australian senior secondary courses. The study involved 82 teachers and 1015 students and a range of digital forms of assessment using computer-based exams, digital…
Costa, Ana Filipa; Puga-Leal, Rogério; Nunes, Isabel L
2011-01-01
The objective of this paper is to present a study on the assessment of the work ability of a group of aged computer workers. The study was developed with the goal of creating a decision-making framework oriented towards the maintenance of the health and working ability of aged workers. Fifty computer workers participated in this study. They were administrative secretaries and computer technicians working mainly with office computers. The method used to assess work ability was the Work Ability Index (WAI). 78% of the participants had good or excellent work ability and only 2% a poor one. The average WAI score was 40.5 (SD=5.761; min=27; max=49). This study confirms the decrease in the work ability of workers as they age. The group's overall work ability was slightly higher than the reference values developed by the Finnish Institute of Occupational Health. The assessment of work ability is fundamental to making workplaces age-friendly, and the WAI is one tool designed to perform such an assessment. The results obtained could assist the early identification of situations where employees are struggling with their work ability, thus helping to prioritize ergonomic interventions devoted to improving working conditions and allowing the continued employment of aging workers in their current jobs.
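The WAI total score ranges from 7 to 49 and is conventionally banded into poor, moderate, good, and excellent categories. The sketch below applies the widely cited Finnish Institute of Occupational Health cut-offs; those cut-offs are stated here as an assumption and are not reproduced from the paper.

```python
# Hedged sketch: banding Work Ability Index (WAI) totals into the commonly
# used categories. Cut-offs follow the widely cited FIOH convention
# (poor 7-27, moderate 28-36, good 37-43, excellent 44-49) and are an
# assumption here, not taken from the paper.

def wai_category(score: int) -> str:
    if not 7 <= score <= 49:
        raise ValueError("WAI total score must be between 7 and 49")
    if score <= 27:
        return "poor"
    if score <= 36:
        return "moderate"
    if score <= 43:
        return "good"
    return "excellent"

if __name__ == "__main__":
    for s in (27, 40, 49):
        print(s, wai_category(s))
```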
Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection
NASA Technical Reports Server (NTRS)
Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.
2009-01-01
The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
Total quality through computer integrated manufacturing in the pharmaceutical industry.
Ufret, C M
1995-01-01
The role of Computer Integrated Manufacturing (CIM) in the pursuit of total quality in pharmaceutical manufacturing is assessed. CIM's key objectives, design criteria, and performance measurements, in addition to its scope and implementation in a hierarchical structure, are explored in detail. Key elements for the success of each phase in a CIM project and a brief status of current CIM implementations in the pharmaceutical industry are presented. The role of World Class Manufacturing performance standards and other key issues in achieving full CIM benefits are also addressed.
What a Decade of Experiments Reveals about Factors that Influence the Sense of Presence
2006-03-01
Measures included person-related measures (social anxiety, age, gender, computer use), task-related measures (social anxiety assessment of partner), and performance and co-presence measures. Among the reported findings: computer use had a significant positive correlation with co-presence, and participants' social anxiety had a significant…
Olson, Scott A.
1997-01-01
Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Hammond, Robert E.
1997-01-01
Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
ERIC Educational Resources Information Center
Mandinach, Ellen B.
This study investigated the degree to which 48 seventh and eighth grade students of different abilities acquired strategic planning knowledge from an intellectual computer game ("Wumpus"). Relationships between ability and student performance with two versions of the game were also investigated. The two versions differed in the structure…
Voruganti, Teja R; O'Brien, Mary Ann; Straus, Sharon E; McLaughlin, John R; Grunfeld, Eva
2015-09-24
Health risk assessment tools compute an individual's risk of developing a disease. Routine use of such tools by primary care physicians (PCPs) is potentially useful in chronic disease prevention. We sought physicians' awareness and perceptions of the usefulness, usability and feasibility of performing assessments with computer-based risk assessment tools in primary care settings. Focus groups and usability testing with a computer-based risk assessment tool were conducted with PCPs from both university-affiliated and community-based practices. Analysis was derived from grounded theory methodology. PCPs (n = 30) were aware of several risk assessment tools although only select tools were used routinely. The decision to use a tool depended on how use impacted practice workflow and whether the tool had credibility. Participants felt that embedding tools in the electronic medical records (EMRs) system might allow for health information from the medical record to auto-populate into the tool. User comprehension of risk could also be improved with computer-based interfaces that present risk in different formats. In this study, PCPs chose to use certain tools more regularly because of usability and credibility. Despite there being differences in the particular tools a clinical practice used, there was general appreciation for the usefulness of tools for different clinical situations. Participants characterised particular features of an ideal tool, feeling strongly that embedding risk assessment tools in the EMR would maximise accessibility and use of the tool for chronic disease management. However, appropriate practice workflow integration and features that facilitate patient understanding at point-of-care are also essential.
NASA Astrophysics Data System (ADS)
Coffman, Mitchell Ward
The purpose of this dissertation was to examine the relationship between student access to a computer at home and academic achievement. The 2009 National Assessment of Educational Progress (NAEP) dataset was probed using the National Data Explorer (NDE) to investigate correlations in the subsets of SES, parental education, race, and gender as they relate to access to a home computer and improved performance scores for U.S. public school grade 12 science students. A causal-comparative approach was employed, seeking clarity on the relationship between home access and performance scores. The influence of home access cannot overcome the challenges students of lower SES face. The achievement gap, or a second digital divide, for underprivileged classes of students, including minorities, does not appear to contract via student access to a home computer. Nonetheless, in tests for significance, statistically significant improvement in science performance scores was reported for those having access to a computer at home compared to those not having access. Additionally, regression models reported evidence of correlations between and among subsets of controls for the demographic factors gender, race, and socioeconomic status. Variability in these correlations was high, suggesting that unobserved factors may have more impact upon the dependent variable. Having access to a computer at home increases performance scores for grade 12 general science students of all races, genders, and socioeconomic levels. However, the performance gap is roughly equivalent to the existing performance gap of the national average for science scores, suggesting little influence from access to a computer on academic achievement. The variability of scores reported in the regression analysis models reflects a moderate to low effect, suggesting an absence of causation. These statistical results confirm the literature review, whereby access to a computer at home and the predictor variables were found to have a significant impact on performance scores, although the data presented suggest that computer access at home is less influential upon performance scores than poverty and its correlates.
Assessing stapes piston position using computed tomography: a cadaveric study.
Hahn, Yoav; Diaz, Rodney; Hartman, Jonathan; Bobinski, Matthew; Brodie, Hilary
2009-02-01
Temporal bone computed tomographic (CT) scanning in the postoperative stapedotomy patient is inaccurate in assessing stapes piston position within the vestibule. Poststapedotomy patients who have persistent vertigo may undergo CT scanning to assess the position of the stapes piston within the vestibule to rule out overly deep insertion. Vertigo is a recognized complication of the deep piston, and CT evaluation is often recommended. The accuracy of CT scanning in this setting is unestablished. Stapedotomy was performed on 12 cadaver ears, and stainless steel McGee pistons were placed. The cadaver heads were then scanned using a fine-cut temporal bone protocol. Temporal bone dissection was performed with microscopic measurement of the piston depth in the vestibule. These values were compared with the depth of intravestibular penetration measured on CT scan by 4 independent measurements. The intravestibular penetration as assessed by computed tomography was consistently greater than the value found on cadaveric anatomic dissection. The radiographic bias was greater when piston location within the vestibule was shallower. The axial CT scan measurement was 0.53 mm greater, on average, than the anatomic measurement. On average, the coronal CT measurement was 0.68 mm greater than the anatomic measurement. The degree of overestimation of penetration, however, was highly inconsistent. A standard temporal bone CT scan is neither an accurate nor a precise examination of stapes piston depth within the vestibule. We found that CT measurement consistently overstated intravestibular piston depth. Computed tomography is not a useful study in the evaluation of piston depth for poststapedectomy vertigo and is of limited value in this setting.
The multi-disciplinary design study: A life cycle cost algorithm
NASA Technical Reports Server (NTRS)
Harding, R. R.; Pichi, F. J.
1988-01-01
The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrissey, Elmer; O'Donnell, James; Keane, Marcus
2004-03-29
Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.
Probability calculations for three-part mineral resource assessments
Ellefsen, Karl J.
2017-06-27
Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
Applicability Analysis of Validation Evidence for Biomedical Computational Models
Pathmanathan, Pras; Gray, Richard A.; Romero, Vicente J.; ...
2017-09-07
Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has only been limited progress to successfully translate modeling research to patient care. One major difficulty which often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model, and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. In this paper, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. Finally, the proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task
NASA Astrophysics Data System (ADS)
Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.
2014-12-01
Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.
Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A
2014-12-01
To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Engineering studies related to Skylab program. [assessment of automatic gain control data]
NASA Technical Reports Server (NTRS)
Hayne, G. S.
1973-01-01
The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor was used for the assessment; it is defined to be less than unity and is a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effect of additive receiver, or white, noise. The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. Specific discussion of the digital computer programs used is also provided.
Ga-67 uptake in the lung in sarcoidosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, D.G.; Johnson, S.M.; Harris, C.C.
1984-02-01
Ga-67 imaging and bronchopulmonary lavage were performed in 21 patients with sarcoidosis (31 studies). The Ga-67 index, a semiquantitative criterion, was compared to a quantitative computer index based on lung:liver activity ratios; accuracy in predicting active alveolitis (defined by lavage lymphocyte counts) was assessed and differences between 24- and 48-hour studies examined. Computer activity ratios correlated well with the Ga-67 index, which had a sensitivity of 64% and a specificity of 71%, compared with 82% and 77%, respectively, for the computer scores. Scores at 24 and 48 hours were similar. These results suggest that (a) Ga-67 scanning is useful in staging activity in pulmonary sarcoidosis, (b) quantitative computer scores are accurate in predicting disease activity, and (c) scanning can be performed 24 or 48 hours after injection.
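As a hedged illustration of the accuracy comparison described above, the short Python sketch below (hypothetical variable names; it is not the study's code) computes the sensitivity and specificity of an uptake index against a lavage-defined reference standard at a chosen cutoff.

```python
# Illustrative sketch only: sensitivity and specificity of an uptake score
# against a reference standard (e.g. active alveolitis defined by lavage
# lymphocyte counts). The threshold and inputs are assumptions.
def sensitivity_specificity(scores, truth, threshold):
    """scores: numeric uptake indices; truth: True if disease is active;
    threshold: cutoff at or above which a score is called positive."""
    tp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t)
    tn = sum(1 for s, t in zip(scores, truth) if s < threshold and not t)
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and not t)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

Sweeping the threshold would trace out the trade-off between the two quantities reported above for the semiquantitative index and the computer scores.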
Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina
2013-02-25
Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and for the Physiology questions. Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective in improving students' short- and long-term knowledge retention.
2013-01-01
Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and for the Physiology questions. Results Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective in improving students' short- and long-term knowledge retention. PMID:23442203
Assessing the limitations of the Banister model in monitoring training
Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.
2006-01-01
The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, the ill-conditioning and the stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence interval (95% CI) of parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R2 = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CIs of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765
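For context, the Banister impulse-response model relates a daily training-load series w(s) to modelled performance through two first-order filters: a fitness term with time constant τa and a fatigue term with time constant τf. The Python sketch below is a minimal, hedged illustration of fitting such a model by least squares; the function names, starting values and bounds are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed parameterization, not the study's code) of the
# two-component Banister model:
#   p_hat(t) = p0 + k_a * sum_{s<t} w(s) exp(-(t-s)/tau_a)
#                 - k_f * sum_{s<t} w(s) exp(-(t-s)/tau_f)
import numpy as np
from scipy.optimize import least_squares

def banister_predict(params, loads):
    """Modelled performance on each day given daily training loads."""
    p0, k_a, k_f, tau_a, tau_f = params
    n = len(loads)
    pred = np.empty(n)
    for i in range(1, n + 1):
        lags = i - np.arange(i)                     # 1 .. i days before day i
        fitness = np.sum(loads[:i] * np.exp(-lags / tau_a))
        fatigue = np.sum(loads[:i] * np.exp(-lags / tau_f))
        pred[i - 1] = p0 + k_a * fitness - k_f * fatigue
    return pred

def fit_banister(loads, performances, perf_days):
    """loads: daily training loads; performances: measured values on the
    (0-based) day indices listed in perf_days."""
    loads = np.asarray(loads, dtype=float)
    performances = np.asarray(performances, dtype=float)

    def residuals(params):
        return banister_predict(params, loads)[perf_days] - performances

    x0 = np.array([performances.mean(), 0.05, 0.05, 42.0, 7.0])  # assumed starts
    fit = least_squares(residuals, x0, bounds=([-np.inf, 0, 0, 1, 1], np.inf))
    return dict(zip(["p0", "k_a", "k_f", "tau_a", "tau_f"], fit.x))
```

Bootstrap confidence intervals of the kind reported above could then be obtained by refitting the model on resampled residuals.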
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...
2015-02-19
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.
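As a small, hedged illustration of the cost-performance framing used above, the snippet below combines a raw benchmark figure with an hourly price into a performance-per-dollar summary; the instance names and numbers are placeholders, not measurements from the paper.

```python
# Illustrative only: summarize each configuration by raw throughput and by
# throughput per dollar-hour. All values below are placeholder examples.
def cost_performance(name, gflops, hourly_price_usd):
    return {"instance": name,
            "gflops": gflops,
            "gflops_per_dollar_hour": gflops / hourly_price_usd}

for row in (cost_performance("example-instance-a", 600.0, 1.70),
            cost_performance("example-instance-b", 160.0, 0.55)):
    print(row)
```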
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.
Bret-Zurita, Montserrat; Cuesta, Emilio; Cartón, Antonio; Díez, Jesús; Aroca, Ángel; Oliver, José M; Gutiérrez-Larraya, Federico
2014-11-01
Although congenital heart defects are the most common major congenital abnormalities, the associated mortality has been decreasing due to improvements in their diagnosis and treatment. We assessed the usefulness of 64-multidetector computed tomography in the diagnosis and management of these patients. This 5-year observational, analytical, retrospective, cohort study included a total of 222 tomographic studies of patients with congenital heart disease. Computed tomography scans were read twice and medical records were reviewed. We assessed the complexity of the disease, patient, and radiological technique, and evaluated the contribution of new data in relation to clinical suspicion and diagnostic change. A confidence interval was set at 95% and a P value of <.05 was used as the cutoff for statistical significance. In 35.1% of patients, the treatment procedure was performed after computed tomography without other tests. Additional diagnostic catheterization was performed in 12.5% of patients. There were new findings in 77% of patients (82.9% with complex disease), which prompted a change in patient management in 35.6%. All unexpected reports described new findings. No significant differences were found by age, sex, study period, urgency of the test order, patient complexity, or difficulty of the technique. Use of 64-detector computed tomography yields good diagnostic performance in congenital heart disease, prompts changes in management in more than one-third of patients, and reveals new findings in relation to the presumed diagnosis in 77% of patients. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
Computer Security Incident Response Team Effectiveness: A Needs Assessment
Van der Kleij, Rick; Kleinhuis, Geert; Young, Heather
2017-01-01
Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to what extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured, in-depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response. PMID:29312051
Computer Security Incident Response Team Effectiveness: A Needs Assessment.
Van der Kleij, Rick; Kleinhuis, Geert; Young, Heather
2017-01-01
Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to what extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured, in-depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.
ERIC Educational Resources Information Center
Meritt, Julia; Gibson, David; Christensen, Rhonda; Knezek, Gerald
2013-01-01
Two alternative technologies forming the basis of computer-mediated teacher preparation systems are compared and contrasted regarding implementation, operation, and assessment considerations. The role-playing system in Second Life is shown to have the unique characteristic of developing a co-constructed pedagogical identity, while the flight…
ERIC Educational Resources Information Center
Bulut, Okan; Lei, Ming; Guo, Qi
2018-01-01
Item positions in educational assessments are often randomized across students to prevent cheating. However, if altering item positions results in any significant impact on students' performance, it may threaten the validity of test scores. Two widely used approaches for detecting position effects -- logistic regression and hierarchical…
Diagnostic cardiology: Noninvasive imaging techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Come, P.C.
1985-01-01
This book contains 23 chapters. Some of the chapter titles are: The chest x-ray and cardiac series; Computed tomographic scanning of the heart, coronary arteries, and great vessels; Digital subtraction angiography in the assessment of cardiovascular disease; Magnetic resonance: technique and cardiac applications; Basics of radiation physics and instrumentation; and Nuclear imaging: the assessment of cardiac performance.
Guide to Mathematics Released Items: Understanding Scoring. 2015
ERIC Educational Resources Information Center
Partnership for Assessment of Readiness for College and Careers, 2015
2015-01-01
The 2014-2015 administrations of the PARCC assessment included two separate test administration windows: the Performance-Based Assessment (PBA) and the End-of-Year (EOY), both of which were administered in paper-based and computer-based formats. The first window was for administration of the PBA, and the second window was for the administration of…
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.
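The partitioning described above can be read as a law-of-total-variance decomposition: the variance of the performance measure splits into a part explainable by the candidate runs' responses and a residual part. The sketch below is a rough Monte Carlo illustration of estimating the explainable fraction for one candidate set under assumed, user-supplied functions; it is not the report's algorithm.

```python
# Rough sketch (assumptions, not the report's code): fraction of a
# performance measure's variance that one candidate set of runs could
# explain, via Var(P) = E[Var(P|R)] + Var(E[P|R]), estimated by Monte
# Carlo sampling from current knowledge of the uncertain inputs.
import numpy as np

def explainable_fraction(sample_params, simulate_candidate, performance,
                         n_mc=2000, n_bins=10, seed=0):
    """sample_params(rng) draws a parameter vector; simulate_candidate(theta)
    returns a scalar summary of the candidate runs' responses; performance(theta)
    returns the performance measure. All three are hypothetical user functions."""
    rng = np.random.default_rng(seed)
    theta = [sample_params(rng) for _ in range(n_mc)]
    r = np.array([simulate_candidate(t) for t in theta])   # candidate responses
    p = np.array([performance(t) for t in theta])          # performance measure
    # Approximate E[P | R] by averaging within quantile bins of R.
    edges = np.quantile(r, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([p[idx == b].mean() if np.any(idx == b) else p.mean()
                           for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    explained = np.sum(weights * (cond_means - p.mean()) ** 2)
    return explained / p.var() if p.var() > 0 else 0.0
```

Candidate run sets could then be ranked by this fraction, with the largest expected reduction in uncertainty selected for actual simulation.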
NASA Technical Reports Server (NTRS)
Booth, David; Flegel, Ashlie
2015-01-01
A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D Midspan section VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.
NASA Technical Reports Server (NTRS)
Booth, David T.; Flegel, Ashlie B.
2015-01-01
A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D Midspan section VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive incidence cruise condition results in a highly loaded case and transitional flow on the blade is observed. The negative incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.
Medical education as a science: the quality of evidence for computer-assisted instruction.
Letterie, Gerard S
2003-03-01
A marked increase in the number of computer programs for computer-assisted instruction in the medical sciences has occurred over the past 10 years. The quality of both the programs and the literature that describe these programs has varied considerably. The purposes of this study were to evaluate the published literature that described computer-assisted instruction in medical education and to assess the quality of evidence for its implementation, with particular emphasis on obstetrics and gynecology. Reports published between 1988 and 2000 on computer-assisted instruction in medical education were identified through a search of MEDLINE and the Educational Resources Information Center (ERIC) and a review of the bibliographies of the articles that were identified. Studies were selected if they included a description of computer-assisted instruction in medical education, regardless of the type of computer program. Data were extracted with a content analysis of 210 reports. The reports were categorized according to study design (comparative, prospective, descriptive, review, or editorial), type of computer-assisted instruction, medical specialty, and measures of effectiveness. Computer-assisted instruction programs included online technologies, CD-ROMs, video laser disks, multimedia work stations, virtual reality, and simulation testing. Studies were identified in all medical specialties, with a preponderance in internal medicine, general surgery, radiology, obstetrics and gynecology, pediatrics, and pathology. Ninety-six percent of the articles described a favorable impact of computer-assisted instruction in medical education, regardless of the quality of the evidence. Of the 210 reports that were identified, 60% were noncomparative, descriptive reports of new techniques in computer-assisted instruction, and 15% and 14% were reviews and editorials, respectively, of existing technology. Eleven percent of studies were comparative and included some form of assessment of the effectiveness of the computer program. These assessments included pre- and posttesting and questionnaires to score program quality, perceptions of the medical students and/or residents regarding the program, and impact on learning. In one half of these comparative studies, computer-assisted instruction was compared with traditional modes of teaching, such as text and lectures. Six studies compared performance before and after the computer-assisted instruction. Improvements were shown in 5 of the studies. In the remainder of the studies, computer-assisted instruction appeared to result in similar test performance. Despite study design or outcome, most articles described enthusiastic endorsement of the programs by the participants, including medical students, residents, and practicing physicians. Only 1 study included cost analysis. Thirteen of the articles were in obstetrics and gynecology. Computer-assisted instruction has assumed an increasing role in medical education. In spite of enthusiastic endorsement and continued improvements in software, few studies of good design clearly demonstrate improvement in medical education over traditional modalities. There are no comparative studies in obstetrics and gynecology that demonstrate a clear-cut advantage. Future studies of computer-assisted instruction that include comparisons and cost assessments to gauge their effectiveness over traditional methods may better define their precise role.
NASA Astrophysics Data System (ADS)
Simeral, J. D.; Kim, S.-P.; Black, M. J.; Donoghue, J. P.; Hochberg, L. R.
2011-04-01
The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor.
Simeral, J D; Kim, S-P; Black, M J; Donoghue, J P; Hochberg, L R
2013-01-01
The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor. PMID:21436513
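To make the decoding pipeline concrete, the sketch below shows the general shape of a Kalman-filter velocity decoder of the kind described above: binned spike counts are the observations and 2-D cursor velocity is the hidden state. This is a minimal illustration under assumed, pre-fit model matrices, not the BrainGate system's implementation; the click classifier (a separate linear discriminant on the same spike counts) is omitted.

```python
# Hedged sketch of a Kalman velocity decoder for closed-loop cursor control.
# A, W, H, Q are assumed to have been fit from calibration data beforehand.
import numpy as np

class KalmanVelocityDecoder:
    """A, W: state-transition and process-noise matrices for the velocity model.
    H, Q: observation (tuning) and observation-noise matrices relating binned
    spike counts to velocity."""
    def __init__(self, A, W, H, Q):
        self.A, self.W, self.H, self.Q = A, W, H, Q
        n = A.shape[0]
        self.x = np.zeros(n)      # state estimate, e.g. [vx, vy]
        self.P = np.eye(n)        # state covariance

    def step(self, spike_counts):
        # Predict from the previous bin
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Update with this bin's observed spike counts
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (np.asarray(spike_counts) - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x             # decoded velocity for this bin
```

Each decoded velocity would then be integrated into a cursor position update for the next display frame.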
[Clinical decision to perform cranial computed tomography in children with non-severe head injury].
Franco-Koehrlen, Celine Alicia; Iglesias-Leboreiro, José; Bernárdez-Zapata, Isabel; Rendón-Macías, Mario Enrique
The main goal of this article was to evaluate whether the decision to perform cranial computed tomography (CT) in children with minor head injury is determined by the presence or absence of the physician during assessment in the emergency room. Clinical files of 92 patients from 8 months to 4 years of age were selected. Those children were evaluated at the emergency department of the Spanish Hospital of Mexico due to non-severe traumatic brain injury. The Glasgow Coma Scale (GCS) score was determined in all patients. Groups of patients were compared: 1) patients having CT, 2) patients with a physician who attended the initial assessment, 3) patients whose attending physician did not arrive to assess the patient, and 4) patients assessed by the emergency room staff. Of the patients with non-severe brain injury, 38% underwent CT and 8.6% had a brain injury visible on the CT. Moderate-intensity impacts were more frequent in patients with CT. Regarding the GCS, it was found that most children scored 15 points (p=0.03). In patients without a physician, a greater tendency to perform CT was observed. Patients with minor head injury but without neurological signs should undergo a detailed clinical evaluation in order to avoid unwarranted CT. Copyright © 2015. Published by Masson Doyma México S.A.
Application of CFD to a generic hypersonic flight research study
NASA Technical Reports Server (NTRS)
Green, Michael J.; Lawrence, Scott L.; Dilley, Arthur D.; Hawkins, Richard W.; Walker, Mary M.; Oberkampf, William L.
1993-01-01
Computational analyses have been performed for the initial assessment of flight research vehicle concepts that satisfy requirements for potential hypersonic experiments. Results were obtained from independent analyses at NASA Ames, NASA Langley, and Sandia National Labs, using sophisticated time-dependent Navier-Stokes and parabolized Navier-Stokes methods. Careful study of a common problem consisting of hypersonic flow past a slightly blunted conical forebody was undertaken to estimate the level of uncertainty in the computed results, and to assess the capabilities of current computational methods for predicting boundary-layer transition onset. Results of this study in terms of surface pressure and heat transfer comparisons, as well as comparisons of boundary-layer edge quantities and flow-field profiles are presented here. Sensitivities to grid and gas model are discussed. Finally, representative results are presented relating to the use of Computational Fluid Dynamics in the vehicle design and the integration/support of potential experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The algorithm develops a single health score for office computers, today just Windows, but we plan to extend this to Apple computers. The score is derived from various parameters, including: CPU Utilization; Memory Utilization; Various Error logs; Disk Problems; and Disk write queue length. It then uses a weighting scheme to balance these parameters and provide an overall health score. By using these parameters, we are not just assessing the theoretical performance of the components of the computer; rather, we are using actual performance metrics that are selected to be a more realistic representation of the experience of the person using the computer. This includes compensating for the nature of their use. If there are two identical computers and the user of one places heavy demands on their computer compared with the user of the second computer, the former will have a lower health score. This allows us to provide a 'fit for purpose' score tailored to the assigned user. This is very helpful data to inform managers when individual computers need to be replaced. Additionally it provides specific information that can facilitate the fixing of the computer, to extend its useful lifetime. This presents direct financial savings, time savings for users transferring from one computer to the next, and better environmental stewardship.
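As a hedged illustration of the weighting scheme described above, the sketch below combines normalized usage and error metrics into a single 0-100 score; the specific weights, metric names, and normalization are assumptions for illustration, not the laboratory's actual algorithm.

```python
# Illustrative only: weighted combination of measured metrics into one score.
# Weights and metric names are assumptions, not the deployed algorithm.
HEALTH_WEIGHTS = {
    "cpu_utilization": 0.25,      # fraction of time the CPU is busy (0-1)
    "memory_utilization": 0.25,   # fraction of RAM in use (0-1)
    "error_log_rate": 0.20,       # normalized rate of error-log events (0-1)
    "disk_problems": 0.15,        # normalized count of disk faults (0-1)
    "disk_write_queue": 0.15,     # normalized write-queue length (0-1)
}

def health_score(metrics: dict) -> float:
    """Each metric is pre-normalized to 0 (good) .. 1 (bad); the result is
    100 for a perfectly healthy machine and 0 for the worst case."""
    penalty = sum(HEALTH_WEIGHTS[k] * min(max(metrics.get(k, 0.0), 0.0), 1.0)
                  for k in HEALTH_WEIGHTS)
    return 100.0 * (1.0 - penalty)

# Example: a lightly loaded machine with no errors scores near 100.
print(health_score({"cpu_utilization": 0.2, "memory_utilization": 0.3}))
```

Because the inputs are usage metrics rather than hardware specifications, two identical machines with different users would, as described above, receive different scores.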
Carlsson, Emilia; Miniscalco, Carmela; Gillberg, Christopher; Åsberg Johnels, Jakob
2018-03-26
We have developed a False-Belief (FB) understanding task for use on a computer tablet, trying to assess FB understanding in a less social way. It is based on classical FB protocols, and additionally includes a manipulation of language in an attempt to explore the facilitating effect of linguistic support during FB processing. Specifically, the FB task was presented in three auditory conditions: narrative, silent, and interference. The task was assumed to shed new light on the FB difficulties often observed in Autism Spectrum Disorder (ASD). Sixty-eight children with ASD (M = 7.5 years) and an age-matched comparison group of 98 typically developing (TD) children were assessed with the FB task. The children with ASD did not perform above chance level in any condition, and significant differences in success rates were found between the groups in two conditions (silent and narrative), with TD children performing better. We discuss implications, limitations, and further developments.
Prediction of Transonic Vortex Flows Using Linear and Nonlinear Turbulent Eddy Viscosity Models
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Gatski, Thomas B.
2000-01-01
Three-dimensional transonic flow over a delta wing is investigated with a focus on the effect of transition and influence of turbulence stress anisotropies. The performance of linear eddy viscosity models and an explicit algebraic stress model is assessed at the start of vortex flow, and the results compared with experimental data. To assess the effect of transition location, computations that either fix transition or are fully turbulent are performed. To assess the effect of the turbulent stress anisotropy, comparisons are made between predictions from the algebraic stress model and the linear eddy viscosity models. Both transition location and turbulent stress anisotropy significantly affect the 3D flow field. The most significant effect is found to be the modeling of transition location. At a Mach number of 0.90, the computed solution changes character from steady to unsteady depending on transition onset. Accounting for the anisotropies in the turbulent stresses also considerably impacts the flow, most notably in the outboard region of flow separation.
Assessment of Reduced-Kinetics Mechanisms for Combustion of Jet Fuel in CFD Applications
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Kundu, Krihna P.; Yungster, Shaye J.
2014-01-01
A computational effort was undertaken to analyze the details of fluid flow in Lean-Direct Injection (LDI) combustors for next-generation LDI design. The National Combustor Code (NCC) was used to perform reacting flow computations on single-element LDI injector configurations. The feasibility of using a reduced chemical-kinetics approach, which optimizes the reaction rates and species to model the emissions characteristics typical of lean-burning gas-turbine combustors, was assessed. The assessments were performed with Reynolds-Averaged Navier-Stokes (RANS) and Time-Filtered Navier-Stokes (TFNS) time integration, using a Lagrangian spray model in the NCC code. The NCC predictions for EINOx and combustor exit temperature were compared with experimental data for two different single-element LDI injector configurations, with 60deg and 45deg axially swept swirler vanes. The effects of turbulence-chemistry interaction on the predicted flow in a typical LDI combustor were studied with detailed comparisons of NCC TFNS with experimental data.
Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Self-Evaluation of Decision-Making: A General Bayesian Framework for Metacognitive Computation
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one’s own actions to metacognitive judgments. In addition, the model provides insight into why subjects’ metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. PMID:28004960
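To illustrate the core distinction in the framework above, the toy simulation below (with illustrative distributional assumptions; it is not the authors' published model code) contrasts a first-order confidence readout, which reuses the decision sample, with a second-order readout based on a distinct sample of the same evidence. Only the second-order readout can fall below 0.5, which is how the framework captures error detection.

```python
# Toy sketch of first-order vs second-order confidence under assumed
# Gaussian evidence and equal priors.
import numpy as np

def posterior_correct(x, choice, signal=1.0, sigma=1.0):
    """P(choice is correct | evidence x) for stimuli of +/- signal."""
    log_odds = 2.0 * signal * x / sigma ** 2      # log P(+|x) - log P(-|x)
    p_plus = 1.0 / (1.0 + np.exp(-log_odds))
    return np.where(choice > 0, p_plus, 1.0 - p_plus)

rng = np.random.default_rng(0)
n, signal = 100_000, 1.0
stim = rng.choice([-1.0, 1.0], size=n)
x_dec = stim * signal + rng.normal(size=n)        # sample driving the decision
choice = np.where(x_dec >= 0, 1.0, -1.0)
x_conf = stim * signal + rng.normal(size=n)       # distinct sample for confidence

conf_first = posterior_correct(x_dec, choice, signal)    # first-order readout
conf_second = posterior_correct(x_conf, choice, signal)  # second-order readout

# First-order confidence never drops below 0.5; second-order confidence can,
# which corresponds to detecting one's own errors.
print(conf_first.min(), (conf_second < 0.5).mean())
```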
Patel, A; Jameson, K A; Edwards, M H; Ward, K; Gale, C R; Cooper, C; Dennison, Elaine M
2018-04-24
This study investigated the association between mild cognitive impairment (MCI) and physical function and bone health in older adults. MCI was associated with poor physical performance but not bone mineral density or bone microarchitecture. Cross-sectional study to investigate the association between mild cognitive impairment (MCI) and physical performance, and bone health, in a community-dwelling cohort of older adults. Cognitive function of 222 men and 221 women (mean age 75.5 and 75.8 years in men and women, respectively) was assessed by the Strawbridge questionnaire and Mini Mental State Exam (MMSE). Participants underwent dual-energy X-ray absorptiometry (DXA), peripheral-quantitative computed tomography (pQCT) and high-resolution peripheral-quantitative computed tomography (HR-pQCT) scans to assess their bone density, strength and microarchitecture. Their physical function was assessed and a physical performance (PP) score was recorded. In the study, 11.8% of women and 8.1% of men were cognitively impaired on the MMSE (score < 24). On the Strawbridge questionnaire, 24% of women were deemed cognitively impaired compared to 22.3% of men. Cognitive impairment on the Strawbridge questionnaire was associated with poorer physical performance score in men but not in women in the unadjusted analysis. MMSE < 24 was strongly associated with the risk of low physical performance in men (OR 12.9, 95% CI 1.67, 99.8, p = 0.01). Higher MMSE score was associated with better physical performance in both sexes. Poorer cognitive function, whether assessed by the Strawbridge questionnaire, or by MMSE score, was not associated with bone density, shape or microarchitecture, in either sex. MCI in older adults was associated with poor physical performance, but not bone density, shape or microarchitecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, D.J.
1993-10-01
This paper discusses Carol Gotway's paper, "The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment." The paper centers on the use of conditional simulation and the use of geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, the turning bands method for random field generation, and estimation of the cumulative distribution function are brought out.
Instantaneous Assessment Of Athletic Performance Using High Speed Video
NASA Astrophysics Data System (ADS)
Hubbard, Mont; Alaways, LeRoy W.
1988-02-01
We describe the use of high speed video to provide quantitative assessment of motion in athletic performance. Besides the normal requirement for accuracy, an essential feature is that the information be provided rapidly enough so that it may serve as valuable feedback in the learning process. The general considerations which must be addressed in the development of such a computer based system are discussed. These ideas are illustrated specifically through the description of a prototype system which has been designed for the javelin throw.
Diamond, Alan; Nowotny, Thomas; Schmuker, Michael
2016-01-01
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950
Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr
2017-01-01
The aim of the study was to check the quality of a computer-assisted sperm analysis (CASA) system in comparison to the reference manual method, as well as standardization of the computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value of < 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments of the same sample, neither in the samples analyzed manually nor in those analyzed with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for a wider application in clinical practice.
NASA Technical Reports Server (NTRS)
1975-01-01
A system analysis of the shuttle orbiter baseline system management (SM) computer function is performed. This analysis results in an alternative SM design which is also described. The alternative design exhibits several improvements over the baseline, some of which are increased crew usability, improved flexibility, and improved growth potential. The analysis consists of two parts: an application assessment and an implementation assessment. The former is concerned with the SM user needs and design functional aspects. The latter is concerned with design flexibility, reliability, growth potential, and technical risk. The system analysis is supported by several topical investigations. These include: treatment of false alarms, treatment of off-line items, significant interface parameters, and a design evaluation checklist. An in-depth formulation of techniques, concepts, and guidelines for design of automated performance verification is discussed.
Predicting "Hot" and "Warm" Spots for Fragment Binding.
Rathi, Prakash Chandra; Ludlow, R Frederick; Hall, Richard J; Murray, Christopher W; Mortenson, Paul N; Verdonk, Marcel L
2017-05-11
Computational fragment mapping methods aim to predict hotspots on protein surfaces where small fragments will bind. Such methods are popular for druggability assessment as well as structure-based design. However, to date researchers developing or using such tools have had no clear way of assessing the performance of these methods. Here, we introduce the first diverse, high-quality validation set for computational fragment mapping. The set contains 52 diverse examples of fragment binding "hot" and "warm" spots from the Protein Data Bank (PDB). Additionally, we describe PLImap, a novel protocol for fragment mapping based on the Protein-Ligand Informatics force field (PLIff). We evaluate PLImap against the new fragment mapping test set, and compare its performance to that of simple shape-based algorithms and fragment docking using GOLD. PLImap is made publicly available from https://bitbucket.org/AstexUK/pli.
Kuchma, V R; Teksheva, L M; Vyatleva, O A; Kurgansky, A M
2013-01-01
A comparison of the legibility of three different media (paper, a personal computer with an LCD screen, and an e-reader) was performed in schoolchildren (12-14 years of age). Comprehensive assessment of reading performance (speed of reading aloud, the number of errors in reading, and an integral indicator of the degree of difficulty of the visual task), its physiological cost (according to EEG, EOG and ECG data), and the schoolchildren's subjective preference for each medium showed that, by the sum of these indicators, the e-reader occupies an intermediate position between printed text and the computer screen. An effect of increased emotional intensity when using the e-reader (increased motivation) was observed, manifested in the preservation of sympathetic nervous system activation after reading. The need for additional research is indicated.
Spontaneous Movements of a Computer Mouse Reveal Egoism and In-group Favoritism.
Maliszewski, Norbert; Wojciechowski, Łukasz; Suszek, Hubert
2017-01-01
The purpose of the project was to assess whether the first spontaneous movements of a computer mouse, when making an assessment on a scale presented on the screen, may express a respondent's implicit attitudes. In Study 1, the altruistic behaviors of 66 students were assessed. The students were led to believe that the task they were performing was also being performed by another person and they were asked to distribute earnings between themselves and the partner. The participants performed the tasks under conditions with and without distractors. With the distractors, in the first few seconds spontaneous mouse movements on the scale expressed a selfish distribution of money, while later the movements gravitated toward more altruism. In Study 2, 77 Polish students evaluated a painting by a Polish/Jewish painter on a scale. They evaluated it under conditions of full or distracted cognitive abilities. Spontaneous movements of the mouse on the scale were analyzed. In addition, implicit attitudes toward both Poles and Jews were measured with the Implicit Association Test (IAT). A significant association between implicit attitudes (IAT) and spontaneous evaluation of images using a computer mouse was observed in the group with the distractor. The participants with strong implicit in-group favoritism of Poles revealed stronger preference for the Polish painter's work in the first few seconds of mouse movement. Taken together, these results suggest that spontaneous mouse movements may reveal egoism (in-group favoritism), i.e., processes that were not observed in the participants' final decisions (clicking on the scale).
The Effect of Formative Assessments on Language Performance
ERIC Educational Resources Information Center
Radford, Brian W.
2014-01-01
This study sought to improve the language learning outcomes at the Missionary Training Center in Provo, Utah. Young men and women between the ages of 19-24 are taught a foreign language in an accelerated environment. In an effort to improve learning outcomes, computer-based practice and teaching of language performance criteria were provided to…
ERIC Educational Resources Information Center
Sasson, Joseph R.; Austin, John
2005-01-01
Eleven computer terminal operators participated in an experiment that assessed effects of several interventions aimed at increasing safe ergonomic performance. All participants received ergonomics training and performance feedback while six of them collected observations of safe behavior among the remaining five participants. Effects of…
Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance
ERIC Educational Resources Information Center
Muller, Viktor; Lindenberger, Ulman
2012-01-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…
Ghobrial, Fady Emil Ibrahim; Eldin, Manal Salah; Razek, Ahmed Abdel Khalek Abdel; Atwan, Nadia Ibrahim; Shamaa, Sameh Sayed Ahmed
2017-01-01
To assess inter-observer agreement of revised RECIST criteria (version 1.1) for computed tomography assessment of hepatic metastases of breast cancer. A prospective study was conducted in 28 female patients with breast cancer and with at least one measurable metastatic lesion in the liver that was treated with 3 cycles of anthracycline-based chemotherapy. All patients underwent computed tomography of the abdomen with 64-row multidetector CT at baseline and after 3 cycles of chemotherapy for response assessment. Image analysis was performed by 2 observers, based on the RECIST criteria (version 1.1). Computed tomography revealed partial response of hepatic metastases in 7 patients (25%) by one observer and in 10 patients (35.7%) by the other observer, with good inter-observer agreement (k = 0.75, percent agreement of 89.29%). Stable disease was detected in 19 patients (67.8%) by one observer and in 16 patients (57.1%) by the other observer, with good agreement (k = 0.774, percent agreement of 89.29%). Progressive disease was detected in 2 patients (7.2%) by both observers, with perfect agreement (k = 1, percent agreement of 100%). The overall inter-observer agreement in the CT-based response assessment of hepatic metastasis between the two observers was good (k = 0.793, percent agreement of 89.29%). We concluded that computed tomography is a reliable and reproducible imaging modality for response assessment of hepatic metastases of breast cancer according to the RECIST criteria (version 1.1).
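The agreement statistics quoted above (Cohen's kappa and percent agreement) can be computed directly from two observers' per-patient response categories. A minimal sketch, assuming the ratings are available as two lists of RECIST categories; the values below are illustrative, not the study data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient RECIST 1.1 categories from two observers
# (PR = partial response, SD = stable disease, PD = progressive disease).
observer_1 = ["PR", "SD", "SD", "PD", "PR", "SD", "SD", "PR"]
observer_2 = ["PR", "SD", "PR", "PD", "PR", "SD", "SD", "SD"]

kappa = cohen_kappa_score(observer_1, observer_2)
percent_agreement = sum(a == b for a, b in zip(observer_1, observer_2)) / len(observer_1)

print(f"kappa = {kappa:.3f}, percent agreement = {percent_agreement:.1%}")
```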
Liu, Y; Wickens, C D
1994-11-01
The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorai, Prashun; Toberer, Eric S.; Stevanović, Vladan
Quasi low-dimensional structures are abundant among known thermoelectric materials, primarily because of their low lattice thermal conductivities. In this work, we have computationally assessed the potential of 427 known binary quasi-2D structures in 272 different chemistries for thermoelectric performance. To assess the thermoelectric performance, we employ an improved version of our previously developed descriptor for thermoelectric performance [Yan et al., Energy Environ. Sci., 2015, 8, 983]. The improvement is in the explicit treatment of van der Waals interactions in quasi-2D materials, which leads to significantly better predictions of their crystal structures and lattice thermal conductivities. The improved methodology correctly identifies known binary quasi-2D thermoelectric materials such as Sb2Te3, Bi2Te3, SnSe, SnS, InSe, and In2Se3. As a result, we propose candidate quasi-2D binary materials, a number of which have not been previously considered for thermoelectric applications.
Two-Dimensional Computational Model for Wave Rotor Flow Dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1996-01-01
A two-dimensional (theta,z) Navier-Stokes solver for multi-port wave rotor flow simulation is described. The finite-volume form of the unsteady thin-layer Navier-Stokes equations is integrated in time on multi-block grids that represent the stationary inlet and outlet ports and the moving rotor passages of the wave rotor. Computed results are compared with three-port wave rotor experimental data. The model is applied to predict the performance of a planned four-port wave rotor experiment. Two-dimensional flow features that reduce machine performance and influence rotor blade and duct wall thermal loads are identified. The performance impact of rounding the inlet port wall, to inhibit separation during gradual passage opening, is assessed.
Coexisting cervical spondylotic myelopathy and bilateral carpal tunnel syndromes.
Epstein, N E; Epstein, J A; Carras, R
1989-03-01
In six patients, operations for bilateral carpal tunnel syndromes (CTS) were performed or were about to be performed without the awareness of the presence of underlying cervical spondylo-stenosis. Only later, when symptoms of myeloradiculopathy were recognized, was the diagnosis confirmed and decompressive laminectomy performed. Because the symptoms of CTS may resemble or be masked and accentuated by the cervical disorder, patients with the presumed diagnosis of bilateral CTS should undergo appropriate critical neurologic, electrodiagnostic, and neuroradiologic (magnetic resonance imaging, computed tomography, myelo-computed tomography) assessment. If these guidelines are followed, the radiculopathy caused by cervical pathology will be appropriately recognized and treated, possibly averting the need for carpal tunnel decompression or modifying treatment.
Friederichs, Hendrik; Marschall, Bernhard; Weissenstein, Anne
2014-12-05
Practicing evidence-based medicine is an important aspect of providing good medical care. Accessing external information through literature searches on computer-based systems can effectively achieve integration in clinical care. We conducted a pilot study using smartphones, tablets, and stationary computers as search devices at the bedside. The objective was to determine possible differences between the various devices and assess students' internet use habits. In a randomized controlled pilot study, 120 students were divided into three groups. One control group solved clinical problems on a computer and two intervention groups used mobile devices at the bedside. In a questionnaire, students were asked to report their internet use habits as well as their satisfaction with their respective search tool using a 5-point Likert scale. Of 120 surveys, 94 (78.3%) complete data sets were analyzed. The mobility of the tablet (3.90) and the smartphone (4.39) was seen as a significant advantage over the computer (2.38, p < .001). However, for performing an effective literature search at the bedside, the computer (3.22) was rated superior to both tablet computers (2.13) and smartphones (1.68). No significant differences were detected between tablets and smartphones except for satisfaction with screen size (tablet 4.10, smartphone 2.00, p < .001). Using a mobile device at the bedside to perform an extensive search is not suitable for students who prefer using computers. However, mobility is regarded as a substantial advantage, and therefore future applications might facilitate quick and simple searches at the bedside.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real-world problems is discussed, with an analysis performed to assess the applicability of the various techniques. The applications are considered to be limited by the inability of computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. Problems remain to be solved in relation to the efficient use of the power of parallel computers and the development of turbulence modeling schemes. The goal of CAA is stated as the implementation of acoustic design studies on a computer terminal with reasonable run times.
ERIC Educational Resources Information Center
Kraemer, Sara; Thorn, Christopher A.
2010-01-01
The purpose of this exploratory study was to identify and describe some of the dimensions of scientific collaborations using high throughput computing (HTC) through the lens of a virtual team performance framework. A secondary purpose was to assess the viability of using a virtual team performance framework to study scientific collaborations using…
Ferraro, Mauro; Auricchio, Ferdinando; Boatti, Elisa; Scalet, Giulia; Conti, Michele; Morganti, Simone; Reali, Alessandro
2015-01-01
Computer-based simulations are nowadays widely exploited for the prediction of the mechanical behavior of different biomedical devices. In this context, structural finite element analyses (FEA) are currently the preferred computational tool to evaluate the stent response under bending. This work aims at developing a computational framework based on linear and higher order FEA to evaluate the flexibility of self-expandable carotid artery stents. In particular, numerical simulations involving large deformations and inelastic shape memory alloy constitutive modeling are performed, and the results suggest that the employment of higher order FEA allows the computational domain to be represented accurately and a better approximation of the solution to be obtained with a greatly reduced number of degrees of freedom compared to linear FEA. Moreover, when buckling occurs, higher order FEA shows a superior capability to reproduce the associated nonlinear local effects. PMID:26184329
Parametric Design of Injectors for LDI-3 Combustors
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Mongia, Hukam; Lee, Phil
2015-01-01
Application of a partially calibrated National Combustion Code (NCC) for providing guidance in the design of the 3rd generation of the Lean-Direct Injection (LDI) multi-element combustion configuration (LDI-3) is summarized. NCC was used to perform non-reacting and two-phase reacting flow computations on several LDI-3 injector configurations in a single-element and a five-element injector array. All computations were performed with a consistent approach for mesh-generation, turbulence, spray simulations, ignition and chemical kinetics-modeling. Both qualitative and quantitative assessment of the computed flowfield characteristics of the several design options led to selection of an optimal injector LDI-3 design that met all the requirements including effective area, aerodynamics and fuel-air mixing criteria. Computed LDI-3 emissions (namely, NOx, CO and UHC) will be compared with the prior generation LDI-2 combustor experimental data at relevant engine cycle conditions.
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.
A heterogeneous and parallel computing framework for high-resolution hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Smith, Luke; Liang, Qiuhua
2015-04-01
Shock-capturing hydrodynamic models are now widely applied in the context of flood risk assessment and forecasting, accurately capturing the behaviour of surface water over ground and within rivers. Such models are generally explicit in their numerical basis, and can be computationally expensive; this has prohibited full use of high-resolution topographic data for complex urban environments, now easily obtainable through airborne altimetric surveys (LiDAR). As processor clock speed advances have stagnated in recent years, further computational performance gains are largely dependent on the use of parallel processing. Heterogeneous computing architectures (e.g. graphics processing units or compute accelerator cards) provide a cost-effective means of achieving high throughput in cases where the same calculation is performed with a large input dataset. In recent years this technique has been applied successfully for flood risk mapping, such as within the national surface water flood risk assessment for the United Kingdom. We present a flexible software framework for hydrodynamic simulations across multiple processors of different architectures, within multiple computer systems, enabled using OpenCL and Message Passing Interface (MPI) libraries. A finite-volume Godunov-type scheme is implemented using the HLLC approach to solving the Riemann problem, with optional extension to second-order accuracy in space and time using the MUSCL-Hancock approach. The framework is successfully applied on personal computers and a small cluster to provide considerable improvements in performance. The most significant performance gains were achieved across two servers, each containing four NVIDIA GPUs, with a mix of K20, M2075 and C2050 devices. Advantages are found with respect to decreased parametric sensitivity, and thus in reducing uncertainty, for a major fluvial flood within a large catchment during 2005 in Carlisle, England. Simulations for the three-day event could be performed on a 2m grid within a few hours. In the context of a rapid pluvial flood event in Newcastle upon Tyne during 2012, the technique allows simulation of inundation for a 31 km2 area of the city centre in less than an hour on a 2m grid; however, further grid refinement is required to fully capture important smaller flow pathways. Good agreement between the model and observed inundation is achieved for a variety of dam failure, slow fluvial inundation, rapid pluvial inundation, and defence breach scenarios in the UK.
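For readers unfamiliar with the numerical basis mentioned above, the following is a rough one-dimensional sketch of a first-order finite-volume Godunov-type update with an HLL flux. The framework itself is two-dimensional, uses the HLLC solver with an optional MUSCL-Hancock extension, and runs on OpenCL/MPI; none of that is reproduced here, and the grid, time step, and initial condition below are purely illustrative:

```python
import numpy as np

g = 9.81  # gravitational acceleration (shallow-water equations)

def hll_flux(hL, huL, hR, huR):
    """First-order HLL numerical flux for the 1D shallow-water equations."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)          # left wave speed estimate
    sR = max(uL + cL, uR + cR)          # right wave speed estimate
    FL = np.array([huL, huL * uL + 0.5 * g * hL**2])
    FR = np.array([huR, huR * uR + 0.5 * g * hR**2])
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

def step(h, hu, dx, dt):
    """Advance one explicit time step; boundary cells are simply held fixed."""
    fluxes = [hll_flux(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(len(h) - 1)]
    for i in range(1, len(h) - 1):
        dU = (dt / dx) * (fluxes[i] - fluxes[i - 1])
        h[i] -= dU[0]
        hu[i] -= dU[1]
    return h, hu

# Small dam-break test: deep water on the left, shallow on the right.
n, dx, dt = 200, 1.0, 0.05
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(100):
    h, hu = step(h, hu, dx, dt)
```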
NASA Astrophysics Data System (ADS)
Sun, Yujia; Zhang, Xiaobing; Howell, John R.
2017-06-01
This work investigates the performance of the DOM, FVM, P1, SP3 and P3 methods for 2D combined natural convection and radiation heat transfer for an absorbing, emitting medium. The Monte Carlo method is used to solve the RTE coupled with the energy equation, and its results are used as benchmark solutions. Effects of the Rayleigh number, Planck number and optical thickness are considered, all covering several orders of magnitude. Temperature distributions, heat transfer rate and computational performance in terms of accuracy and computing time are presented and analyzed.
Development of computer games for assessment and training in post-stroke arm telerehabilitation.
Rodriguez-de-Pablo, Cristina; Perry, Joel C; Cavallaro, Francesca I; Zabaleta, Haritz; Keller, Thierry
2012-01-01
Stroke is the leading cause of long-term disability among adults in industrialized nations. The majority of these disabilities include deficiencies in arm function, which can make independent living very difficult. Research shows that better results in rehabilitation are obtained when patients receive more intensive therapy. However, this intensive therapy is currently too expensive to be provided by the public health system, and at home few patients perform the repetitive exercises recommended by their therapists. Computer games can provide an affordable, enjoyable, and effective way to intensify treatment, while keeping the patient as well as their therapists informed about their progress. This paper presents the study, design, implementation and user-testing of a set of computer games for at-home assessment and training of upper-limb motor impairment after stroke.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Rakowski, Cynthia L.; Serkowski, John A.
2013-06-25
Over the past two decades, there have been many studies describing injury mechanisms associated with turbine passage, the response of various fish species to these mechanisms, and the probability of survival through dams. Although developing tools to design turbines that improve passage survival has been difficult and slow, a more robust quantification of the turbine environment has emerged through integrating physical model data, fish survival data, and computational fluid dynamics (CFD) studies. Grant County Public Utility District (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now almost 50 years old. The Utility District plans to refit all of these aging turbines with new turbines. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when replacing the turbines. In this presentation, a method for turbine biological performance assessment (BioPA) is introduced. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We will present application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
Assessing and predicting drug-induced anticholinergic risks: an integrated computational approach.
Xu, Dong; Anderson, Heather D; Tao, Aoxiang; Hannah, Katia L; Linnebur, Sunny A; Valuck, Robert J; Culbertson, Vaughn L
2017-11-01
Anticholinergic (AC) adverse drug events (ADEs) are caused by inhibition of muscarinic receptors as a result of designated or off-target drug-receptor interactions. In practice, AC toxicity is assessed primarily based on clinician experience. The goal of this study was to evaluate a novel concept of integrating big pharmacological and healthcare data to assess clinical AC toxicity risks. AC toxicity scores (ATSs) were computed using drug-receptor inhibitions identified through pharmacological data screening. A longitudinal retrospective cohort study using medical claims data was performed to quantify AC clinical risks. ATS was compared with two previously reported toxicity measures. A quantitative structure-activity relationship (QSAR) model was established for rapid assessment and prediction of AC clinical risks. A total of 25 common medications, and 575,228 exposed and unexposed patients were analyzed. Our data indicated that ATS is more consistent with the trend of AC outcomes than other toxicity methods. Incorporating drug pharmacokinetic parameters to ATS yielded a QSAR model with excellent correlation to AC incident rate (R² = 0.83) and predictive performance (cross-validation Q² = 0.64). Good correlation and predictive performance (R² = 0.68 / Q² = 0.29) were also obtained for an M2 receptor-specific QSAR model and tachycardia, an M2 receptor-specific ADE. Albeit using a small medication sample size, our pilot data demonstrated the potential and feasibility of a new computational AC toxicity scoring approach driven by underlying pharmacology and big data analytics. Follow-up work is under way to further develop the ATS scoring approach and clinical toxicity predictive model using a large number of medications and clinical parameters.
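The R² and cross-validated Q² statistics reported above are standard diagnostics for a QSAR-style regression. A minimal sketch of how they can be computed for a simple linear model, with entirely synthetic descriptors and outcomes standing in for the study's ATS features and ADE incidence data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Hypothetical descriptors (e.g., AC toxicity score plus pharmacokinetic
# parameters) and observed anticholinergic ADE rates, one row per drug.
X = np.random.default_rng(0).normal(size=(25, 3))
y = X @ np.array([0.8, 0.3, -0.2]) + np.random.default_rng(1).normal(scale=0.2, size=25)

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))                 # goodness of fit on the training data

# Leave-one-out cross-validated predictions give Q^2.
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R^2 = {r2:.2f}, Q^2 = {q2:.2f}")
```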
Word Spelling Assessment Using ICT: The Effect of Presentation Modality
ERIC Educational Resources Information Center
Sarris, Menelaos; Panagiotakopoulos, Chris
2010-01-01
To date, the spelling process has been assessed using typical spelling-to-dictation tasks, in which children's performance is evaluated mainly in terms of spelling error scores. In the present work a simple graphical computer interface is reported, aiming to investigate the effects of input modality (e.g. visual and verbal) on word spelling. The software…
Marking Essays on Screen: An Investigation into the Reliability of Marking Extended Subjective Texts
ERIC Educational Resources Information Center
Johnson, Martin; Nadas, Rita; Bell, John F.
2010-01-01
There is a growing body of research literature that considers how the mode of assessment, either computer-based or paper-based, might affect candidates' performances. Despite this, there is a fairly narrow literature that shifts the focus of attention to those making assessment judgements and which considers issues of assessor consistency when…
ERIC Educational Resources Information Center
Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2014-01-01
Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…
Myth Busting: Using Data Mining to Refute Link between Transfer Students and Retention Risk
ERIC Educational Resources Information Center
McAleer, Brenda; Szakas, Joseph S.
2010-01-01
In the past few years, universities have become much more involved in outcomes assessment. Beyond classroom-level analysis of learning outcomes, an investigation is performed into the use of current data mining tools to assess the issue of student retention within the Computer Information Systems (CIS) department. Utilizing both a historical…
The `TTIME' Package: Performance Evaluation in a Cluster Computing Environment
NASA Astrophysics Data System (ADS)
Howe, Marico; Berleant, Daniel; Everett, Albert
2011-06-01
The objective of translating developmental event time across mammalian species is to gain an understanding of the timing of human developmental events based on the known times of those events in animals. The potential benefits include improvements to diagnostic and intervention capabilities. The CRAN `ttime' package provides the functionality to infer unknown event timings and investigate phylogenetic proximity utilizing hierarchical clustering of both known and predicted event timings. The original generic mammalian model included nine eutherian mammals: Felis domestica (cat), Mustela putorius furo (ferret), Mesocricetus auratus (hamster), Macaca mulatta (monkey), Homo sapiens (humans), Mus musculus (mouse), Oryctolagus cuniculus (rabbit), Rattus norvegicus (rat), and Acomys cahirinus (spiny mouse). However, the data for this model is expected to grow as more data about developmental events is identified and incorporated into the analysis. Performance evaluation of the `ttime' package in a cluster computing environment, compared against a serial computing environment, provides an important computational performance assessment. A theoretical analysis is the first stage of a process in which the second stage, if justified by the theoretical analysis, is to investigate an actual implementation of the `ttime' package in a cluster computing environment and to understand the parallelization process that underlies implementation.
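The 'ttime' package itself is written in R, but the core operation it wraps (hierarchical clustering of species by their known and predicted event timings) can be sketched briefly. The timing matrix below is randomly generated for illustration and is not the package's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

species = ["cat", "ferret", "hamster", "monkey", "human",
           "mouse", "rabbit", "rat", "spiny mouse"]

# Hypothetical matrix of event timings (rows: species, columns: developmental
# events), e.g. post-conception day of each event.
rng = np.random.default_rng(42)
timings = rng.uniform(5, 60, size=(len(species), 12))

# Agglomerative clustering on pairwise distances between timing profiles.
distances = pdist(timings, metric="euclidean")
tree = linkage(distances, method="average")
groups = fcluster(tree, t=3, criterion="maxclust")

for name, group in zip(species, groups):
    print(f"{name}: cluster {group}")
```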
Fleming, John; Conway, Joy; Majoral, Caroline; Tossici-Bolt, Livia; Katz, Ira; Caillibotte, Georges; Perchet, Diane; Pichelin, Marine; Muellinger, Bernhard; Martonen, Ted; Kroneberg, Philipp; Apiou-Sbirlea, Gabriela
2011-02-01
Gamma camera imaging is widely used to assess pulmonary aerosol deposition. Conventional planar imaging provides limited information on its regional distribution. In this study, single photon emission computed tomography (SPECT) was used to describe deposition in three dimensions (3D) and combined with X-ray computed tomography (CT) to relate this to lung anatomy. Its performance was compared to planar imaging. Ten SPECT/CT studies were performed on five healthy subjects following carefully controlled inhalation of radioaerosol from a nebulizer, using a variety of inhalation regimes. The 3D spatial distribution was assessed using a central-to-peripheral ratio (C/P) normalized to lung volume and for the right lung was compared to planar C/P analysis. The deposition by airway generation was calculated for each lung and the conducting airways deposition fraction compared to 24-h clearance. The 3D normalized C/P ratio correlated more closely with 24-h clearance than the 2D ratio for the right lung [coefficient of variation (COV), 9% compared to 15%, p < 0.05]. Analysis of regional distribution was possible for both lungs in 3D but not in 2D due to overlap of the stomach on the left lung. The mean conducting airways deposition fraction from SPECT for both lungs was not significantly different from 24-h clearance (COV 18%). Both spatial and generational measures of central deposition were significantly higher for the left than for the right lung. Combined SPECT/CT enabled improved analysis of aerosol deposition from gamma camera imaging compared to planar imaging. 3D radionuclide imaging combined with anatomical information from CT and computer analysis is a useful approach for applications requiring regional information on deposition.
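The central-to-peripheral (C/P) ratio used above reduces, in essence, to a ratio of activity within two lung sub-regions, optionally normalized by the corresponding regional lung volumes from CT. A schematic sketch, assuming a 3D activity array and boolean region masks are already available (registration, attenuation correction, and contour definition are omitted):

```python
import numpy as np

def cp_ratio(activity, central_mask, peripheral_mask, voxel_lung_volume=None):
    """Central-to-peripheral deposition ratio from a 3D activity image.

    If a per-voxel lung volume map is supplied, the ratio is normalized so
    that uniform deposition per unit lung volume gives C/P = 1.
    """
    central = activity[central_mask].sum()
    peripheral = activity[peripheral_mask].sum()
    ratio = central / peripheral
    if voxel_lung_volume is not None:
        v_central = voxel_lung_volume[central_mask].sum()
        v_peripheral = voxel_lung_volume[peripheral_mask].sum()
        ratio /= (v_central / v_peripheral)   # volume-normalized C/P
    return ratio

# Toy example: 3D SPECT counts with a cubic central region and its complement.
counts = np.random.default_rng(0).poisson(5.0, size=(64, 64, 64)).astype(float)
central = np.zeros(counts.shape, dtype=bool)
central[16:48, 16:48, 16:48] = True
peripheral = ~central
print(f"C/P = {cp_ratio(counts, central, peripheral):.2f}")
```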
Ku-band signal design study. [space shuttle orbiter data processing network
NASA Technical Reports Server (NTRS)
Rubin, I.
1978-01-01
Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
Del Brutto, Oscar H; Mera, Robertino M; Sullivan, Lauren J; Zambrano, Mauricio; King, Nathan R
2016-10-01
We aimed to assess whether carotid siphon calcifications (as seen on computed tomography) are associated with worse performance in the Montreal Cognitive Assessment in 584 stroke-free individuals living in rural Ecuador. Using mean Montreal Cognitive Assessment score of subjects with Grade 1 calcifications (23.1 ± 4.2) as the referent category, fully adjusted generalized linear models showed significant associations between severity of carotid siphon calcifications and cognitive performance (mean Montreal Cognitive Assessment scores: 20.2 ± 4.8 for Grade 2 (p = 0.004), 19.7 ± 5.3 for Grade 3 (p = 0.0001), and 18.8 ± 4.1 for Grade 4 (p = 0.02)). Predictive Montreal Cognitive Assessment score margins were higher in individuals with Grade 1 calcifications than in other groups. This study shows an inverse relationship between calcium content in the carotid siphon and cognitive performance in Amerindians.
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique as compared with computed, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, showing statistically significant differences (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted to the conventional technique.
NASA Technical Reports Server (NTRS)
Wortz, E. C.; Saur, A. J.; Nowlis, D. P.; Kendall, M. P.
1974-01-01
Results are presented of an initial experiment in a research program designed to develop objective techniques for psychological assessment of individuals and groups participating in long-duration space flights. Specifically examined is the rationale for utilizing measures of attention as an objective assessment technique. Subjects participating in the experiment performed various tasks (e.g., playing matrix games which appeared on a display screen along with auditory stimuli). The psychophysiological reactions of the subjects were measured and are given. Previous research on various performance and psychophysiological methods of measuring attention is also discussed. The experiment design (independent and dependent variables) and apparatus (computers and display devices) are described and shown. Conclusions and recommendations are presented.
Critical thinking traits of top-tier experts and implications for computer science education
NASA Astrophysics Data System (ADS)
Bushey, Dean E.
A documented shortage of technical leadership and top-tier performers in computer science jeopardizes the technological edge, security, and economic well-being of the nation. The 2005 President's Information and Technology Advisory Committee (PITAC) Report on competitiveness in computational sciences highlights the major impact of science, technology, and innovation in keeping America competitive in the global marketplace. It stresses the fact that the supply of science, technology, and engineering experts is at the core of America's technological edge, national competitiveness and security. However, recent data shows that both undergraduate and postgraduate production of computer scientists is falling. The decline is "a quiet crisis building in the United States," a crisis that, if allowed to continue unchecked, could endanger America's well-being and preeminence among the world's nations. Past research on expert performance has shown that the cognitive traits of critical thinking, creativity, and problem solving possessed by top-tier performers can be identified, observed and measured. The studies show that the identified attributes are applicable across many domains and disciplines. Companies have begun to realize that cognitive skills are important for high-level performance and are reevaluating the traditional academic standards they have used to predict success for their top-tier performers in computer science. Previous research in the computer science field has focused either on programming skills of its experts or has attempted to predict the academic success of students at the undergraduate level. This study, on the other hand, examines the critical-thinking skills found among experts in the computer science field in order to explore the questions, "What cognitive skills do outstanding performers possess that make them successful?" and "How do currently used measures of academic performance correlate to critical-thinking skills among students?" The results of this study suggest a need to examine how critical-thinking abilities are learned in the undergraduate computer science curriculum and the need to foster these abilities in order to produce the high-level, critical-thinking professionals necessary to fill the growing need for these experts. Due to the fact that current measures of academic performance do not adequately depict students' cognitive abilities, assessment of these skills must be incorporated into existing curricula.
ERIC Educational Resources Information Center
Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.
2016-01-01
Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…
ERIC Educational Resources Information Center
Campbell, Adelle C.
2017-01-01
This study examined the predictive relationship of a brief computation measure administered in the fall, winter, and spring of first, second, and third grade with the mathematic portion of a state-mandated academic achievement test administered in the spring of third grade. The relationship between mathematical achievement and resource…
Propagation Environment Assessment Using UAV Electromagnetic Sensors
2018-03-01
could be added, we limit this study to two dimensions.) The computer program then processes the data and determines the existence of any atmospheric... computer to have large processing capacity, and a typical workstation desktop or laptop can perform the function. E. FLIGHT PATTERNS AND DATA...different types of flight patterns were studied, and our findings show that the vertical flight pattern using a rotary platform is more efficient
Assessing the quality of activities in a smart environment.
Cook, Diane J; Schmitter-Edgecombe, M
2009-01-01
Pervasive computing technology can provide valuable health monitoring and assistance technology to help individuals live independent lives in their own homes. As a critical part of this technology, our objective is to design software algorithms that recognize and assess the consistency of activities of daily living that individuals perform in their own homes. We have designed algorithms that automatically learn Markov models for each class of activity. These models are used to recognize activities that are performed in a smart home and to identify errors and inconsistencies in the performed activity. We validate our approach using data collected from 60 volunteers who performed a series of activities in our smart apartment testbed. The results indicate that the algorithms correctly label the activities and successfully assess the completeness and consistency of the performed tasks. Our results indicate that activity recognition and assessment can be automated using machine learning algorithms and smart home technology. These algorithms will be useful for automating remote health monitoring and interventions.
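As a rough illustration of the modeling idea described above, the sketch below learns a first-order Markov transition model over sensor events for each activity class and labels a new event sequence with the most likely class. The event alphabet and training sequences are invented placeholders, not the smart-apartment dataset, and a real assessment would also score errors and inconsistencies within the sequence:

```python
import numpy as np
from collections import defaultdict

def train_markov(sequences, smoothing=1.0):
    """Learn start and transition probabilities with Laplace smoothing."""
    starts = defaultdict(float)
    trans = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        starts[seq[0]] += 1
        for a, b in zip(seq, seq[1:]):
            trans[a][b] += 1
    states = {s for seq in sequences for s in seq}
    start_p = {s: (starts[s] + smoothing) / (len(sequences) + smoothing * len(states))
               for s in states}
    trans_p = {}
    for a in states:
        total = sum(trans[a].values())
        trans_p[a] = {b: (trans[a][b] + smoothing) / (total + smoothing * len(states))
                      for b in states}
    return start_p, trans_p

def log_likelihood(seq, model, floor=1e-6):
    """Log-probability of an event sequence under a trained Markov model."""
    start_p, trans_p = model
    ll = np.log(start_p.get(seq[0], floor))
    for a, b in zip(seq, seq[1:]):
        ll += np.log(trans_p.get(a, {}).get(b, floor))
    return ll

# Hypothetical sensor-event sequences per activity class.
training = {
    "make_tea": [["kitchen", "cupboard", "kettle", "cup"],
                 ["kitchen", "kettle", "cupboard", "cup"]],
    "take_meds": [["bathroom", "cabinet", "pillbox", "water"],
                  ["cabinet", "pillbox", "water"]],
}
models = {label: train_markov(seqs) for label, seqs in training.items()}

observed = ["kitchen", "kettle", "cup"]
predicted = max(models, key=lambda label: log_likelihood(observed, models[label]))
print(predicted)  # expected: make_tea
```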
Navarro-Ramirez, Rodrigo; Berlin, Connor; Lang, Gernot; Hussain, Ibrahim; Janssen, Insa; Sloan, Stephen; Askin, Gulce; Avila, Mauricio J; Zubkov, Micaella; Härtl, Roger
2018-01-01
Two-dimensional radiographic methods have been proposed to evaluate the radiographic outcome after indirect decompression through extreme lateral interbody fusion (XLIF). However, the assessment of neural decompression in a single plane may underestimate the effect of indirect decompression on central canal and foraminal volumes. The present study aimed to assess the reliability and consistency of a novel 3-dimensional radiographic method that assesses neural decompression by volumetric analysis using a new generation of intraoperative fan-beam computed tomography scanner in patients undergoing XLIF. Prospectively collected data from 7 patients (9 levels) undergoing XLIF was retrospectively analyzed. Three independent, blind raters using imaging analysis software performed volumetric measurements pre- and postoperatively to determine central canal and foraminal volumes. Intra-rater and inter-rater reliability tests were performed to assess the reliability of this novel volumetric method. The inter-rater reliability between the three raters ranged from 0.800 to 0.952, P < 0.0001. The test-retest analysis on a randomly selected subset of three patients showed good to excellent internal reliability (range of 0.78-1.00) for all 3 raters. There was a significant increase of ≈20% in mean right foraminal, left foraminal, and central canal volumes postoperatively (P = 0.0472; P = 0.0066; P = 0.0003, respectively). Here we demonstrate a new volumetric analysis technique that is feasible, reliable, and reproducible amongst independent raters for central canal and foraminal volumes in the lumbar spine using an intraoperative computed tomography scanner.
NASA Technical Reports Server (NTRS)
Rivers, S. M. B.; Wahls, R. A.; Owens, L. R.
2001-01-01
A computational study focused on leading-edge radius effects and associated Reynolds number sensitivity for a High Speed Civil Transport configuration at transonic conditions was conducted as part of NASA's High Speed Research Program. The primary purposes were to assess the capabilities of computational fluid dynamics to predict Reynolds number effects for a range of leading-edge radius distributions on a second-generation supersonic transport configuration, and to evaluate the potential performance benefits of each at the transonic cruise condition. Five leading-edge radius distributions are described, and the potential performance benefit including the Reynolds number sensitivity for each is presented. Computational results for two leading-edge radius distributions are compared with experimental results acquired in the National Transonic Facility over a broad Reynolds number range.
The Effect of Context Change on Simple Acquisition Disappears with Increased Training
ERIC Educational Resources Information Center
Leon, Samuel P.; Abad, Maria J. F.; Rosas, Juan M.
2010-01-01
The goal of this experiment was to assess the impact that experience with a task has on the context specificity of the learning that occurs. Participants performed an instrumental task within a computer game where different responses were performed in the presence of discriminative stimuli to obtain reinforcers. The number of training trials (3,…
ERIC Educational Resources Information Center
West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.
2010-01-01
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…
Digital Devices, Distraction, and Student Performance: Does In-Class Cell Phone Use Reduce Learning?
ERIC Educational Resources Information Center
Duncan, Douglas K.; Hoekstra, Angel R.; Wilcox, Bethany R.
2012-01-01
The recent increase in use of digital devices such as laptop computers, iPads, and web-enabled cell phones has generated concern about how technologies affect student performance. Combining observation, survey, and interview data, this research assesses the effects of technology use on student attitudes and learning. Data were gathered in eight…
ERIC Educational Resources Information Center
Scholtz, R. G.; And Others
This final report of a feasibility study describes the research performed in assessing the requirements for a chemical signature file and search scheme for organic compound identification and information retrieval. The research performed to determine the feasibility of identifying an unknown compound involved screening the compound against a file of…
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically-based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al. [1] In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. A fully simulated framework was created that utilizes new learning-based stochastic object models (SOM) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOM, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed and the area under the TOC curve (AUTOC) can be employed as a figure-of-merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation therapy-based IQ measures leads to different imaging parameters than those obtained by use of physically-based measures.
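The AUTOC figure-of-merit mentioned above is simply the area under a therapeutic operating characteristic curve, which plots the probability of tumor control against the probability of normal-tissue complication as the prescription dose varies. A schematic sketch with made-up curve samples, integrating with the trapezoidal rule:

```python
import numpy as np

# Hypothetical TOC curve: tumor control probability vs. normal-tissue
# complication probability, sampled over increasing prescription dose.
p_complication = np.array([0.00, 0.02, 0.05, 0.10, 0.20, 0.35, 0.55, 0.80, 1.00])
p_tumor_control = np.array([0.00, 0.15, 0.35, 0.55, 0.72, 0.84, 0.92, 0.97, 1.00])

# Trapezoidal-rule area under the TOC curve; a larger AUTOC indicates a
# better trade-off between tumor control and complication risk.
autoc = float(np.sum(np.diff(p_complication) *
                     (p_tumor_control[1:] + p_tumor_control[:-1]) / 2.0))
print(f"AUTOC = {autoc:.3f}")
```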
NASA Astrophysics Data System (ADS)
Jaime, Arturo; Blanco, José Miguel; Domínguez, César; Sánchez, Ana; Heras, Jónathan; Usandizaga, Imanol
2016-06-01
Different learning methods such as project-based learning, spiral learning and peer assessment have been implemented in science disciplines with different outcomes. This paper presents a proposal for a project management course in the context of a computer science degree. Our proposal combines three well-known methods: project-based learning, spiral learning and peer assessment. Namely, the course is articulated during a semester through the structured (progressive and incremental) development of a sequence of four projects, whose duration, scope and difficulty of management increase as the student gains theoretical and instrumental knowledge related to planning, monitoring and controlling projects. Moreover, the proposal is complemented using peer assessment. The proposal has already been implemented and validated for the last 3 years in two different universities. In the first year, project-based learning and spiral learning methods were combined. Such a combination was also employed in the other 2 years; but additionally, students had the opportunity to assess projects developed by university partners and by students of the other university. A total of 154 students have participated in the study. We observe a gain in the quality of successive projects derived from the spiral project-based learning. Moreover, this gain is significantly larger when peer assessment is introduced. In addition, high-performing students take advantage of peer assessment from the first moment, whereas the improvement in low-performing students is delayed.
Workload assessment of surgeons: correlation between NASA TLX and blinks.
Zheng, Bin; Jiang, Xianta; Tien, Geoffrey; Meneghetti, Adam; Panton, O Neely M; Atkins, M Stella
2012-10-01
Blinks are known as an indicator of visual attention and mental stress. In this study, surgeons' mental workload was evaluated utilizing a paper assessment instrument (National Aeronautics and Space Administration Task Load Index, NASA TLX) and by examining their eye blinks. Correlation between these two assessments was reported. Surgeons' eye motions were video-recorded using a head-mounted eye-tracker while the surgeons performed a laparoscopic procedure on a virtual reality trainer. Blink frequency and duration were computed using computer vision technology. The level of workload experienced during the procedure was reported by surgeons using the NASA TLX. A total of 42 valid videos were recorded from 23 surgeons. After blinks were computed, videos were divided into two groups based on the blink frequency: infrequent group (≤ 6 blinks/min) and frequent group (more than 6 blinks/min). Surgical performance (measured by task time and trajectories of tool tips) was not significantly different between these two groups, but NASA TLX scores were significantly different. Surgeons who blinked infrequently reported a higher level of frustration (46 vs. 34, P = 0.047) and higher overall level of workload (57 vs. 47, P = 0.045) than those who blinked more frequently. The correlation coefficients (Pearson test) between NASA TLX and the blink frequency and duration were -0.17 and 0.446. Reduction of blink frequency and shorter blink duration matched the increasing level of mental workload reported by surgeons. The value of using eye-tracking technology for assessment of surgeon mental workload was shown.
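A minimal sketch of the two analysis steps described above: deriving blink frequency and mean blink duration from a per-frame eye-closed signal, and correlating per-surgeon blink frequency with NASA TLX scores. All numbers are illustrative, not the study data:

```python
import numpy as np
from scipy.stats import pearsonr

def blink_stats(blink_frames, fps, duration_s):
    """Blink frequency (blinks/min) and mean blink duration (s) from a
    boolean per-frame eye-closed signal."""
    closed = np.asarray(blink_frames, dtype=bool)
    # A blink starts wherever the signal switches from open to closed.
    starts = np.flatnonzero(np.diff(closed.astype(int)) == 1)
    n_blinks = len(starts)
    frequency = n_blinks / (duration_s / 60.0)
    mean_duration = closed.sum() / fps / n_blinks if n_blinks else 0.0
    return frequency, mean_duration

# Toy per-frame signal: eye closed on frames 2-3 and 7.
signal = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
print(blink_stats(signal, fps=30, duration_s=len(signal) / 30))

# Hypothetical per-surgeon summary data.
blink_freq = np.array([4.2, 5.8, 6.5, 9.1, 11.3, 7.4, 3.9, 12.0])
tlx_overall = np.array([58, 55, 49, 47, 42, 50, 60, 40])

r, p = pearsonr(blink_freq, tlx_overall)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```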
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... discretion. MSHA is required to perform mathematical computations based on published cost-of-living data and... altering the budgetary impact of entitlements or the rights of entitlement recipients, or raising novel...
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out by using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters by using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on a tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data collected from each of them. Results show a better agreement between volumes computed by the prism-based method and the spirometry (discrepancy of 2.23%, R² = 0.94) compared to the agreement between volumes computed by the conventional method and the spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in the breathing assessment via motion capture systems.
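The prism-based idea described above can be sketched as follows: each chest-wall prism sits between a triangle of markers and a fixed posterior reference plane, and the enclosed volume is the sum of the prism volumes, recomputed at each motion-capture frame. The marker coordinates, triangulation, and reference plane below are placeholders; the actual 89-marker layout and 82-prism definition are specific to the cited method:

```python
import numpy as np

def prism_volume(p1, p2, p3, z_ref=0.0):
    """Volume of the prism between a marker triangle and the plane z = z_ref.

    For a planar (linearly interpolated) top face this equals the area of the
    triangle projected onto the plane times the mean height of its vertices.
    """
    a = np.array([p2[0] - p1[0], p2[1] - p1[1]])
    b = np.array([p3[0] - p1[0], p3[1] - p1[1]])
    projected_area = 0.5 * abs(a[0] * b[1] - a[1] * b[0])
    mean_height = np.mean([p1[2] - z_ref, p2[2] - z_ref, p3[2] - z_ref])
    return projected_area * mean_height

def chest_volume(markers, triangles, z_ref=0.0):
    """Total chest-wall volume as the sum over all marker triangles."""
    return sum(prism_volume(markers[i], markers[j], markers[k], z_ref)
               for i, j, k in triangles)

# Toy example: four markers forming two triangles above the plane z = 0.
markers = np.array([[0, 0, 10.0], [10, 0, 12.0], [10, 10, 11.0], [0, 10, 9.0]])
triangles = [(0, 1, 2), (0, 2, 3)]
print(f"volume = {chest_volume(markers, triangles):.1f}")
```

Repeating this computation frame by frame yields the volume-versus-time signal from which respiratory parameters such as tidal volume and breathing rate can be derived.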
McKanna, James A; Pavel, Misha; Jimison, Holly
2010-11-13
Assessment of cognitive functionality is an important aspect of care for elders. Unfortunately, few tools exist to measure divided attention, the ability to allocate attention to different aspects of tasks. An accurate determination of divided attention would allow inference of generalized cognitive decline, as well as providing a quantifiable indicator of an important component of driving skill. We propose a new method for determining relative divided attention ability through unobtrusive monitoring of computer use. Specifically, we measure performance on a dual-task cognitive computer exercise as part of a health coaching intervention. This metric indicates whether the user has the ability to pay attention to both tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The monitoring of divided attention in a home environment is a key component of both the early detection of cognitive problems and for assessing the efficacy of coaching interventions.
Amen, Daniel G; Hanks, Chris; Prunella, Jill R; Green, Aisa
2007-01-01
The authors explored differences in regional cerebral blood flow in 11 impulsive murderers and 11 healthy comparison subjects using single photon emission computed tomography. The authors assessed subjects at rest and during a computerized go/no-go concentration task. Using statistical parametric mapping software, the authors performed voxel-by-voxel t tests to assess significant differences, making family-wise error corrections for multiple comparisons. Murderers were found to have significantly lower relative rCBF during concentration, particularly in areas associated with concentration and impulse control. These results indicate that nonemotionally laden stimuli may result in frontotemporal dysregulation in people predisposed to impulsive violence.
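A schematic sketch of the voxel-wise group comparison described above: a two-sample t test at every voxel, followed by a family-wise error correction. Here a simple Bonferroni threshold stands in for SPM's correction, spatial normalization and smoothing are omitted, and the rCBF maps are random placeholders:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical rCBF maps: (subjects, x, y, z) for the two groups.
rng = np.random.default_rng(0)
murderers = rng.normal(50, 5, size=(11, 16, 16, 16))
controls = rng.normal(52, 5, size=(11, 16, 16, 16))

# Two-sample t test at every voxel along the subject axis.
t_map, p_map = ttest_ind(murderers, controls, axis=0)

# Family-wise error control via a Bonferroni threshold over all voxels.
n_voxels = np.prod(p_map.shape)
significant = p_map < (0.05 / n_voxels)
print(f"{significant.sum()} voxels survive FWE correction")
```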
Computer-assisted liver graft steatosis assessment via learning-based texture analysis.
Moccia, Sara; Mattos, Leonardo S; Patrini, Ilaria; Ruperti, Michela; Poté, Nicolas; Dondero, Federica; Cauchy, François; Sepulveda, Ailton; Soubrane, Olivier; De Momi, Elena; Diaspro, Alberto; Cesaretti, Manuela
2018-05-23
Fast and accurate graft hepatic steatosis (HS) assessment is of primary importance for lowering liver dysfunction risks after transplantation. Histopathological analysis of biopsied liver is the gold standard for assessing HS, despite being invasive and time-consuming. Due to the short time available between liver procurement and transplantation, surgeons perform HS assessment through clinical evaluation (medical history, blood tests) and liver texture visual analysis. Despite visual analysis being recognized as challenging in the clinical literature, few efforts have been invested to develop computer-assisted solutions for HS assessment. The objective of this paper is to investigate the automatic analysis of liver texture with machine learning algorithms to automate the HS assessment process and offer support for the surgeon's decision process. Forty RGB images of forty different donors were analyzed. The images were captured with an RGB smartphone camera in the operating room (OR). Twenty images refer to livers that were accepted and 20 to discarded livers. Fifteen randomly selected liver patches were extracted from each image. Patch size was [Formula: see text]. This way, a balanced dataset of 600 patches was obtained. Intensity-based features (INT), histogram of local binary pattern ([Formula: see text]), and gray-level co-occurrence matrix ([Formula: see text]) were investigated. Blood-sample features (Blo) were included in the analysis, too. Supervised and semisupervised learning approaches were investigated for feature classification. The leave-one-patient-out cross-validation was performed to estimate the classification performance. With the best-performing feature set ([Formula: see text]) and semisupervised learning, the achieved classification sensitivity, specificity, and accuracy were 95, 81, and 88%, respectively. This research represents the first attempt to use machine learning and automatic texture analysis of RGB images from ubiquitous smartphone cameras for the task of graft HS assessment. The results suggest that this is a promising strategy to develop a fully automatic solution to assist surgeons in HS assessment inside the OR.
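A schematic sketch of the texture-feature pipeline described above, combining gray-level co-occurrence matrix statistics with a local binary pattern histogram per patch and evaluating a simple supervised classifier patient-wise. The study's exact feature definitions and its semisupervised learner are not reproduced, and the patches and labels are random placeholders; function names assume a recent scikit-image release, where the GLCM routines are graycomatrix/graycoprops:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def patch_features(gray_patch):
    """GLCM contrast/homogeneity plus an LBP histogram for one gray-level patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([[contrast, homogeneity], hist])

# Hypothetical data: a few gray-level patches per donor with accept/discard labels.
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 100, 100), dtype=np.uint8)
labels = rng.integers(0, 2, size=60)          # 1 = discarded, 0 = accepted
donors = np.repeat(np.arange(20), 3)          # donor id for each patch

X = np.array([patch_features(p) for p in patches])
# Leave-one-patient-out cross-validation keeps each donor's patches together.
scores = cross_val_score(SVC(kernel="rbf"), X, labels, groups=donors,
                         cv=LeaveOneGroupOut())
print(f"mean accuracy = {scores.mean():.2f}")
```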
Development of computer-based analytical tool for assessing physical protection system
NASA Astrophysics Data System (ADS)
Mardhi, Alim; Pengvanich, Phongphaeth
2016-01-01
Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts directed at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to model likely threat scenarios. Several tools, such as EASI and SAPE, are currently available and can be used immediately; however, for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network-based methodological approach for modelling adversary paths. The inputs are multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response functions. The tool has the capability to analyze the most critical path and to quantify the probability of effectiveness of the system as a performance measure.
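A minimal sketch of the network-based approach described above: model the facility as a graph whose edges carry detection probabilities and delay times, then find the adversary path with the lowest cumulative probability of detection. The layout, probabilities, and delays are invented for illustration and do not correspond to any real facility or to the cited tool's input format:

```python
import math
import networkx as nx

# Each edge: probability that the adversary is detected while traversing it,
# and the delay (seconds) the barrier imposes.
edges = [
    ("outside", "fence", {"p_detect": 0.30, "delay": 20}),
    ("outside", "gate", {"p_detect": 0.60, "delay": 60}),
    ("fence", "building_door", {"p_detect": 0.50, "delay": 90}),
    ("gate", "building_door", {"p_detect": 0.40, "delay": 30}),
    ("building_door", "vault", {"p_detect": 0.80, "delay": 300}),
]
G = nx.DiGraph()
G.add_edges_from(edges)

# Minimizing the product of non-detection probabilities is equivalent to a
# shortest path on -log(1 - p_detect) edge weights.
for u, v, data in G.edges(data=True):
    data["weight"] = -math.log(1.0 - data["p_detect"])

path = nx.shortest_path(G, "outside", "vault", weight="weight")
p_detect_path = 1.0 - math.exp(-nx.shortest_path_length(G, "outside", "vault",
                                                        weight="weight"))
total_delay = sum(G[u][v]["delay"] for u, v in zip(path, path[1:]))

print(f"most critical path: {path}")
print(f"cumulative detection probability: {p_detect_path:.2f}, delay: {total_delay}s")
```

The path found this way is the weakest route through the protection system; its cumulative detection probability and total delay can then be compared against response-force timelines as a performance measure.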
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA-enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort are presented.
Motta, Mario; Zhang, Shiwei
2017-11-14
We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.
Improving Conceptual Design for Launch Vehicles
NASA Technical Reports Server (NTRS)
Olds, John R.
1998-01-01
This report summarizes activities performed during the second year of a three-year cooperative agreement between NASA - Langley Research Center and Georgia Tech. Year 1 of the project resulted in the creation of a new Cost and Business Assessment Model (CABAM) for estimating the economic performance of advanced reusable launch vehicles, including non-recurring costs, recurring costs, and revenue. The current year (second year) activities were focused on the evaluation of automated, collaborative design frameworks (computation architectures or computational frameworks) for automating the design process in advanced space vehicle design. Consistent with NASA's new thrust area in developing and understanding Intelligent Synthesis Environments (ISE), the goals of this year's research efforts were to develop and apply computer integration techniques and near-term computational frameworks for conducting advanced space vehicle design. NASA - Langley (VAB) has taken a lead role in developing a web-based computing architecture within which the designer can interact with disciplinary analysis tools through a flexible web interface. The advantages of this approach are: 1) flexible access to the designer interface through a simple web browser (e.g. Netscape Navigator), 2) the ability to include existing 'legacy' codes, and 3) the ability to include distributed analysis tools running on remote computers. To date, VAB's internal emphasis has been on developing this test system for the planetary entry mission under the joint Integrated Design System (IDS) program with NASA - Ames and JPL. Georgia Tech's complementary goals this year were to: 1) examine an alternate 'custom' computational architecture for the three-discipline IDS planetary entry problem to assess its advantages and disadvantages relative to the web-based approach, and 2) develop and examine a web-based interface and framework for a typical launch vehicle design problem.
Berg, Britt-Isabelle; Dagassan-Berndt, Dorothea; Goldblum, David; Kunz, Christoph
2015-04-01
The aim of this study was to investigate the feasibility and effectiveness of cone-beam computed tomography (CBCT) in the planning, assessment, and follow-up of osteo-odonto-keratoprosthesis (OOKP). Six OOKP patients received a CBCT scan. CBCT scans were performed before and/or between ∼5 and 504 months after the primary OOKP intervention. Preoperative CBCT results were assessed with regard to the available teeth, and postoperative results were used to assess bone loss in 1 patient. Resorption of the osteo-odonto lamina was measured and graded. Five different measurements (I-V) were performed in the coronal and transversal views of the CBCT. Four CBCT scans were performed preoperatively and 4 postoperatively. The follow-up time of the patients ranged from ∼1 to 528 months. Imaging of the potential donor teeth resulted in accurate 3-dimensional visualization of the tooth-lamina-bone complex. CBCT was found to help in the preoperative decision-making process (diameter of the optical implant) and to enable accurate postoperative evaluation of the bone volume and resorption zones of the OOKP. Bone loss could be measured precisely and, in the completed cases, showed an average loss of 20.2%. The use of CBCT simplifies the preoperative decision-making and ordering process. It also helps in determining the postoperative structure and resorption of the prosthesis.
ERIC Educational Resources Information Center
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris
2015-01-01
The grades K-2 Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive alphabetic and oral language screening tasks that provide a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the word reading (in kindergarten) or reading comprehension (in grades…
ERIC Educational Resources Information Center
Swan, Gerry; Mazur, Joan
2011-01-01
Although the term data-driven decision making (DDDM) is relatively new (Moss, 2007), the underlying concept of DDDM is not. For example, the practices of formative assessment and computer-managed instruction have historically involved the use of student performance data to guide what happens next in the instructional sequence (Morrison, Kemp, &…
ERIC Educational Resources Information Center
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris
2015-01-01
The Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 50th percentile) on the reading comprehension…
Additional Support for the Information Systems Analyst Exam as a Valid Program Assessment Tool
ERIC Educational Resources Information Center
Carpenter, Donald A.; Snyder, Johnny; Slauson, Gayla Jo; Bridge, Morgan K.
2011-01-01
This paper presents a statistical analysis to support the notion that the Information Systems Analyst (ISA) exam can be used as a program assessment tool in addition to measuring student performance. It compares ISA exam scores earned by students in one particular Computer Information Systems program with scores earned by the same students on the…
NASA Technical Reports Server (NTRS)
Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.
1978-01-01
Two parabolic flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and the programs are then applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as initial data by the programs; surveys at a downstream station provided the comparison needed to assess program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate and more efficient in both computer storage and computing time than COMOC.
Brumberg, Jonathan S; Nguyen, Anh; Pitt, Kevin M; Lorenz, Sean D
2018-01-31
We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady-state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. All participants were able to interact with the BCI, and the highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long-term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs. Implications for Rehabilitation: Brain-computer interfaces using the steady-state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access to language and literacy for individuals with neuromotor impairment. Comprehensive assessments are needed to fully understand the sensory, motor, and cognitive abilities of individuals who may use brain-computer interfaces, for proper feature matching and selection of the most appropriate device, including optimization of device layouts and control paradigms. Oculomotor impairments negatively impact brain-computer interfaces that use the steady-state visually evoked potential, but modifications that place interface stimuli and communication items in the intact visual field can improve successful outcomes.
Vairavan, S; Ulusar, U D; Eswaran, H; Preissl, H; Wilson, J D; Mckelvey, S S; Lowery, C L; Govindan, R B
2016-02-01
We propose a novel computational approach to automatically identify fetal heart rate patterns (fHRPs), which are reflective of sleep/awake states. By combining these patterns with the presence or absence of movements, a fetal behavioral state (fBS) was determined. Expert scores were used as the gold standard, and objective thresholds for the detection procedure were obtained using receiver operating characteristic (ROC) analysis. To assess performance, the intraclass correlation was computed between the proposed approach and the mutually agreed expert scores. The detected fHRPs were then associated with their corresponding fBS based on the fetal movement obtained from fetal magnetocardiographic (fMCG) signals. This approach may aid clinicians in objectively assessing the fBS and monitoring fetal wellbeing. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sarojkumar, K.; Krishna, S.
2016-08-01
Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies that are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
Recent advances to obtain real - Time displacements for engineering applications
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
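A minimal sketch of the drift-ratio computation the abstract refers to: the relative displacement of two instrumented levels divided by the story height, compared against pre-selected damage thresholds. The threshold values and displacement signals below are placeholders, not values from the paper.

```python
import numpy as np

def drift_ratios(disp_upper, disp_lower, story_height):
    """Inter-story drift ratio time history from two displacement records (same units)."""
    return (np.asarray(disp_upper) - np.asarray(disp_lower)) / story_height

def damage_state(drift, thresholds=(0.002, 0.005, 0.015)):
    """Map a peak drift ratio to a damage-stage index (threshold values are placeholders)."""
    return int(np.searchsorted(thresholds, abs(drift)))

# Example with synthetic displacement records (metres) for a 3.5 m story
t = np.linspace(0, 10, 1000)
upper = 0.020 * np.sin(2 * np.pi * 1.0 * t)
lower = 0.012 * np.sin(2 * np.pi * 1.0 * t - 0.3)
peak_drift = np.max(np.abs(drift_ratios(upper, lower, 3.5)))
print(f"peak drift ratio = {peak_drift:.4f}, damage stage = {damage_state(peak_drift)}")
```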
NASA Astrophysics Data System (ADS)
Pawar, R.
2016-12-01
Risk assessment and risk management of engineered geologic CO2 storage systems is an area of active investigation. The potential geologic CO2 storage systems currently under consideration are inherently heterogeneous and have limited to no characterization data. Effective risk management decisions to ensure safe, long-term CO2 storage require assessing and quantifying risks while taking into account the uncertainties in a storage site's characteristics. The key decisions are typically related to the definition of the area of review, an effective monitoring strategy and monitoring duration, the potential for leakage and associated impacts, etc. A quantitative methodology for predicting a sequestration site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale geologic storage projects, where projects will require quantitative assessments of potential long-term liabilities. An integrated assessment modeling (IAM) paradigm, which treats a geologic CO2 storage site as a system made up of various linked subsystems, can be used to predict long-term performance. The subsystems include the storage reservoir, seals, potential leakage pathways (such as wellbores and natural fractures/faults), and receptors (such as shallow groundwater aquifers). CO2 movement within each of the subsystems and the resulting interactions are captured through reduced order models (ROMs). The ROMs capture the complex physical and chemical interactions resulting from CO2 movement while being computationally extremely efficient. The computational efficiency allows for performing the Monte Carlo simulations necessary for quantitative probabilistic risk assessment. We have used the IAM to predict long-term performance of geologic CO2 sequestration systems and to answer questions related to the probability of CO2 leakage through wellbores, the impact of CO2/brine leakage into a shallow aquifer, etc. Answers to such questions are critical in making key risk management decisions. A systematic uncertainty quantification approach can be used to understand how uncertain parameters associated with different subsystems (e.g., reservoir permeability, wellbore cement permeability, wellbore density) impact the overall site performance predictions.
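To make the probabilistic machinery concrete, the sketch below samples uncertain reservoir and wellbore parameters, pushes each sample through fast surrogate functions, and estimates a leakage probability by Monte Carlo. The two "ROMs" here are toy placeholder functions with invented parameters, not the actual IAM components.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_rom(perm, injection_rate):
    """Toy surrogate: pressure buildup below the caprock (placeholder, not a real ROM)."""
    return injection_rate / perm

def wellbore_rom(pressure, cement_perm):
    """Toy surrogate: CO2 leakage rate through a legacy wellbore (placeholder)."""
    return cement_perm * max(pressure - 5.0, 0.0)

n = 20_000
perm = rng.lognormal(mean=0.0, sigma=0.5, size=n)           # uncertain reservoir permeability
cement_perm = rng.lognormal(mean=-3.0, sigma=1.0, size=n)   # uncertain wellbore cement permeability

leak = np.array([wellbore_rom(reservoir_rom(k, injection_rate=10.0), c)
                 for k, c in zip(perm, cement_perm)])
prob_leakage = np.mean(leak > 0.1)   # probability that leakage exceeds a chosen threshold
print(f"P(leakage > threshold) ~ {prob_leakage:.3f}")
```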
Dust in the wind: challenges for urban aerodynamics
NASA Astrophysics Data System (ADS)
Boris, Jay P.
2007-04-01
The fluid dynamics of airflow through a city controls the transport and dispersion of airborne contaminants. This is urban aerodynamics, not meteorology. The average flow, large-scale fluctuations, and turbulence are closely coupled to the building geometry. Buildings create large "rooster-tail" wakes; there are systematic fountain flows up the backs of tall buildings; and dust in the wind can move perpendicular to or even against the locally prevailing wind. Requirements for better prediction accuracy demand time-dependent, three-dimensional CFD computations that include solar heating and buoyancy, complete landscape and building geometry specification including foliage, and realistic wind fluctuations. This fundamental prediction capability is necessary to assess urban visibility and line-of-sight sensor performance in street canyons and rugged terrain. Computing urban aerodynamics accurately is clearly a time-dependent High Performance Computing (HPC) problem. In an emergency, on the other hand, prediction technology to assess crisis information, sensor performance, and obscured line-of-sight propagation in the face of industrial spills, transportation accidents, or terrorist attacks has very tight time requirements that suggest simple approximations, which tend to produce inaccurate results. In the past we have had to choose one or the other: a fast, inaccurate model or a slow, accurate model. Using new fluid-dynamic principles, an urban-oriented emergency assessment system called CT-Analyst® was invented that solves this dilemma. It produces HPC-quality results for airborne contaminant scenarios nearly instantly and has unique new capabilities suited to sensor optimization. This presentation treats the design and use of CT-Analyst and discusses the developments needed for widespread use with advanced sensor and communication systems.
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
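As an illustration of the kind of wear law such models evaluate, the sketch below accumulates an Archard-type wear increment whose wear factor depends on the cross-shear ratio. The constants, units, and the linear CS dependence are placeholders and do not reproduce the published "A/A+B" formulation or any specific algorithm from the study.

```python
import numpy as np

def wear_depth_increment(contact_pressure, sliding_dist, cross_shear,
                         k_uni=1e-7, k_cs=3e-7):
    """Archard-type linear wear increment with a cross-shear (CS) dependent wear factor.
    k_uni, k_cs and the linear CS dependence are illustrative placeholders."""
    wear_factor = k_uni + k_cs * cross_shear   # mm of wear per MPa per mm of sliding (illustrative)
    return wear_factor * contact_pressure * sliding_dist

# Accumulate wear at one contact point over a gait cycle discretised into 100 steps
pressures = np.full(100, 5.0)        # MPa
slides = np.full(100, 0.2)           # mm of sliding per step
cs = np.linspace(0.0, 0.4, 100)      # cross-shear ratio per step
total_wear = np.sum(wear_depth_increment(pressures, slides, cs))
print(f"accumulated linear wear ~ {total_wear:.2e} mm per cycle")
```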
Spontaneous Movements of a Computer Mouse Reveal Egoism and In-group Favoritism
Maliszewski, Norbert; Wojciechowski, Łukasz; Suszek, Hubert
2017-01-01
The purpose of the project was to assess whether the first spontaneous movements of a computer mouse, when making an assessment on a scale presented on the screen, may express a respondent’s implicit attitudes. In Study 1, the altruistic behaviors of 66 students were assessed. The students were led to believe that the task they were performing was also being performed by another person and they were asked to distribute earnings between themselves and the partner. The participants performed the tasks under conditions with and without distractors. With the distractors, in the first few seconds spontaneous mouse movements on the scale expressed a selfish distribution of money, while later the movements gravitated toward more altruism. In Study 2, 77 Polish students evaluated a painting by a Polish/Jewish painter on a scale. They evaluated it under conditions of full or distracted cognitive abilities. Spontaneous movements of the mouse on the scale were analyzed. In addition, implicit attitudes toward both Poles and Jews were measured with the Implicit Association Test (IAT). A significant association between implicit attitudes (IAT) and spontaneous evaluation of images using a computer mouse was observed in the group with the distractor. The participants with strong implicit in-group favoritism of Poles revealed stronger preference for the Polish painter’s work in the first few seconds of mouse movement. Taken together, these results suggest that spontaneous mouse movements may reveal egoism (in-group favoritism), i.e., processes that were not observed in the participants’ final decisions (clicking on the scale). PMID:28163689
Parsons, Thomas D; McMahan, Timothy; Kane, Robert
2018-01-01
Clinical neuropsychologists have long underutilized computer technologies for neuropsychological assessment. Given the rapid advances in technology (e.g. virtual reality, tablets, iPhones) and the increased accessibility in the past decade, there is an ongoing need to identify optimal specifications for advanced technologies while minimizing potential sources of error. Herein, we discuss concerns raised by a joint American Academy of Clinical Neuropsychology/National Academy of Neuropsychology position paper. Moreover, we proffer parameters for the development and use of advanced technologies in neuropsychological assessments. We aim first to describe software and hardware configurations that can impact a computerized neuropsychological assessment. This is followed by a description of best practices for developers and practicing neuropsychologists to minimize error in neuropsychological assessments using advanced technologies. We also discuss the relevance of weighing potential computer error against possible errors associated with traditional testing. Throughout, there is an emphasis on the need for developers to provide bench test results for their software's performance on various devices and minimum specifications (documented in manuals) for the hardware (e.g. computer, monitor, input devices) in the neuropsychologist's practice. Advances in computerized assessment platforms offer both opportunities and challenges. The challenges can appear daunting but are manageable, requiring informed consumers who can appreciate the issues and ask pertinent questions in evaluating their options.
Inter-rater reliability of an observation-based ergonomics assessment checklist for office workers.
Pereira, Michelle Jessica; Straker, Leon Melville; Comans, Tracy Anne; Johnston, Venerina
2016-12-01
To establish the inter-rater reliability of an observation-based ergonomics assessment checklist for computer workers. A 37-item (38-item if a laptop was part of the workstation) comprehensive observational ergonomics assessment checklist, comparable to government guidelines and up to date with empirical evidence, was developed. Two trained practitioners assessed full-time office workers performing their usual computer-based work and evaluated the suitability of the workstations used. Practitioners assessed each participant consecutively. The order of assessors was randomised, and the second assessor was blinded to the findings of the first. Unadjusted kappa coefficients between the raters were obtained for the overall checklist and for subsections formed from question items relevant to specific workstation equipment. Twenty-seven office workers were recruited. Inter-rater reliability between the two trained practitioners was moderate to good for all except one checklist component. Overall, this checklist has mostly moderate to good reliability between two trained practitioners. Practitioner Summary: This reliable ergonomics assessment checklist for computer workers was designed using accessible government guidelines and supplemented with up-to-date evidence. Employers in Queensland (Australia) can fulfil legislative requirements by using this reliable checklist to identify and subsequently address potential risk factors for work-related injury and provide a safe working environment.
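For reference, an unadjusted kappa coefficient between two raters can be computed as in the following sketch; the example scores are invented, not the study data.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Unadjusted Cohen's kappa for two raters' categorical item scores."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Example: two practitioners scoring one checklist item for 27 workers (1 = suitable, 0 = not)
rater1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]
rater2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```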
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In this work, the geometric accuracy of the two most used, nearly global, free DSMs (SRTM and ASTER) has been evaluated over the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land cover, and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. DSM accuracy has been evaluated through computation of standard statistical parameters, both at the global scale (considering the whole state/region) and as a function of terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges, for SRTM, from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the analysis shows better accuracy for SRTM in flat areas, whereas the ASTER GDEM is more reliable in steep areas, where the slopes increase. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
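A possible way to reproduce the reported statistics, assuming co-registered raster grids: compute the standard deviation and NMAD of the elevation differences, both overall and per slope class. The slope-class boundaries below are illustrative, not the classes used in the study.

```python
import numpy as np

def accuracy_stats(dsm, reference):
    """Standard deviation and NMAD of elevation differences (same grid, metres)."""
    diff = np.asarray(dsm, float) - np.asarray(reference, float)
    nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return diff.std(), nmad

def stats_by_slope(dsm, reference, slope_deg, bins=(0, 5, 15, 30, 90)):
    """Compute the same statistics separately for each slope class (bins in degrees)."""
    dsm, reference = np.asarray(dsm, float), np.asarray(reference, float)
    slope_deg = np.asarray(slope_deg, float)
    results = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (slope_deg >= lo) & (slope_deg < hi)
        if mask.any():
            results[f"{lo}-{hi} deg"] = accuracy_stats(dsm[mask], reference[mask])
    return results
```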
Analytical evaluation of ILM sensors. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Kirk, R. J.
1975-01-01
The applicability of various sensing concepts to independent landing monitor systems was analyzed. Microwave landing system (MLS) accuracy requirements are presented along with a description of MLS airborne equipment. Computer programs developed during the analysis are described and include: a mathematical computer model for use in the performance assessment of reconnaissance sensor systems; a theoretical formulation of electromagnetic scattering to generate data at high incidence angles; atmospheric attenuation of microwaves; and microwave radiometry programs.
YF-12 cooperative airframe/propulsion control system program, volume 1
NASA Technical Reports Server (NTRS)
Anderson, D. L.; Connolly, G. F.; Mauro, F. M.; Reukauf, P. J.; Marks, R. (Editor)
1980-01-01
Several YF-12C airplane analog control systems were converted to a digital system. Included were the air data computer, autopilot, inlet control system, and autothrottle systems. This conversion was performed to allow assessment of digital technology applications to supersonic cruise aircraft. The digital system was composed of a digital computer and specialized interface unit. A large scale mathematical simulation of the airplane was used for integration testing and software checkout.
Network survivability performance (computer diskette)
NASA Astrophysics Data System (ADS)
1993-11-01
File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.
Sullivan, F.; Mitchell, E.
1995-01-01
OBJECTIVE--To review findings from studies of the influence of desktop computers on primary care consultations. DESIGN--Systematic review of world reports from 1984 to 1994. SETTING--The computerised catalogues of Medline, BIDS, and GPlit were searched, as well as conference proceedings, books, bibliographies, and references in books and journal articles. SUBJECTS--30 papers met the inclusion criteria and were included for detailed review. INTERVENTIONS--A validated scheme for assessing methodological adequacy was used to score each paper. MAIN OUTCOME MEASURES--Papers were rated on sample formation, baseline differences, unit of allocation, outcome measures, and follow up. Differences in outcomes were also recorded. RESULTS--Four of the six papers dealing with the consultation process showed that consultations took longer. Doctor initiated and "medical" content of consultations increased at the expense of a reduction in patient initiated and "social" content. Each of the 21 studies which looked at clinician performance showed an improvement when a computer was used (from 8% to 50%, with better results for single preventive measures). Only one of the three studies looking at patient outcomes showed an improvement (diastolic blood pressure control 5 mm Hg better after one year, with fewer doctor-patient consultations). CONCLUSIONS--Using a computer in the consultation may help improve clinician performance but may increase the length of the consultation. More studies are needed to assess the effects on patient outcomes of using a computer in consultations. PMID:7580494
Nguyen, Tan B.; Wang, Shijun; Anugu, Vishal; Rose, Natalie; McKenna, Matthew; Petrick, Nicholas; Burns, Joseph E.
2012-01-01
Purpose: To assess the diagnostic performance of distributed human intelligence for the classification of polyp candidates identified with computer-aided detection (CAD) for computed tomographic (CT) colonography. Materials and Methods: This study was approved by the institutional Office of Human Subjects Research. The requirement for informed consent was waived for this HIPAA-compliant study. CT images from 24 patients, each with at least one polyp of 6 mm or larger, were analyzed by using CAD software to identify 268 polyp candidates. Twenty knowledge workers (KWs) from a crowdsourcing platform labeled each polyp candidate as a true or false polyp. Two trials involving 228 KWs were conducted to assess reproducibility. Performance was assessed by comparing the area under the receiver operating characteristic curve (AUC) of KWs with the AUC of CAD for polyp classification. Results: The detection-level AUC for KWs was 0.845 ± 0.045 (standard error) in trial 1 and 0.855 ± 0.044 in trial 2. These were not significantly different from the AUC for CAD, which was 0.859 ± 0.043. When polyp candidates were stratified by difficulty, KWs performed better than CAD on easy detections; AUCs were 0.951 ± 0.032 in trial 1, 0.966 ± 0.027 in trial 2, and 0.877 ± 0.048 for CAD (P = .039 for trial 2). KWs who participated in both trials showed a significant improvement in performance going from trial 1 to trial 2; AUCs were 0.759 ± 0.052 in trial 1 and 0.839 ± 0.046 in trial 2 (P = .041). Conclusion: The performance of distributed human intelligence is not significantly different from that of CAD for colonic polyp classification. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11110938/-/DC1 PMID:22274839
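The detection-level AUC reported above can be computed directly from candidate scores and truth labels via the rank-sum statistic, as in this sketch; the scores shown are invented, not the study data.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    scores: confidence that a candidate is a true polyp; labels: 1 = true polyp, 0 = false."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairs where a true polyp outranks a false one
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: knowledge workers' votes averaged into a score per polyp candidate
scores = [0.95, 0.40, 0.75, 0.10, 0.65, 0.30, 0.85, 0.20]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
print(f"AUC = {auc(scores, labels):.3f}")
```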
NASA Astrophysics Data System (ADS)
Chiang, Yen-Sheng
2015-11-01
Inequality measures are widely used in both academia and the public media to help us understand how incomes and wealth are distributed. They can be used to assess the distribution of a whole society (global inequality) as well as inequality within actors' referent networks (local inequality). How different is local inequality from global inequality? Formalizing the structure of reference groups as a network, the paper conducted a computational experiment to see how the structure of complex networks influences the difference between global and local inequality as assessed by a selection of inequality measures. It was found that local inequality tends to be higher than global inequality when the population size is large, the network is dense and heterophilously assorted, and the income distribution is less dispersed. The implications of the simulation findings are discussed.
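A minimal sketch of the global-versus-local comparison using one common inequality measure, the Gini coefficient: compute it once over all incomes and once over each actor's referent network. The network and incomes below are toy values, and the Gini coefficient is only one of the measures the paper considers.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def local_gini(incomes, adjacency):
    """Gini of each actor's referent network (the actor plus its neighbours)."""
    incomes = np.asarray(incomes, float)
    out = []
    for i, row in enumerate(adjacency):
        group = np.concatenate(([incomes[i]], incomes[np.asarray(row, bool)]))
        out.append(gini(group))
    return np.array(out)

# Toy example: 5 actors on a small network
incomes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]])
print("global Gini:", round(gini(incomes), 3))
print("mean local Gini:", round(local_gini(incomes, adj).mean(), 3))
```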
Microwave landing system autoland system analysis
NASA Technical Reports Server (NTRS)
Feather, J. B.; Craven, B. K.
1991-01-01
The objective was to investigate the ability of present day aircraft equipped with automatic flight control systems to fly advanced Microwave Landing Systems (MLS) approaches. The tactical approach used to achieve this objective included reviewing the design and autoland operation of the MD-80 aircraft, simulating the MLS approaches using a batch computer program, and assessing the performance of the autoland system from computer generated data. The results showed changes were required to present Instrument Landing System (ILS) procedures to accommodate the new MLS curved paths. It was also shown that in some cases, changes to the digital flight guidance systems would be required so that an autoland could be performed.
Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition
NASA Astrophysics Data System (ADS)
Alouges, François; Aussal, Matthieu; Parolin, Emile
2017-07-01
This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
Cabasse, C; Marie-Cousin, A; Huet, A; Sixou, J L
2015-03-01
Anesthetizing MIH (Molar and Incisor Hypomineralisation) teeth is one of the major challenges in paediatric dentistry. Computer-assisted intraosseous (IO) injection (CAIO) of 4% articaine with 1:200,000 epinephrine (Alphacaine, Septodont) has been shown to be an efficient way to anesthetize teeth in children. The aim of this study was to assess the efficacy of this method with MIH teeth. This preliminary study was performed using the Quick Sleeper system (Dental Hi Tec, Cholet, France), which allows computer-controlled rotation of the needle to penetrate the bone and computer-controlled injection of the anaesthetic solution. Thirty-nine patients of the department of Paediatric Dentistry were included, allowing 46 sessions (including 32 mandibular first permanent molars) to be assessed. CAIO showed efficacy in 93.5% (43/46) of cases. The 3 failures were due to the impossibility of reaching the spongy bone (1) or of achieving anaesthesia (2). This prospective study confirms that CAIO anaesthesia is a promising method to anesthetize teeth with MIH that could therefore be routinely used by trained practitioners.
Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G
2014-11-20
The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. In this specific publication, the main focus is the demonstration of a risk assessment workflow, including a computer simulation for the generation of mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned under consideration of impacts on a potentially critical quality attribute, i.e., coating mass uniformity. Based on the computer simulation data, a process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of process parameters and a risk priority evaluation of failure modes. The factor for the quantitative reassessment of criticality and risk priority is the coefficient of variation, which represents coating mass uniformity. The major conclusion drawn from this work is a successful demonstration of the integration of computer simulation in the risk management workflow, leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
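To make the quantitative criterion concrete, the sketch below computes the coefficient of variation of simulated per-tablet coating masses and a classic FMEA risk priority number. The acceptance threshold and the severity/occurrence/detectability scores are placeholders, not the study's criteria.

```python
import numpy as np

def coefficient_of_variation(coating_mass):
    """Coating mass uniformity measure: relative standard deviation across tablets."""
    m = np.asarray(coating_mass, float)
    return m.std(ddof=1) / m.mean()

def risk_priority_number(severity, occurrence, detectability):
    """Classic FMEA risk priority number (1-10 scales; scoring scheme illustrative)."""
    return severity * occurrence * detectability

# Simulated per-tablet coating masses (mg) from one parameter setting
masses = np.random.default_rng(1).normal(loc=10.0, scale=0.6, size=500)
cv = coefficient_of_variation(masses)
print(f"CV = {cv:.3f}", "->", "acceptable" if cv < 0.05 else "flag for risk review")
print("RPN =", risk_priority_number(severity=7, occurrence=4, detectability=3))
```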
Binding-Site Assessment by Virtual Fragment Screening
Huang, Niu; Jacobson, Matthew P.
2010-01-01
The accurate prediction of protein druggability (propensity to bind high-affinity drug-like small molecules) would greatly benefit the fields of chemical genomics and drug discovery. We have developed a novel approach to quantitatively assess protein druggability by computationally screening a fragment-like compound library. In analogy to NMR-based fragment screening, we dock ∼11000 fragments against a given binding site and compute a computational hit rate based on the fraction of molecules that exceed an empirically chosen score cutoff. We perform a large-scale evaluation of the approach on four datasets, totaling 152 binding sites. We demonstrate that computed hit rates correlate with hit rates measured experimentally in a previously published NMR-based screening method. Secondly, we show that the in silico fragment screening method can be used to distinguish known druggable and non-druggable targets, including both enzymes and protein-protein interaction sites. Finally, we explore the sensitivity of the results to different receptor conformations, including flexible protein-protein interaction sites. Besides its original aim of assessing the druggability of different protein targets, this method could be used to identify druggable conformations of flexible binding sites for lead discovery and to suggest strategies for growing or joining initial fragment hits to obtain more potent inhibitors. PMID:20404926
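A bare-bones version of the computational hit-rate idea: dock a fragment library against a site, then report the fraction of scores beyond an empirically chosen cutoff. The score distribution, sign convention, and cutoff below are invented for illustration.

```python
import numpy as np

def druggability_hit_rate(docking_scores, cutoff):
    """Fraction of docked fragments whose score passes an empirically chosen cutoff.
    Assumes more negative scores are better, as in many docking programs."""
    scores = np.asarray(docking_scores, float)
    return np.mean(scores <= cutoff)

# Toy example: scores for an ~11000-fragment library docked against one binding site
rng = np.random.default_rng(2)
scores = rng.normal(loc=-20.0, scale=6.0, size=11_000)
print(f"hit rate = {druggability_hit_rate(scores, cutoff=-30.0):.3%}")
```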
A principled approach to the measurement of situation awareness in commercial aviation
NASA Technical Reports Server (NTRS)
Tenney, Yvette J.; Adams, Marilyn Jager; Pew, Richard W.; Huggins, A. W. F.; Rogers, William H.
1992-01-01
The issue of how to support situation awareness among crews of modern commercial aircraft is becoming especially important with the introduction of automation in the form of sophisticated flight management computers and expert systems designed to assist the crew. In this paper, cognitive theories are discussed that have relevance for the definition and measurement of situation awareness. These theories suggest that comprehension of the flow of events is an active process that is limited by the modularity of attention and memory constraints, but can be enhanced by expert knowledge and strategies. Three implications of this perspective for assessing and improving situation awareness are considered: (1) Scenario variations are proposed that tax awareness by placing demands on attention; (2) Experimental tasks and probes are described for assessing the cognitive processes that underlie situation awareness; and (3) The use of computer-based human performance models to augment the measures of situation awareness derived from performance data is explored. Finally, two potential example applications of the proposed assessment techniques are described, one concerning spatial awareness using wide field of view displays and the other emphasizing fault management in aircraft systems.
NASA Technical Reports Server (NTRS)
Heath, Bruce E.; Khan, M. Javed; Rossi, Marcia; Ali, Syed Firasat
2005-01-01
The rising cost of flight training and the low cost of powerful computers have resulted in increasing use of PC-based flight simulators. This has prompted FAA standards regulating such use and allowing aspects of training on simulators meeting these standards to be substituted for flight time. However, the FAA regulations require an authorized flight instructor as part of the training environment. Thus, while costs associated with flight time have been reduced, the cost associated with the need for a flight instructor still remains. The obvious area of research, therefore, has been to develop intelligent simulators. However, the two main challenges of such attempts have been training strategies and assessment. The research reported in this paper was conducted to evaluate various performance metrics of a straight-in landing approach by 33 novice pilots flying a light single engine aircraft simulation. These metrics were compared to assessments of these flights by two flight instructors to establish a correlation between the two techniques in an attempt to determine a composite performance metric for this flight maneuver.
The Application of Surgical Navigation in the Treatment of Temporomandibular Joint Ankylosis.
Sun, Guowen; Lu, Mingxing; Hu, Qingang
2015-11-01
The purpose of this study was to assess the safety and accuracy of surgical navigation technology in the resection of severe ankylosis between the mandibular condyle and the middle cranial fossa. The computed tomography scan data were transferred to a Windows-based computer workstation, and the patient's individual anatomy was assessed in multiplanar views at the workstation. During the operation, the patient and the virtual image were matched by individual registration with reference points set on the skull bone surface and the teeth. Real-time navigation could then be performed. The acquisition of the data sets was uncomplicated, and image quality was sufficient to assess the operative result in 2 cases. Both operations were performed successfully with the guidance of real-time navigation. The application of surgical navigation has enhanced the safety and accuracy of surgery for bony ankylosis of the temporomandibular joint. The use of surgical navigation resulted in accurate and safe surgical excision of the ankylosed skull base tissue.
Grantcharov, T P; Bardram, L; Funch-Jensen, P; Rosenberg, J
2003-07-01
The impact of gender and hand dominance on operative performance may be a subject of prejudice among surgeons, reportedly leading to discrimination and lack of professional promotion. However, very little objective evidence is available on the matter. This study was conducted to identify factors that influence surgeons' performance, as measured by a virtual reality computer simulator for laparoscopic surgery. The study included 25 surgical residents who had limited experience with laparoscopic surgery, having performed fewer than 10 laparoscopic cholecystectomies. The participants were registered according to their gender, hand dominance, and experience with computer games. All of the participants performed 10 repetitions of the six tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) within 1 month. Assessment of laparoscopic skills was based on three parameters measured by the simulator: time, errors, and economy of hand movement. Differences in performance existed between the compared groups. Men completed the tasks in less time than women (p = 0.01, Mann-Whitney test), but there was no statistical difference between the genders in the number of errors and unnecessary movements. Individuals with right-hand dominance performed fewer unnecessary movements (p = 0.045, Mann-Whitney test), and there was a trend toward better results in terms of time and errors among the residents with right-hand dominance than among those with left-hand dominance. Users of computer games made fewer errors than nonusers (p = 0.035, Mann-Whitney test). The study provides objective evidence of a difference in laparoscopic skills between surgeons of differing gender, hand dominance, and computer experience. These results may influence the future development of training programs for laparoscopic surgery. They also pose a challenge to individuals responsible for the selection and training of residents.
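Group comparisons of this kind are typically run with the Mann-Whitney test the authors cite; a small sketch using SciPy follows, with invented task-time data rather than the study's measurements.

```python
from scipy.stats import mannwhitneyu

# Task completion times (seconds) on one simulator task, split by prior computer-game use
gamers     = [41, 38, 45, 36, 40, 43, 39, 37, 42, 35, 44, 38]
non_gamers = [48, 52, 46, 55, 49, 51, 47, 53, 50, 45, 54, 56, 49]

stat, p = mannwhitneyu(gamers, non_gamers, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")   # p < 0.05 would indicate a group difference
```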
Wild, Katherine V.; Mattek, Nora; Maxwell, Shoshana A.; Dodge, Hiroko H.; Jimison, Holly B.; Kaye, Jeffrey A.
2012-01-01
Background This study examines differences in computer related self-efficacy and anxiety in subgroups of older adults, and changes in those measures following exposure to a systematic training program and subsequent computer use. Methods Participants were volunteers in the Intelligent Systems for Assessment of Aging Changes Study (ISAAC) carried out by the Oregon Center for Aging and Technology. Participants were administered two questionnaires prior to training and again one year later, related to computer self-efficacy and anxiety. Continuous recording of computer use was also assessed for a subset of participants. Results Baseline comparisons by gender, age, education, living arrangement, and computer proficiency, but not cognitive status, yielded significant differences in confidence and anxiety related to specific aspects of computer use. At one-year follow-up, participants reported less anxiety and greater confidence. However, the benefits of training and exposure varied by group and task. Comparisons based on cognitive status showed that the cognitively intact participants benefited more from training and/or experience with computers than did participants with Mild Cognitive Impairment (MCI), who after one year continued to report less confidence and more anxiety regarding certain aspects of computer use. Conclusion After one year of consistent computer use, cognitively intact participants in this study reported reduced levels of anxiety and increased self-confidence in their ability to perform specific computer tasks. Participants with MCI at baseline were less likely to demonstrate increased efficacy or confidence than their cognitively intact counterparts. PMID:23102124
Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors
Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran
2015-01-01
Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443
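The reported scale reliability can be computed with Cronbach's alpha; a small sketch follows, with invented item scores (the CPQ items themselves are not reproduced).

```python
import numpy as np

def cronbachs_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = questionnaire items."""
    X = np.asarray(item_scores, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy example: 6 respondents answering 4 items on a 1-5 proficiency scale
scores = [[5, 4, 5, 4],
          [2, 2, 1, 2],
          [4, 5, 4, 4],
          [3, 3, 2, 3],
          [1, 2, 1, 1],
          [5, 5, 4, 5]]
print(f"alpha = {cronbachs_alpha(scores):.2f}")
```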
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
A computer analysis of the RF performance of a ground-mounted, air-supported radome
NASA Astrophysics Data System (ADS)
Punnett, M. B.; Joy, E. B.
Several reports and actual operating experience have highlighted the degradation of RF performance that can occur when SSR or IFF antennas are mounted above a primary search antenna within metal space frame or dielectric space frame radomes. These effects are usually attributed to both the high incidence angles and the sensitivity of the low-gain antennas to sidelobe changes caused by scattered energy. Although it has been widely accepted that thin-membrane radomes would provide superior performance for this application, there has been little supporting documentation. A plane-wave-spectrum (PWS) computer-based radome analysis was conducted to assess the performance of a specific air-supported radome for the SSR application. In conducting the analysis, a mathematical model of a modern SSR antenna was combined with a model of an existing Birdair radome design.
Enhancing self-report assessment of PTSD: development of an item bank.
Del Vecchio, Nicole; Elwy, A Rani; Smith, Eric; Bottonari, Kathryn A; Eisen, Susan V
2011-04-01
The authors report results of work to enhance self-report posttraumatic stress disorder (PTSD) assessment by developing an item bank for use in a computer-adapted test. Computer-adapted tests have great potential to decrease the burden of PTSD assessment and outcomes monitoring. The authors conducted a systematic literature review of PTSD instruments, created a database of items, performed qualitative review and readability analysis, and conducted cognitive interviews with veterans diagnosed with PTSD. The systematic review yielded 480 studies in which 41 PTSD instruments comprising 993 items met inclusion criteria. The final PTSD item bank includes 104 items representing each of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association [APA], 1994), PTSD symptom clusters (reexperiencing, avoidance, and hyperarousal), and 3 additional subdomains (depersonalization, guilt, and sexual problems) that expanded the assessment item pool. Copyright © 2011 International Society for Traumatic Stress Studies.
The Influence of Juggling on Mental Rotation Performance in Children
ERIC Educational Resources Information Center
Jansen, Petra; Lange, Leonie F.; Heil, Martin
2011-01-01
Study aim: To assess the influence of juggling training on mental rotation performance in children. Material and methods: Two groups of girls aged 6-14 years were studied: experimental (EG; n = 26) and control (CG; n = 24). All girls solved a mental rotation task with 3-D block figures on computer screen (pre-test). After the initial test, EG…
ERIC Educational Resources Information Center
Xia, Belle Selene; Liitiäinen, Elia
2017-01-01
The benefits of using online exercises have been analysed in terms of distance learning, automatic assessment and self-regulated learning. In this study, we have not found a directly proportional relationship between student performance in the course exercises that use online technologies and the exam grades. We see that the average submission rate…
ERIC Educational Resources Information Center
Rutherford-Becker, Kristy J.; Vanderwood, Michael L.
2009-01-01
The purpose of this study was to evaluate the extent that reading performance (as measured by curriculum-based measures [CBM] of oral reading fluency [ORF] and Maze reading comprehension), is related to math performance (as measured by CBM math computation and applied math). Additionally, this study examined which of the two reading measures was a…
ERIC Educational Resources Information Center
Spaniol, Mayra Muller; Shalev, Lilach; Kossyvaki, Lila; Mevorach, Carmel
2018-01-01
This study assessed the effectiveness of an attention intervention program (Computerized Progressive Attentional Training; CPAT) in improving academic performance of children with ASD. Fifteen 6-10 year olds with ASD attending a mainstream and a special school were assigned to an experimental (CPAT; n = 8) and active control (computer games; n =…
ERIC Educational Resources Information Center
Schuyler, Stanley TenEyck
2008-01-01
Problem solving can be thought of in two phases: the first phase is problem formulation and the second solution development. Problem formulation is the process of identifying a problem or opportunity in a situation. Problem Formulation Ability, or PFA, is the ability to perform this process. This research investigated a method to assess PFA and…
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos
2013-01-01
The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…
Wiksten, Denise Lebsack; Patterson, Patricia; Antonio, Kimberly; De La Cruz, Daniel; Buxton, Barton P.
1998-01-01
Objective: To evaluate the effectiveness of an interactive athletic training educational curriculum (IATEC) computer program as compared with traditional lecture instruction. Instructions on assessment of the quadriceps angle (Q-angle) were compared. Dependent measures consisted of cognitive knowledge, practical skill assessment, and attitudes toward the 2 methods of instruction. Design and Setting: Sixty-six subjects were selected and then randomly assigned to 3 different groups: traditional lecture, IATEC, and control. The traditional lecture group (n = 22) received a 50-minute lecture/demonstration covering the same instructional content as the Q-angle module of the IATEC program. The IATEC group (n = 20; 2 subjects were dropped from this group due to scheduling conflicts) worked independently for 50 to 65 minutes using the Q-angle module of the IATEC program. The control group (n = 22) received no instruction. Subjects: Subjects were recruited from an undergraduate athletic training education program and were screened for prior knowledge of the Q-angle. Measurements: A 9-point multiple choice examination was used to determine cognitive knowledge of the Q-angle. A 12-point yes-no checklist was used to determine whether or not the subjects were able to correctly measure the Q-angle. The Allen Attitude Toward Computer-Assisted Instruction Semantic Differential Survey was used to assess student attitudes toward the 2 methods of instruction. The survey examined overall attitudes, in addition to 3 subscales: comfort, creativity, and function. The survey was scored from 1 to 7, with 7 being the most favorable and 1 being the least favorable. Results: Results of a 1-way ANOVA on cognitive knowledge of the Q-angle revealed that the traditional lecture and IATEC groups performed significantly better than the control group, and the traditional lecture group performed significantly better than the IATEC group. Results of a 1-way ANOVA on practical skill performance revealed that the traditional lecture and IATEC groups performed significantly better than the control group, but there were no significant differences between the traditional lecture and IATEC groups on practical skill performance. Results of a t test indicated significantly more favorable attitudes (P < .05) for the traditional lecture group when compared with the IATEC group for comfort, creativity, and function. Conclusions: Our results suggest that use of the IATEC computer module is an effective means of instruction; however, use of the IATEC program alone may not be sufficient for educating students in cognitive knowledge. Further research is needed to determine the effectiveness of the IATEC computer program as a supplement to traditional lecture instruction in athletic training education. PMID:16558517
Discerning Trends in Performance Across Multiple Events
NASA Technical Reports Server (NTRS)
Slater, Simon; Hiltz, Mike; Rice, Craig
2006-01-01
Mass Data is a computer program that enables rapid, easy discernment of trends in performance data across multiple flights and ground tests. The program can perform Fourier analysis and other functions for the purposes of frequency analysis and trending of all variables. These functions facilitate identification of past use of diagnosed systems and of anomalies in such systems, and enable rapid assessment of related current problems. Many variables that would ordinarily require extensive manual manipulation of raw downlist data are automatically computed and made available to all users, eliminating the need for what would otherwise be an extensive amount of engineering analysis. Data from flight, ground test, and simulation are preprocessed and stored in one central location for instantaneous access and comparison for diagnostic and trending purposes. Rules are defined so that an event log is generated for every flight, making it easy to locate information on similar maneuvers across many flights. The same rules can be created for test sets and simulations and are searchable, so that information on like events is easily accessible.
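A minimal sketch of the kind of rule-driven event logging described above, assuming a simple telemetry array and a single hypothetical threshold rule; the actual Mass Data rule format and variable names are not reproduced here.

```python
import numpy as np

def build_event_log(time, signal, threshold, event_name):
    """Flag intervals where a telemetry variable exceeds a rule threshold."""
    above = signal > threshold
    # Rising edges mark event onsets; falling edges mark event ends.
    onsets = np.where(np.diff(above.astype(int)) == 1)[0] + 1
    ends = np.where(np.diff(above.astype(int)) == -1)[0] + 1
    return [{"event": event_name, "start": time[s], "end": time[e]}
            for s, e in zip(onsets, ends)]

# Example: log every interval where a (synthetic) chamber pressure exceeds 180 units.
t = np.linspace(0.0, 10.0, 1001)
p = 150 + 40 * np.sin(2 * np.pi * 0.3 * t)
log = build_event_log(t, p, threshold=180.0, event_name="high_chamber_pressure")
print(log)
```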
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it could improve the efficiency of the optimizers.
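A minimal sketch of the kind of optimizer comparison described above, run on a toy constrained problem with off-the-shelf SciPy methods; this is an illustration of the workflow only, not the CometBoards code or the IMSL/IDESIGN/SUMT optimizers named in the abstract, and the test problem is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem standing in for a structural sizing task:
# minimize a weight-like objective subject to a stress-like inequality constraint.
objective = lambda x: x[0] + 2.0 * x[1]
constraint = {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}  # feasibility: x0*x1 >= 1
x0 = np.array([2.0, 2.0])
bounds = [(0.1, 10.0), (0.1, 10.0)]

for method in ("SLSQP", "COBYLA"):
    res = minimize(objective, x0, method=method,
                   bounds=bounds if method == "SLSQP" else None,
                   constraints=[constraint])
    # Compare reliability (converged objective) and cost (function evaluations).
    print(f"{method}: f = {res.fun:.4f}, x = {res.x}, nfev = {res.nfev}")
```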
The soft computing-based approach to investigate allergic diseases: a systematic review.
Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano
2017-01-01
Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques, such as artificial neural networks, support vector machines, Bayesian networks and fuzzy logic, and to investigate their performance in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered in the PROSPERO database (CRD42016038894). The search was performed on PubMed and ScienceDirect, covering the period from September 1, 1990 through April 19, 2016. The review included 27 studies relating allergic diseases and soft computing performance. We observed promising results, with an overall accuracy of 86.5%, mainly focused on asthmatic disease. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in the case of lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.
Dose estimates for the solid waste performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rittman, P.D.
1994-08-30
The Solid Waste Performance Assessment calculations by PNL in 1990 were redone to incorporate changes in methods and parameters since then. The ten scenarios found in their report were reduced to three: the Post-Drilling Resident, the Post-Excavation Resident, and an All Pathways Irrigator. Estimates of population dose to people along the Columbia River are also included. The attached report describes the methods and parameters used in the calculations and derives dose factors for each scenario. In addition, the waste concentrations, ground water concentrations, and river water concentrations needed to reach the performance objectives of 100 mrem/yr and 500 person-rem/yr are computed. Internal dose factors from DOE-0071 were applied when computing internal dose. External dose rate factors came from the GENII Version 1.485 software package. Dose calculations were carried out on a spreadsheet. The calculations are described in detail in the report for 63 nuclides, including 5 not presently in the GENII libraries. The spreadsheet calculations were checked by comparison with GENII, as described in Appendix D.
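A minimal sketch of the spreadsheet-style pathway bookkeeping described above (concentration times annual intake times dose factor, summed over nuclides). The nuclides, concentrations, and dose factors below are placeholders for illustration, not values from the report.

```python
# Hypothetical per-nuclide inputs: water concentration (pCi/L) and ingestion
# dose factor (mrem/pCi), with an assumed annual drinking-water intake (L/yr).
ANNUAL_INTAKE_L = 730.0
nuclides = {
    "Tc-99": {"conc_pCi_per_L": 900.0, "dose_factor_mrem_per_pCi": 1.1e-6},
    "I-129": {"conc_pCi_per_L": 1.0,   "dose_factor_mrem_per_pCi": 2.8e-4},
}

# Annual committed dose from the drinking-water pathway, summed over nuclides.
annual_dose_mrem = sum(
    v["conc_pCi_per_L"] * ANNUAL_INTAKE_L * v["dose_factor_mrem_per_pCi"]
    for v in nuclides.values()
)
print(f"Drinking-water pathway dose: {annual_dose_mrem:.3f} mrem/yr")
```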
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better comprehension of the limits and the potentialities of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
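A minimal sketch of "PSNR on perceptually encoded luminance" for HDR content, using a simple log-luminance transform as a stand-in for the perceptual (PU-style) encoding used in the literature; the actual encoding curve and the paper's test images differ, and the synthetic data below are purely illustrative.

```python
import numpy as np

def encode_luminance(lum_cd_m2):
    # Placeholder perceptual encoding: log10 of clipped absolute luminance.
    return np.log10(np.clip(lum_cd_m2, 1e-3, 1e4))

def perceptual_psnr(reference, distorted):
    ref, dis = encode_luminance(reference), encode_luminance(distorted)
    peak = ref.max() - ref.min()          # dynamic range of the encoded reference
    mse = np.mean((ref - dis) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
hdr = rng.uniform(0.01, 4000.0, size=(256, 256))           # synthetic HDR luminance map
compressed = hdr * rng.normal(1.0, 0.02, size=hdr.shape)   # mild multiplicative distortion
print(f"Perceptually encoded PSNR: {perceptual_psnr(hdr, compressed):.1f} dB")
```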
Computer-Enhanced Visual Learning Method: A Paradigm to Teach and Document Surgical Skills
Maizels, Max; Mickelson, Jennie; Yerkes, Elizabeth; Maizels, Evelyn; Stork, Rachel; Young, Christine; Corcoran, Julia; Holl, Jane; Kaplan, William E.
2009-01-01
Innovation: Changes in health care are stimulating residency training programs to develop new methods for teaching surgical skills. We developed Computer-Enhanced Visual Learning (CEVL) as an innovative Internet-based learning and assessment tool. The CEVL method uses the educational procedures of deliberate practice and performance to teach and learn surgery in a stylized manner. Aim of Innovation: CEVL is a learning and assessment tool that can provide students and educators with quantitative feedback on learning a specific surgical procedure. The methods involved examine quantitative data on improvement in surgical skills. Herein, we qualitatively describe the method and show how program directors (PDs) may implement this technique in their residencies. Results: CEVL allows an operation to be broken down into teachable components. The process relies on feedback and remediation to improve performance, with a focus on learning that is applicable to the next case being performed. CEVL has been shown to be effective for teaching pediatric orchiopexy and is being adapted to additional adult and pediatric procedures and to office examination skills. The CEVL method is available to other residency training programs. PMID:21975716
Numerical Simulation of a High-Lift Configuration with Embedded Fluidic Actuators
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Casalino, Damiano; Lin, John C.; Appelbaum, Jason
2014-01-01
Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as the PowerFLOW(Registered TradeMark) code is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no-control case. Effects of varying yaw and rudder deflection angles are also presented. Finally, computations have been performed at a higher Reynolds number to assess the performance of fluidic actuators at flight conditions.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying a multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of the multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
Jordan, Denis; Steiner, Marcel; Kochs, Eberhard F; Schneider, Gerhard
2010-12-01
Prediction probability (P(K)) and the area under the receiver operating characteristic curve (AUC) are statistical measures used to assess the performance of anesthetic depth indicators, i.e., to quantify the correlation between observed anesthetic depth and the corresponding values of a monitor or indicator. In contrast to many other statistical tests, they offer several advantages. First, P(K) and AUC are independent of scale units and of assumptions about underlying distributions. Second, the calculation can be performed without any knowledge of particular indicator threshold values, which makes the test more independent of specific test data. Third, recent approaches using resampling methods allow a reliable comparison of the P(K) or AUC of different indicators of anesthetic depth. Furthermore, both tests allow simple interpretation: results between 0 and 1 reflect the probability that an indicator correctly separates the observed levels of anesthesia. For these reasons, P(K) and AUC have become popular in medical decision making. P(K) is intended for polytomous patient states (i.e., >2 anesthetic levels) and can be considered a generalization of the AUC, which was originally introduced to assess a predictor of dichotomous classes (e.g., consciousness and unconsciousness in anesthesia). Dichotomous paradigms yield equal values of the P(K) and AUC test statistics. In the present investigation, we introduce a user-friendly computer program for computing P(K) and estimating reliable bootstrap confidence intervals. It is designed for multiple comparisons of the performance of depth-of-anesthesia indicators. Additionally, for dichotomous classes, the program plots the receiver operating characteristic graph, complementing the information obtained from P(K) or AUC, respectively. In clinical investigations, both measures are applied for indicator assessment, where ambiguous usage and interpretation can result. Therefore, a summary of the concepts of P(K) and AUC, including a brief and easily understandable proof of their equality, is presented in the text. The exposition introduces readers to the algorithms of the provided computer program and is intended to make standardized performance tests of depth-of-anesthesia indicators available to medical researchers.
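A minimal sketch of the dichotomous case, where P(K) coincides with the AUC, together with a percentile bootstrap confidence interval; this is a generic illustration of the statistics described above, not the authors' program, and the synthetic scores are hypothetical.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U relation; equals P(K) for two classes (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats, n = [], len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # resample must contain both classes
        stats.append(auc(labels[idx], scores[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Synthetic indicator values for "unconscious" (0) and "conscious" (1) observations.
labels = np.array([0] * 30 + [1] * 30)
scores = np.concatenate([np.random.default_rng(1).normal(40, 10, 30),
                         np.random.default_rng(2).normal(60, 10, 30)])
print("AUC = P(K):", auc(labels, scores), "95% CI:", bootstrap_ci(labels, scores))
```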
NASA Astrophysics Data System (ADS)
Lee, Choonik
A series of realistic voxel computational phantoms of pediatric patients was developed and then used for radiation risk assessment in various exposure scenarios. High-resolution computed tomographic images of live patients were utilized to develop five voxel phantoms of pediatric patients: a 9-month male, 4-year female, 8-year female, 11-year male, and 14-year male. The phantoms were first developed as head and torso phantoms and then extended into whole-body phantoms by utilizing computed tomographic images of a healthy adult volunteer. The whole-body phantom series was modified to have the same anthropometrics as the most recent reference data reported by the International Commission on Radiological Protection. The phantoms, named the University of Florida series B, are the first complete set of pediatric voxel phantoms having reference organ masses and total heights. As part of the dosimetry study, an investigation of skeletal tissue dosimetry methods was performed for a better understanding of the radiation dose to the active bone marrow and bone endosteum. All of the currently available methodologies were inter-compared and benchmarked against the paired-image radiation transport model. The dosimetric characteristics of the phantoms were investigated by Monte Carlo simulation of broad parallel beams of external photons in anterior-posterior, posterior-anterior, left lateral, right lateral, rotational, and isotropic geometries. Organ dose conversion coefficients were calculated for an extensive range of photon energies and compared with those of the conventional stylized pediatric phantoms of Oak Ridge National Laboratory. Multi-slice helical computed tomography exams were simulated using a Monte Carlo simulation code for various exam protocols: head, chest, abdomen, pelvis, and chest-abdomen-pelvis studies. The results provided realistic estimates of the effective doses for frequently used protocols in pediatric radiology and were crucial for understanding the radiation risks of patients undergoing computed tomography. Finally, nuclear medicine simulations were performed by calculating specific absorbed fractions for multiple target-source organ pairs via Monte Carlo simulations. Specific absorbed fractions were calculated for both photons and electrons so that they can be used to calculate radionuclide S-values. All of the results were tabulated for future use, and an example dose assessment was performed for selected nuclides administered in nuclear medicine.
Herweh, Christian; Ringleb, Peter A; Rauch, Geraldine; Gerry, Steven; Behrens, Lars; Möhlenbruch, Markus; Gottorf, Rebecca; Richter, Daniel; Schieber, Simon; Nagel, Simon
2016-06-01
The Alberta Stroke Program Early CT score (ASPECTS) is an established 10-point quantitative topographic computed tomography scan score to assess early ischemic changes. We compared the performance of the e-ASPECTS software with those of stroke physicians at different professional levels. The baseline computed tomography scans of acute stroke patients, in whom computed tomography and diffusion-weighted imaging scans were obtained less than two hours apart, were retrospectively scored by e-ASPECTS as well as by three stroke experts and three neurology trainees blinded to any clinical information. The ground truth was defined as the ASPECTS on diffusion-weighted imaging scored by another two non-blinded independent experts on a consensus basis. Sensitivity and specificity in an ASPECTS region-based and an ASPECTS score-based analysis, as well as receiver-operating characteristic curves, Bland-Altman plots with mean score error, and Matthews correlation coefficients were calculated. Comparisons were made between the human scorers and e-ASPECTS with diffusion-weighted imaging being the ground truth. Two methods for clustered data were used to estimate sensitivity and specificity in the region-based analysis. In total, 34 patients were included and 680 (34 × 20) ASPECTS regions were scored. Mean time from onset to computed tomography was 172 ± 135 min and mean time difference between computed tomography and magnetic resonance imaging was 41 ± 31 min. The region-based sensitivity (46.46% [CI: 30.8;62.1]) of e-ASPECTS was better than that of three trainees and one expert (p ≤ 0.01) and not statistically different from another two experts. Specificity (94.15% [CI: 91.7;96.6]) was lower than that of one expert and one trainee (p < 0.01) and not statistically different from the other four physicians. e-ASPECTS had the best Matthews correlation coefficient of 0.44 (experts: 0.38 ± 0.08 and trainees: 0.19 ± 0.05) and the lowest mean score error of 0.56 (experts: 1.44 ± 1.79 and trainees: 1.97 ± 2.12). e-ASPECTS showed a similar performance to that of stroke experts in the assessment of brain computed tomography scans of acute ischemic stroke patients with the Alberta Stroke Program Early CT score method. © 2016 World Stroke Organization.
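A minimal sketch of the region-level statistics reported above (sensitivity, specificity, and the Matthews correlation coefficient) computed from a 2x2 confusion matrix; the counts below are hypothetical, not the study's data.

```python
import math

def region_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and MCC from confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, mcc

# Hypothetical counts over 680 scored ASPECTS regions.
sens, spec, mcc = region_stats(tp=46, fp=35, fn=53, tn=546)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, MCC={mcc:.3f}")
```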
Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.
Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul
2018-03-01
Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, are limited to simulations or small-scale problems, primarily due to materials and device challenges that limit the size of the memristor crossbar arrays that can be reliably programmed to stable and analog values, which is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance compared to a digital approach is assessed. The memristor computing system used here reaches a VMM accuracy equivalent of 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on-chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per Watt is possible. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
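A minimal numerical sketch of vector-matrix multiplication on a conductance crossbar with limited analog precision, assuming ideal Ohm's-law current summation and a uniform quantizer standing in for 6-bit device tuning; device non-idealities (noise, wire resistance, programming error) treated in the paper are ignored here.

```python
import numpy as np

def quantize(weights, bits=6):
    """Map weights onto a limited set of conductance levels (uniform quantizer)."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** bits - 1
    return np.round((weights - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))   # target weight matrix (crossbar dimensions from the paper)
G = quantize(W, bits=6)          # conductances the array can actually hold
x = rng.normal(size=128)         # input vector (applied voltages)

ideal = x @ W                    # floating-point reference
analog = x @ G                   # idealized crossbar current summation
print("relative VMM error:", np.linalg.norm(analog - ideal) / np.linalg.norm(ideal))
```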
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Individualized Educational Programming for the Mentally Retarded.
ERIC Educational Resources Information Center
Singh, Nirbhay N.; Ahrens, Michael G.
1980-01-01
The minimal components of a model which utilizes a computer for summarizing individual performance records for teaching educational skills to the mentally retarded are described. The most important components are assessment, individual and group programing, continuous data collection, and program evaluation. (Author)
ELECTRICAL RESISTIVITY TECHNIQUE TO ASSESS THE INTEGRITY OF GEOMEMBRANE LINERS
Two-dimensional electrical modeling of a liner system was performed using computer techniques. The modeling effort examined the voltage distributions in cross sections of lined facilities with different leak locations. Results confirmed that leaks in the liner influenced voltage ...
COMPUTERIZED ASSESSMENT OF HUMAN NEUROTOXICITY: SENSITIVITY TO NITROUS OXIDE EXPOSURE
The authors recently developed a flexible, portable, computer based neurobehavioral evaluation system (NES) to standardize data collection in epidemiologic field studies of individuals at risk for neurobehavioral toxicity. The current study was performed to examine the system's s...
Moorthy, Arun S; Eberl, Hermann J
2014-04-01
Fermentation reactor systems are a key platform for studying intestinal microflora, specifically with respect to questions surrounding the effects of diet. In this study, we develop computational representations of colon fermentation reactor systems as a way to assess the influence of three design elements (number of reactors, emptying mechanism, and inclusion of microbial immobilization) on three performance measures (total biomass density, biomass composition, and fibre digestion efficiency) using a fractional-factorial experimental design. It was determined that the choice of emptying mechanism had no effect on any of the performance measures. Additionally, it was determined that none of the design criteria had any measurable effect on reactor performance with respect to biomass composition. It is recommended that model fermentation systems used in experiments on dietary effects on intestinal biomass composition be streamlined to include only necessary system design complexities, as the measured performance is not improved by the addition of microbial immobilization mechanisms or a semi-continuous emptying scheme. Additionally, the added complexities significantly increase computational time during simulation experiments. It was also noted that the same factorial experiment could be directly adapted using in vitro colon fermentation systems. Copyright © 2013 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Computational Support for Technology- Investment Decisions
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Hua, Hook; Lincoln, William; Block, Gary; Mrozinski, Joseph; Shelton, Kacie; Weisbin, Charles; Elfes, Alberto; Smith, Jeffrey
2007-01-01
Strategic Assessment of Risk and Technology (START) is a user-friendly computer program that assists human managers in making decisions regarding research-and-development investment portfolios in the presence of uncertainties and of non-technological constraints that include budgetary and time limits, restrictions related to infrastructure, and programmatic and institutional priorities. START facilitates quantitative analysis of technologies, capabilities, missions, scenarios and programs, and thereby enables the selection and scheduling of value-optimal development efforts. START incorporates features that, variously, perform or support a unique combination of functions, most of which are not systematically performed or supported by prior decision- support software. These functions include the following: Optimal portfolio selection using an expected-utility-based assessment of capabilities and technologies; Temporal investment recommendations; Distinctions between enhancing and enabling capabilities; Analysis of partial funding for enhancing capabilities; and Sensitivity and uncertainty analysis. START can run on almost any computing hardware, within Linux and related operating systems that include Mac OS X versions 10.3 and later, and can run in Windows under the Cygwin environment. START can be distributed in binary code form. START calls, as external libraries, several open-source software packages. Output is in Excel (.xls) file format.
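A minimal sketch of expected-utility portfolio selection under a budget cap, solved by brute-force enumeration over a small hypothetical candidate set; START's actual formulation (scheduling, enabling versus enhancing capabilities, partial funding, uncertainty analysis) is far richer than this illustration.

```python
from itertools import combinations

# Hypothetical candidate technologies: (name, cost in $M, expected utility contribution).
candidates = [("A", 4.0, 7.0), ("B", 3.0, 5.5), ("C", 6.0, 9.0), ("D", 2.0, 3.0)]
budget = 9.0

best_utility, best_portfolio = 0.0, ()
for k in range(len(candidates) + 1):
    for subset in combinations(candidates, k):
        cost = sum(c for _, c, _ in subset)
        utility = sum(u for _, _, u in subset)
        if cost <= budget and utility > best_utility:
            best_utility, best_portfolio = utility, tuple(n for n, _, _ in subset)

print("best expected utility:", best_utility, "portfolio:", best_portfolio)
```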
Parallel high-precision orbit propagation using the modified Picard-Chebyshev method
NASA Astrophysics Data System (ADS)
Koblick, Darin C.
2012-03-01
The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can have up to a one order magnitude speedup over the 12
ERIC Educational Resources Information Center
de-Marcos, Luis; Hilera, Jose Ramon; Barchino, Roberto; Jimenez, Lourdes; Martinez, Jose Javier; Gutierrez, Jose Antonio; Gutierrez, Jose Maria; Oton, Salvador
2010-01-01
Mobile learning is considered an evolution of e-learning that embraces the ubiquitous nature of current computational systems in order to improve teaching and learning. Within this context it is possible to develop mobile applications oriented to learning, but it is also important to assess to what extent such applications actually work. In this…
Ha, Richard; Mema, Eralda; Guo, Xiaotao; Mango, Victoria; Desperito, Elise; Ha, Jason; Wynn, Ralph; Zhao, Binsheng
2016-04-01
The amount of fibroglandular tissue (FGT) has been linked to breast cancer risk based on mammographic density studies. Currently, the qualitative assessment of FGT on mammogram (MG) and magnetic resonance imaging (MRI) is prone to intra- and inter-observer variability. The purpose of this study is to develop an objective quantitative FGT measurement tool for breast MRI that could provide significant clinical value. An IRB-approved study was performed. Sixty breast MRI cases with qualitative assessment of mammographic breast density and MRI FGT were randomly selected for quantitative analysis from routine breast MRIs performed at our institution from 1/2013 to 12/2014. Blinded to the qualitative data, whole-breast and FGT contours were delineated on T1-weighted pre-contrast sagittal images using an in-house, proprietary segmentation algorithm which combines region-based active contours and a level set approach. FGT (%) was calculated as [segmented FGT volume (mm³) / segmented whole-breast volume (mm³)] × 100. Statistical correlation analysis was performed between quantified FGT (%) on MRI and qualitative assessments of mammographic breast density and MRI FGT. There was a significant positive correlation between quantitative MRI FGT assessment and qualitative MRI FGT (r=0.809, n=60, P<0.001) and mammographic density assessment (r=0.805, n=60, P<0.001). There was a significant correlation between qualitative MRI FGT assessment and mammographic density assessment (r=0.725, n=60, P<0.001). The four qualitative assessment categories of FGT correlated with calculated mean quantitative FGT (%) of 4.61% (95% CI, 0-12.3%), 8.74% (7.3-10.2%), 18.1% (15.1-21.1%), and 37.4% (29.5-45.3%). Quantitative measures of FGT (%) were computed with data derived from breast MRI and correlated significantly with conventional qualitative assessments. This quantitative technique may prove to be a valuable tool in clinical use by providing computer-generated standardized measurements with limited intra- or inter-observer variability.
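A minimal sketch of the FGT percentage calculation from binary segmentation masks, assuming the whole-breast and FGT masks and the voxel spacing are already available; the in-house segmentation algorithm itself is not reproduced, and the toy masks below are purely illustrative.

```python
import numpy as np

def fgt_percent(breast_mask, fgt_mask, voxel_volume_mm3):
    """FGT (%) = segmented FGT volume / segmented whole-breast volume * 100."""
    breast_vol = breast_mask.sum() * voxel_volume_mm3
    fgt_vol = fgt_mask.sum() * voxel_volume_mm3
    return 100.0 * fgt_vol / breast_vol

# Toy 3D masks (True = inside structure), with 0.8 x 0.8 x 1.0 mm voxels.
breast = np.zeros((64, 64, 32), dtype=bool)
breast[8:56, 8:56, 4:28] = True
fgt = np.zeros_like(breast)
fgt[20:40, 20:40, 10:20] = True
print(f"FGT: {fgt_percent(breast, fgt, 0.8 * 0.8 * 1.0):.1f}%")
```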
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
Evaluation of a continuous-rotation, high-speed scanning protocol for micro-computed tomography.
Kerl, Hans Ulrich; Isaza, Cristina T; Boll, Hanne; Schambach, Sebastian J; Nolte, Ingo S; Groden, Christoph; Brockmann, Marc A
2011-01-01
Micro-computed tomography is used frequently in preclinical in vivo research. Limiting factors are radiation dose and long scan times. The purpose of the study was to compare a standard step-and-shoot to a continuous-rotation, high-speed scanning protocol. Micro-computed tomography of a lead grid phantom and a rat femur was performed using a step-and-shoot and a continuous-rotation protocol. Detail discriminability and image quality were assessed by 3 radiologists. The signal-to-noise ratio and the modulation transfer function were calculated, and volumetric analyses of the femur were performed. The radiation dose of the scan protocols was measured using thermoluminescence dosimeters. The 40-second continuous-rotation protocol allowed a detail discriminability comparable to the step-and-shoot protocol at significantly lower radiation doses. No marked differences in volumetric or qualitative analyses were observed. Continuous-rotation micro-computed tomography significantly reduces scanning time and radiation dose without relevantly reducing image quality compared with a normal step-and-shoot protocol.
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
NASA Astrophysics Data System (ADS)
Yu, Li-Juan; Wan, Wenchao; Karton, Amir
2016-11-01
We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
Giebel, Clarissa M; Challis, David; Montaldi, Daniela
2017-03-01
Minimal evidence exists on the detailed deficits in complex instrumental activities of daily living (IADLs) in mild dementia. The aim of this study was twofold: to validate a revised questionnaire measuring the initiative and performance of IADLs in mild dementia, and to explore the relationship between individual IADLs and patient and carer well-being. A total of 183 carers of people with mild dementia completed a further modified Revised Interview for Deterioration in Daily Living Activities 2 (R-IDDD2), which comprised new activities such as computer use, as well as sub-activities on the performance scale. Carers also completed questionnaires assessing patient quality of life (QoL-AD), carer quality of life (AC-QoL), and burden (GHQ-12). Persons with dementia were significantly poorer at initiating than at performing cleaning, doing repair work, and preparing a hot or cold meal, whereas they were poorer at performing than at initiating dressing and following current affairs. Computer use, hot meal preparation, finance, and medication management were the most impaired activities, whereas the more basic activities of dressing, washing oneself, brushing hair or teeth, and preparing a hot drink were the most preserved. Poor initiative and performance on nearly all activities were significantly related to reduced carer and patient well-being. The R-IDDD2 offers a platform to comprehensively assess everyday functioning. Deteriorations in initiative and performance need to be targeted separately in interventions, as the former requires effective triggering and the latter structured training and support. Most activities were significantly associated with well-being, particularly patient quality of life, so that improving any activity should improve well-being.
Simulation of hypersonic shock wave - laminar boundary layer interactions
NASA Astrophysics Data System (ADS)
Kianvashrad, N.; Knight, D.
2017-06-01
The capability of the Navier-Stokes equations with a perfect gas model for simulation of hypersonic shock wave - laminar boundary layer interactions is assessed. The configuration is a hollow cylinder flare. The experimental data were obtained by Calspan-University of Buffalo (CUBRC) for total enthalpies ranging from 5.07 to 21.85 MJ/kg. Comparison of the computed and experimental surface pressure and heat transfer is performed and the computed flowfield structure is analyzed.
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
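A minimal sketch of the underlying idea: draw one realization of a stationary Gaussian process on a grid and treat it as a pseudorandom test objective, here with a squared-exponential covariance and a jittered Cholesky factor. The report's S-PLUS implementation and its specific covariance choices are not reproduced; all parameters below are illustrative.

```python
import numpy as np

def gp_objective(n_points=200, length_scale=0.1, seed=0):
    """Sample one realization of a stationary GP on [0, 1] to use as a test objective."""
    x = np.linspace(0.0, 1.0, n_points)
    cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n_points))  # jitter for numerical stability
    f = L @ np.random.default_rng(seed).standard_normal(n_points)
    return x, f

x, f = gp_objective()
print("pseudorandom objective attains its grid minimum at x =", x[np.argmin(f)])
```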
Probabilistic Surface Characterization for Safe Landing Hazard Detection and Avoidance (HDA)
NASA Technical Reports Server (NTRS)
Johnson, Andrew E. (Inventor); Ivanov, Tonislav I. (Inventor); Huertas, Andres (Inventor)
2015-01-01
Apparatuses, systems, computer programs and methods for performing hazard detection and avoidance for landing vehicles are provided. Hazard assessment takes into consideration the geometry of the lander. Safety probabilities are computed for a plurality of pixels in a digital elevation map. The safety probabilities are combined for pixels associated with one or more aim points and orientations. A worst case probability value is assigned to each of the one or more aim points and orientations.
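A minimal sketch of combining per-pixel safety probabilities over a circular lander footprint for a candidate aim point, assuming independence across footprint pixels; the patented method accounts for lander geometry and orientation in much more detail, and the probability map below is synthetic.

```python
import numpy as np

def aim_point_safety(p_safe, center, radius):
    """Combine per-pixel safety probabilities within a circular footprint."""
    rows, cols = np.indices(p_safe.shape)
    footprint = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    # Independence assumption: landing is safe only if every footprint pixel is safe.
    return np.prod(p_safe[footprint])

p_safe = np.clip(np.random.default_rng(0).normal(0.995, 0.01, (100, 100)), 0.0, 1.0)
print("P(safe landing at aim point):", aim_point_safety(p_safe, center=(50, 50), radius=5))
```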
Use of computer models to assess exposure to agricultural chemicals via drinking water.
Gustafson, D I
1995-10-27
Surveys of drinking water quality throughout the agricultural regions of the world have revealed the tendency of certain crop protection chemicals to enter water supplies. Fortunately, the trace concentrations that have been detected are generally well below the levels thought to have any negative impact on human health or the environment. However, the public expects drinking water to be pristine and seems willing to bear the costs involved in further regulating agricultural chemical use in such a way as to eliminate the potential for such materials to occur at any detectable level. Of all the tools available to assess exposure to agricultural chemicals via drinking water, computer models are among the most cost-effective. Although not sufficiently predictive to be used in the absence of any field data, such computer programs can be used with some degree of certainty to perform quantitative extrapolations and thereby quantify regional exposure from field-scale monitoring information. Specific models and modeling techniques will be discussed for performing such exposure analyses. Improvements in computer technology have recently made it practical to use Monte Carlo and other probabilistic techniques as a routine tool for estimating human exposure. Such methods make it possible, at least in principle, to prepare exposure estimates with known confidence intervals and sufficient statistical validity to be used in the regulatory management of agricultural chemicals.
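A minimal sketch of the probabilistic (Monte Carlo) exposure estimate mentioned above: sample concentration, intake, and body-weight distributions and report percentiles of the resulting dose. The distributions and parameter values are placeholders, not those of any registered exposure model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
conc_ug_per_L = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)   # residue concentration
intake_L_per_day = rng.normal(loc=1.4, scale=0.4, size=n).clip(0.2, 4.0)
body_weight_kg = rng.normal(loc=70.0, scale=12.0, size=n).clip(30.0, 130.0)

# Daily dose per unit body weight from the drinking-water pathway.
dose_ug_per_kg_day = conc_ug_per_L * intake_L_per_day / body_weight_kg
print("median dose:", np.median(dose_ug_per_kg_day), "ug/kg/day;",
      "95th percentile:", np.percentile(dose_ug_per_kg_day, 95), "ug/kg/day")
```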
[Manual material handling risk assessment and ergonomic improvement in foodstuff retailing company].
Maso, S; Maccà, I; Simonetti, A; Scopa, P; Paruzzolo, P; Bonacci, A; Murgolo, I; Bartolucci, G B
2011-01-01
The aim of this study is to assess and reduce the risk due to manual material handling in a company involved in foodstuff retailing. The risk assessment was performed with the NIOSH Variable Lifting Index (VLI) in 13 different occupational conditions. As a result, risk was present in every case, with VLI values ranging from 2.12 to 2.81. A good risk reduction was accomplished by properly correcting the most important multipliers involved in the computation of the revised NIOSH lifting equation (lifting frequency and weight of the heavier products). Although the risk reduction achieved was significant, the residual risk is still above the level of acceptability.
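A minimal sketch of the single-task revised NIOSH lifting equation behind the index discussed above: the recommended weight limit (RWL) is the load constant discounted by the horizontal, vertical, distance, asymmetry, frequency, and coupling multipliers, and the lifting index is the actual load over the RWL. The multiplier values below are illustrative inputs, not the plant's measured conditions, and the VLI aggregation across task categories is not reproduced.

```python
def lifting_index(load_kg, hm, vm, dm, am, fm, cm, load_constant=23.0):
    """Revised NIOSH lifting equation: RWL = LC*HM*VM*DM*AM*FM*CM, LI = load / RWL."""
    rwl = load_constant * hm * vm * dm * am * fm * cm
    return load_kg / rwl

# Illustrative multipliers for one sub-task (horizontal, vertical, travel distance,
# asymmetry, lifting frequency, coupling quality).
li = lifting_index(load_kg=12.0, hm=0.63, vm=0.93, dm=0.90, am=0.86, fm=0.65, cm=0.95)
print(f"Lifting index: {li:.2f}  (values above 1 indicate elevated risk)")
```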
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dverstorp, B.; Andersson, J.
1995-12-01
Performance Assessment of a nuclear waste repository implies an analysis of a complex system with many interacting processes. Even if some of these processes are known in great detail, problems arise when combining all information, and means of abstracting information from complex detailed models into models that couple different processes are needed. Clearly, one of the major objectives of performance assessment, to calculate doses or other performance indicators, implies an enormous abstraction of information compared to all the information that is used as input. Other problems are that the knowledge of different parts or processes is strongly variable and that adjustments and interpretations are needed when combining models from different disciplines. In addition, people as well as computers, even today, have a limited capacity to process information, and choices have to be made. However, because abstraction of information is clearly unavoidable in performance assessment, the validity of the choices made always needs to be scrutinized, and the judgements made need to be updated in an iterative process.
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
NASA Astrophysics Data System (ADS)
Beggrow, Elizabeth P.; Ha, Minsu; Nehm, Ross H.; Pearl, Dennis; Boone, William J.
2014-02-01
The landscape of science education is being transformed by the new Framework for Science Education (National Research Council, A framework for K-12 science education: practices, crosscutting concepts, and core ideas. The National Academies Press, Washington, DC, 2012), which emphasizes the centrality of scientific practices—such as explanation, argumentation, and communication—in science teaching, learning, and assessment. A major challenge facing the field of science education is developing assessment tools that are capable of validly and efficiently evaluating these practices. Our study examined the efficacy of a free, open-source machine-learning tool for evaluating the quality of students' written explanations of the causes of evolutionary change relative to three other approaches: (1) human-scored written explanations, (2) a multiple-choice test, and (3) clinical oral interviews. A large sample of undergraduates (n = 104) exposed to varying amounts of evolution content completed all three assessments: a clinical oral interview, a written open-response assessment, and a multiple-choice test. Rasch analysis was used to compute linear person measures and linear item measures on a single logit scale. We found that the multiple-choice test displayed poor person and item fit (mean square outfit >1.3), while both oral interview measures and computer-generated written response measures exhibited acceptable fit (average mean square outfit for interview: person 0.97, item 0.97; computer: person 1.03, item 1.06). Multiple-choice test measures were more weakly associated with interview measures (r = 0.35) than the computer-scored explanation measures (r = 0.63). Overall, Rasch analysis indicated that computer-scored written explanation measures (1) have the strongest correspondence to oral interview measures; (2) are capable of capturing students' normative scientific and naive ideas as accurately as human-scored explanations, and (3) more validly detect understanding than the multiple-choice assessment. These findings demonstrate the great potential of machine-learning tools for assessing key scientific practices highlighted in the new Framework for Science Education.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
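A minimal sketch of component reliability analysis with a discrete-time Markov chain, computing the probability of having reached an absorbing failure state after a number of transitions. The states and transition probabilities below are hypothetical, and the COSMIC-FFP sizing step that the paper couples to the chain is not shown.

```python
import numpy as np

# States: 0 = operating, 1 = degraded, 2 = failed (absorbing).
P = np.array([[0.97, 0.02, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])                      # start in the operating state
after_100 = state @ np.linalg.matrix_power(P, 100)     # distribution after 100 transitions
print("P(failed after 100 transitions):", after_100[2])
print("Reliability after 100 transitions:", 1.0 - after_100[2])
```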
CFD Assessment of Aerodynamic Degradation of a Subsonic Transport Due to Airframe Damage
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Pirzadeh, Shahyar Z.; Atkins, Harold L.; Viken, Sally A.; Morrison, Joseph H.
2010-01-01
A computational study is presented to assess the utility of two NASA unstructured Navier-Stokes flow solvers for capturing the degradation in static stability and aerodynamic performance of a NASA General Transport Model (GTM) due to airframe damage. The approach is to correlate computational results with a substantial subset of experimental data for the GTM undergoing progressive losses to the wing, vertical tail, and horizontal tail components. The ultimate goal is to advance the probability of inserting computational data into the creation of advanced flight simulation models of damaged subsonic aircraft in order to improve pilot training. Results presented in this paper demonstrate good correlations with slope-derived quantities, such as pitch static margin and static directional stability, and incremental rolling moment due to wing damage. This study further demonstrates that high fidelity Navier-Stokes flow solvers could augment flight simulation models with additional aerodynamic data for various airframe damage scenarios.
Chan, Ernest G; Landreneau, James R; Schuchert, Matthew J; Odell, David D; Gu, Suicheng; Pu, Jiantao; Luketich, James D; Landreneau, Rodney J
2015-09-01
Accurate cancer localization and negative resection margins are necessary for successful segmentectomy. In this study, we evaluate a newly developed software package that permits automated segmentation of the pulmonary parenchyma, allowing 3-dimensional assessment of tumor size, location, and estimates of surgical margins. A pilot study using a newly developed 3-dimensional computed tomography analytic software package was performed to retrospectively evaluate preoperative computed tomography images of patients who underwent segmentectomy (n = 36) or lobectomy (n = 15) for stage 1 non-small cell lung cancer. The software accomplishes an automated reconstruction of anatomic pulmonary segments of the lung based on bronchial arborization. Estimates of anticipated surgical margins and pulmonary segmental volume were made on the basis of 3-dimensional reconstruction. Autosegmentation was achieved in 72.7% (32/44) of preoperative computed tomography images with slice thicknesses of 3 mm or less. Reasons for segmentation failure included local severe emphysema or pneumonitis, and lower computed tomography resolution. Tumor segmental localization was achieved in all autosegmented studies. The 3-dimensional computed tomography analysis provided a positive predictive value of 87% in predicting a marginal clearance greater than 1 cm and a 75% positive predictive value in predicting a margin to tumor diameter ratio greater than 1 in relation to the surgical pathology assessment. This preoperative 3-dimensional computed tomography analysis of segmental anatomy can confirm the tumor location within an anatomic segment and aid in predicting surgical margins. This 3-dimensional computed tomography information may assist in the preoperative assessment regarding the suitability of segmentectomy for peripheral lung cancers. Published by Elsevier Inc.
Application of infrared thermography in computer aided diagnosis
NASA Astrophysics Data System (ADS)
Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei
2014-09-01
The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques, improved sensitivity and spatial resolution of thermal sensors. However, despite this progress fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the change in heat radiation as well as the change in radiation pattern, which indicate disease, is minute. On a technical level, this poses high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and on an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and this will pave the way for integrated health care systems which maximize the quality of patient care.
Scientific programming and high-performance computing. Research interests: wind and solar resource assessment. Department of Geography and Environmental Sciences, Denver, CO; Research Assistant, National Center for Atmospheric Research (NCAR), Boulder, CO; Graduate Instructor and Research Assistant, University of Colorado.
A qualitative and quantitative assessment for a bone marrow harvest simulator.
Machado, Liliane S; Moraes, Ronei M
2009-01-01
Several approaches to performing assessment in virtual reality-based training simulators have been proposed. There are two kinds of assessment methods: offline and online. The main requirements for online training assessment methodologies applied to virtual reality systems are low computational complexity and high accuracy. Several approaches for general cases that satisfy these requirements can be found in the literature. A drawback of those approaches is that they offer unsatisfactory solutions for specific cases, as in some medical procedures where both quantitative and qualitative information is available to perform the assessment. In this paper, we present an approach to online training assessment based on a Modified Naive Bayes classifier that can handle qualitative and quantitative variables simultaneously. A special medical case was simulated in a bone marrow harvest simulator. The results obtained were satisfactory and demonstrate the applicability of the method.
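The Modified Naive Bayes itself is not specified in the abstract; the sketch below only illustrates the general idea it describes, a naive Bayes that combines Gaussian likelihoods for quantitative variables with frequency-based likelihoods for qualitative ones. The class labels, features, and training samples are invented for illustration.

```python
# Minimal sketch (not the authors' Modified Naive Bayes): a naive Bayes that mixes
# Gaussian likelihoods for quantitative variables with frequency-based likelihoods
# for qualitative variables. Class labels and features are invented.
import math
from collections import defaultdict

def train(samples):
    """samples: list of (label, quantitative_list, qualitative_list)."""
    stats = defaultdict(lambda: {"n": 0, "sum": None, "sumsq": None, "cat": None})
    for label, quant, qual in samples:
        s = stats[label]
        s["n"] += 1
        if s["sum"] is None:
            s["sum"] = [0.0] * len(quant)
            s["sumsq"] = [0.0] * len(quant)
            s["cat"] = [defaultdict(int) for _ in qual]
        for i, x in enumerate(quant):
            s["sum"][i] += x
            s["sumsq"][i] += x * x
        for i, c in enumerate(qual):
            s["cat"][i][c] += 1
    total = sum(s["n"] for s in stats.values())
    model = {}
    for label, s in stats.items():
        n = s["n"]
        means = [v / n for v in s["sum"]]
        varis = [max(s["sumsq"][i] / n - means[i] ** 2, 1e-6) for i in range(len(means))]
        model[label] = {"prior": n / total, "means": means, "vars": varis, "cat": s["cat"], "n": n}
    return model

def log_gaussian(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(model, quant, qual):
    best, best_lp = None, float("-inf")
    for label, m in model.items():
        lp = math.log(m["prior"])
        for i, x in enumerate(quant):
            lp += log_gaussian(x, m["means"][i], m["vars"][i])
        for i, c in enumerate(qual):
            # Laplace smoothing over the categories seen for this feature
            lp += math.log((m["cat"][i][c] + 1) / (m["n"] + len(m["cat"][i]) + 1))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Illustrative data: (skill class, [needle depth error mm, force N], [angle category])
data = [("good", [1.2, 0.8], ["correct"]), ("good", [1.0, 0.9], ["correct"]),
        ("poor", [4.5, 2.3], ["too_steep"]), ("poor", [3.8, 2.0], ["too_steep"])]
model = train(data)
print(classify(model, [1.1, 0.85], ["correct"]))  # expected: "good"
```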
ERIC Educational Resources Information Center
Tsai, Shu-Chiao
2017-01-01
This study reports on investigating students' English translation performance and their use of reading strategies in an elective English writing course offered to senior students of English as a Foreign Language for 100 minutes per week for 12 weeks. A courseware-implemented instruction combined with a task-based learning approach was adopted.…
Static Fatigue of a Siliconized Silicon Carbide
1987-03-01
Flexural stress rupture and stepped temperature stress rupture (STSR) testing were performed to assess the static fatigue and creep resistances. Isothermal... stress rupture experiments were performed at 1200 °C in air for comparison to previous results. ...10 STSR experiments were under deadweight...temperature and stress levels that static fatigue and creep processes are active. The applied stresses were computed on the basis of the elastic
Berthing mechanism final test report and program assessment
NASA Technical Reports Server (NTRS)
1988-01-01
The purpose is to document the testing performed on both hardware and software developed under the Space Station Berthing Mechanisms Program. Testing of the mechanism occurred at three locations. Several system components, e.g., actuators and computer systems, were functionally tested before assembly. A series of post assembly tests were performed. The post assembly tests, as well as the dynamic testing of the mechanism, are presented.
Evaluation of prosthetic valve thrombosis by 64-row multi-detector computed tomography.
Tarzia, Vincenzo; Bortolussi, Giacomo; Rubino, Maurizio; Gallo, Michele; Bottio, Tomaso; Gerosa, Gino
2015-03-01
Multi-detector computed tomography (MDCT), combined with retrospective electrocardiographic gating, permits cardiac imaging with high accuracy. Recent advances in MDCT have seemed to respond adequately to the need for a non-invasive and reliable assessment of the coronary artery lumen. Two patients with prosthetic aortic valves (one bioprosthetic, one mechanical) presented at the authors' institution with dyspnea and syncopal episodes. MDCT was performed to evaluate thrombus characteristics and exclude coronary artery disease (CAD). Based on the MDCT coronary artery assessment, neither patient underwent preoperative invasive coronary angiography, abolishing the risk of any iatrogenic thrombus fragmentation and subsequent embolization. One patient underwent surgical treatment without complications, while medical therapy was successful in the other case. MDCT can be used for the accurate imaging of thrombi on prosthetic aortic valves, and to correctly assess possible CAD.
Evaluation of a novel Serious Game based assessment tool for patients with Alzheimer's disease.
Vallejo, Vanessa; Wyss, Patric; Rampa, Luca; Mitache, Andrei V; Müri, René M; Mosimann, Urs P; Nef, Tobias
2017-01-01
Despite growing interest in developing ecological assessments of difficulties in patients with Alzheimer's disease, new methods assessing the cognitive difficulties related to functional activities are missing. To complement current evaluation, Serious Games are a promising approach, as they offer the possibility of recreating a virtual environment with daily living activities and a precise and complete cognitive evaluation. The aim of the present study was to evaluate the usability and screening potential of a new ecological tool for assessing cognitive functions in patients with Alzheimer's disease. Eighteen patients with Alzheimer's disease and twenty healthy controls participated in the study. They were asked to complete six daily living virtual tasks assessing several cognitive functions: three navigation tasks, one shopping task, one cooking task, and one table preparation task, following a one-day scenario. Usability of the game was evaluated through a questionnaire and through analysis of the computer interactions for the two groups. In addition, performance in terms of time to complete each task and percentage of completion was recorded. Results indicate that both groups subjectively found the game user friendly and were objectively able to play it without computer interaction difficulties. Comparison of performance between the two groups showed significant differences in percentage of task achievement and in the time needed to complete the tasks. This study suggests that this new Serious Game based assessment tool is a user-friendly and ecological method to evaluate the cognitive abilities related to the difficulties patients can encounter in daily living activities, and that it can be used as a screening tool because it distinguished the performance of patients with Alzheimer's disease from that of healthy controls.
Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales. The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.
Automated Portable Test (APT) System: overview and prospects
NASA Technical Reports Server (NTRS)
Bittner, A. C.; Smith, M. G.; Kennedy, R. S.; Staley, C. F.; Harbeson, M. M.
1985-01-01
The Automated Portable Test (APT) System is a notebook-sized, computer-based, human-performance and subjective-status assessment system. It is now being used in a wide range of environmental studies (e.g., simulator aftereffects, flight tests, drug effects, and hypoxia). Three questionnaires and 15 performance tests have been implemented, and the adaptation of 30 more tests is underway or is planned. The APT System is easily transportable, is inexpensive, and has the breadth of expansion options required for field and laboratory applications. The APT System is a powerful and expandable tool for human assessment in remote and unusual environments.
NASA Astrophysics Data System (ADS)
Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.
2005-04-01
The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of image quality of computed tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (a total of 80 image sets was generated [n = 1,200]). Eleven radiologists reviewed the images, rated them in terms of overall quality and readability, and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR methods. These results support the use of the VDM as a metric not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. This VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or to establish clinical guidelines for image compression, dose reductions, and evaluation of various display technologies.
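For readers unfamiliar with the comparison metric, PSNR is the standard quantity 10·log10(MAX²/MSE); a direct implementation is sketched below. The 8-bit peak value and the toy images are assumptions for illustration, not the study's data.

```python
# Minimal sketch of the peak signal-to-noise ratio (PSNR) used as the comparison
# metric in the study: PSNR = 10 * log10(MAX^2 / MSE), where MAX is the peak pixel
# value (assumed 8-bit here) and MSE is the mean squared error between images.
import numpy as np

def psnr(original, compressed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images (e.g., lossless compression)
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative 8-bit "images": the second adds small compression-like noise.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(original.astype(int) + rng.integers(-3, 4, size=(64, 64)), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(original, noisy):.1f} dB")
```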
Emergency ultrasound in the acute assessment of haemothorax
Brooks, A; Davies, B; Smethhurst, M; Connolly, J
2004-01-01
Aims: To evaluate thoracic ultrasound for the detection of haemothorax in patients with thoracic trauma against established investigations. Methods: Thoracic ultrasound was performed as an extension of the standard focused assessment with sonography for trauma (FAST) protocol used at the Queen's Medical Centre for the assessment of adult patients with torso trauma. Fluid was sought in both pleural cavities using a hand portable ultrasound system by one of two non-radiologists trained in FAST. Findings were compared against subsequent investigations/procedures performed at the discretion of the attending emergency physician—supine chest radiography, intercostal drain, computed tomography, or thoracotomy. The sensitivity of the technique and the time taken to diagnosis for each investigation were recorded. Results: Sixty one patients, 54 (89%) after blunt trauma, underwent thoracic ultrasound evaluation during the study. Twelve patients had a haemothorax detected by ultrasound and confirmed by computed tomography or by tube thoracostomy. Four haemothoraces detected on ultrasound were not apparent on trauma chest radiography. There were 12 true positives, 48 true negatives, no false positives, and one false negative scan. The sensitivity of ultrasound was 92% and specificity 100% with a positive predictive value of 100% and negative predictive value 98% for the detection of haemothorax after trauma. Conclusions: Emergency ultrasound of the chest performed as part of the primary survey of the traumatised patient can rapidly and accurately diagnose haemothorax and is a valuable tool to augment the immediate clinical assessment of these patients. PMID:14734374
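The accuracy figures quoted above follow directly from the confusion counts reported in the abstract (12 true positives, 48 true negatives, 0 false positives, 1 false negative); the short calculation below reproduces them.

```python
# Recomputing the diagnostic accuracy figures from the counts reported in the abstract.
tp, tn, fp, fn = 12, 48, 0, 1

sensitivity = tp / (tp + fn)   # 12 / 13 ≈ 0.92
specificity = tn / (tn + fp)   # 48 / 48 = 1.00
ppv = tp / (tp + fp)           # 12 / 12 = 1.00
npv = tn / (tn + fn)           # 48 / 49 ≈ 0.98

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")
```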
Hansen, Tor Ivar; Haferstrom, Elise Christina D; Brunner, Jan F; Lehn, Hanne; Håberg, Asta Kristine
2015-01-01
Computerized neuropsychological tests are effective in assessing different cognitive domains, but are often limited by the need of proprietary hardware and technical staff. Web-based tests can be more accessible and flexible. We aimed to investigate validity, effects of computer familiarity, education, and age, and the feasibility of a new web-based self-administered neuropsychological test battery (Memoro) in older adults and seniors. A total of 62 (37 female) participants (mean age 60.7 years) completed the Memoro web-based neuropsychological test battery and a traditional battery composed of similar tests intended to measure the same cognitive constructs. Participants were assessed on computer familiarity and how they experienced the two batteries. To properly test the factor structure of Memoro, an additional factor analysis in 218 individuals from the HUNT population was performed. Comparing Memoro to traditional tests, we observed good concurrent validity (r = .49-.63). The performance on the traditional and Memoro test battery was consistent, but differences in raw scores were observed with higher scores on verbal memory and lower in spatial memory in Memoro. Factor analysis indicated two factors: verbal and spatial memory. There were no correlations between test performance and computer familiarity after adjustment for age or age and education. Subjects reported that they preferred web-based testing as it allowed them to set their own pace, and they did not feel scrutinized by an administrator. Memoro showed good concurrent validity compared to neuropsychological tests measuring similar cognitive constructs. Based on the current results, Memoro appears to be a tool that can be used to assess cognitive function in older and senior adults. Further work is necessary to ascertain its validity and reliability.
Alonso-Silverio, Gustavo A; Pérez-Escamirosa, Fernando; Bruno-Sanchez, Raúl; Ortiz-Simon, José L; Muñoz-Guerrero, Roberto; Minor-Martinez, Arturo; Alarcón-Paredes, Antonio
2018-05-01
A trainer for online laparoscopic surgical skills assessment based on the performance of experts and nonexperts is presented. The system uses computer vision, augmented reality, and artificial intelligence algorithms, implemented on a Raspberry Pi board in the Python programming language. Two training tasks were evaluated by the laparoscopic system: transferring and pattern cutting. Computer vision libraries were used to obtain the number of transferred points and the simulated pattern cutting trace by tracking the laparoscopic instrument. An artificial neural network (ANN) was trained to learn from experts' and nonexperts' behavior for the pattern cutting task, whereas assessment of the transferring task was performed using a preestablished threshold. Four expert surgeons in laparoscopic surgery, from hospital "Raymundo Abarca Alarcón," constituted the experienced class for the ANN. Sixteen trainees (10 medical students and 6 residents) without laparoscopic surgical skills and with limited experience in minimally invasive techniques, from the School of Medicine at Universidad Autónoma de Guerrero, constituted the nonexperienced class. Data from participants performing 5 daily repetitions for each task during 5 days were used to build the ANN. Participants tended to improve their learning curve and dexterity with this laparoscopic training system. The classifier shows a mean accuracy of 90.98% and an area under the receiver operating characteristic curve of 0.93. Moreover, the ANN was able to classify the psychomotor skills of users into 2 classes: experienced or nonexperienced. We constructed and evaluated an affordable laparoscopic trainer system using computer vision, augmented reality, and an artificial intelligence algorithm. The proposed trainer has the potential to increase the self-confidence of trainees and to be applied to programs with limited resources.
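The abstract does not give the network architecture or feature encoding; the sketch below, using scikit-learn and invented task metrics, only illustrates the general approach of training a small neural network to separate experienced from non-experienced attempts.

```python
# Minimal sketch (assumed architecture and synthetic metrics, not the authors' system):
# train a small neural network to label a pattern-cutting attempt as experienced (1)
# or nonexperienced (0) from simple task metrics.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative features per attempt: [task time (s), path length (cm), trace error (mm)]
rng = np.random.default_rng(1)
experts = np.column_stack([rng.normal(90, 10, 40), rng.normal(120, 15, 40), rng.normal(2, 0.5, 40)])
novices = np.column_stack([rng.normal(160, 25, 40), rng.normal(210, 30, 40), rng.normal(6, 1.5, 40)])
X = np.vstack([experts, novices])
y = np.array([1] * len(experts) + [0] * len(novices))

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
clf.fit(X, y)
print(clf.predict([[100, 130, 2.5]]))  # expected: experienced (1)
```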
Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren
2013-11-01
We conducted a simulation study of computer-adaptive testing based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to computer-adaptive test simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD = 14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD = 10.5; 24.8% women). Unidimensionality of the item bank was checked and a Rasch analysis was performed that evaluated local dependency (LD), differential item functioning (DIF), item fit, and reliability. CAT simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals < |2.5|) and no DIF or LD. CAT simulation revealed that 13 items on average were necessary to estimate depression in the range of -2 to +2 logits when terminating at SE ≤ 0.32, and 4 items when using SE ≤ 0.50. Receiver operating characteristic analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (area under the curve ≥ .78 for all cut-off criteria). The recalibration of the ADIB succeeded, and the simulation studies conducted suggest that it has good screening performance in the samples investigated and that it may reasonably add to the improvement of depression assessment. © 2013.
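The estimation code is not part of the abstract; a generic Rasch-based adaptive-testing loop with the standard-error stopping rule it describes can be sketched as follows. The item difficulties, the simulated respondent, and the clamping of the ability estimate are illustrative assumptions.

```python
# Generic sketch of a Rasch-model CAT loop with a standard-error stopping rule, in the
# spirit of the simulation above (item difficulties and the true depression level are invented).
import math
import random

random.seed(3)
item_bank = [random.uniform(-2.5, 2.5) for _ in range(36)]  # item difficulties (logits)

def p_endorse(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, theta=0.0):
    # Newton-Raphson maximum-likelihood estimate for the Rasch model, clamped to a
    # plausible range so all-endorsed/none-endorsed patterns stay numerically stable.
    info = 1e-9
    for _ in range(25):
        info = sum(p_endorse(theta, b) * (1 - p_endorse(theta, b)) for b, _ in responses)
        score = sum(x - p_endorse(theta, b) for b, x in responses)
        if info < 1e-9:
            break
        theta = max(-4.0, min(4.0, theta + score / info))
    se = 1.0 / math.sqrt(info) if info > 1e-9 else float("inf")
    return theta, se

def run_cat(true_theta, se_stop=0.32, max_items=36):
    used, responses, theta, se = set(), [], 0.0, float("inf")
    while len(responses) < max_items:
        # Pick the unused item whose difficulty is closest to the current estimate.
        i = min((j for j in range(len(item_bank)) if j not in used),
                key=lambda j: abs(item_bank[j] - theta))
        used.add(i)
        x = 1 if random.random() < p_endorse(true_theta, item_bank[i]) else 0
        responses.append((item_bank[i], x))
        theta, se = estimate_theta(responses)
        if se <= se_stop and len(responses) >= 3:
            break
    return theta, se, len(responses)

theta_hat, se, n_items = run_cat(true_theta=0.5)
print(f"theta ~ {theta_hat:.2f}, SE = {se:.2f}, items used = {n_items}")
```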
Developing an Index to Measure Health System Performance: Measurement for Districts of Nepal.
Kandel, N; Fric, A; Lamichhane, J
2014-01-01
Various frameworks for measuring health system performance have been proposed and discussed. The scope of performance indicators is broad, ranging from national health systems to individual patients at various levels of the health system. Development of an innovative and simple index is essential to capture the multidimensionality of health systems. We used indicators that also serve as proxies for the set of activities whose primary goal is to maintain and improve health. We used eleven MDG indicators, which represent all dimensions of health, to develop the index. These indicators were combined with a methodology similar to that of the human development index. As an illustration, we computed the index for the districts of Nepal using published data. To validate our findings, we compared the indices of these districts with other development indices for Nepal. An index for each district was computed from the eleven indicators. These indices were then compared with the human development, socio-economic, and infrastructure development indices, and the findings showed a similar distribution of districts. Districts categorized as low or high performing on health system performance also have correspondingly low or high human development, socio-economic, and infrastructure indices. This methodology of computing an index from various indicators could assist policy makers and program managers in prioritizing activities based on performance. Validation against other development indicators shows that this can be one of the tools that assist policy makers, program managers, and others in assessing health system performance.
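The abstract states only that the index is computed with a methodology similar to the human development index; a generic version of that construction (rescale each indicator against fixed goalposts, then average) is sketched below. The indicator names, goalposts, and district values are invented, not the published Nepal data.

```python
# Generic HDI-style index construction, as suggested by the abstract: each indicator is
# rescaled to [0, 1] against fixed goalposts and the rescaled values are averaged.
# Indicator names, goalposts, and district values are invented for illustration.
def dimension_index(value, minimum, maximum, higher_is_better=True):
    idx = (value - minimum) / (maximum - minimum)
    idx = min(max(idx, 0.0), 1.0)
    return idx if higher_is_better else 1.0 - idx

# goalposts: indicator -> (min, max, higher_is_better)
goalposts = {
    "skilled_birth_attendance_pct": (0, 100, True),
    "measles_immunization_pct": (0, 100, True),
    "under5_mortality_per_1000": (0, 200, False),
}

districts = {
    "District A": {"skilled_birth_attendance_pct": 72, "measles_immunization_pct": 90,
                   "under5_mortality_per_1000": 38},
    "District B": {"skilled_birth_attendance_pct": 35, "measles_immunization_pct": 61,
                   "under5_mortality_per_1000": 92},
}

for name, values in districts.items():
    parts = [dimension_index(values[k], *goalposts[k][:2], higher_is_better=goalposts[k][2])
             for k in goalposts]
    print(f"{name}: health system performance index = {sum(parts) / len(parts):.3f}")
```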
Mejía, Vilma; Gonzalez, Carlos; Delfino, Alejandro E; Altermatt, Fernando R; Corvetto, Marcia A
The primary purpose of this study was to compare the effect of high-fidelity simulation versus computer-based case-solving self-study on skills acquisition about malignant hyperthermia in first-year anesthesiology residents. After institutional ethics committee approval, 31 first-year anesthesiology residents were enrolled in this prospective randomized single-blinded study. Participants were randomized to either a high-fidelity simulation scenario or a computer-based case study about malignant hyperthermia. After the intervention, all subjects' performance was assessed through a high-fidelity simulation scenario using a previously validated assessment rubric. Additionally, knowledge tests and a satisfaction survey were applied. Finally, a semi-structured interview was done to assess self-perception of the reasoning process and decision-making. Twenty-eight first-year residents successfully finished the study. Residents' management skill scores were globally higher with high-fidelity simulation than with the case study, though differences were significant in 4 of the 8 performance rubric elements: recognizing signs and symptoms (p = 0.025), prioritization of initial management actions (p = 0.003), recognizing complications (p = 0.025), and communication (p = 0.025). Average scores from pre- and post-test knowledge questionnaires improved from 74% to 85% in the high-fidelity simulation group and decreased from 78% to 75% in the case study group (p = 0.032). Regarding the qualitative analysis, there was no difference between teaching strategies in the factors influencing the students' reasoning and decision-making. Simulation-based training with a malignant hyperthermia high-fidelity scenario was superior to computer-based case study, improving knowledge and skills in malignant hyperthermia crisis management, with a very good satisfaction level in anesthesia residents. Copyright © 2018 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.
A study on haptic collaborative game in shared virtual environment
NASA Astrophysics Data System (ADS)
Lu, Keke; Liu, Guanyang; Liu, Lingzhi
2013-03-01
This paper introduces a study of a collaborative game in a shared virtual environment with haptic feedback over computer networks. A collaborative task was used in which players located at remote sites played the game together. Unlike traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and combined visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback enhances performance in a collaborative game in a shared virtual environment. The outcomes of this research can have a strong impact on networked computer games.
Kundhal, Pavi S; Grantcharov, Teodor P
2009-03-01
This study was conducted to validate the role of virtual reality computer simulation as an objective method for assessing laparoscopic technical skills. The authors aimed to investigate whether performance in the operating room, assessed using a modified Objective Structured Assessment of Technical Skill (OSATS), correlated with the performance parameters registered by a virtual reality laparoscopic trainer (LapSim). The study enrolled 10 surgical residents (3 females) with a median of 5.5 years (range, 2-6 years) since graduation who had similar limited experience in laparoscopic surgery (median, 5; range, 1-16 laparoscopic cholecystectomies). All the participants performed three repetitions of seven basic skills tasks on the LapSim laparoscopic trainer and one laparoscopic cholecystectomy in the operating room. The operating room procedure was video recorded and blindly assessed by two independent observers using a modified OSATS rating scale. Assessment in the operating room was based on three parameters: time used, error score, and economy of motion score. During the tasks on the LapSim, time, error (tissue damage and millimeters of tissue damage [tasks 2-6]; error score for incomplete target areas, badly placed clips, and dropped clips [task 7]), and economy of movement parameters (path length and angular path) were registered. The correlation between time, economy, and error parameters during the simulated tasks and the operating room procedure was statistically assessed using Spearman's test. Significant correlations were demonstrated between the time used to complete the operating room procedure and the time used for task 7 (r_s = 0.74; p = 0.015). The error score demonstrated during the laparoscopic cholecystectomy correlated well with the tissue damage in three of the seven tasks (p < 0.05), the millimeters of tissue damage during two of the tasks, and the error score in task 7 (r_s = 0.67; p = 0.034). Furthermore, statistically significant correlations were observed between the economy of motion score from the operative procedure and LapSim's economy parameters (path length and angular path in six of the tasks) (p < 0.05). The current study demonstrated significant correlations between operative performance in the operating room (assessed using a well-validated rating scale) and psychomotor performance in a virtual environment assessed by a computer simulator. This provides strong evidence for the validity of the simulator system as an objective tool for assessing laparoscopic skills. Virtual reality simulation can be used in practice to assess technical skills relevant for minimally invasive surgery.
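The statistical step described, Spearman's rank correlation between LapSim parameters and operating-room scores, can be reproduced in a few lines of SciPy; the paired values below are invented placeholders, not the study's data.

```python
# Sketch of the correlation analysis described above: Spearman's rank correlation between
# a LapSim parameter and the corresponding operating-room score for the 10 residents.
# The paired values are illustrative placeholders.
from scipy.stats import spearmanr

task7_time = [312, 280, 405, 350, 298, 460, 330, 390, 275, 415]   # seconds on LapSim task 7
or_procedure_time = [48, 45, 66, 58, 47, 75, 55, 63, 44, 70]      # minutes in the operating room

rho, p_value = spearmanr(task7_time, or_procedure_time)
print(f"r_s = {rho:.2f}, p = {p_value:.3f}")
```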
Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar
2009-10-01
Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
Computer proficiency questionnaire: assessing low and high computer proficient seniors.
Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran
2015-06-01
Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
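The reliability figures quoted (for example, Cronbach's α = .98) follow the standard formula α = k/(k−1)·(1 − Σσᵢ²/σ²_total); a direct implementation is sketched below with a small invented response matrix.

```python
# Sketch of the Cronbach's alpha computation behind the reliability figures reported above.
# The 5-respondent x 4-item response matrix is invented; the formula is the standard one.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)   # rows = respondents, columns = items
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [[4, 5, 4, 5],
             [2, 2, 3, 2],
             [5, 5, 5, 4],
             [3, 3, 2, 3],
             [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```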
Wild, Katherine V; Mattek, Nora C; Maxwell, Shoshana A; Dodge, Hiroko H; Jimison, Holly B; Kaye, Jeffrey A
2012-11-01
This study examines differences in computer-related self-efficacy and anxiety in subgroups of older adults, and changes in those measures after exposure to a systematic training program and subsequent computer use. Participants were volunteers in the Intelligent Systems for Assessment of Aging Changes study (ISAAC) carried out by the Oregon Center for Aging and Technology. Participants were administered two questionnaires before training and again 1 year later, which were related to computer self-efficacy and anxiety. Continuous recording of computer use was also assessed for a subset of participants. Baseline comparisons by sex, age, education, living arrangement, and computer proficiency, but not cognitive status, yielded significant differences in confidence and anxiety related to specific aspects of computer use. At 1-year follow-up, participants reported less anxiety and greater confidence. However, the benefits of training and exposure varied by group and task. Comparisons based on cognitive status showed that the cognitively intact participants benefited more from training and/or experience with computers than did participants with mild cognitive impairment (MCI), who after 1 year continued to report less confidence and more anxiety regarding certain aspects of computer use. After 1 year of consistent computer use, cognitively intact participants in this study reported reduced levels of anxiety and increased self-confidence in their ability to perform specific computer tasks. Participants with MCI at baseline were less likely to demonstrate increased efficacy or confidence than their cognitively intact counterparts. Copyright © 2012 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Ray, Sarah; Valdovinos, Katie
Pharmacy students should be exposed to and offered opportunities to practice the skill of incorporating a computer into a patient interview in the didactic setting. Faculty sought to improve retention of student ability to incorporate computers into their patient-pharmacist communication. Students were required to utilize a computer to document clinical information gathered during a simulated patient encounter (SPE). Students utilized electronic worksheets and were evaluated by instructors on their ability to effectively incorporate a computer into a SPE using a rubric. Students received specific instruction on effective computer use during patient encounters. Students were then re-evaluated by an instructor during subsequent SPEs of increasing complexity using standardized rubrics blinded from the students. Pre-instruction, 45% of students effectively incorporated a computer into a SPE. After receiving instruction, 67% of students were effective in their use of a computer during a SPE of performing a pharmaceutical care assessment for a patient with chronic obstructive pulmonary disease (COPD) (p < 0.05 compared to pre-instruction), and 58% of students were effective in their use of a computer during a SPE of retrieving a medication list and social history from a simulated alcohol-impaired patient (p = 0.087 compared to pre-instruction). Instruction can improve pharmacy students' ability to incorporate a computer into SPEs, a critical skill in building and maintaining rapport with patients and improving efficiency of patient visits. Complex encounters may affect students' ability to utilize a computer appropriately. Students may benefit from repeated practice with this skill, especially with SPEs of increasing complexity. Copyright © 2016 Elsevier Inc. All rights reserved.
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling, and similarity evaluation. The performance of the SCCS technique was tested using worked solutions for solving linear algebraic equations in one variable. A total of 350 working schemes comprising 1,385 responses were collected using a marking engine prototype developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation, and a high degree of agreement with manual scores, with small average absolute and mixed errors.
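The SCCS technique itself is not reproduced in the abstract; the sketch below only illustrates the general idea it describes of tokenizing each response equation and scoring structural similarity against a reference step. The tokenizer and the bag-of-tokens cosine similarity are assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of multiset-style structural scoring of an equation step: tokenize both
# equations, build bags of tokens, and compute a cosine-style similarity. This is a generic
# stand-in for the SCCS technique, not the authors' implementation.
import math
import re
from collections import Counter

def tokenize(equation):
    # Split into numbers, variables, and operators, e.g. "2x + 3 = 7" -> ['2','x','+','3','=','7']
    return re.findall(r"\d+|[a-zA-Z]+|[=+\-*/()]", equation.replace(" ", ""))

def similarity(student_step, reference_step):
    a, b = Counter(tokenize(student_step)), Counter(tokenize(reference_step))
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

reference = ["2x + 3 = 7", "2x = 4", "x = 2"]
student = ["2x + 3 = 7", "2x = 4", "x = 4"]

for step, (s, r) in enumerate(zip(student, reference), start=1):
    print(f"step {step}: similarity {similarity(s, r):.2f}")
```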
Use of a Tracing Task to Assess Visuomotor Performance: Effects of Age, Sex, and Handedness
2013-01-01
Background. Visuomotor abnormalities are common in aging and age-related disease, yet difficult to quantify. This study investigated the effects of healthy aging, sex, and handedness on the performance of a tracing task. Participants (n = 150, aged 21–95 years, 75 females) used a stylus to follow a moving target around a circle on a tablet computer with their dominant and nondominant hands. Participants also performed the Trail Making Test (a measure of executive function). Methods. Deviations from the circular path were computed to derive an “error” time series. For each time series, absolute mean, variance, and complexity index (a proposed measure of system functionality and adaptability) were calculated. Using the moving target and stylus coordinates, the percentage of task time within the target region and the cumulative micropause duration (a measure of motion continuity) were computed. Results. All measures showed significant effects of aging (p < .0005). Post hoc age group comparisons showed that with increasing age, the absolute mean and variance of the error increased, complexity index decreased, percentage of time within the target region decreased, and cumulative micropause duration increased. Only complexity index showed a significant difference between dominant versus nondominant hands within each age group (p < .0005). All measures showed relationships to the Trail Making Test (p < .05). Conclusions. Measures derived from a tracing task identified performance differences in healthy individuals as a function of age, sex, and handedness. Studies in populations with specific neuromotor syndromes are warranted to test the utility of measures based on the dynamics of tracking a target as a clinical assessment tool. PMID:23388876
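Two of the measures defined above, the radial error time series and the cumulative micropause duration, can be computed directly from stylus samples; the sketch below does so with a synthetic trace and an assumed speed threshold for micropauses.

```python
# Minimal sketch of two tracing-task measures: the radial error time series (deviation of the
# stylus from the circular path) and the cumulative micropause duration (time spent nearly
# stationary). The trajectory, circle radius, and micropause threshold are illustrative assumptions.
import numpy as np

def tracing_measures(xy, t, radius, center=(0.0, 0.0), pause_speed=5.0):
    xy = np.asarray(xy, dtype=float)
    t = np.asarray(t, dtype=float)
    # Radial error: distance from the centre minus the target radius.
    error = np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1]) - radius
    # Micropauses: samples whose instantaneous speed falls below a threshold (units/s).
    dt = np.diff(t)
    speed = np.hypot(np.diff(xy[:, 0]), np.diff(xy[:, 1])) / dt
    micropause = dt[speed < pause_speed].sum()
    return np.abs(error).mean(), error.var(), micropause

# Synthetic trace: a slightly noisy circle of radius 50 mm sampled at 100 Hz for 10 s.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.01)
angle = 2 * np.pi * t / 10
xy = np.column_stack([50 * np.cos(angle), 50 * np.sin(angle)]) + rng.normal(0, 1.0, (len(t), 2))

mean_err, var_err, pause_s = tracing_measures(xy, t, radius=50)
print(f"mean |error| = {mean_err:.2f} mm, variance = {var_err:.2f}, micropause time = {pause_s:.2f} s")
```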
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spangler, Lee; Cunningham, Alfred; Lageson, David
2011-03-31
ZERT has made major contributions to five main areas of sequestration science: improvement of computational tools; measurement and monitoring techniques to verify storage and track migration of CO2; development of a comprehensive performance and risk assessment framework; fundamental geophysical, geochemical, and hydrological investigations of CO2 storage; and investigation of innovative, bio-based mitigation strategies.
NASA Astrophysics Data System (ADS)
He, Jiangang; Franchini, Cesare
2017-11-01
In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter α by means of an iterative calculation of the static dielectric constant using the perturbation expansion after discretization method and making use of the relation α = 1/ε∞ between the mixing parameter and the static dielectric constant.
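The self-consistency cycle implied by that relation, updating the mixing parameter from the computed dielectric constant until the two are consistent, can be sketched schematically; the ε∞(α) model below is a toy stand-in for the actual first-principles calculation.

```python
# Schematic sketch of the self-consistent mixing-parameter loop implied by alpha = 1 / eps_inf:
# compute eps_inf at the current alpha, update alpha, and iterate to a fixed point. The linear
# eps_inf(alpha) model is a toy stand-in for an actual hybrid-functional dielectric calculation.
def eps_inf(alpha):
    # Toy model: increasing exact exchange opens the gap and lowers eps_inf.
    return 6.0 - 4.0 * alpha

def self_consistent_alpha(alpha=0.25, tol=1e-4, max_iter=50):
    for iteration in range(1, max_iter + 1):
        new_alpha = 1.0 / eps_inf(alpha)
        if abs(new_alpha - alpha) < tol:
            return new_alpha, iteration
        alpha = new_alpha
    return alpha, max_iter

alpha, n_iter = self_consistent_alpha()
print(f"converged alpha = {alpha:.3f} after {n_iter} iterations")
```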
George E. Host; Harlan W. Stech; Kathryn E. Lenz; Kyle Roskoski; Richard Mather; Michael Donahue
2007-01-01
ECOPHYS is one of the early FSTMs that integrated plant physiological and tree architectural models to assess the relative importance of genetic traits in tree growth and to explore the growth response to interacting environmental stresses (Host et al. 1999, Isebrands et al. 1999, Martin et al. 2001). This paper will describe extensions of the ECOPHYS individual tree...
Huang, Jia; McCaskey, Michael A; Yang, Shanli; Ye, Haicheng; Tao, Jing; Jiang, Cai; Schuster-Amft, Corina; Balzer, Christian; Ettlin, Thierry; Schupp, Wilfried; Kulke, Hartwig; Chen, Lidian
2015-12-02
A majority of stroke survivors present with cognitive impairments. Attention disturbance, which leads to impaired concentration and overall reduced cognitive functions, is strongly associated with stroke. The clinical efficacy of acupuncture with Baihui (GV20) and Shenting (GV24) and of computer-assisted cognitive training in stroke and post-stroke cognitive impairment has been demonstrated in previous studies. To date, no systematic comparison of these exists, and the potential beneficial effects of a combined application are yet to be examined. The main objective of this pilot study is to evaluate the effects of computer-assisted cognitive training compared to acupuncture on the outcomes of attention assessments. The second objective is to test the effects of a combined cognitive intervention that incorporates computer-assisted cognitive training and acupuncture (ACoTrain). An international multicentre, single-blinded, randomised controlled pilot trial will be conducted. In a 1:1:1 ratio, 60 inpatients with post-stroke cognitive dysfunction will be randomly allocated into either the acupuncture group, the computer-assisted cognitive training group, or the ACoTrain group in addition to their individual rehabilitation programme. The intervention period of this pilot trial will last 4 weeks (30 minutes per day, 5 days per week, Monday to Friday). The primary outcome is the test battery for attentional performance. The secondary outcomes include the Trail Making Test, Test des Deux Barrages, National Institute of Health Stroke Scale, and Modified Barthel Index for assessment of daily life competence, and the EuroQol Questionnaire for health-related quality of life. This trial mainly focuses on evaluating the effects of computer-assisted cognitive training compared to acupuncture on the outcomes of attention assessments. The results of this pilot trial are expected to provide new insights on how Eastern and Western medicine can complement one another and improve the treatment of cognitive impairments in early stroke rehabilitation. Including patients with different cultural backgrounds allows a more generalisable interpretation of the results but also poses risks of performance bias. Using standardised and well-described assessments, validated for each region, is pivotal to allow pooling of the data. ClinicalTrials.gov ID: NCT02324959 (8 December 2014).
Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie
2016-09-01
One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate, and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed, and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals was visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and their performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week, and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective means for rapid phenotypic brightfield assessment of individual larvae in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
DOE Office of Scientific and Technical Information (OSTI.GOV)
East, D. R.; Sexton, J.
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility, and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.
1993-03-01
values themselves. The tools perform risk-adjusted present-value comparisons and compute the ROI using discount factors. The assessment of risk in a...developed X Window system, the de facto industry standard window system in the UNIX environment. An X terminal's use is limited to display. It has no...2.1 IT HARDWARE The DOS-based PC used in this analysis costs $2,060. It includes an ASL 486DX-33 Industry Standard Architecture (ISA) computer with 8
NASA Technical Reports Server (NTRS)
Forney, J. A.; Walker, D.; Lanier, M.
1979-01-01
The computer program SHCOST was used to perform economic analyses of operational test sites. The program allows consideration of the economic parameters that are important to the solar system user. A life-cycle cost and cash-flow comparison is made between a solar heating system and a conventional system. The program assists in sizing the solar heating system. A sensitivity study and plot capability allow the user to select the most cost-effective system configuration.
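The life-cycle cost and cash-flow comparison SHCOST performs can be illustrated with a basic present-value calculation; the discount rate, escalation, fuel costs, and first costs below are invented inputs, not the program's data or its actual algorithm.

```python
# Illustrative life-cycle cost comparison in the spirit of the SHCOST analysis described above:
# discount each year's cash flow to present value and compare a solar system (high first cost,
# low fuel cost) with a conventional system. All inputs are invented for illustration.
def life_cycle_cost(first_cost, annual_cost, years, discount_rate, escalation=0.0):
    cost = first_cost
    for year in range(1, years + 1):
        cash_flow = annual_cost * (1 + escalation) ** (year - 1)
        cost += cash_flow / (1 + discount_rate) ** year
    return cost

solar = life_cycle_cost(first_cost=8000, annual_cost=150, years=20, discount_rate=0.06, escalation=0.05)
conventional = life_cycle_cost(first_cost=2000, annual_cost=700, years=20, discount_rate=0.06, escalation=0.05)
print(f"solar LCC ${solar:,.0f} vs conventional LCC ${conventional:,.0f}")
```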
Olson, Scott A.
1996-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.
1997-01-01
size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
1991-09-01
constant data into the gaining base’s computer records. Among the data elements to be loaded, the 1XT434 image contains the level detail effective date...the mission support effective date, and the PBR override (19:19-203). In conjunction with the 1XT434, the Mission Change Parameter Image (Constant...the gaining base (19:19-208). The level detail effective date establishes the date the MCDDFR and MCDDR "are considered by the requirements computation
Ivanoff, Michael A.; Hammond, Robert E.
1997-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Single-node orbit analysis with radiation heat transfer only
NASA Technical Reports Server (NTRS)
Peoples, J. A.
1977-01-01
The steady-state temperature of a single node that dissipates energy by radiation only is discussed for a non-time-varying thermal environment. Relationships are developed to illustrate how shields can be utilized to represent a louver system. A computer program is presented that can assess periodic temperature characteristics of a single node in a time-varying thermal environment with energy dissipation by radiation only. The computer program performs thermal orbital analysis for five combinations of plate, shields, and louvers.
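The radiation-only steady state described above follows from the balance Q_dissipated + Q_absorbed = εσAT⁴, so T = ((Q + Q_abs)/(εσA))^(1/4); a direct evaluation with assumed values is sketched below (it is not the NASA program itself).

```python
# Minimal sketch of the radiation-only steady-state balance for a single node:
# dissipated power plus absorbed environmental flux equals emitted power,
# Q + Q_abs = eps * sigma * A * T^4. Areas, powers, and optical properties are
# assumed values for illustration, not the program's inputs.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def steady_state_temperature(q_dissipated, q_absorbed, emissivity, area):
    return ((q_dissipated + q_absorbed) / (emissivity * SIGMA * area)) ** 0.25

# Example: a 0.5 m^2 plate dissipating 40 W with 120 W of absorbed solar/albedo flux.
T = steady_state_temperature(q_dissipated=40.0, q_absorbed=120.0, emissivity=0.85, area=0.5)
print(f"steady-state temperature ~ {T:.1f} K ({T - 273.15:.1f} C)")
```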
Boehmler, Erick M.; Hammond, Robert E.
1997-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.; Burns, Ronda L.
1997-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Degnan, James R.
1997-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Burns, Ronda L.
1997-01-01
It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
A Practical Guide to the Design and Construction of a Single Wire Beverage Antenna.
1980-09-01
Contents: theoretical patterns; design parameters; antenna construction; assessment of performance; conclusions; tables 1-4; references. ... on the main RAE 1906S computer. The results show the effect at two frequencies of varying the main parameters of antenna height and antenna length ... programmes are stored on the main RAE 1906S computer and can be edited prior to running to take into account desired changes in parameters, principally ...
Damage progression in Composite Structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1996-01-01
A computational simulation tool is used to evaluate the various stages of damage progression in composite materials during Iosipescu shear testing. Unidirectional composite specimens with either the major or minor material axis in the load direction are considered. Damage progression characteristics are described for each specimen using two types of boundary conditions. A procedure is outlined regarding the use of computational simulation in composites testing. Iosipescu shear testing using the V-notched beam specimen is a convenient method to measure both shear strength and shear stiffness simultaneously. The evaluation of composite test response can be made more productive and informative via computational simulation of progressive damage and fracture. Computational simulation performs a complete evaluation of laminated composite fracture via assessment of ply and subply level damage/fracture processes.
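The progressive-damage simulation itself is not reproduced here; as a point of reference, the sketch below shows the basic V-notched beam data reduction the test provides, average shear stress τ = P/(w·t) and a chord shear modulus from the τ-γ slope, with assumed geometry and load/strain values.

```python
# Sketch of basic Iosipescu (V-notched beam) data reduction, not the
# progressive-damage simulation. Geometry and test data are illustrative.
import numpy as np

def shear_stress(load_n, notch_width_m, thickness_m):
    """Average shear stress between the notches: tau = P / (w * t)."""
    return load_n / (notch_width_m * thickness_m)

def shear_modulus(loads_n, shear_strains, notch_width_m, thickness_m):
    """Chord shear modulus from a linear fit of tau versus gamma."""
    tau = shear_stress(np.asarray(loads_n, float), notch_width_m, thickness_m)
    slope, _ = np.polyfit(np.asarray(shear_strains, float), tau, 1)
    return slope

loads = [0, 200, 400, 600, 800]                # N, assumed
gammas = [0, 0.0012, 0.0024, 0.0037, 0.0050]   # shear strain, assumed
G = shear_modulus(loads, gammas, notch_width_m=0.012, thickness_m=0.004)
tau_max = shear_stress(950, 0.012, 0.004)      # assumed failure load
print(f"G ~ {G / 1e9:.2f} GPa, shear strength ~ {tau_max / 1e6:.1f} MPa")
```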
The Association Between Computer Use and Cognition Across Adulthood: Use it so You Won't Lose it?
Tun, Patricia A.; Lachman, Margie E.
2012-01-01
Understanding the association between computer use and adult cognition has been limited until now by self-selected samples with restricted ranges of age and education. Here we studied effects of computer use in a large national sample (N=2671) of adults aged 32 to 84, assessing cognition with the Brief Test of Adult Cognition by Telephone (Tun & Lachman, 2005), and executive function with the Stop and Go Switch Task (Tun & Lachman, 2008). Frequency of computer activity was associated with cognitive performance after controlling for age, sex, education, and health status: that is, individuals who used the computer frequently scored significantly higher than those who seldom used the computer. Greater computer use was also associated with better executive function on a task-switching test, even after controlling for basic cognitive ability as well as demographic variables. These findings suggest that frequent computer activity is associated with good cognitive function, particularly executive control, across adulthood into old age, especially for those with lower intellectual ability. PMID:20677884
Curran, V R; Hoekman, T; Gulliver, W; Landells, I; Hatcher, L
2000-01-01
Over the years, various distance learning technologies and methods have been applied to the continuing medical education needs of rural and remote physicians. They have included audio teleconferencing, slow scan imaging, correspondence study, and compressed videoconferencing. The recent emergence and growth of Internet, World Wide Web (Web), and compact disc read-only memory (CD-ROM) technologies have introduced new opportunities for providing continuing education to the rural medical practitioner. This evaluation study assessed the instructional effectiveness of a hybrid computer-mediated courseware delivery system on dermatologic office procedures. A hybrid delivery system merges Web documents, multimedia, computer-mediated communications, and CD-ROMs to enable self-paced instruction and collaborative learning. Using a modified pretest-posttest control group study design, several evaluative criteria (participant reaction, learning achievement, self-reported performance change, and instructional transactions) were assessed by various qualitative and quantitative data collection methods. This evaluation revealed that a hybrid computer-mediated courseware system was an effective means for increasing knowledge (p < .05) and improving self-reported competency (p < .05) in dermatologic office procedures, and that participants were very satisfied with the self-paced instruction and use of asynchronous computer conferencing for collaborative information sharing among colleagues.
A computational framework to characterize and compare the geometry of coronary networks.
Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A
2017-03-01
This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information of several coronary arteries is presented. Results agree with medical reports in terms of basic geometric and anatomical variables. For the geometric descriptors, inter-artery and intra-artery correlations are studied. The data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likelihood based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likelihood, measured through geometric descriptors, is stronger between siblings than between non-relative patients. Copyright © 2016 John Wiley & Sons, Ltd.
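The abstract does not spell out the similarity criterion, so the sketch below stands in for it with an assumed form: z-score a few geometric descriptors against a population and convert the Euclidean distance between two vessels into a similarity in [0, 1]. Descriptor names and values are illustrative, not the paper's definitions.

```python
# Hedged sketch of a geometry-based similarity score between two vessels.
import numpy as np

DESCRIPTORS = ["length_mm", "mean_radius_mm", "tortuosity", "mean_curvature"]

def similarity(vessel_a, vessel_b, population):
    """Similarity in [0, 1]: 1 / (1 + Euclidean distance of z-scored descriptors)."""
    pop = np.array([[v[d] for d in DESCRIPTORS] for v in population], dtype=float)
    mu, sd = pop.mean(axis=0), pop.std(axis=0) + 1e-12
    a = (np.array([vessel_a[d] for d in DESCRIPTORS], dtype=float) - mu) / sd
    b = (np.array([vessel_b[d] for d in DESCRIPTORS], dtype=float) - mu) / sd
    return 1.0 / (1.0 + np.linalg.norm(a - b))

population = [
    {"length_mm": 110, "mean_radius_mm": 1.6, "tortuosity": 1.25, "mean_curvature": 0.08},
    {"length_mm": 95,  "mean_radius_mm": 1.4, "tortuosity": 1.40, "mean_curvature": 0.10},
    {"length_mm": 120, "mean_radius_mm": 1.8, "tortuosity": 1.20, "mean_curvature": 0.07},
]
print(similarity(population[0], population[2], population))
```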
Competence with Fractions Predicts Gains in Mathematics Achievement
Bailey, Drew H.; Hoard, Mary K.; Nugent, Lara; Geary, David C.
2012-01-01
Competence with fractions predicts later mathematics achievement, but the co-developmental pattern between fractions knowledge and mathematics achievement is not well understood. We assessed this co-development through examination of the cross-lagged relation between a measure of conceptual knowledge of fractions and mathematics achievement in sixth and seventh grade (n = 212). The cross-lagged effects indicated that performance on the sixth grade fractions concepts measure predicted one year gains in mathematics achievement (β = .14, p<.01), controlling for the central executive component of working memory and intelligence, but sixth grade mathematics achievement did not predict gains on the fractions concepts measure (β = .03, p>.50). In a follow-up assessment, we demonstrated that measures of fluency with computational fractions significantly predicted seventh grade mathematics achievement above and beyond the influence of fluency in computational whole number arithmetic, performance on number fluency and number line tasks, and central executive span and intelligence. Results provide empirical support for the hypothesis that competence with fractions underlies, in part, subsequent gains in mathematics achievement. PMID:22832199
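A minimal sketch of the cross-lagged logic follows, using simulated data rather than the study's: regress each time-2 score on the opposite time-1 measure while controlling for the same-construct time-1 score and covariates. Variable names and effect sizes are assumptions for illustration only.

```python
# Cross-lagged regression sketch on simulated data (not the study's dataset).
import numpy as np

rng = np.random.default_rng(0)
n = 212
iq = rng.normal(size=n)
wm = rng.normal(size=n)                                   # central executive proxy
frac_t1 = 0.4 * iq + 0.3 * wm + rng.normal(size=n)        # grade 6 fractions concepts
math_t1 = 0.5 * iq + 0.3 * wm + rng.normal(size=n)        # grade 6 achievement
math_t2 = 0.6 * math_t1 + 0.2 * frac_t1 + rng.normal(size=n)
frac_t2 = 0.6 * frac_t1 + 0.0 * math_t1 + rng.normal(size=n)

def std_betas(y, preds):
    """Standardized regression coefficients via least squares on z-scored data."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in preds])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

print("frac_t1 -> math_t2:", std_betas(math_t2, [frac_t1, math_t1, wm, iq])[0])
print("math_t1 -> frac_t2:", std_betas(frac_t2, [math_t1, frac_t1, wm, iq])[0])
```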
Whitfield, Richard H; Newcombe, Robert G; Woollard, Malcolm
2003-12-01
The introduction of the European Resuscitation Guidelines (2000) for cardiopulmonary resuscitation (CPR) and automated external defibrillation (AED) prompted the development of an up-to-date and reliable method of assessing the quality of performance of CPR in combination with the use of an AED. The Cardiff Test of basic life support (BLS) and AED version 3.1 was developed to meet this need and uses standardised checklists to retrospectively evaluate performance from analyses of video recordings and data drawn from a laptop computer attached to a training manikin. This paper reports the inter- and intra-observer reliability of this test. Data used to assess reliability were obtained from an investigation of CPR and AED skill acquisition in a lay responder AED training programme. Six observers were recruited to evaluate performance in 33 data sets, repeating their evaluation after a minimum interval of 3 weeks. More than 70% of the 42 variables considered in this study had a kappa score of 0.70 or above for inter-observer reliability or were drawn from computer data and therefore not subject to evaluator variability. 85% of the 42 variables had kappa scores for intra-observer reliability of 0.70 or above or were drawn from computer data. The standard deviations for inter- and intra-observer measures of time to first shock were 11.6 and 7.7 s, respectively. The inter- and intra-observer reliability for the majority of the variables in the Cardiff Test of BLS and AED version 3.1 is satisfactory. However, reliability is less acceptable with respect to shaking when checking for responsiveness, initial check/clearing of the airway, checks for signs of circulation, time to first shock and performance of interventions in the correct sequence. Further research is required to determine if modifications to the method of assessing these variables can increase reliability.
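The reliability figures above are kappa scores; the sketch below computes Cohen's kappa for one binary checklist variable rated by two observers. The ratings are made up for illustration.

```python
# Cohen's kappa for a single binary checklist item scored by two observers.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# 1 = performed correctly, 0 = not performed (illustrative ratings)
obs1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
obs2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")  # 0.80 for these example ratings
```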
An Assessment of the State-of-the-art in Multidisciplinary Aeromechanical Analyses
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne
2008-01-01
This paper presents a survey of the current state-of-the-art in multidisciplinary aeromechanical analyses which integrate advanced Computational Structural Dynamics (CSD) and Computational Fluid Dynamics (CFD) methods. The application areas to be surveyed include fixed wing aircraft, turbomachinery, and rotary wing aircraft. The objective of the authors in the present paper, together with a companion paper on requirements, is to lay out a path for a High Performance Computing (HPC) based next generation comprehensive rotorcraft analysis. From this survey of the key technologies in other application areas it is possible to identify the critical technology gaps that stem from unique rotorcraft requirements.
Retention in a Computer-based Outreach Intervention For Chronically Ill Rural Women
Weinert, Clarann; Cudney, Shirley; Hill, Wade G.
2009-01-01
The study's purpose was to examine retention factors in a computer intervention with 158 chronically ill rural women. After a 22-week intervention, 18.9 percent of the women had dropped out. A Cox regression survival analysis was performed to assess the effects of selected covariates on retention, and reasons for dropping were tallied and categorized. The major reasons for dropping were lack of time, decline in health status, and non-participation in study activities. Four covariates predicted survival time: level of computer skills, marital status, work outside the home, and the impact of social events on participants' lives. Retention-enhancing strategies are suggested for implementation. PMID:18226760
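A hedged sketch of a Cox proportional-hazards analysis like the one described follows, using the lifelines library as a stand-in (the authors' actual software is not stated); column names and data are illustrative.

```python
# Illustrative Cox regression of dropout on assumed covariates (not the study data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "weeks_retained":    [22, 8, 22, 15, 22, 5, 22, 12, 22, 19],
    "dropped":           [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # 1 = dropped out before week 22
    "computer_skills":   [4, 2, 5, 2, 3, 1, 4, 3, 5, 2],
    "married":           [1, 0, 1, 0, 1, 0, 1, 1, 0, 0],
    "works_outside":     [0, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "life_event_impact": [1, 4, 3, 3, 2, 5, 1, 2, 2, 4],
})

# A small penalizer keeps the fit stable on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="weeks_retained", event_col="dropped")
cph.print_summary()   # hazard ratios for each covariate
```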
Computational Fluid Dynamics Analysis of Thoracic Aortic Dissection
NASA Astrophysics Data System (ADS)
Tang, Yik; Fan, Yi; Cheng, Stephen; Chow, Kwok
2011-11-01
Thoracic Aortic Dissection (TAD) is a cardiovascular disease with high mortality. An aortic dissection is formed when blood infiltrates the layers of the vascular wall and a new artificial channel, the false lumen, is created. The expansion of the blood vessel due to the weakened wall increases the risk of rupture. Computational fluid dynamics analysis is performed to study the hemodynamics of this pathological condition. Both idealized geometry and realistic patient configurations from computed tomography (CT) images are investigated. Physiological boundary conditions from in vivo measurements are employed. Flow configuration and biomechanical forces are studied. Quantitative analysis allows clinicians to assess the risk of rupture in making decisions regarding surgical intervention.
Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
Correct fall risk assessment is becoming increasingly critical with the ageing of the population. Despite available approaches that allow quantitative analysis of the performance of the human movement control system, the clinical assessment and diagnostic approach to fall risk still relies mostly on non-quantitative exams such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system that automatically evaluates exercises drawn from balance assessment scales. Our aim is to overcome the classical limits of these scales, i.e., limited granularity and limited inter-/intra-examiner reliability, and to obtain objective scores and more detailed information that allow fall risk to be predicted. We used Microsoft Kinect to record subjects' movements while they performed challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and, notably, high sensitivity (~83%).
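The final classification step described above can be sketched as follows with scikit-learn; the choice of classifier, the Kinect-derived feature names, and the data are all assumptions, not the authors' implementation.

```python
# Illustrative supervised classification of exercise-derived balance parameters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)
n = 120
# Assumed Kinect-derived features: sway area, trunk tilt range, completion time
X = rng.normal(size=(n, 3)) * [2.0, 5.0, 4.0] + [4.0, 10.0, 12.0]
# Assumed label: 1 = poor balance (higher fall risk), partly driven by the features
y = (0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(size=n) > 4.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)
print("accuracy   :", accuracy_score(y, y_pred))
print("sensitivity:", recall_score(y, y_pred))  # recall on the at-risk class
```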
NASA Technical Reports Server (NTRS)
Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted
1997-01-01
The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer-experienced researchers, computer-experienced non-researchers, non-computer-experienced researchers, and non-computer-experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, etc. Other aspects were content-related. In doing these assessments, we sometimes combined answers given to different questions. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. The general assessment is followed by a more extensive summary of the replies received from the test subjects.
NASA Technical Reports Server (NTRS)
Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry
2015-01-01
Human missions beyond low Earth orbit to destinations such as Mars and asteroids will expose astronauts to novel operational conditions that may pose health risks that are currently not well understood and perhaps unanticipated. In addition, there are limited clinical and research data to inform the development and implementation of health risk countermeasures for these missions. Consequently, NASA's Digital Astronaut Project (DAP) is working to develop and implement computational models and simulations (M&S) to help predict and assess spaceflight health and performance risks, and to enhance countermeasure development. In order to accomplish these goals effectively, the DAP evaluates its models and simulations via a rigorous verification, validation, and credibility assessment process to ensure that the computational tools are sufficiently reliable both to inform research intended to mitigate potential risk and to guide countermeasure development. In doing so, DAP works closely with end-users, such as space life science researchers, to establish appropriate M&S credibility thresholds. We will present and demonstrate the process the DAP uses to vet computational M&S for space biomedical analysis using real M&S examples. We will also provide recommendations on how the larger space biomedical community can employ these concepts to enhance the credibility of their M&S codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.
Cappellozza, José Antonio Zuega; Guedes, Fabio Pinto; Nary, Hugo; Capelozza, Leopoldino; Cardoso, Mauricio de Almeida
2015-01-01
Introduction: Cone-Beam Computed Tomography (CBCT) is essential for three-dimensional planning of orthognathic surgery, as it allows visualization and evaluation of bone structures and mineralized tissues. Tomographic slices allow evaluation of tooth inclination and individualization of movement performed during preoperative decompensation. The aim of this paper was to assess maxillary and mandibular incisor inclination before and after orthodontic decompensation in skeletal Class III malocclusion. Methods: The study was conducted on six surgically treated individuals with skeletal Class III malocclusion who had CBCT scans obtained before and after orthodontic decompensation. In the multiplanar reconstruction view, tomographic slices (axial, coronal, and sagittal) were obtained along the long axis of each incisor. The sagittal slice was used for measurements; tooth inclination was assessed as the long axis of maxillary teeth relative to the palatal plane and the long axis of mandibular teeth relative to the mandibular plane. Results: There was significant variation in the inclination of incisors before and after orthodontic decompensation. This change was of greater magnitude in the mandibular arch, indicating that natural compensation is more effective in this arch and therefore requires more intensive decompensation. Conclusion: Routinely applied decompensation protocols in surgical patients often result in intensive movements; these should be reevaluated, since the extent of movement predisposes to reductions in bone attachment levels and root length. PMID:26560818
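The inclination measurement described above reduces to the angle between a tooth's long axis and a reference plane; the sketch below computes it from landmark coordinates. The landmark choices and coordinates are illustrative assumptions, not the authors' protocol.

```python
# Angle between an incisor's long axis and a reference plane defined by
# three landmarks; coordinates (mm, from the CBCT volume) are illustrative.
import numpy as np

def plane_normal(p1, p2, p3):
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    return n / np.linalg.norm(n)

def axis_plane_angle(apex, incisal_edge, plane_pts):
    """Angle (degrees) between the tooth long axis and a reference plane."""
    axis = np.asarray(incisal_edge, float) - np.asarray(apex, float)
    axis /= np.linalg.norm(axis)
    n = plane_normal(*[np.asarray(p, float) for p in plane_pts])
    # Angle to the plane = 90 degrees minus the angle to its normal
    return 90.0 - np.degrees(np.arccos(abs(np.dot(axis, n))))

palatal_plane = [(0, 0, 0), (50, 0, 0), (25, 40, 0)]   # assumed reference landmarks
apex, edge = (22, 18, 8), (24, 28, -9)                 # assumed root apex / incisal edge
print(f"inclination to palatal plane: {axis_plane_angle(apex, edge, palatal_plane):.1f} deg")
```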