Sample records for metrical task systems

  1. Checkpoint triggering in a computer system

    DOEpatents

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
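The triggering logic the patent describes reduces to a monitor-read-and-compare loop. A minimal sketch with hypothetical names (`run_task`, `should_checkpoint`) and a fixed threshold rather than the adaptively determined one in the claim:

```python
# Illustrative sketch only; function names and the fixed threshold are
# assumptions, not the patent's actual interface.

def should_checkpoint(metric_value, threshold):
    """Trigger checkpoint creation once the monitored metric crosses the threshold."""
    return metric_value >= threshold

def run_task(metric_samples, threshold):
    """Walk a stream of monitor readings; return the index of the first
    reading that triggers a checkpoint, or None if none does."""
    for i, value in enumerate(metric_samples):
        if should_checkpoint(value, threshold):
            return i  # state data of the task would be saved here
    return None

print(run_task([0.2, 0.5, 0.7, 0.9], 0.8))  # first crossing is at index 3
```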

  2. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

    Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics, including: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by a model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
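Model (I), the movement time baseline, comes from Fitts' law, which predicts movement time as a linear function of an index of difficulty, ID = log2(2D/W), for target distance D and width W. A minimal pure-Python sketch of that baseline (the study's actual fitting procedure is not given in the abstract; ordinary least squares is assumed here):

```python
import math

def index_of_difficulty(distance, width):
    # Classic Fitts formulation: ID = log2(2D / W); Shannon-style variants also exist.
    return math.log2(2.0 * distance / width)

def fit_movement_time(ids, times):
    """Ordinary least squares for MT = a + b * ID (single predictor)."""
    n = len(ids)
    mean_x = sum(ids) / n
    mean_y = sum(times) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(ids, times)) \
        / sum((x - mean_x) ** 2 for x in ids)
    a = mean_y - b * mean_x
    return a, b
```

With per-trial (ID, movement time) pairs, `fit_movement_time` recovers the intercept and slope that the R2 = 0.847 figure refers to.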

  3. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
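Path length and depth perception, the metrics this study highlights, can be computed directly from sampled instrument-tip positions. A sketch using one common pair of definitions (the paper's exact formulas may differ):

```python
import math

def path_length(points):
    """Total distance travelled by the instrument tip over sampled 3-D positions."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def depth_perception(points, depth_axis=2):
    """Accumulated back-and-forth motion along the depth axis
    (one common definition of the depth-perception metric)."""
    return sum(abs(q[depth_axis] - p[depth_axis]) for p, q in zip(points, points[1:]))
```

Lower values of both metrics over training days would show the decreases reported above.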

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loughran, B; Singh, V; Jain, A

    Purpose: Although generalized linear system analytic metrics such as GMTF and GDQE can evaluate performance of the whole imaging system, including detector, scatter, and focal spot, a simplified task-specific measured metric may help to better compare detector systems. Methods: Low quantum-noise images of a neuro-vascular stent with a modified ANSI head phantom were obtained from the average of many exposures taken with the high-resolution Micro-Angiographic Fluoroscope (MAF) and with a Flat Panel Detector (FPD). The square of the Fourier Transform of each averaged image, equivalent to the measured product of the system GMTF and the object function in spatial-frequency space, was then divided by the normalized noise power spectra (NNPS) for each respective system to obtain a task-specific generalized signal-to-noise ratio. A generalized measured relative object detectability (GM-ROD) was obtained by taking the ratio of the integral of the resulting expressions for each detector system to give an overall metric that enables a realistic systems comparison for the given detection task. Results: The GM-ROD provides comparison of relative performance of detector systems from actual measurements of the object function as imaged by those detector systems. This metric includes noise correlations and spatial frequencies relevant to the specific object. Additionally, the integration bounds for the GM-ROD can be selected to emphasize the higher frequency band of each detector if high-resolution image details are to be evaluated. Examples of this new metric are discussed with a comparison of the MAF to the FPD for neuro-vascular interventional imaging. Conclusion: The GM-ROD is a new direct-measured task-specific metric that can provide clinically relevant comparison of the relative performance of imaging systems. Supported by NIH Grant 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.

  5. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation of basic psychomotor laparoscopic skills and their correlation with the different abilities they seek to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight into the relevance of the results shown in this study.

  6. Sensitivity of the lane change test as a measure of in-vehicle system demand.

    PubMed

    Young, Kristie L; Lenné, Michael G; Williamson, Amy R

    2011-05-01

    The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
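The mean deviation measure that discriminated between visual and cognitive tasks is, in its simplest form, the average lateral offset of the driven path from a normative lane-change path. An illustrative sketch under that simplified definition (the LCT's actual scoring also handles path alignment, which is omitted here):

```python
def mean_deviation(actual, normative):
    """Mean absolute lateral deviation between the driven path and the
    normative lane-change path, sampled at matching longitudinal positions."""
    if len(actual) != len(normative):
        raise ValueError("paths must be sampled at the same positions")
    return sum(abs(a - n) for a, n in zip(actual, normative)) / len(actual)
```

Larger values indicate poorer lateral control, i.e. greater degradation from the secondary task.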

  7. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress for advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an Industrial Engineer and has been involved in issues related to management of engineering tasks and will use his knowledge from this area to allow extrapolation into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect to have him recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin and some review material has already been given to him.

  8. Gaze entropy reflects surgical task load.

    PubMed

    Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés

    2016-11-01

    Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not been investigated yet. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity level: the Clip Applying exercise, the Cutting Big exercise, and the Translocation of Objects exercise. We also measured performance accuracy and subjective ratings of complexity. Gaze entropy and velocity increased linearly with task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices to assess residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.

  9. Computer-enhanced laparoscopic training system (CELTS): bridging the gap.

    PubMed

    Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeye, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L

    2004-05-01

    There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
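One plausible reading of CELTS's single global z-score is to standardize each kinematic metric against an expert reference sample and average the results. A hedged sketch with hypothetical names (the paper's exact aggregation is not specified in the abstract):

```python
import statistics

def z_score(value, reference):
    """Standardize one observed metric value against a reference (expert) sample."""
    return (value - statistics.mean(reference)) / statistics.stdev(reference)

def global_score(metric_values, references):
    """Average the per-metric z-scores into one task-independent score.
    Note: for metrics where lower is better (time, path length), the sign
    would be flipped before averaging; that step is omitted here."""
    zs = [z_score(v, ref) for v, ref in zip(metric_values, references)]
    return sum(zs) / len(zs)
```

Because every metric is expressed in expert-referenced standard deviations, scores remain comparable across tasks, which is what makes a single expert/trainee threshold possible.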

  10. Construct validity of individual and summary performance metrics associated with a computer-based laparoscopic simulator.

    PubMed

    Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason

    2014-06-01

    Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.

  11. Metrics Handbook (Air Force Systems Command)

    NASA Astrophysics Data System (ADS)

    1991-08-01

    The handbook is designed to help one develop and use good metrics. It is intended to provide sufficient information to begin developing metrics for objectives, processes, and tasks, and to steer one toward appropriate actions based on the data one collects. It should be viewed as a road map to assist one in arriving at meaningful metrics and to assist in continuous process improvement.

  12. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.

  13. Bilateral assessment of functional tasks for robot-assisted therapy applications

    PubMed Central

    Wang, Sarah; Bai, Ping; Strachota, Elaine; Tchekanov, Guennady; Melbye, Jeff; McGuire, John

    2011-01-01

    This article presents a novel evaluation system along with methods to evaluate bilateral coordination of arm function on activities of daily living tasks before and after robot-assisted therapy. An affordable bilateral assessment system (BiAS) consisting of two mini-passive measuring units modeled as three-degree-of-freedom robots is described. The process for evaluating functional tasks using the BiAS is presented and we demonstrate its ability to measure wrist kinematic trajectories. Three metrics, phase difference, movement overlap, and task completion time, are used to evaluate the BiAS system on a bilateral symmetric (bi-drink) and a bilateral asymmetric (bi-pour) functional task. Wrist position and velocity trajectories are evaluated using these metrics to provide insight into temporal and spatial bilateral deficits after stroke. The BiAS system quantified movements of the wrists during functional tasks and detected differences in impaired and unimpaired arm movements. Case studies showed that stroke patients move more slowly than healthy subjects and are less likely to use their arms simultaneously, even when the functional task requires simultaneous movement. After robot-assisted therapy, interlimb coordination spatial deficits moved toward normal coordination on functional tasks. PMID:21881901
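The movement-overlap metric can be illustrated as the fraction of time both arms are moving at once. A sketch under that assumed definition (the BiAS paper may define the metric differently):

```python
def movement_overlap(left_moving, right_moving):
    """Fraction of active samples in which BOTH arms move simultaneously,
    given boolean per-sample movement flags for each wrist."""
    both = sum(1 for l, r in zip(left_moving, right_moving) if l and r)
    either = sum(1 for l, r in zip(left_moving, right_moving) if l or r)
    return both / either if either else 0.0
```

A value near 1 indicates tightly coupled bilateral movement; stroke patients in the case studies would score lower on symmetric tasks like bi-drink.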

  14. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
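The Dice index the authors adopt compares the set of identified chemicals against the plume's true constituents. A small sketch in its set form (the paper derives the metric from a partitioned, weighted confusion matrix, which is more general than this):

```python
def dice_index(identified, truth):
    """Dice coefficient between the identified chemical set and the true
    plume constituents: 2|A ∩ B| / (|A| + |B|)."""
    identified, truth = set(identified), set(truth)
    if not identified and not truth:
        return 1.0  # nothing to identify, nothing reported
    return 2 * len(identified & truth) / (len(identified) + len(truth))
```

Unlike a per-chemical detection rate, this single score penalizes both missed constituents and spurious identifications from the library.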

  15. Using machine learning and real-time workload assessment in a high-fidelity UAV simulation environment

    NASA Astrophysics Data System (ADS)

    Monfort, Samuel S.; Sibley, Ciara M.; Coyne, Joseph T.

    2016-05-01

    Future unmanned vehicle operations will see more responsibilities distributed among fewer pilots. Current systems typically involve a small team of operators maintaining control over a single aerial platform, but this arrangement results in a suboptimal configuration of operator resources to system demands. Rather than devoting the full-time attention of several operators to a single UAV, the goal should be to distribute the attention of several operators across several UAVs as needed. Under a distributed-responsibility system, operator task load would be continuously monitored, with new tasks assigned based on system needs and operator capabilities. The current paper sought to identify a set of metrics that could be used to assess workload unobtrusively and in near real-time to inform a dynamic tasking algorithm. To this end, we put 20 participants through a variable-difficulty multiple UAV management simulation. We identified a subset of candidate metrics from a larger pool of pupillary and behavioral measures. We then used these metrics as features in a machine learning algorithm to predict workload condition every 60 seconds. This procedure produced an overall classification accuracy of 78%. An automated tasker sensitive to fluctuations in operator workload could be used to efficiently delegate tasks for teams of UAV operators.
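The pipeline described above, windowed pupillary and behavioral features feeding a classifier that predicts workload every 60 seconds, can be illustrated with a nearest-centroid stand-in. This is not the algorithm the study used, only a minimal sketch of windowed workload classification:

```python
import math

def centroid(rows):
    """Mean feature vector of a list of equal-length feature windows."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(windows_by_label):
    """Fit one centroid per workload label from labeled 60-s feature windows
    (e.g. mean pupil diameter, interaction rate)."""
    return {label: centroid(rows) for label, rows in windows_by_label.items()}

def classify(model, window):
    """Assign a new window to the workload label with the nearest centroid."""
    return min(model, key=lambda label: math.dist(model[label], window))
```

In the study's setting, each incoming window would be classified on the fly and fed to the dynamic tasking algorithm.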

  16. Instrument Motion Metrics for Laparoscopic Skills Assessment in Virtual Reality and Augmented Reality.

    PubMed

    Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A

    2016-11-01

    To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novices, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in the virtual reality simulator correlated with experience or with the basic laparoscopic skills score. All augmented reality metrics (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand dominance metric. The basic laparoscopic skills score was correlated to all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.

  17. Oriented regions grouping based candidate proposal for infrared pedestrian detection

    NASA Astrophysics Data System (ADS)

    Wang, Jiangtao; Zhang, Jingai; Li, Huaijiang

    2018-04-01

    Effectively and accurately locating the positions of pedestrian candidates in an image is a key task for an infrared pedestrian detection system. In this work, a novel similarity measuring metric is designed. Based on the selective search scheme, the developed similarity metric is used to generate possible locations for pedestrian candidates. In addition, corresponding diversification strategies are provided according to the characteristics of the infrared thermal imaging system. Experimental results indicate that the presented scheme locates pedestrian candidates more efficiently than the traditional selective search methodology.

  18. Numerical distance effect size is a poor metric of approximate number system acuity.

    PubMed

    Chesney, Dana

    2018-04-12

    Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.
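NDE size is commonly operationalized as the response-time cost of close numerical comparisons relative to far ones. A sketch under that simplified difference-of-means definition (studies also use regression slopes and accuracy-based variants, which is part of the methodological variation the article discusses):

```python
def nde_size(rt_by_distance):
    """NDE size as mean response time at the smallest numerical distance
    minus mean response time at the largest. rt_by_distance maps a
    numerical distance to a list of response times in ms."""
    def mean(xs):
        return sum(xs) / len(xs)
    closest = min(rt_by_distance)
    farthest = max(rt_by_distance)
    return mean(rt_by_distance[closest]) - mean(rt_by_distance[farthest])
```

A larger value means comparisons slow down more as numerosities get closer; the article's point is that this quantity does not map monotonically onto ANS acuity.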

  19. Flight Tasks and Metrics to Evaluate Laser Eye Protection in Flight Simulators

    DTIC Science & Technology

    2017-07-07

    Report AFRL-RH-FS-TR-2017-0026; Thomas K. Kuyk, Peter A. Smith, Solangia …; Contract FA8650-14-D-6519.

  20. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
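DTDML's core representation, the target metric as a weighted combination of base metrics, can be sketched with NumPy. The learning of the sparse coefficients themselves (with the closeness-to-source constraint) is omitted; this only shows why far fewer variables are needed than learning a full metric matrix:

```python
import numpy as np

def combine_base_metrics(bases, coefficients):
    """Target metric M = sum_i w_i * B_i, where the B_i are base metrics
    decomposed from the source-task metrics and w is the (sparse)
    coefficient vector DTDML actually learns: len(w) numbers instead of
    a full d x d matrix. Projected onto the PSD cone so M remains a
    valid Mahalanobis metric."""
    M = sum(w * B for w, B in zip(coefficients, bases))
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T
```

For a d-dimensional feature space, a full metric has d(d+1)/2 free parameters, while this parameterization has only one per base metric, which is the reliability and speed argument made above.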

  1. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  2. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
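    The abstract does not spell out the segmentation similarity metric; a common choice for comparing binary segmentations is the Dice coefficient, sketched here on hypothetical 1-D masks:

    ```python
    import numpy as np

    def dice_similarity(seg_a, seg_b):
        """Dice overlap between two binary segmentation masks (1 = feature)."""
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both empty: perfect agreement by convention
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Compare automated segmentations from the original vs. compressed images
    # against a manual reference (toy 1-D masks for illustration).
    manual    = np.array([0, 1, 1, 1, 0, 0])
    from_orig = np.array([0, 1, 1, 0, 0, 0])
    from_comp = np.array([0, 0, 1, 0, 0, 0])

    print(dice_similarity(manual, from_orig))  # 0.8
    print(dice_similarity(manual, from_comp))  # 0.5: overlap degrades with compression
    ```

    A task-oriented quality score of this form drops only when compression artifacts actually change what the segmentation algorithm finds, unlike pixel-wise distortion measures.
    
    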

  3. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models perform the discrimination tasks (detection, recognition, and identification) for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). With the development of focal plane array (FPA) detectors and digital image processing technology, the Johnson criteria are generally pessimistic for performance prediction of sampled imagers. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; the TTP metric in particular can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; unresolved questions in the performance assessment of TIS are indicated; and, last, development directions for TIS performance models are discussed.

  4. Systems Engineering Approach and Metrics for Evaluating Network-Centric Operations for U.S. Army Battle Command

    DTIC Science & Technology

    2013-07-01

    Report by Jock O. Grynovicki and Teresa A. Branscome, Human Research and Engineering Directorate, U.S. Army Research Laboratory (ARL); project number 622716H70.

  5. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    PubMed

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.
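    One standard reliability index of the kind used in such psychometric analyses is Cronbach's alpha. A minimal sketch follows; the operators and scores are hypothetical:

    ```python
    import numpy as np

    def cronbach_alpha(X):
        """Cronbach's alpha: internal consistency of k workload items
        measured across subjects. X has shape (n_subjects, k_items)."""
        X = np.asarray(X, float)
        k = X.shape[1]
        item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = X.sum(axis=1).var(ddof=1)    # variance of summed scores
        return k / (k - 1.0) * (1.0 - item_var / total_var)

    # Hypothetical workload indices for 5 operators; identical items give alpha = 1.
    scores = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    X = np.column_stack([scores, scores, scores, scores])
    print(cronbach_alpha(X))  # 1.0
    ```

    Low alpha across physiological, subjective, and performance indices is one way the "no general workload factor" finding above would show up numerically.
    
    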

  6. Quantifying usability: an evaluation of a diabetes mHealth system on effectiveness, efficiency, and satisfaction metrics with associated user characteristics.

    PubMed

    Georgsson, Mattias; Staggers, Nancy

    2016-01-01

    Mobile health (mHealth) systems are becoming more common for chronic disease management, but usability studies are still needed on patients' perspectives and mHealth interaction performance. This deficiency is addressed by our quantitative usability study of a mHealth diabetes system evaluating patients' task performance, satisfaction, and the relationship of these measures to user characteristics. We used metrics in the International Organization for Standardization (ISO) 9241-11 standard. After standardized training, 10 patients performed representative tasks and were assessed on individual task success, errors, efficiency (time on task), satisfaction (System Usability Scale [SUS]) and user characteristics. Tasks of exporting and correcting values proved the most difficult, had the most errors, the lowest task success rates, and consumed the longest times on task. The average SUS satisfaction score was 80.5, indicating good but not excellent system usability. Data trends showed males were more successful in task completion, and younger participants had higher performance scores. Educational level did not influence performance, but a more recent diabetes diagnosis did. Patients with more experience in information technology (IT) also had higher performance rates. Difficult task performance indicated areas for redesign. Our methods can assist others in identifying areas in need of improvement. Data about user background and IT skills also showed how user characteristics influence performance and can provide future considerations for targeted mHealth designs. Using the ISO 9241-11 usability standard, the SUS instrument for satisfaction and measuring user characteristics provided objective measures of patients' experienced usability. These could serve as an exemplar for standardized, quantitative methods for usability studies on mHealth systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
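    The SUS satisfaction score reported above is computed with the standard 10-item scoring rule; a minimal sketch (the response vector below is hypothetical):

    ```python
    def sus_score(responses):
        """System Usability Scale: 10 Likert items, each rated 1-5.
        Odd-numbered items are positively worded (score - 1); even-numbered
        items are negatively worded (5 - score). The sum is scaled to 0-100."""
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        contrib = [(r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based
                   for i, r in enumerate(responses)]
        return sum(contrib) * 2.5

    # One hypothetical participant; maximally favorable answers give 100.
    print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
    ```

    An average of 80.5 across participants, as in the study, sits in the "good" band of commonly used SUS interpretation scales.
    
    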

  7. Quantifying usability: an evaluation of a diabetes mHealth system on effectiveness, efficiency, and satisfaction metrics with associated user characteristics

    PubMed Central

    Staggers, Nancy

    2016-01-01

    Objective Mobile health (mHealth) systems are becoming more common for chronic disease management, but usability studies are still needed on patients’ perspectives and mHealth interaction performance. This deficiency is addressed by our quantitative usability study of a mHealth diabetes system evaluating patients’ task performance, satisfaction, and the relationship of these measures to user characteristics. Materials and Methods We used metrics in the International Organization for Standardization (ISO) 9241-11 standard. After standardized training, 10 patients performed representative tasks and were assessed on individual task success, errors, efficiency (time on task), satisfaction (System Usability Scale [SUS]) and user characteristics. Results Tasks of exporting and correcting values proved the most difficult, had the most errors, the lowest task success rates, and consumed the longest times on task. The average SUS satisfaction score was 80.5, indicating good but not excellent system usability. Data trends showed males were more successful in task completion, and younger participants had higher performance scores. Educational level did not influence performance, but a more recent diabetes diagnosis did. Patients with more experience in information technology (IT) also had higher performance rates. Discussion Difficult task performance indicated areas for redesign. Our methods can assist others in identifying areas in need of improvement. Data about user background and IT skills also showed how user characteristics influence performance and can provide future considerations for targeted mHealth designs. Conclusion Using the ISO 9241-11 usability standard, the SUS instrument for satisfaction and measuring user characteristics provided objective measures of patients’ experienced usability. These could serve as an exemplar for standardized, quantitative methods for usability studies on mHealth systems. PMID:26377990

  8. Role of quality of service metrics in visual target acquisition and tracking in resource constrained environments

    NASA Astrophysics Data System (ADS)

    Anderson, Monica; David, Phillip

    2007-04-01

    Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important, and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.

  9. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
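    The paper's full metric suite is not enumerated in the abstract; the sketch below computes a few standard solar-forecast error metrics on hypothetical actual/forecast series with an assumed plant capacity:

    ```python
    import numpy as np

    def forecast_metrics(actual, forecast, capacity):
        """A few standard forecast error metrics (a subset of any full suite):
        RMSE, MAE, MBE (bias), and capacity-normalized RMSE in percent."""
        actual = np.asarray(actual, float)
        forecast = np.asarray(forecast, float)
        err = forecast - actual
        rmse = np.sqrt(np.mean(err ** 2))
        return {
            "rmse": rmse,
            "mae": np.mean(np.abs(err)),
            "mbe": np.mean(err),                  # positive = over-forecast
            "nrmse_pct": 100.0 * rmse / capacity, # comparable across plant sizes
        }

    # Hypothetical hourly generation (MW) vs. forecast, for a 50 MW plant.
    m = forecast_metrics([10, 20, 30], [12, 18, 33], capacity=50.0)
    ```

    Normalizing by capacity is one simple way to make scores comparable across the different geographic locations and system sizes the paper considers.
    
    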

  10. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach

    PubMed Central

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods, which consider the total variance of the muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates between executions of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar numbers of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
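    The decode-in-task-space idea can be sketched end to end on simulated data. The synergy counts, two-task structure, NMF variant (Lee-Seung multiplicative updates), and nearest-centroid decoder below are illustrative assumptions, not the paper's exact pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated EMG trials: two tasks, each preferentially recruiting one synergy.
    n_muscles, n_syn, n_trials = 8, 3, 60
    W_true = rng.uniform(0.2, 1.0, size=(n_muscles, n_syn))  # synergy vectors
    tasks = np.repeat([0, 1], n_trials // 2)
    H_true = rng.uniform(0.0, 0.3, size=(n_syn, n_trials))
    H_true[tasks, np.arange(n_trials)] += 1.0                # task-specific recruitment
    V = W_true @ H_true + rng.uniform(0.0, 0.02, size=(n_muscles, n_trials))

    def nmf(V, k, iters=400):
        """Extract synergies by nonnegative matrix factorization
        (Lee-Seung multiplicative updates): V ~ W @ H."""
        m, n = V.shape
        W = rng.uniform(0.1, 1.0, (m, k))
        H = rng.uniform(0.1, 1.0, (k, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
            W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
        return W, H

    W, H = nmf(V, n_syn)

    # Single-trial task decoding from synergy activations (nearest centroid).
    train = np.arange(0, n_trials, 2)
    test = np.arange(1, n_trials, 2)
    cents = np.stack([H[:, train][:, tasks[train] == t].mean(axis=1) for t in (0, 1)])
    d2 = ((H[:, test].T[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    accuracy = (np.argmin(d2, axis=1) == tasks[test]).mean()
    ```

    Repeating this for increasing numbers of extracted synergies, and stopping once decoding accuracy plateaus, mirrors the paper's automatic selection of the minimal task-discriminating synergy count.
    
    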

  11. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  12. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith are also presented.

  13. Analysis of Trajectory Flexibility Preservation Impact on Traffic Complexity

    NASA Technical Reports Server (NTRS)

    Idris, Husni; El-Wakil, Tarek; Wing, David J.

    2009-01-01

    The growing demand for air travel is increasing the need for mitigation of air traffic congestion and complexity problems, which are already at high levels. At the same time new information and automation technologies are enabling the distribution of tasks and decisions from the service providers to the users of the air traffic system, with potential capacity and cost benefits. This distribution of tasks and decisions raises the concern that independent user actions will decrease the predictability and increase the complexity of the traffic system, hence inhibiting and possibly reversing any potential benefits. In answer to this concern, the authors proposed the introduction of decision-making metrics for preserving user trajectory flexibility. The hypothesis is that such metrics will make user actions naturally mitigate traffic complexity. In this paper, the impact of using these metrics on traffic complexity is investigated. The scenarios analyzed include aircraft in en route airspace with each aircraft meeting a required time of arrival in a one-hour time horizon while mitigating the risk of loss of separation with the other aircraft, thus preserving its trajectory flexibility. The experiments showed promising results in that the individual trajectory flexibility preservation induced self-separation and self-organization effects in the overall traffic situation. The effects were quantified using traffic complexity metrics, namely dynamic density indicators, which indicated that using the flexibility metrics reduced aircraft density and the potential of loss of separation.

  14. Sit Up Straight

    NASA Technical Reports Server (NTRS)

    1998-01-01

    BioMetric Systems has an exclusive license to the Posture Video Analysis Tool (PVAT) developed at Johnson Space Center. PVAT uses videos from Space Shuttle flights to identify postures and other human factors in the workplace that could be limiting. The software also provides data recommending appropriate postures for certain tasks and safe durations for potentially harmful positions. BioMetric Systems has further developed PVAT for use by hospitals, physical rehabilitation facilities, insurance companies, sports medicine clinics, oil companies, manufacturers, and the military.

  15. Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.

    PubMed

    Sugino, T; Kawahira, H; Nakamura, R

    2014-09-01

    Advanced surgical procedures, which have become complex and difficult, increase the burden of surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of surgical procedures. To this end, a surgical task analysis system was developed that uses only surgical navigation information. Division of the surgical procedure, task progress analysis, and task efficiency analysis were done. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of the surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate the task efficiency during each stage. These analysis methods were evaluated based on experimental validation with two groups of surgeons, i.e., skilled and "other" surgeons. The performance metrics and analytical parameters included incidents during the operation, the surgical environment, and the surgeon's skills or habits. Comparison of groups revealed that skilled surgeons tended to perform the procedure in less time and involved smaller regions; they also manipulated the surgical instruments more gently. Surgical task analysis developed for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for objective evaluation of surgical expertise.
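    The task-efficiency quantities (mean velocity, acceleration, and the approximate ellipse of the location log) can be sketched from an instrument position log. The 2-D log, fixed sampling interval, and 95% covariance-ellipse construction below are assumptions:

    ```python
    import numpy as np

    def task_efficiency(positions, dt):
        """Motion-economy metrics from a navigation log sampled every dt seconds:
        mean instrument speed, mean acceleration magnitude, and the area of the
        95% covariance ellipse of the visited positions (2-D log assumed)."""
        p = np.asarray(positions, float)
        v = np.diff(p, axis=0) / dt                      # finite-difference velocity
        a = np.diff(v, axis=0) / dt                      # finite-difference acceleration
        eigvals = np.linalg.eigvalsh(np.cov(p.T))        # principal variances
        # Area of the 95% ellipse: pi * chi2_{2,0.95} * sqrt(lambda1 * lambda2).
        ellipse_area = np.pi * 5.991 * np.sqrt(np.prod(np.clip(eigvals, 0, None)))
        return {
            "mean_speed": np.linalg.norm(v, axis=1).mean(),
            "mean_accel": np.linalg.norm(a, axis=1).mean(),
            "ellipse_area": ellipse_area,
        }

    # Toy log: steady straight-line motion covers no area and has zero acceleration.
    metrics = task_efficiency([[0, 0], [1, 0], [2, 0], [3, 0]], dt=1.0)
    ```

    Under this sketch, the skilled-surgeon pattern reported above would appear as lower mean acceleration (gentler manipulation) and a smaller ellipse area (smaller working region).
    
    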

  16. Quantitative adaptation analytics for assessing dynamic systems of systems: LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauthier, John H.; Miner, Nadine E.; Wilson, Michael L.

    2015-01-01

    Our society is increasingly reliant on systems and interoperating collections of systems, known as systems of systems (SoS). These SoS are often subject to changing missions (e.g., nation-building, arms-control treaties), threats (e.g., asymmetric warfare, terrorism), natural environments (e.g., climate, weather, natural disasters) and budgets. How well can SoS adapt to these types of dynamic conditions? This report details the results of a three-year Laboratory Directed Research and Development (LDRD) project aimed at developing metrics and methodologies for quantifying the adaptability of systems and SoS. Work products include: derivation of a set of adaptability metrics, a method for combining the metrics into a system of systems adaptability index (SoSAI) used to compare adaptability of SoS designs, development of a prototype dynamic SoS (proto-dSoS) simulation environment which provides the ability to investigate the validity of the adaptability metric set, and two test cases that evaluate the usefulness of a subset of the adaptability metrics and SoSAI for distinguishing good from poor adaptability in a SoS. Intellectual property results include three patents pending: A Method For Quantifying Relative System Adaptability, Method for Evaluating System Performance, and A Method for Determining Systems Re-Tasking.

  17. Oculomotor evidence for neocortical systems but not cerebellar dysfunction in autism

    PubMed Central

    Minshew, Nancy J.; Luna, Beatriz; Sweeney, John A.

    2010-01-01

    Objective To investigate the functional integrity of cerebellar and frontal systems in autism using oculomotor paradigms. Background Cerebellar and neocortical systems models of autism have been proposed. Courchesne and colleagues have argued that cognitive deficits such as shifting attention disturbances result from dysfunction of vermal lobules VI and VII. Such a vermal deficit should be associated with dysmetric saccadic eye movements because of the major role these areas play in guiding the motor precision of saccades. In contrast, neocortical models of autism predict intact saccade metrics, but impairments on tasks requiring the higher cognitive control of saccades. Methods A total of 26 rigorously diagnosed nonmentally retarded autistic subjects and 26 matched healthy control subjects were assessed with a visually guided saccade task and two volitional saccade tasks, the oculomotor delayed-response task and the antisaccade task. Results Metrics and dynamics of the visually guided saccades were normal in autistic subjects, documenting the absence of disturbances in cerebellar vermal lobules VI and VII and in automatic shifts of visual attention. Deficits were demonstrated on both volitional saccade tasks, indicating dysfunction in the circuitry of prefrontal cortex and its connections with the parietal cortex, and associated cognitive impairments in spatial working memory and in the ability to voluntarily suppress context-inappropriate responses. Conclusions These findings demonstrate intrinsic neocortical, not cerebellar, dysfunction in autism, and parallel deficits in higher order cognitive mechanisms and not in elementary attentional and sensorimotor systems in autism. PMID:10102406

  18. Development and validation of a composite scoring system for robot-assisted surgical training--the Robotic Skills Assessment Score.

    PubMed

    Chowriappa, Ashirwad J; Shi, Yi; Raza, Syed Johar; Ahmed, Kamran; Stegemann, Andrew; Wilding, Gregory; Kaouk, Jihad; Peabody, James O; Menon, Mani; Hassett, James M; Kesavadas, Thenkurussi; Guru, Khurshid A

    2013-12-01

    A standardized scoring system does not exist in virtual reality-based assessment metrics to describe safe and crucial surgical skills in robot-assisted surgery. This study aims to develop an assessment score along with its construct validation. All subjects performed key tasks on the previously validated Fundamental Skills of Robotic Surgery curriculum, which were recorded, and metrics were stored. After an expert consensus for the purpose of content validation (Delphi), critical safety-determining procedural steps were identified from the Fundamental Skills of Robotic Surgery curriculum, and a hierarchical task decomposition of multiple parameters using a variety of metrics was used to develop the Robotic Skills Assessment Score (RSA-Score). Robotic Skills Assessment mainly focuses on safety in the operative field, critical error, economy, bimanual dexterity, and time. The RSA-Score was then further evaluated for construct validation and feasibility. Spearman correlation tests performed between tasks using the RSA-Scores indicate no cross correlation. Wilcoxon rank sum tests were performed between the two groups. The proposed RSA-Score was evaluated on non-robotic surgeons (n = 15) and on expert robotic surgeons (n = 12). The expert group demonstrated significantly better performance on all four tasks in comparison to the novice group. Validation of the RSA-Score in this study was carried out on the Robotic Surgical Simulator. The RSA-Score is a valid scoring system that could be incorporated in any virtual reality-based surgical simulator to achieve standardized assessment of fundamental surgical tenets during robot-assisted surgery. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing objective quality assessment of color-fused image is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false color fusion image. The perceived edge metric (PEM) is defined based on visual perception model and color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established associating multi-scale contrast and varying contrast sensitivity filter (CSF) with color components. The linear combination of the standard deviation and mean value over the fused image construct the image colorfulness metric (ICM). The color comfort metric (CCM) is designed by the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics have a good agreement with subjective perception.
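    The ICM is described as a linear combination of the standard deviation and mean over the fused image; one common concrete form of colorfulness is the Hasler-Susstrunk measure, sketched below. The paper's exact channels and coefficients may differ:

    ```python
    import numpy as np

    def colorfulness(img):
        """Hasler-Susstrunk style colorfulness: sigma + 0.3 * mu of the opponent
        channels rg = R - G and yb = 0.5 * (R + G) - B, over an RGB image."""
        R = img[..., 0].astype(float)
        G = img[..., 1].astype(float)
        B = img[..., 2].astype(float)
        rg = R - G
        yb = 0.5 * (R + G) - B
        sigma = np.hypot(rg.std(), yb.std())  # spread of opponent values
        mu = np.hypot(rg.mean(), yb.mean())   # average opponent magnitude
        return sigma + 0.3 * mu

    # A uniform gray image carries no chromatic information.
    gray = np.full((8, 8, 3), 128, dtype=np.uint8)
    print(colorfulness(gray))  # 0.0 for an achromatic image
    ```

    Being computable from the fused image alone, a measure of this shape fits the no-reference setting the abstract describes.
    
    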

  20. A Practical Method for Collecting Social Media Campaign Metrics

    ERIC Educational Resources Information Center

    Gharis, Laurie W.; Hightower, Mary F.

    2017-01-01

    Today's Extension professionals are tasked with more work and fewer resources. Integrating social media campaigns into outreach efforts can be an efficient way to meet work demands. If resources go toward social media, a practical method for collecting metrics is needed. Collecting metrics adds one more task to the workloads of Extension…

  1. Data-driven management using quantitative metric and automatic auditing program (QMAP) improves consistency of radiation oncology processes.

    PubMed

    Yu, Naichang; Xia, Ping; Mastroianni, Anthony; Kolar, Matthew D; Chao, Samuel T; Greskovich, John F; Suh, John H

    Process consistency in planning and delivery of radiation therapy is essential to maintain patient safety and treatment quality and efficiency. Ensuring the timely completion of each critical clinical task is one aspect of process consistency. The purpose of this work is to report our experience in implementing a quantitative metric and automatic auditing program (QMAP) with a goal of improving the timely completion of critical clinical tasks. Based on our clinical electronic medical records system, we developed a software program to automatically capture the completion timestamp of each critical clinical task while providing frequent alerts of potential delinquency. These alerts were directed to designated triage teams within a time window that would offer an opportunity to mitigate the potential for late completion. Since July 2011, 18 metrics were introduced in our clinical workflow. We compared the delinquency rates for 4 selected metrics before the implementation of QMAP with the delinquency rates of 2016. One-tailed Student t test was used for statistical analysis. With an average of 150 daily patients on treatment at our main campus, the late treatment plan completion rate and late weekly physics check were reduced from 18.2% and 8.9% in 2011 to 4.2% and 0.1% in 2016, respectively (P < .01). The late weekly on-treatment physician visit rate was reduced from 7.2% in 2012 to <1.6% in 2016. The yearly late cone beam computed tomography review rate was reduced from 1.6% in 2011 to <0.1% in 2016. QMAP is effective in reducing late completions of critical tasks, which can positively impact treatment quality and patient safety by reducing the potential for errors resulting from distractions, interruptions, and rush in completion of critical tasks. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
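    A before/after rate comparison of this kind can be sketched with a two-proportion test. The counts below are hypothetical reconstructions from the reported percentages, and the z approximation stands in for the paper's one-tailed t test:

    ```python
    import math

    def one_tailed_prop_ztest(x1, n1, x2, n2):
        """One-tailed two-proportion z-test (H1: rate1 > rate2)."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)                    # pooled rate under H0
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        pval = 0.5 * math.erfc(z / math.sqrt(2))     # upper-tail normal probability
        return z, pval

    # Hypothetical task counts matching the reported late-plan rates:
    # 18.2% of 500 tasks before QMAP vs. 4.2% of 500 tasks in 2016.
    z, p = one_tailed_prop_ztest(91, 500, 21, 500)
    print(p < 0.01)  # True: the drop is highly significant at these counts
    ```

    With per-task completion timestamps already captured by QMAP, rate comparisons like this can be re-run routinely as an audit of process consistency.
    
    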

  2. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  3. Distributed Trajectory Flexibility Preservation for Traffic Complexity Mitigation

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Wing, David; Delahaye, Daniel

    2009-01-01

    The growing demand for air travel is increasing the need for mitigation of air traffic congestion and complexity problems, which are already at high levels. At the same time new information and automation technologies are enabling the distribution of tasks and decisions from the service providers to the users of the air traffic system, with potential capacity and cost benefits. This distribution of tasks and decisions raises the concern that independent user actions will decrease the predictability and increase the complexity of the traffic system, hence inhibiting and possibly reversing any potential benefits. In answer to this concern, the authors propose the introduction of decision-making metrics for preserving user trajectory flexibility. The hypothesis is that such metrics will make user actions naturally mitigate traffic complexity. In this paper, the impact of using these metrics on traffic complexity is investigated. The scenarios analyzed include aircraft in en route airspace with each aircraft meeting a required time of arrival in a one-hour time horizon while mitigating the risk of loss of separation with the other aircraft, thus preserving its trajectory flexibility. The experiments showed promising results in that the individual trajectory flexibility preservation induced self-separation and self-organization effects in the overall traffic situation. The effects were quantified using traffic complexity metrics based on Lyapunov exponents and traffic proximity.

  4. Improving Rural Emergency Medical Services (EMS) through transportation system enhancements Phase II : project brief.

    DOT National Transportation Integrated Search

    2015-12-01

    This study used the National EMS Information System (NEMSIS) South Dakota data to develop data-driven performance metrics for EMS. Researchers used the data for three tasks: geospatial analysis of EMS events, optimization of station locations, and ser...

  5. A Low-Cost, Passive Navigation Training System for Image-Guided Spinal Intervention.

    PubMed

    Lorias-Espinoza, Daniel; Carranza, Vicente González; de León, Fernando Chico-Ponce; Escamirosa, Fernando Pérez; Martinez, Arturo Minor

    2016-11-01

    Navigation technology is used for training in various medical specialties, not least image-guided spinal interventions. Navigation practice is an important educational component that allows residents to understand how surgical instruments interact with complex anatomy and to learn basic surgical skills such as the three-dimensional mental interpretation of two-dimensional data. Inexpensive surgical simulators for spinal surgery, however, are lacking. We therefore designed a low-cost spinal surgery simulator (Spine MovDigSys 01) to allow 3-dimensional navigation via 2-dimensional images without altering or limiting the surgeon's natural movement. A training system was developed with an anatomical lumbar model and 2 webcams to passively digitize surgical instruments under MATLAB software control. A proof-of-concept recognition task (vertebral body cannulation) and a pilot test of the system with 12 neuro- and orthopedic surgeons were performed to obtain feedback on the system. Position, orientation, and kinematic variables were determined and the lateral, posteroanterior, and anteroposterior views obtained. The system was tested with a proof-of-concept experimental task. Operator metrics including time of execution (t), intracorporeal length (d), insertion angle (α), average speed (v¯), and acceleration (a) were obtained accurately. These metrics were converted into assessment metrics such as smoothness of operation and linearity of insertion. Results from initial testing are shown and the system advantages and disadvantages described. This low-cost spinal surgery training system digitized the position and orientation of the instruments and allowed image-guided navigation, the generation of metrics, and graphic recording of the instrumental route. Spine MovDigSys 01 is useful for development of basic, noninnate skills and allows the novice apprentice to quickly and economically move beyond the basics.

  6. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  7. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  8. Assessment program for Kentucky traffic records.

    DOT National Transportation Integrated Search

    2015-02-01

    During 2013, the Kentucky Transportation Center identified 117 potential performance metrics for the ten databases in : the Kentucky Traffic Records System. This report summarizes the findings of three main tasks completed in 2014: (1) : assessment o...

  9. Task-based detectability comparison of exponential transformation of free-response operating characteristic (EFROC) curve and channelized Hotelling observer (CHO)

    NASA Astrophysics Data System (ADS)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly

    2016-03-01

    This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems and often it is applied to study the signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low contrast objects: 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU), and 10 mm (3 HU) was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.

  10. Neural decoding with kernel-based metric learning.

    PubMed

    Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C

    2014-06-01

    In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus: exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.

  11. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    PubMed

    Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael

    2015-01-01

    Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
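    The two gaze parameters defined above are straightforward to compute. A minimal sketch, assuming durations in milliseconds and horizontal gaze samples in pixels (the data format is an assumption, not the study's recording pipeline), might be:

```python
def ocular_mobility_index(saccade_ms, fixation_ms):
    """Index of ocular mobility: total saccade duration divided by
    total saccade + fixation duration (0 = pure fixation, 1 = pure saccade)."""
    total = saccade_ms + fixation_ms
    return saccade_ms / total if total else 0.0

def midline_crossings(gaze_x, midline):
    """Count how many times the horizontal gaze position crosses the
    screen midline between consecutive samples."""
    sides = [x < midline for x in gaze_x]
    return sum(a != b for a, b in zip(sides, sides[1:]))
```

    For example, 200 ms of saccades against 800 ms of fixation gives an index of 0.2, and a gaze trace that ping-pongs across the midline yields one crossing per reversal.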

  12. Lunar lander conceptual design: Lunar base systems study task 2.2

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This study is a first look at the problem of building a lunar lander to support a small lunar surface base. A single lander is desired that can deliver 25 metric tons one way or carry a 6-metric-ton crew capsule up and down. A series of trade studies is used to narrow the choices and provide some general guidelines. Given a rough baseline, the systems are then reviewed. A conceptual design is then produced. The process was only carried through one iteration; many more iterations are needed. Assumptions and ground rules are considered.

  13. Temporal discounting and emotional self-regulation in children with attention-deficit/hyperactivity disorder.

    PubMed

    Utsumi, Daniel Augusto; Miranda, Mônica Carolina; Muszkat, Mauro

    2016-12-30

    Temporal Discounting (TD) reflects a tendency to discount a reward more deeply the longer its delivery is delayed. TD tasks and behavioral scales have been used to investigate 'hot' executive functions in ADHD. The present study analyzed TD task performance shown by ADHD and control groups for correlations with emotional self-regulation metrics from two scales, the Behavior Rating Inventory of Executive Functions (BRIEF) and the Child Behavior Checklist (CBCL). Children (ages 8-12) with ADHD (n=25) and controls (n=24) were assessed using material rewards (toys) for three types of task: Hypothetical (H); Hypothetical with temporal expectation (HTE); and Real (R). Between-group differences were found for the HTE task, on which the ADHD group showed a higher rate of discounting their favorite toy over time, especially at 10 s and 20 s. This was the only task on which performance significantly correlated with BRIEF metrics, thus suggesting associations between impulsivity and low emotional self-regulation, but no task was correlated with CBCL score. The conclusion is that tasks involving toys and HTE in particular may be used to investigate TD in children with ADHD and as a means of evaluating the interface between the reward system and emotional self-regulation.

  14. Metrics in method engineering

    NASA Astrophysics Data System (ADS)

    Brinkkemper, S.; Rossi, M.

    1994-12-01

    As customizable computer aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been a growing interest in the systematic construction of methods and their support environments, i.e. method engineering. To aid the method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other hand. Proposed metrics provide a relatively fast and simple way to analyze the technique (or method) properties, and when accompanied with other selection criteria, can be used for estimating the cost of learning the technique and the relative complexity of a technique compared to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.

  15. Demand curves for hypothetical cocaine in cocaine-dependent individuals.

    PubMed

    Bruner, Natalie R; Johnson, Matthew W

    2014-03-01

    Drug purchasing tasks have been successfully used to examine demand for hypothetical consumption of abused drugs including heroin, nicotine, and alcohol. In these tasks, drug users make hypothetical choices whether to buy drugs, and if so, at what quantity, at various potential prices. These tasks allow for behavioral economic assessment of that drug's intensity of demand (preferred level of consumption at extremely low prices) and demand elasticity (sensitivity of consumption to price), among other metrics. However, a purchasing task for cocaine in cocaine-dependent individuals has not been investigated. This study examined a novel Cocaine Purchasing Task and the relation between resulting demand metrics and self-reported cocaine use data. Participants completed a questionnaire assessing hypothetical purchases of cocaine units at prices ranging from $0.01 to $1,000. Demand curves were generated from responses on the Cocaine Purchasing Task. Correlations compared metrics from the demand curve to measures of real-world cocaine use. Group and individual data were well modeled by a demand curve function. The validity of the Cocaine Purchasing Task was supported by a significant correlation between the demand curve metrics of demand intensity and Omax (determined from Cocaine Purchasing Task data) and self-reported measures of cocaine use. Partial correlations revealed that after controlling for demand intensity, demand elasticity and the related measure, Pmax, were significantly correlated with real-world cocaine use. Results indicate that the Cocaine Purchasing Task produces orderly demand curve data, and that these data relate to real-world measures of cocaine use.
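    Demand curves of this kind are commonly fitted with the Hursh-Silberberg exponential demand equation, and Omax (peak expenditure) and Pmax (the price at which it occurs) fall out of the fitted curve. The sketch below illustrates how the metrics relate; the equation choice, parameter values, and grid search are illustrative assumptions, not the study's actual fitting procedure.

```python
import math

def demand(price, q0, alpha, k=2.0):
    """Hursh-Silberberg exponential demand: predicted consumption at a
    given price, where q0 is demand intensity (consumption at zero price),
    alpha is demand elasticity, and k spans the consumption range in
    log10 units."""
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1))

def omax_pmax(q0, alpha, prices):
    """Scan a price grid for peak expenditure (Omax) and the price at
    which it occurs (Pmax); expenditure = price * predicted consumption."""
    spend = [(p * demand(p, q0, alpha), p) for p in prices]
    return max(spend)
```

    In practice q0 and alpha would be estimated from each participant's purchase-task responses by nonlinear regression, after which Omax and Pmax are read off the fitted curve as above.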

  16. Timesharing performance as an indicator of pilot mental workload

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.; Kantowitz, Barry H.; Sorkin, Robert D.

    1988-01-01

    Attentional deficits (workloads) were evaluated in a timesharing task. The results from this and other experiments were incorporated into an expert system designed to provide workload metric selection advice to non-experts in the field interested in operator workload.

  17. Comparison of task-based exposure metrics for an epidemiologic study of isocyanate inhalation exposures among autobody shop workers.

    PubMed

    Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R

    2008-09-01

    Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates, job title, counts of spray painting events during the day, counts of spray and bystander exposure events, and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (µg/m³). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for 10% drop in FEV(1) from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 µg/m³. When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
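    A respirator-corrected time-weighted average of the kind described can be sketched as follows. The task-tuple format and the way the protection factor is applied are assumptions for illustration, not the study's actual exposure algorithm.

```python
def daily_twa(tasks, shift_minutes=480):
    """Respirator-corrected time-weighted average exposure (ug/m3).
    `tasks` is a list of (duration_min, concentration_ug_m3,
    protection_factor) tuples; the measured concentration is divided by
    the respirator's assigned protection factor (1 = no respirator), and
    the summed dose is averaged over the full shift."""
    dose = sum(t * c / pf for t, c, pf in tasks)
    return dose / shift_minutes
```

    For example, a 60-minute spray task at 200 µg/m³ behind a respirator with protection factor 10, followed by 420 unexposed minutes, yields a daily TWA of 2.5 µg/m³, well under a 20 µg/m³ limit.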

  18. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  19. Evaluation of eye metrics as a detector of fatigue.

    PubMed

    McKinley, R Andy; McIntire, Lindsey K; Schmidt, Regina; Repperger, Daniel W; Caldwell, John A

    2011-08-01

    This study evaluated oculometrics as a detector of fatigue in Air Force-relevant tasks after sleep deprivation. Using the metrics of total eye closure duration (PERCLOS) and approximate entropy (ApEn), the relation between these eye metrics and fatigue-induced performance decrements was investigated. One damaging effect to the successful outcome of operational military missions is that attributed to sleep deprivation-induced fatigue. Consequently, there is interest in the development of reliable monitoring devices that can assess when an operator is overly fatigued. Ten civilian participants volunteered to serve in this study. Each was trained on three performance tasks: target identification, unmanned aerial vehicle landing, and the psychomotor vigilance task (PVT). Experimental testing began after 14 hr awake and continued every 2 hr until 28 hr of sleep deprivation was reached. Performance on the PVT and target identification tasks declined significantly as the level of sleep deprivation increased. These performance declines were paralleled more closely by changes in the ApEn compared to the PERCLOS measure. The results provide evidence that the ApEn eye metric can be used to detect fatigue in relevant military aviation tasks. Military and commercial operators could benefit from an alertness monitoring device.
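    Approximate entropy has a standard definition: it compares how often m-length patterns in a series repeat (within a tolerance r) against how often the corresponding (m+1)-length patterns do. A minimal sketch, with illustrative defaults for m and an absolute tolerance r rather than the study's parameters, is:

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy of a 1-D series: low values indicate a
    regular, predictable signal; higher values indicate irregularity.
    Compares the prevalence of m-length template matches (within
    Chebyshev tolerance r) to that of (m+1)-length matches."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        counts = []
        for a in templates:
            # Each template matches itself, so counts are never zero.
            match = sum(
                max(abs(x - y) for x, y in zip(a, b)) <= r
                for b in templates
            )
            counts.append(match / n)
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)
```

    A perfectly regular eye-closure trace yields an ApEn near zero, while a fatigued, irregular trace yields larger values; many oculometric applications also scale r by the signal's standard deviation.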

  20. Stimulus selectivity of drug purchase tasks: A preliminary study evaluating alcohol and cigarette demand.

    PubMed

    Strickland, Justin C; Stoops, William W

    2017-06-01

    The use of drug purchase tasks to measure drug demand in human behavioral pharmacology and addiction research has proliferated in recent years. Few studies have systematically evaluated the stimulus selectivity of drug purchase tasks to demonstrate that demand metrics are specific to valuation of or demand for the commodity under study. Stimulus selectivity is broadly defined for this purpose as a condition under which a specific stimulus input or target (e.g., alcohol, cigarettes) is the primary determinant of behavior (e.g., demand). The overall goal of the present study was to evaluate the stimulus selectivity of drug purchase tasks. Participants were sampled from Amazon's crowdsourcing platform, Mechanical Turk. Participants completed either alcohol and soda purchase tasks (Experiment 1; N = 139) or cigarette and chocolate purchase tasks (Experiment 2; N = 46), and demand metrics were compared to self-reported use behaviors. Demand metrics for alcohol and soda were closely associated with commodity-similar (e.g., alcohol demand and weekly alcohol use) but not commodity-different (e.g., alcohol demand and weekly soda use) variables. A similar pattern was observed for cigarette and chocolate demand, but selectivity was not as consistent as for alcohol and soda. Collectively, we observed robust selectivity for alcohol and soda purchase tasks and modest selectivity for cigarette and chocolate purchase tasks. These preliminary outcomes suggest that demand metrics adequately reflect the specific commodity under study and support the continued use of purchase tasks in substance use research.

  1. Demand Curves for Hypothetical Cocaine in Cocaine-Dependent Individuals

    PubMed Central

    Bruner, Natalie R.; Johnson, Matthew W.

    2013-01-01

    Rationale: Drug purchasing tasks have been successfully used to examine demand for hypothetical consumption of abused drugs including heroin, nicotine, and alcohol. In these tasks drug users make hypothetical choices whether to buy drugs, and if so, at what quantity, at various potential prices. These tasks allow for behavioral economic assessment of that drug's intensity of demand (preferred level of consumption at extremely low prices) and demand elasticity (sensitivity of consumption to price), among other metrics. However, a purchasing task for cocaine in cocaine-dependent individuals has not been investigated. Objectives: This study examined a novel Cocaine Purchasing Task and the relation between resulting demand metrics and self-reported cocaine use data. Methods: Participants completed a questionnaire assessing hypothetical purchases of cocaine units at prices ranging from $0.01 to $1,000. Demand curves were generated from responses on the Cocaine Purchasing Task. Correlations compared metrics from the demand curve to measures of real-world cocaine use. Results: Group and individual data were well modeled by a demand curve function. The validity of the Cocaine Purchasing Task was supported by a significant correlation between the demand curve metrics of demand intensity and Omax (determined from Cocaine Purchasing Task data) and self-reported measures of cocaine use. Partial correlations revealed that after controlling for demand intensity, demand elasticity and the related measure, Pmax, were significantly correlated with real-world cocaine use. Conclusions: Results indicate that the Cocaine Purchasing Task produces orderly demand curve data, and that these data relate to real-world measures of cocaine use. PMID:24217899

  2. Exploring Localization in Nuclear Spin Chains

    NASA Astrophysics Data System (ADS)

    Wei, Ken Xuan; Ramanathan, Chandrasekhar; Cappellaro, Paola

    2018-02-01

    Characterizing out-of-equilibrium many-body dynamics is a complex but crucial task for quantum applications and understanding fundamental phenomena. A central question is the role of localization in quenching thermalization in many-body systems and whether such localization survives in the presence of interactions. Probing this question in real systems necessitates the development of an experimentally measurable metric that can distinguish between different types of localization. While it is known that the localized phase of interacting systems [many-body localization (MBL)] exhibits a long-time logarithmic growth in entanglement entropy that distinguishes it from the noninteracting case of Anderson localization (AL), entanglement entropy is difficult to measure experimentally. Here, we present a novel correlation metric, capable of distinguishing MBL from AL in high-temperature spin systems. We demonstrate the use of this metric to detect localization in a natural solid-state spin system using nuclear magnetic resonance (NMR). We engineer the natural Hamiltonian to controllably introduce disorder and interactions, and observe the emergence of localization. In particular, while our correlation metric saturates for AL, it slowly keeps increasing for MBL, demonstrating analogous features to entanglement entropy, as we show in simulations. Our results show that our NMR techniques, akin to measuring out-of-time correlations, are well suited for studying localization in spin systems.

  3. Automated Support for da Vinci Surgical System

    DTIC Science & Technology

    2011-05-01

    MScore, which provides objective assessment measuring robotic surgery skills across all computed metrics (Figure 7). In addition to viewing single ... holding an object. Data Collection & Analysis (Task 5): Preliminary Experiments. During the first phase of data collection, a single performance of ... a single task (anastomosis) trial was recorded from six different users, three each for the da Vinci and the dV-Trainer platforms. On each platform

  4. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
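    A greedy heuristic of the general shape described, mapping each task to the earliest-available machine and dropping tasks whose time-varying utility has decayed below a threshold, might be sketched as follows. This is an illustrative simplification (homogeneous machines, arrival-order processing), not one of the paper's actual heuristics.

```python
import heapq

def schedule(tasks, machines, drop_threshold=0.1):
    """Greedy utility-aware scheduler sketch. Each task is
    (arrival_time, duration, utility_fn), where utility_fn maps the
    completion time to the utility earned. Tasks whose utility at their
    projected finish time has decayed below drop_threshold are dropped,
    freeing the machine for higher-value work. Returns total utility."""
    free = [(0.0, i) for i in range(machines)]  # (next-free time, machine id)
    heapq.heapify(free)
    total = 0.0
    for arrival, duration, utility_fn in sorted(tasks, key=lambda t: t[0]):
        t_free, mid = heapq.heappop(free)
        finish = max(t_free, arrival) + duration
        u = utility_fn(finish)
        if u >= drop_threshold:          # worth running: commit the machine
            total += u
            heapq.heappush(free, (finish, mid))
        else:                            # drop the low-utility task
            heapq.heappush(free, (t_free, mid))
    return total
```

    With linearly decaying utilities, the drop rule lets an oversubscribed system skip a long task whose utility would reach zero before completion, which is the behavior the simulation results credit for tolerating high oversubscription.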

  5. Analysis of Dependencies and Impacts of Metroplex Operations

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel A.; Ayyalasomayajula, Sricharan

    2010-01-01

    This report documents research performed by Purdue University under subcontract to the George Mason University (GMU) for the Metroplex Operations effort sponsored by NASA's Airportal Project. Purdue University conducted two tasks in support of the larger efforts led by GMU: a) a literature review on metroplex operations followed by identification and analysis of metroplex dependencies, and b) the analysis of impacts of metroplex operations on the larger U.S. domestic airline service network. The tasks are linked in that the ultimate goal is an understanding of the role of dependencies among airports in a metroplex in causing delays both locally and network-wide. The Purdue team has formulated a system-of-systems framework to analyze metroplex dependencies (including simple metrics to quantify them) and develop compact models to predict delays based on network structure. These metrics and models were developed to provide insights for planners to formulate tailored policies and operational strategies that streamline metroplex operations and mitigate delays and congestion.

  6. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
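The task-driven metric described above reduces to classifier accuracy measured on deblurred outputs rather than pixel-wise discrepancy; in this sketch the `classifier` callable and its interface are assumed for illustration:

```python
def classification_rate_metric(deblurred_images, labels, classifier):
    """Score a deblurring method by the rate of correct classification of
    its outputs. `classifier` is any callable mapping an image to a
    predicted label (an assumed interface, e.g. an off-the-shelf OCR or
    object classifier)."""
    correct = sum(classifier(img) == lab
                  for img, lab in zip(deblurred_images, labels))
    return correct / len(labels)
```

Comparing this rate across deblurring algorithms, on the same blurred inputs and classifier, ranks the algorithms by downstream usefulness rather than visual fidelity.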

  7. Proof of Concept Study: Investigating Force Metrics of an Intracorporeal Suturing Knot Task.

    PubMed

    Wee, Justin; Azzie, Georges; Drake, James; Gerstle, J Ted

    2018-06-19

    Mastering proper force manipulation in minimally invasive surgery can take many hours of practice and training. Improper force control can lead to necrosis, infection, and scarring. A force-sensing skin (FSS) has been developed that measures forces at the distal end of minimal access surgery (MAS) instruments without altering the instrument's structural integrity or the surgical workflow, acting as a minimally disruptive add-on to any MAS instrument. A proof of concept study was conducted using an FSS-equipped 5 mm straight-tip needle holder. Participants (n = 19: 3 novices, 11 fellows, and 5 staff surgeons) performed one intracorporeal suturing knot task (ISKT). Using participant task video footage, each participant's two puncture forces (one for each wall of the Penrose drain) and three knot tightening forces were measured. Force metrics from the three expertise groups were compared using analysis of variance (ANOVA) and Tukey's honest significant difference test, with statistical significance assessed at P < .05. Preliminary ISKT force metric data showed differences between novices and the more experienced fellows and surgeons. Of the five stages of the ISKT evaluated, the first puncture force of the Penrose drain seemed to best reflect the difference in skill among participants. The study demonstrated ISKT knot tightening and puncture force ranges across three expertise levels (novices, surgical fellows, and staff surgeons) of 0.586 to 6.089 newtons (N) and 0.852 to 2.915 N, respectively. The investigation of force metrics is important for the implementation of future force feedback systems, as it can provide real-time information to surgeons in training and in the operating theater.

  8. Predicting dual-task performance with the Multiple Resources Questionnaire (MRQ).

    PubMed

    Boles, David B; Bursk, Jonathan H; Phillips, Jeffrey B; Perdelwitz, Jason R

    2007-02-01

    The objective was to assess the validity of the Multiple Resources Questionnaire (MRQ) in predicting dual-task interference. Subjective workload measures such as the Subjective Workload Assessment Technique (SWAT) and NASA Task Load Index are sensitive to single-task parameters and dual-task loads but have not attempted to measure workload in particular mental processes. An alternative is the MRQ. In Experiment 1, participants completed simple laboratory tasks and the MRQ after each. Interference between tasks was then correlated to three different task similarity metrics: profile similarity, based on r² between ratings; overlap similarity, based on summed minima; and overall demand, based on summed ratings. Experiment 2 used similar methods but more complex computer-based games. In Experiment 1 the MRQ moderately predicted interference (r = +.37), with no significant difference between metrics. In Experiment 2 the metric effect was significant, with overlap similarity excelling in predicting interference (r = +.83). Mean ratings showed high diagnosticity in identifying specific mental processing bottlenecks. The MRQ shows considerable promise as a cognitive-process-sensitive workload measure. Potential applications of the MRQ include the identification of dual-processing bottlenecks as well as process overloads in single tasks, preparatory to redesign in areas such as air traffic management, advanced flight displays, and medical imaging.
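The three similarity metrics are concrete enough to sketch. Assuming each task's MRQ response is a vector of per-resource ratings (an assumption about the data layout), one plausible reading of the abstract's definitions is:

```python
import numpy as np

def profile_similarity(a, b):
    """Profile similarity: squared Pearson correlation (r^2) between the
    two tasks' MRQ rating profiles."""
    r = np.corrcoef(a, b)[0, 1]
    return r ** 2

def overlap_similarity(a, b):
    """Overlap similarity: sum of the element-wise minima of the two
    rating profiles (demand both tasks place on each resource)."""
    return float(np.minimum(a, b).sum())

def overall_demand(a, b):
    """Overall demand: sum of all ratings across both tasks."""
    return float(np.sum(a) + np.sum(b))
```

In the study, each of these per-task-pair scores was correlated against measured dual-task interference; overlap similarity was the strongest predictor in Experiment 2.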

  9. The remapping of space in motor learning and human-machine interfaces

    PubMed Central

    Mussa-Ivaldi, F.A.; Danziger, Z.

    2009-01-01

    Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with metric properties of the ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human-machine interfaces and optimal control. PMID:19665553

  10. Maintaining a Distributed File System by Collection and Analysis of Metrics

    NASA Technical Reports Server (NTRS)

    Bromberg, Daniel

    1997-01-01

    AFS (originally, the Andrew File System) is a widely deployed distributed file system product used by companies, universities, and laboratories world-wide. However, it is not trivial to operate: running an AFS cell is a formidable task. It requires a team of dedicated and experienced system administrators who must manage a user base numbering in the thousands, rather than the smaller range of 10 to 500 faced by the typical system administrator.

  11. Analysis of Subjects' Vulnerability in a Touch Screen Game Using Behavioral Metrics.

    PubMed

    Parsinejad, Payam; Sipahi, Rifat

    2017-12-01

    In this article, we report results of an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable with the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pNN50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental workload changes through different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.
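The physiological baseline mentioned, pNN50, has a standard definition in heart-rate-variability analysis: the percentage of successive RR-interval differences exceeding 50 ms. A minimal computation:

```python
def pnn50(rr_intervals_ms):
    """pNN50: percentage of successive RR-interval differences greater
    than 50 ms, computed from a sequence of RR intervals in milliseconds.
    Lower pNN50 is commonly associated with higher mental workload."""
    diffs = [abs(b - a) for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    if not diffs:
        return 0.0
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)
```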

  12. An initiative to improve the management of clinically significant test results in a large health care network.

    PubMed

    Roy, Christopher L; Rothschild, Jeffrey M; Dighe, Anand S; Schiff, Gordon D; Graydon-Baker, Erin; Lenoci-Edwards, Jennifer; Dwyer, Cheryl; Khorasani, Ramin; Gandhi, Tejal K

    2013-11-01

    The failure of providers to communicate and follow up clinically significant test results (CSTR) is an important threat to patient safety. The Massachusetts Coalition for the Prevention of Medical Errors has endorsed the creation of systems to ensure that results can be received and acknowledged. In 2008 a task force was convened that represented clinicians, laboratories, radiology, patient safety, risk management, and information systems in a large health care network, with the goals of providing recommendations and a road map for improvement in the management of CSTR and of implementing this improvement plan during the subsequent five years. In drafting its charter, the task force broadened the scope from "critical" results to "clinically significant" ones; clinically significant was defined as any result that requires further clinical action to avoid morbidity or mortality, regardless of the urgency of that action. The task force recommended four key areas for improvement: (1) standardization of policies and definitions, (2) robust identification of the patient's care team, (3) enhanced results management/tracking systems, and (4) centralized quality reporting and metrics. The task force faced many challenges in implementing these recommendations, including disagreements on definitions of CSTR and on who should have responsibility for CSTR, changes to established work flows, limitations of resources and of existing information systems, and definition of metrics. This large-scale effort to improve the communication and follow-up of CSTR in a health care network continues with ongoing work to address implementation challenges, refine policies, prepare for a new clinical information system platform, and identify new ways to measure the extent of this important safety problem.

  13. Asynchronous decision making in a memorized paddle pressing task

    NASA Astrophysics Data System (ADS)

    Dankert, James R.; Olson, Byron; Si, Jennie

    2008-12-01

    This paper presents a method for asynchronous decision making using recorded neural data in a binary decision task. This is a demonstration of a technique for developing motor cortical neural prosthetics that do not rely on external cued timing information. The system presented in this paper uses support vector machines and leaky integrate-and-fire elements to predict directional paddle presses. In addition to the traditional metrics of accuracy, asynchronous systems must also optimize the time needed to make a decision. The system presented is able to predict paddle presses with a median accuracy of 88% and all decisions are made before the time of the actual paddle press. An alternative bit rate measure of performance is defined to show that the system proposed here is able to perform the task with the same efficiency as the rats.
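The paper defines its own bit rate measure of performance; for context, the widely used Wolpaw information-transfer rate for an N-class decision task is sketched below. This is an assumption for illustration, not necessarily the paper's formulation:

```python
import math

def wolpaw_bitrate(accuracy, n_classes, decisions_per_min):
    """Wolpaw information-transfer rate in bits/min for an N-class
    decision task: rate * [log2 N + p*log2 p + (1-p)*log2((1-p)/(N-1))].
    Shown for comparison only; the paper's bit rate measure may differ."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0                         # at or below chance: no information
    if p >= 1.0:
        return math.log2(n) * decisions_per_min
    bits = (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * decisions_per_min
```

For an asynchronous system, decision latency directly sets `decisions_per_min`, which is why such systems must optimize time-to-decision alongside accuracy.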

  14. Complexity Management Using Metrics for Trajectory Flexibility Preservation and Constraint Minimization

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Shen, Ni; Wing, David J.

    2011-01-01

    The growing demand for air travel is increasing the need for mitigating air traffic congestion and complexity problems, which are already at high levels. At the same time new surveillance, navigation, and communication technologies are enabling major transformations in the air traffic management system, including net-based information sharing and collaboration, performance-based access to airspace resources, and trajectory-based rather than clearance-based operations. The new system will feature different schemes for allocating tasks and responsibilities between the ground and airborne agents and between the human and automation, with potential capacity and cost benefits. Therefore, complexity management requires new metrics and methods that can support these new schemes. This paper presents metrics and methods for preserving trajectory flexibility that have been proposed to support a trajectory-based approach for complexity management by airborne or ground-based systems. It presents extensions to these metrics as well as to the initial research conducted to investigate the hypothesis that using these metrics to guide user and service provider actions will naturally mitigate traffic complexity. The analysis showed promising results in that: (1) Trajectory flexibility preservation mitigated traffic complexity as indicated by inducing self-organization in the traffic patterns and lowering traffic complexity indicators such as dynamic density and traffic entropy. (2) Trajectory flexibility preservation reduced the potential for secondary conflicts in separation assurance. (3) Trajectory flexibility metrics showed potential application to support user and service provider negotiations for minimizing the constraints imposed on trajectories without jeopardizing their objectives.

  15. Cognitive context detection using pupillary measurements

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of pupillary measurements as indices of cognitive workload. We analyze the pupillary data of twenty individuals engaged in a simulated Unmanned Aerial System (UAS) operation in order to understand and characterize the behavior of pupil dilation under varying task load (i.e., workload) levels. We present three metrics that can be employed as real-time indices of cognitive workload. In addition, we develop a predictive system utilizing the pupillary metrics to demonstrate cognitive context detection within simulated supervisory control of UAS. Further, we use pupillary data collected concurrently from the left and right eye and present comparative results of the use of separate vs. combined pupillary data for detecting cognitive context.

  16. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  17. An Investigation of Candidate Sensor-Observable Wake Vortex Strength Parameters for the NASA Aircraft Vortex Spacing System (AVOSS)

    NASA Technical Reports Server (NTRS)

    Tatnall, Christopher R.

    1998-01-01

    The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse/decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.

  18. Kinematics effectively delineate accomplished users of endovascular robotics with a physical training model.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2015-02-01

    Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <2 hours to moderate-extensive experience with >20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. 
This work will help identify the learning curve and specific movements that translate to expert robotic navigation. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
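Spectral arc length, one of the smoothness-related motion metrics named above, is commonly computed as the arc length of the normalized magnitude spectrum of the speed profile. A sketch of one published formulation follows; the padding and 10 Hz cutoff are common defaults and the study's exact variant may differ:

```python
import numpy as np

def spectral_arc_length(speed, fs, pad_levels=4, fmax=10.0):
    """Movement smoothness via spectral arc length: the arc length of the
    normalized magnitude spectrum of a speed profile sampled at `fs` Hz.
    More negative values indicate less smooth (less proficient) motion."""
    n = int(2 ** np.ceil(np.log2(len(speed)) + pad_levels))  # zero-pad FFT
    mag = np.abs(np.fft.rfft(speed, n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag = mag / mag.max()                  # normalize amplitude to the peak
    sel = freqs <= fmax                    # keep the movement-relevant band
    f, m = freqs[sel] / fmax, mag[sel]     # normalize the frequency axis
    return -float(np.sum(np.sqrt(np.diff(f) ** 2 + np.diff(m) ** 2)))
```

A jerky catheter-tip speed profile spreads energy into higher frequencies, lengthening the spectral curve and driving the score more negative than a smooth expert movement.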

  19. Display/control requirements for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Curry, R. E.; Kleinman, D. L.; Hollister, W. M.; Young, L. R.

    1975-01-01

    Quantitative metrics were determined for system control performance, workload for control, monitoring performance, and workload for monitoring. Pilot tasks were allocated for navigation and guidance of automated commercial V/STOL aircraft in all weather conditions, using an optimal control model of the human operator to determine display elements and design.

  20. Clutter in electronic medical records: examining its performance and attentional costs using eye tracking.

    PubMed

    Moacdieh, Nadine; Sarter, Nadine

    2015-06-01

    The objective was to use eye tracking to trace the underlying changes in attention allocation associated with the performance effects of clutter, stress, and task difficulty in visual search and noticing tasks. Clutter can degrade performance in complex domains, yet more needs to be known about the associated changes in attention allocation, particularly in the presence of stress and for different tasks. Frequently used and relatively simple eye tracking metrics do not effectively capture the various effects of clutter, which is critical for comprehensively analyzing clutter and developing targeted, real-time countermeasures. Electronic medical records (EMRs) were chosen as the application domain for this research. Clutter, stress, and task difficulty were manipulated, and physicians' performance on search and noticing tasks was recorded. Several eye tracking metrics were used to trace attention allocation throughout those tasks, and subjective data were gathered via a debriefing questionnaire. Clutter degraded performance in terms of response time and noticing accuracy. These decrements were largely accentuated by high stress and task difficulty. Eye tracking revealed the underlying attentional mechanisms, and several display-independent metrics were shown to be significant indicators of the effects of clutter. Eye tracking provides a promising means to understand in detail (offline) and prevent (in real time) major performance breakdowns due to clutter. Display designers need to be aware of the risks of clutter in EMRs and other complex displays and can use the identified eye tracking metrics to evaluate and/or adjust their display. © 2015, Human Factors and Ergonomics Society.

  1. Optimal SSN Tasking to Enhance Real-time Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.

    2016-09-01

    Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator-configured, state information-gain metrics to determine observation priorities. This improves the efficiency with which sensors target objects as awareness changes and new information is needed, rather than solely at predefined frequencies. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.

  2. Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery.

    PubMed

    Jarc, Anthony M; Curet, Myriam J

    2017-03-01

    Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as effectively as efficiency metrics did. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
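Given logged camera-movement events, the three camera metrics named in the abstract (movement frequency, duration, and interval) could be computed as below. The event representation and field names are illustrative assumptions, not the study's data format:

```python
def camera_metrics(events, task_duration_s):
    """Compute three camera metrics from a list of (start_s, end_s)
    camera-movement events within a task of known duration: movement
    frequency (movements per minute), mean movement duration, and mean
    interval between consecutive movements."""
    if not events:
        return {"frequency_per_min": 0.0,
                "mean_duration_s": 0.0,
                "mean_interval_s": task_duration_s}
    durations = [end - start for start, end in events]
    intervals = [b[0] - a[1] for a, b in zip(events, events[1:])]
    return {
        "frequency_per_min": 60.0 * len(events) / task_duration_s,
        "mean_duration_s": sum(durations) / len(durations),
        "mean_interval_s": (sum(intervals) / len(intervals)
                            if intervals else task_duration_s),
    }
```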

  3. A novel augmented reality simulator for skills assessment in minimal invasive surgery.

    PubMed

    Lahanas, Vasileios; Loukas, Constantinos; Smailis, Nikolaos; Georgiou, Evangelos

    2015-08-01

    Over the past decade, simulation-based training has come to the foreground as an efficient method for training and assessment of surgical skills in minimally invasive surgery. Box-trainers and virtual reality (VR) simulators have been introduced in the teaching curricula and have substituted to some extent the traditional model of training based on animals or cadavers. Augmented reality (AR) is a new technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for assessment of basic laparoscopic skills. The components of the proposed system include a box-trainer, a camera, and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination, and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool path length, and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face validity of the system. Between-group comparison demonstrated highly significant differences (P < .01) in all performance metrics and tasks, denoting the simulator's construct validity. Qualitative analysis of the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire confirmed the face validity of the training system. The results highlight the potential of the proposed simulator to discriminate groups with different expertise, providing a proof of concept for the potential use of AR as a core technology for laparoscopic simulation training.

  4. Test-retest reliability of an fMRI paradigm for studies of cardiovascular reactivity.

    PubMed

    Sheu, Lei K; Jennings, J Richard; Gianaros, Peter J

    2012-07-01

    We examined the reliability of measures of fMRI, subjective, and cardiovascular reactions to standardized versions of a Stroop color-word task and a multisource interference task. A sample of 14 men and 12 women (30-49 years old) completed the tasks on two occasions, separated by a median of 88 days. The reliability of fMRI BOLD signal changes in brain areas engaged by the tasks was moderate, and aggregating fMRI BOLD signal changes across the tasks improved test-retest reliability metrics. These metrics included voxel-wise intraclass correlation coefficients (ICCs) and overlap ratio statistics. Task-aggregated ratings of subjective arousal, valence, and control, as well as cardiovascular reactions evoked by the tasks showed ICCs of 0.57 to 0.87 (ps < .001), indicating moderate-to-strong reliability. These findings support using these tasks as a battery for fMRI studies of cardiovascular reactivity. Copyright © 2012 Society for Psychophysiological Research.
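Voxel-wise ICCs of this kind are typically computed from a subjects-by-sessions matrix. The abstract does not state which ICC form was used, so the common two-way consistency, single-measure form ICC(3,1) is shown here as an assumption:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed, consistency, single-measure intraclass
    correlation for an (n_subjects x k_sessions) array, a common choice
    for test-retest reliability. Computed from the standard ANOVA
    decomposition into subject, session, and error mean squares."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)  # subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)  # sessions
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because the consistency form discounts additive session effects, a uniform signal drift between scanning visits does not lower the score; only reordering of subjects does.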

  5. Validation of a virtual reality-based robotic surgical skills curriculum.

    PubMed

    Connolly, Michael; Seligman, Johnathan; Kastenmeier, Andrew; Goldblatt, Matthew; Gould, Jon C

    2014-05-01

    The clinical application of robotic-assisted surgery (RAS) is rapidly increasing. The da Vinci Surgical System™ is currently the only commercially available RAS system. The skills necessary to perform robotic surgery are unique from those required for open and laparoscopic surgery. A validated laparoscopic surgical skills curriculum (fundamentals of laparoscopic surgery or FLS™) has transformed the way surgeons acquire laparoscopic skills. There is a need for a similar skills training and assessment tool specific for robotic surgery. Based on previously published data and expert opinion, we developed a robotic skills curriculum. We sought to evaluate this curriculum for evidence of construct validity (ability to discriminate between users of different skill levels). Four experienced surgeons (>20 RAS) and 20 novice surgeons (first-year medical students with no surgical or RAS experience) were evaluated. The curriculum comprised five tasks utilizing the da Vinci™ Skills Simulator (Pick and Place, Camera Targeting 2, Peg Board 2, Matchboard 2, and Suture Sponge 3). After an orientation to the robot and a period of acclimation in the simulator, all subjects completed three consecutive repetitions of each task. Computer-derived performance metrics included time, economy of motion, master work space, instrument collisions, excessive force, distance of instruments out of view, drops, missed targets, and overall scores (a composite of all metrics). Experienced surgeons significantly outperformed novice surgeons in most metrics. Statistically significant differences were detected for each task in regards to mean overall scores and mean time (seconds) to completion. The curriculum we propose is a valid method of assessing and distinguishing robotic surgical skill levels on the da Vinci Si™ Surgical System. 
Further study is needed to establish proficiency levels and to demonstrate that training on the simulator with the proposed curriculum leads to improved robotic surgical performance in the operating room.

  6. Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff.

    PubMed

    Guru, Khurshid A; Esfahani, Ehsan T; Raza, Syed J; Bhat, Rohit; Wang, Katy; Hammond, Yana; Wilding, Gregory; Peabody, James O; Chowriappa, Ashirwad J

    2015-01-01

    To investigate the utility of cognitive assessment during robot-assisted surgery (RAS) to define skills in terms of cognitive engagement, mental workload, and mental state; while objectively differentiating between novice and expert surgeons. In all, 10 surgeons with varying operative experience were assigned to beginner (BG), combined competent and proficient (CPG), and expert (EG) groups based on the Dreyfus model. The participants performed tasks for basic, intermediate and advanced skills on the da Vinci Surgical System. Participant performance was assessed using both tool-based and cognitive metrics. Tool-based metrics showed significant differences between the BG vs CPG and the BG vs EG, in basic skills. While performing intermediate skills, there were significant differences only on the instrument-to-instrument collisions between the BG vs CPG (2.0 vs 0.2, P = 0.028), and the BG vs EG (2.0 vs 0.1, P = 0.018). There were no significant differences between the CPG and EG for both basic and intermediate skills. However, using cognitive metrics, there were significant differences between all groups for the basic and intermediate skills. In advanced skills, there were no significant differences between the CPG and the EG except time (1116 vs 599.6 s), using tool-based metrics. However, cognitive metrics revealed significant differences between both groups. Cognitive assessment of surgeons may aid in defining levels of expertise performing complex surgical tasks once competence is achieved. Cognitive assessment may be used as an adjunct to the traditional methods for skill assessment during RAS. © 2014 The Authors. BJU International © 2014 BJU International.

  7. Common Metrics for Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  8. Cutting Solid Figures by Plane--Analytical Solution and Spreadsheet Implementation

    ERIC Educational Resources Information Center

    Benacka, Jan

    2012-01-01

In some secondary mathematics curricula, there is a topic called Stereometry that deals with investigating the position and finding the intersection, angle, and distance of lines and planes defined within a prism or pyramid. A coordinate system is not used. The metric tasks are solved using Pythagoras' theorem, trigonometric functions, and sine and…

  9. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
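The approach in this record (learn a linear transform with multiclass LDA, then measure distances in the transformed space) can be sketched as follows. This is a minimal illustration on synthetic two-class data, not the authors' CRISM pipeline; all names are hypothetical.

```python
import numpy as np

def lda_transform(X, y):
    """Multiclass LDA: columns of W are the leading eigenvectors of Sw^+ Sb."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[: len(classes) - 1]]

def learned_distance(a, b, W):
    """Distance between two spectra in the LDA-transformed space."""
    return float(np.linalg.norm((a - b) @ W))

# Synthetic "spectra": two classes in 5 bands, offset by 1 in every band.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(1.0, 0.1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
W = lda_transform(X, y)
d_same = learned_distance(X[0], X[1], W)    # within-class pair
d_cross = learned_distance(X[0], X[20], W)  # cross-class pair
```

In a graph-based segmenter, `learned_distance` would replace the task-agnostic edge weight between neighboring pixels' spectra.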

  10. Further Development of the Assessment of Military Multitasking Performance: Iterative Reliability Testing

    PubMed Central

    McCulloch, Karen L.; Radomski, Mary V.; Finkelstein, Marsha; Cecchini, Amy S.; Davidson, Leslie F.; Heaton, Kristin J.; Smith, Laurel B.; Scherer, Matthew R.

    2017-01-01

The Assessment of Military Multitasking Performance (AMMP) is a battery of functional dual-tasks and multitasks based on military activities that target known sensorimotor, cognitive, and exertional vulnerabilities after concussion/mild traumatic brain injury (mTBI). The AMMP was developed to help address known limitations in post-concussive return-to-duty assessment and decision making. Once validated, the AMMP is intended for use in combination with other metrics to inform duty-readiness decisions in Active Duty Service Members following concussion. This study used an iterative process of repeated interrater reliability testing and feasibility feedback to drive modifications to the 9 tasks of the original AMMP, which resulted in a final version of 6 tasks with metrics that demonstrated clinically acceptable ICCs of >0.92 (range 0.92–1.0) for the 3 dual tasks and >0.87 (range 0.87–1.0) for the metrics of the 3 multitasks. Three metrics involved in recording subject errors across 2 tasks did not achieve the ICC thresholds set a priori of 0.85 for multitasks (0.64) and 0.90 for dual-tasks (0.77 and 0.86) and were not used for further analysis. This iterative process involved 3 phases of testing, with between 13 and 26 subjects, ages 18–42 years, tested in each phase from a combined cohort of healthy controls and Service Members with mTBI. Study findings support continued validation of this assessment tool to provide rehabilitation clinicians further return-to-duty assessment methods robust to ceiling effects with strong face validity to injured Warriors and their leaders. PMID:28056045
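The ICC thresholds above refer to interrater reliability coefficients. A minimal sketch of a two-way random-effects, single-rater ICC(2,1), one standard Shrout-Fleiss variant, computed from the usual ANOVA mean squares (the ratings below are made up for illustration):

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1) for an n-subjects x k-raters matrix of ratings."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    m = X.mean()
    msr = k * np.sum((X.mean(axis=1) - m) ** 2) / (n - 1)   # rows (subjects)
    msc = n * np.sum((X.mean(axis=0) - m) ** 2) / (k - 1)   # columns (raters)
    sse = np.sum((X - m) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy data: three raters who differ only by a constant offset per rater.
ratings = [[9, 10, 8], [5, 6, 4], [7, 8, 6], [3, 4, 2]]
icc = icc_2_1(ratings)
```

Because ICC(2,1) measures absolute agreement, the constant rater offsets in the toy data pull the coefficient below 1 even though the rank ordering is perfect.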

  11. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
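A minimal sketch of the task ratio metric, assuming the straightforward definition implied by the abstract (parallel task demand divided by the mean service demand of local workstation processes); the threshold in `worth_running` is purely illustrative, not a value from the paper:

```python
def task_ratio(parallel_demand_s, mean_local_demand_s):
    """Parallel task service demand over the mean demand of the
    (preemptive-priority) local workstation processes, same time units."""
    return parallel_demand_s / mean_local_demand_s

def worth_running(ratio, threshold=100.0):
    """Heuristic check: only large-grained tasks amortize owner interference."""
    return ratio >= threshold

r = task_ratio(3600.0, 0.5)   # a 1-hour parallel task vs 0.5 s local jobs
```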

  12. Systematic methods for knowledge acquisition and expert system development

    NASA Technical Reports Server (NTRS)

    Belkin, Brenda L.; Stengel, Robert F.

    1991-01-01

Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystems. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, one satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data are available.

  13. Development and Application of a Clinical Microsystem Simulation Methodology for Human Factors-Based Research of Alarm Fatigue.

    PubMed

    Kobayashi, Leo; Gosbee, John W; Merck, Derek L

    2017-07-01

    (1) To develop a clinical microsystem simulation methodology for alarm fatigue research with a human factors engineering (HFE) assessment framework and (2) to explore its application to the comparative examination of different approaches to patient monitoring and provider notification. Problems with the design, implementation, and real-world use of patient monitoring systems result in alarm fatigue. A multidisciplinary team is developing an open-source tool kit to promote bedside informatics research and mitigate alarm fatigue. Simulation, HFE, and computer science experts created a novel simulation methodology to study alarm fatigue. Featuring multiple interconnected simulated patient scenarios with scripted timeline, "distractor" patient care tasks, and triggered true and false alarms, the methodology incorporated objective metrics to assess provider and system performance. Developed materials were implemented during institutional review board-approved study sessions that assessed and compared an experimental multiparametric alerting system with a standard monitor telemetry system for subject response, use characteristics, and end-user feedback. A four-patient simulation setup featuring objective metrics for participant task-related performance and response to alarms was developed along with accompanying structured HFE assessment (questionnaire and interview) for monitor systems use testing. Two pilot and four study sessions with individual nurse subjects elicited true alarm and false alarm responses (including diversion from assigned tasks) as well as nonresponses to true alarms. In-simulation observation and subject questionnaires were used to test the experimental system's approach to suppressing false alarms and alerting providers. A novel investigative methodology applied simulation and HFE techniques to replicate and study alarm fatigue in controlled settings for systems assessment and experimental research purposes.

  14. Task Force on the Future of Military Health Care

    DTIC Science & Technology

    2007-12-01

    Navigator. Service programs are supported by the Military Health System Population Health Portal (MHSPHP), a centralized, secure, web-based population...Congress on March 1, 2008.66 64 Air Force Medical Support Agency, Population Health Support Division. MHS Population Health Portal Methods. July 2007...HEDIS metrics using the MHS Population Health Portal and reporting in the service systems and the Tri- Service Business Planning tool. DoD has several

  15. WISE: Automated support for software project management and measurement. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Sudhakar

    1995-01-01

One important aspect of software development and IV&V is measurement. Unless a software development effort is measured in some way, it is difficult to judge the effectiveness of current efforts and predict future performance. Collection of metrics and adherence to a process are difficult tasks in a software project. Change activity is a powerful indicator of project status. Automated systems that can handle change requests, issues, and other process documents provide an excellent platform for tracking the status of the project. A World Wide Web based architecture is developed for (a) making metrics collection an implicit part of the software process, (b) providing metric analysis dynamically, (c) supporting automated tools that can complement current practices of in-process improvement, and (d) overcoming geographical barriers. An operational system (WISE) instantiates this architecture, allowing for the improvement of the software process in a realistic environment. The tool tracks issues in the software development process, provides informal communication between users with different roles, supports to-do lists (TDL), and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis, and captures software change data. Automated tools like WISE focus on understanding and managing the software process. The goal is improvement through measurement.

  16. Optimizing spectral CT parameters for material classification tasks

    NASA Astrophysics Data System (ADS)

    Rigie, D. S.; La Rivière, P. J.

    2016-06-01

In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
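The rapidly computable metric described above is Hotelling-observer detectability. A minimal sketch for a two-class material discrimination task, assuming a simple Gaussian model with known covariance (the two-energy-bin numbers are toy values, not a CT simulation):

```python
import numpy as np

def hotelling_detectability(mean_a, mean_b, cov):
    """d = sqrt((s_a - s_b)^T K^{-1} (s_a - s_b)) for equal-covariance classes."""
    delta = np.asarray(mean_a, dtype=float) - np.asarray(mean_b, dtype=float)
    return float(np.sqrt(delta @ np.linalg.solve(cov, delta)))

# Two materials seen in two energy bins (toy mean measurements) with a toy
# noise covariance K; sweeping K or the bin settings over candidate system
# configurations would trace out a parameter optimization curve.
K = np.array([[0.04, 0.01], [0.01, 0.09]])
d = hotelling_detectability([1.0, 2.0], [1.1, 1.8], K)
```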

  17. Optimizing Spectral CT Parameters for Material Classification Tasks

    PubMed Central

    Rigie, D. S.; La Rivière, P. J.

    2017-01-01

In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430

  18. A Correlation Between Quality Management Metrics and Technical Performance Measurement

    DTIC Science & Technology

    2007-03-01

Engineering Working Group SME Subject Matter Expert SoS System of Systems SPI Schedule Performance Index SSEI System of Systems Engineering and...and stated as such [Q, M, M&G]. The QMM equation is given by: QMM = 0.92RQM + 0.67EPM + 0.55RKM + 1.86PM, where RQM is the requirements management...schedule. Now if corrective action is not taken, the project/task will be completed behind schedule and over budget. As well as the derived
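The excerpt's QMM formula is a weighted sum of four management-metric component scores (their full names are truncated in the snippet). As code, with illustrative inputs:

```python
def qmm(rqm, epm, rkm, pm):
    """Quality Management Metric: the excerpt's weighted sum of four
    management-metric component scores (component names truncated there)."""
    return 0.92 * rqm + 0.67 * epm + 0.55 * rkm + 1.86 * pm

score = qmm(0.8, 0.7, 0.9, 0.6)   # illustrative component scores
```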

  19. Medical telementoring using an augmented reality transparent display.

    PubMed

    Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan P

    2016-06-01

    The goal of this study was to design and implement a novel surgical telementoring system called the System for Telementoring with Augmented Reality (STAR) that uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared with a conventional system based on a telestrator for surgical instruction. A telementoring system was developed and evaluated in a study which used a 1 × 2 between-subjects design with telementoring system, that is, STAR or conventional, as the independent variable. The participants in the study were 20 premedical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the conventional system. The metrics used to test performance when using the system were placement error, number of focus shifts, and time to task completion. When compared with the conventional system, participants using STAR completed the 2 tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task). Using STAR resulted in decreased annotation placement error, fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon's field of view of the operating field by conveying location with great accuracy; this technology helped to avoid shifts in focus, decreased depth perception, and enabled fine-tuning execution of the task to match telementored instruction, but led to greater times to task completion. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Determining optimal parameters of the self-referent encoding task: A large-scale examination of self-referent cognition and depression.

    PubMed

    Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G

    2018-06-07

    Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
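Best subsets regression with cross-validation, the selection procedure named in the abstract, can be sketched as follows (synthetic data, not the SRET samples; `max_size` caps the subset size for tractability):

```python
import numpy as np
from itertools import combinations

def cv_mse(X, y, cols, k=5):
    """Mean squared error of an OLS fit on the given columns, k-fold CV."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.c_[np.ones(len(train)), X[np.ix_(train, cols)]]
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        Af = np.c_[np.ones(len(fold)), X[np.ix_(fold, cols)]]
        errs.append(np.mean((Af @ beta - y[fold]) ** 2))
    return float(np.mean(errs))

def best_subset(X, y, max_size=2):
    """Return the column subset with the lowest cross-validated MSE."""
    cands = [c for r in range(1, max_size + 1)
             for c in combinations(range(X.shape[1]), r)]
    return min(cands, key=lambda c: cv_mse(X, y, list(c)))

# Synthetic data where only columns 0 and 2 predict y.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=100)
print(best_subset(X, y))  # expect columns (0, 2) to be selected
```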

  1. Development and preliminary reliability of a multitasking assessment for executive functioning after concussion.

    PubMed

    Smith, Laurel B; Radomski, Mary Vining; Davidson, Leslie Freeman; Finkelstein, Marsha; Weightman, Margaret M; McCulloch, Karen L; Scherer, Matthew R

    2014-01-01

    OBJECTIVES. Executive functioning deficits may result from concussion. The Charge of Quarters (CQ) Duty Task is a multitask assessment designed to assess executive functioning in servicemembers after concussion. In this article, we discuss the rationale and process used in the development of the CQ Duty Task and present pilot data from the preliminary evaluation of interrater reliability (IRR). METHOD. Three evaluators observed as 12 healthy participants performed the CQ Duty Task and measured performance using various metrics. Intraclass correlation coefficient (ICC) quantified IRR. RESULTS. The ICC for task completion was .94. ICCs for other assessment metrics were variable. CONCLUSION. Preliminary IRR data for the CQ Duty Task are encouraging, but further investigation is needed to improve IRR in some domains. Lessons learned in the development of the CQ Duty Task could benefit future test development efforts with populations other than the military. Copyright © 2014 by the American Occupational Therapy Association, Inc.

  2. Development and Preliminary Reliability of a Multitasking Assessment for Executive Functioning After Concussion

    PubMed Central

    Radomski, Mary Vining; Davidson, Leslie Freeman; Finkelstein, Marsha; Weightman, Margaret M.; McCulloch, Karen L.; Scherer, Matthew R.

    2014-01-01

    OBJECTIVES. Executive functioning deficits may result from concussion. The Charge of Quarters (CQ) Duty Task is a multitask assessment designed to assess executive functioning in servicemembers after concussion. In this article, we discuss the rationale and process used in the development of the CQ Duty Task and present pilot data from the preliminary evaluation of interrater reliability (IRR). METHOD. Three evaluators observed as 12 healthy participants performed the CQ Duty Task and measured performance using various metrics. Intraclass correlation coefficient (ICC) quantified IRR. RESULTS. The ICC for task completion was .94. ICCs for other assessment metrics were variable. CONCLUSION. Preliminary IRR data for the CQ Duty Task are encouraging, but further investigation is needed to improve IRR in some domains. Lessons learned in the development of the CQ Duty Task could benefit future test development efforts with populations other than the military. PMID:25005507

  3. Improving Separation Assurance Stability Through Trajectory Flexibility Preservation

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Shen, Ni; Wing, David J.

    2010-01-01

    New information and automation technologies are enabling the distribution of tasks and decisions from the service providers to the users of the air traffic system, with potential capacity and cost benefits. This distribution of tasks and decisions raises the concern that independent user actions will decrease the predictability and increase the complexity of the traffic system, hence inhibiting and possibly reversing any potential benefits. One such concern is the adverse impact of uncoordinated actions by individual aircraft on the stability of separation assurance. For example, individual aircraft performing self-separation may resolve predicted losses of separation or conflicts with some traffic, only to result in secondary conflicts with other traffic or with the same traffic later in time. In answer to this concern, this paper proposes metrics for preserving user trajectory flexibility to be used in self-separation along with other objectives. The hypothesis is that preserving trajectory flexibility will naturally reduce the creation of secondary conflicts by bringing about implicit coordination between aircraft. The impact of using these metrics on improving self-separation stability is investigated by measuring the impact on secondary conflicts. The scenarios analyzed include aircraft in en route airspace with each aircraft meeting a required time of arrival in a twenty minute time horizon while maintaining separation from the surrounding traffic and using trajectory flexibility metrics to mitigate the risk of secondary conflicts. Preliminary experiments showed promising results in that the trajectory flexibility preservation reduced the potential for secondary conflicts.

  4. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
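A channelized Hotelling observer of the kind described reduces each image to a few channel outputs before computing the detectability index. A minimal sketch with Gaussian channels and a disk signal in white noise (toy parameters and channel choices, not the angiography data):

```python
import numpy as np

def gaussian_channels(size, widths):
    """Unit-norm Gaussian channel profiles, one column per width."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = xx**2 + yy**2
    U = np.stack([np.exp(-r2 / (2.0 * w**2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)

def cho_detectability(present, absent, U):
    """Detectability index from channelized signal-present/absent images."""
    vp = present.reshape(len(present), -1) @ U   # channel outputs, present
    va = absent.reshape(len(absent), -1) @ U     # channel outputs, absent
    K = 0.5 * (np.cov(vp.T) + np.cov(va.T))      # pooled channel covariance
    delta = vp.mean(axis=0) - va.mean(axis=0)    # mean signal in channel space
    return float(np.sqrt(delta @ np.linalg.solve(K, delta)))

rng = np.random.default_rng(2)
size = 32
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
disk = np.where(xx**2 + yy**2 <= 4.0**2, 0.5, 0.0)   # disk signal, amplitude 0.5
noise = rng.normal(size=(200, size, size))           # white-noise backgrounds
U = gaussian_channels(size, widths=[1.0, 2.0, 4.0, 8.0])
di = cho_detectability(noise[:100] + disk, noise[100:], U)
```

Repeating the computation for disks of different sizes would give the per-object DIs the abstract compares across systems.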

  5. Operating room metrics score card-creating a prototype for individualized feedback.

    PubMed

    Gabriel, Rodney A; Gimlich, Robert; Ehrenfeld, Jesse M; Urman, Richard D

    2014-11-01

Balancing the reduction of costs and inefficiencies with patient safety is a challenging problem in the operating room suite. An ongoing challenge is the creation of effective strategies that reduce these inefficiencies and provide real-time personalized metrics and electronic feedback to anesthesia practitioners. We created a sample report card structure, utilizing existing informatics systems. This system allows operating room metrics to be gathered and analyzed for each anesthesia provider and offers personalized feedback. To accomplish this task, we identified key metrics that represented time and quality parameters. We collected these data for individual anesthesiologists and compared performance to the overall group average. Data were presented as an electronic score card and made available to individual clinicians on a real-time basis in an effort to provide effective feedback. These metrics included number of cancelled cases, average turnover time, average time to operating room ready and patient in room, number of delayed first case starts, average induction time, average extubation time, average time from recovery room arrival to discharge, performance feedback from other providers, compliance with various protocols, and total anesthetic costs. The concept we propose can easily be generalized to a variety of operating room settings, types of facilities, and OR health care professionals. Such a scorecard can be created using content that is important for operating room efficiency, research, and practice improvement for anesthesia providers.

  6. DISTA: a portable software solution for 3D compilation of photogrammetric image blocks

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Mueller, Hartmut; Neifer, Markus

    2001-04-01

A photogrammetric evaluation system used for the precise determination of 3D coordinates from blocks of large metric images is presented. First, the motivation for the development, which lies in the field of processing tools for photogrammetric evaluation tasks, is outlined. As the use and availability of digital metric images rapidly increase, corresponding equipment for the measuring process is needed. Systems developed up to now are either highly specialized ones, built on high-end graphics workstations with correspondingly high prices, or simple ones with restricted measuring functionality. A new conception is shown that avoids special high-end graphics hardware but provides a complete processing chain for all elementary photogrammetric tasks, ranging from preparatory steps through the formation of image blocks to automatic and interactive 3D evaluation within digital stereo models. The presented system is based on PC hardware equipped with off-the-shelf graphics boards and uses an object-oriented design. The specific needs of a flexible measuring system and the corresponding requirements that have to be met by the system are shown. Important aspects such as modularity and hardware independence, and their value for the solution, are discussed. The design of the software is presented, and first results with a prototype realized on a powerful PC hardware configuration are featured.

  7. Does stereo-endoscopy improve neurosurgical targeting in 3rd ventriculostomy?

    NASA Astrophysics Data System (ADS)

    Abhari, Kamyar; de Ribaupierre, Sandrine; Peters, Terry; Eagleson, Roy

    2011-03-01

Endoscopic third ventriculostomy is a minimally invasive surgical technique to treat hydrocephalus, a condition where patients suffer from excessive amounts of cerebrospinal fluid (CSF) in the ventricular system of the brain. This technique involves using a monocular endoscope to locate the third ventricle, where a hole can be made to drain excessive fluid. Since a monocular endoscope provides only a 2D view, it is difficult to make this perforation due to the lack of monocular cues and depth perception. In a previous study, we investigated the use of a stereo-endoscope to allow neurosurgeons to locate and avoid hazardous areas on the surface of the third ventricle. In this paper, we extend that study by developing a new methodology to evaluate targeting performance when piercing the hole in the membrane. We consider the accuracy of this surgical task and derive an index of performance for a task that does not have a well-defined target position or width. Our performance metric is sensitive and can distinguish between experts and novices. We make use of this metric to demonstrate an objective learning curve on this task for each subject.
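One common way to build an index of performance for a targeting task without a well-defined target width is the Fitts-throughput convention of an "effective width" derived from endpoint scatter. The sketch below follows that convention and is illustrative, not necessarily the metric the authors derived:

```python
import math
import statistics

def effective_width(endpoint_errors_mm):
    """We = 4.133 * SD of signed endpoint error (standard throughput practice)."""
    return 4.133 * statistics.stdev(endpoint_errors_mm)

def index_of_performance(amplitude_mm, endpoint_errors_mm, movement_time_s):
    """Fitts-style throughput: ID = log2(2A / We) in bits; IP = ID / MT."""
    we = effective_width(endpoint_errors_mm)
    idx = math.log2(2.0 * amplitude_mm / we)
    return idx / movement_time_s

# Hypothetical trial: 80 mm reach, five signed endpoint errors, 1.1 s movement.
ip = index_of_performance(80.0, [1.2, -0.8, 0.5, -1.5, 0.9], 1.1)
```

Because the effective width comes from the subject's own scatter rather than a drawn target, the same formula applies to membrane-piercing tasks where no explicit target boundary exists.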

  8. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1991-01-01

    Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques; experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.

  9. Synergetic Organization in Speech Rhythm

    NASA Astrophysics Data System (ADS)

    Cummins, Fred

The Speech Cycling Task is a novel experimental paradigm developed together with Robert Port and Keiichi Tajima at Indiana University. In a task of this sort, subjects repeat a phrase containing multiple prominent, or stressed, syllables in time with an auditory metronome, which can be simple or complex. A phase-based collective variable is defined in the acoustic speech signal. This paper reports on two experiments using speech cycling which together reveal many of the hallmarks of hierarchically coupled oscillatory processes. The first experiment requires subjects to place the final stressed syllable of a small phrase at specified phases within the overall Phrase Repetition Cycle (PRC). It is clearly demonstrated that only three patterns, characterized by phases around 1/3, 1/2 or 2/3, are reliably produced, and these points are attractors for other target phases. The system is thus multistable, and the attractors correspond to stable couplings between the metrical foot and the PRC. A second experiment examines the behavior of these attractors at increased rates. Faster rates lead to mode jumps between attractors. Previous experiments have also illustrated hysteresis as the system moves from one mode to the next. The dynamical organization is particularly interesting from a modeling point of view, as there is no single part of the speech production system which cycles at the level of either the metrical foot or the phrase repetition cycle. That is, there is no continuous kinematic observable in the system. Nonetheless, there is strong evidence that the macroscopic behavior of the entire production system is correctly described as hierarchically coupled oscillators. There are many parallels between this organization and the forms of inter-limb coupling observed in locomotion and rhythmic manual tasks.

  10. Space station definition and preliminary design, WP-01. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Lenda, J. A.

    1987-01-01

    System activities are summarized and an overview of the system level engineering tasks performed are provided. Areas discussed include requirements, system test and verification, the advanced development plan, customer accommodations, software, growth, productivity, operations, product assurance and metrication. The hardware element study results are summarized. Overviews of recommended configurations are provided for the core module, the USL, the logistics elements, the propulsion subsystems, reboost, vehicle accommodations, and the smart front end. A brief overview is provided for costing activities.

  11. Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions.

    PubMed

    Kramers, Matthew; Armstrong, Ryan; Bakhshmand, Saeed M; Fenster, Aaron; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-01-01

    Image guidance can provide surgeons with valuable contextual information during a medical intervention. Often, image guidance systems require considerable infrastructure, setup-time, and operator experience to be utilized. Certain procedures performed at bedside are susceptible to navigational errors that can lead to complications. We present an application for mobile devices that can provide image guidance using augmented reality to assist in performing neurosurgical tasks. A methodology is outlined that evaluates this mode of visualization from the standpoint of perceptual localization, depth estimation, and pointing performance, in scenarios derived from a neurosurgical targeting task. By measuring user variability and speed we can report objective metrics of performance for our augmented reality guidance system.

  12. Space Launch System Advanced Development Office, FY 2013 Annual Report

    NASA Technical Reports Server (NTRS)

    Crumbly, C. M.; Bickley, F. P.; Hueter, U.

    2013-01-01

    The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediate evolution options possible. ADO takes existing technologies and matures them to the point that insertion into the mainline program minimizes risk. The ADO portfolio covers a broad range of technical development activities, supporting advanced boosters, upper stages, and other advanced development work benefiting the SLS program. A total of 34 separate tasks were funded by ADO in FY 2013.

  13. Introducing Co-Activation Pattern Metrics to Quantify Spontaneous Brain Network Dynamics

    PubMed Central

    Chen, Jingyuan E.; Chang, Catie; Greicius, Michael D.; Glover, Gary H.

    2015-01-01

    Recently, fMRI researchers have begun to realize that the brain's intrinsic network patterns may undergo substantial changes during a single resting state (RS) scan. However, despite the growing interest in brain dynamics, metrics that can quantify the variability of network patterns are still quite limited. Here, we first introduce various quantification metrics based on an extension of co-activation pattern (CAP) analysis, a recently proposed point-process analysis that tracks state alternations at each individual time frame and relies on very few assumptions. We then apply these metrics to quantify changes of brain dynamics during a sustained 2-back working memory (WM) task compared to rest. We focus on the functional connectivity of two prominent RS networks, the default-mode network (DMN) and executive control network (ECN). We first demonstrate less variability of global Pearson correlations with respect to the two chosen networks using a sliding-window approach during the WM task compared to rest; we then show that the macroscopic decrease in variations in correlations during the WM task is also well characterized by the combined effect of a reduced number of dominant CAPs, increased spatial consistency across CAPs, and increased fractional contributions of a few dominant CAPs. These CAP metrics may provide alternative and more straightforward quantitative means of characterizing brain network dynamics than time-windowed correlation analyses. PMID:25662866
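
One of the simplest of these CAP metrics, the fractional contribution of each dominant CAP, can be sketched as below (a hypothetical helper; it assumes the supra-threshold time frames have already been clustered into CAP labels):

```python
from collections import Counter

def fractional_contributions(labels):
    """Fraction of selected (supra-threshold) time frames assigned to each
    co-activation pattern (CAP); larger dominant fractions indicate less
    variable network dynamics, as described above."""
    counts = Counter(labels)
    total = len(labels)
    return {cap: n / total for cap, n in sorted(counts.items())}

# Example: 10 supra-threshold frames clustered into three CAPs.
print(fractional_contributions([0, 0, 0, 0, 0, 1, 1, 1, 2, 2]))
# -> {0: 0.5, 1: 0.3, 2: 0.2}
```

In the WM condition the study reports fewer dominant CAPs with larger fractional contributions, i.e., a more concentrated version of this distribution.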

  14. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms for detecting anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability to a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
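
Once an evaluation scheme has decided which proposed detections count as true positives, the metrics themselves reduce to simple ratios. A minimal sketch (function and variable names are illustrative):

```python
def precision_recall(num_true_positives, num_detections, num_ground_truth):
    """Precision and recall given counts produced by an evaluation scheme
    that has already matched detections to ground-truth anomalies."""
    precision = num_true_positives / num_detections if num_detections else 0.0
    recall = num_true_positives / num_ground_truth if num_ground_truth else 0.0
    return precision, recall

# Example: 8 of 10 proposed detections matched; 12 anomalies in ground truth.
print(precision_recall(8, 10, 12))  # -> (0.8, 0.6666666666666666)
```

The bias discussed above enters through the matching rule itself, e.g., how much overlap a detection needs before it is counted in `num_true_positives`; the ratios are only as meaningful as that scheme.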

  15. Monitoring cognitive and emotional processes through pupil and cardiac response during dynamic versus logical task.

    PubMed

    Causse, Mickaël; Sénard, Jean-Michel; Démonet, Jean François; Pastor, Josette

    2010-06-01

    The paper deals with the links between physiological measurements and cognitive and emotional functioning. Because the operator is a key agent in charge of complex systems, defining metrics able to predict operator performance is a great challenge. Measuring physiological state is a promising approach, but it requires careful interpretation; in particular, few studies compare autonomic nervous system reactivity across specific cognitive processes during task performance, and task-related psychological stress is often ignored. We compared physiological parameters recorded from 24 healthy subjects facing two neuropsychological tasks: a dynamic task that requires problem solving in a world that continually evolves over time, and a logical task representative of the cognitive processes performed by operators during everyday problem solving. Results showed that the mean pupil diameter change was higher during the dynamic task; conversely, the heart rate was more elevated during the logical task. Finally, systolic blood pressure appeared strongly sensitive to psychological stress. Accounting more precisely for the influence of a given cognitive activity, the associated workload, and task-induced psychological stress is a promising way to monitor operators in complex working situations and to detect mental overload or stress factors conducive to error.

  16. Task-Driven Comparison of Topic Models.

    PubMed

    Alexander, Eric; Gleicher, Michael

    2016-01-01

    Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.

  17. Usability: Human Research Program - Space Human Factors and Habitability

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Holden, Kritina L.

    2009-01-01

    The Usability project addresses the need for research in the area of metrics and methodologies used in hardware and software usability testing in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation where a participant works through a realistic set of representative tasks using the hardware/software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain in accordance with FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost and time efficient, and be able to be applied without extensive specialized training.

  18. Steganalysis for Audio Data

    DTIC Science & Technology

    2006-03-31

    from existing image steganography and steganalysis techniques, the overall objective of Task (b) is to design and implement audio steganography in...general design of the VoIP steganography algorithm is based on known LSB hiding techniques (used for example in StegHide (http...system. Nasir Memon et al. described a steganalyzer based on image quality metrics [AMS03]. Basically, the main idea to detect steganography by

  19. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  20. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter preferred as the calibration metric because it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
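
The first two metrics have standard textbook definitions from ROI statistics, sketched below (generic forms for orientation, not necessarily the exact operational definitions used in the study):

```python
def roi_snr(mean_signal, noise_sd):
    """Signal-to-noise ratio of a uniform region of interest:
    mean pixel value divided by the standard deviation of the noise."""
    return mean_signal / noise_sd

def roi_cnr(mean_signal_a, mean_signal_b, noise_sd):
    """Contrast-to-noise ratio between two regions sharing the same noise:
    absolute signal difference divided by the noise standard deviation."""
    return abs(mean_signal_a - mean_signal_b) / noise_sd

# Example: uniform phantom ROI with mean 100 and noise SD 2,
# and a contrast insert with mean 90.
print(roi_snr(100.0, 2.0))        # -> 50.0
print(roi_cnr(100.0, 90.0, 2.0))  # -> 5.0
```

Holding such a metric constant across tube voltage is what defines each candidate AEC calibration curve in the study.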

  1. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter preferred as the calibration metric because it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  2. Status Report on Activities of the Systems Assessment Task Force, OECD-NEA Expert Group on Accident Tolerant Fuels for LWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bragg-Sitton, Shannon Michelle

    The Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) Nuclear Science Committee approved the formation of an Expert Group on Accident Tolerant Fuel (ATF) for LWRs (EGATFL) in 2014. Chaired by Kemal Pasamehmetoglu, INL Associate Laboratory Director for Nuclear Science and Technology, the mandate for the EGATFL defines work under three task forces: (1) Systems Assessment, (2) Cladding and Core Materials, and (3) Fuel Concepts. Scope for the Systems Assessment task force includes definition of evaluation metrics for ATF, technology readiness level definition, definition of illustrative scenarios for ATF evaluation, parametric studies, and selection of system codes. The Cladding and Core Materials and Fuel Concepts task forces will identify gaps and needs for modeling and experimental demonstration; define key properties of interest; identify the data necessary to perform concept evaluation under normal conditions and illustrative scenarios; identify available infrastructure (internationally) to support experimental needs; and make recommendations on priorities. Where possible, considering proprietary and other export restrictions (e.g., International Traffic in Arms Regulations), the Expert Group will facilitate the sharing of data and lessons learned across the international group membership. The Systems Assessment Task Force is chaired by Shannon Bragg-Sitton (INL), while the Cladding Task Force will be chaired by a representative from France (Marie Moatti, Electricite de France [EdF]) and the Fuels Task Force will be chaired by a representative from Japan (Masaki Kurata, Japan Atomic Energy Agency [JAEA]). This report provides an overview of the Systems Assessment Task Force charter and status of work accomplishment.

  3. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. © 2016 The Authors.
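
As an illustration of an ROI-based uniformity metric of the kind evaluated here (a generic peak-to-peak definition over ROI mean signals, not TG-150's exact formula):

```python
def global_nonuniformity(roi_means):
    """Illustrative global signal nonuniformity: peak-to-peak spread of the
    per-ROI mean signals relative to their overall mean, in percent.
    Because the set of ROI means changes with ROI size and layout, the
    value of such a metric depends on both, as the study observes."""
    overall_mean = sum(roi_means) / len(roi_means)
    return 100.0 * (max(roi_means) - min(roi_means)) / overall_mean

# Example: nine ROI means sampled across a flat-field exposure.
print(global_nonuniformity([98, 100, 101, 99, 100, 102, 100, 99, 101]))  # -> 4.0
```

This dependence on the ROI set is precisely why the study recommends fixing the ROI size when tracking nonuniformity over time.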

  4. Evaluation of cassette‐based digital radiography detectors using standardized image quality metrics: AAPM TG‐150 Draft Image Detector Tests

    PubMed Central

    Greene, Travis C.; Nishino, Thomas K.; Willis, Charles E.

    2016-01-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region‐of‐interest (ROI)‐based techniques to measure nonuniformity, minimum signal‐to‐noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX‐1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG‐150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG‐150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG‐150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG‐150 tests can be used as an independent standardized procedure for detector performance assessment. PACS number(s): 87.57.‐s, 87.57.C PMID:27685102

  5. Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.

    PubMed

    Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li

    2011-04-01

    New strategies for selection and training of physicians are emerging. Previous studies have demonstrated a correlation of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis of how these abilities are associated with simulator performance metrics across different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with task content. Twenty-five medical students participated in the study, which involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that differences exist in the impact of visual abilities on simulator performance depending on task content. When designing future cognitive training programs and testing regimes, the design may therefore need to be adjusted to the specific surgical task to be trained.

  6. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    PubMed

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
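
In generalized linear-systems analyses of this kind, the intrinsic detector MTF is cascaded with focal-spot and scatter terms that depend on imaging geometry. A representative one-dimensional form, shown for orientation only (the paper's actual 2D formulation and frequency referencing may differ), is:

```latex
% f: spatial frequency referred to the object plane; m: geometric magnification.
% MTF_fs: focal-spot MTF, MTF_det: detector MTF, MTF_sc: scatter MTF.
\mathrm{GMTF}(f) \;=\;
  \mathrm{MTF}_{\mathrm{fs}}\!\Big(\tfrac{m-1}{m}\,f\Big)\,
  \mathrm{MTF}_{\mathrm{det}}\!\Big(\tfrac{f}{m}\Big)\,
  \mathrm{MTF}_{\mathrm{sc}}(f),
\qquad
\mathrm{GDQE}(f) \;=\; \frac{\bar{S}^{2}\,\mathrm{GMTF}^{2}(f)}{\bar{q}\,\mathrm{GNNPS}(f)}
```

where $\bar{S}$ is the mean signal and $\bar{q}$ the incident photon fluence, by analogy with the standard DQE definition; the $(m-1)/m$ and $1/m$ arguments express how focal-spot and detector blur scale with magnification.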

  7. Shaping of arm configuration space by prescription of non-Euclidean metrics with applications to human motor control

    NASA Astrophysics Data System (ADS)

    Biess, Armin

    2013-01-01

    The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks ranging from three-dimensional unconstrained movements of a four degree of freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter, speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic), and compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
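
The empirical one-third power law referenced above links hand speed and path curvature during curved movements; in its usual statement,

```latex
% Tangential hand speed v as a function of path curvature kappa;
% gamma is a gain factor, approximately constant within a movement segment.
v(t) \;=\; \gamma\,\kappa(t)^{-1/3}
% Equivalent angular form, since A = v\,\kappa:  A(t) = \gamma\,\kappa(t)^{2/3}.
```

so sharper curves are traversed more slowly. The derived speed-curvature relations establish when the geodesic model reproduces this relation under the chosen metric and boundary conditions.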

  8. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
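
A core reliability metric in such adequacy comparisons is the loss-of-load expectation. A deliberately simplified, deterministic sketch is shown below (real adequacy studies convolve probabilistic generator-outage distributions rather than counting shortfall hours in a single trace):

```python
def loss_of_load_expectation(hourly_load, hourly_available_capacity):
    """Simplified loss-of-load expectation (LOLE): count of periods in which
    available capacity falls short of load. With a single deterministic
    capacity trace per hour this reduces to counting shortfall hours."""
    return sum(1 for load, cap in zip(hourly_load, hourly_available_capacity)
               if load > cap)

# Example: a 6-hour trace with one shortfall hour (hour 3: 95 > 90).
print(loss_of_load_expectation([70, 80, 95, 100, 90, 85],
                               [100, 100, 90, 105, 95, 100]))  # -> 1
```

Capacity value is then typically defined as the amount of firm capacity a variable resource can replace while holding such a reliability metric constant, which is why the choice of metric matters.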

  9. Establishing a curriculum for the acquisition of laparoscopic psychomotor skills in the virtual reality environment.

    PubMed

    Sinitsky, Daniel M; Fernando, Bimbi; Berlingieri, Pasquale

    2012-09-01

    The unique psychomotor skills required in laparoscopy result in reduced patient safety during the early part of the learning curve. Evidence suggests that these may be safely acquired in the virtual reality (VR) environment. Several VR simulators are available, each preloaded with several psychomotor skills tasks that provide users with computer-generated performance metrics. This review aimed to evaluate the usefulness of specific psychomotor skills tasks and metrics, and how trainers might build an effective training curriculum. We performed a comprehensive literature search. The vast majority of VR psychomotor skills tasks show construct validity for one or more metrics. These are commonly for time and motion parameters. Regarding training schedules, distributed practice is preferred over massed practice. However, a degree of supervision may be needed to counter the limitations of VR training. In the future, standardized proficiency scores should facilitate local institutions in establishing VR laparoscopic psychomotor skills curricula. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Virtual reality-based assessment of basic laparoscopic skills using the Leap Motion controller.

    PubMed

    Lahanas, Vasileios; Loukas, Constantinos; Georgiou, Konstantinos; Lababidi, Hani; Al-Jaroudi, Dania

    2017-12-01

    The majority of the current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, pathlength, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectory revealed that experts performed more delicate movements compared to novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. This study provides evidence regarding the potential use of the Leap Motion controller for assessment of basic laparoscopic skills. The proposed system allowed the evaluation of dexterity of the hand movements. 
Future work will involve comparison studies with validated simulators and the development of advanced training scenarios for the Leap Motion controller.

  11. Energy-Based Metrics for Arthroscopic Skills Assessment.

    PubMed

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
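The normalize-then-classify pipeline described in this record can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature values, the two-trials-per-subject grouping, and the RBF kernel are assumptions made for the example; only the normalized-energy idea, the SVM classifier, and the leave-one-subject-out protocol come from the abstract.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

def normalized_kinetic_energy(velocities, mass, ideal_energy):
    """Total kinetic energy of a tool trajectory divided by the energy of
    an idealized reference performance of the same task (ratio of 1 = ideal)."""
    ke = 0.5 * mass * np.sum(np.linalg.norm(velocities, axis=1) ** 2)
    return ke / ideal_energy

# Synthetic stand-in data: one vector of normalized energy metrics per trial.
rng = np.random.default_rng(0)
n_metrics = 4
X = rng.normal(loc=[1.2, 1.5, 1.3, 1.8], scale=0.2, size=(52, n_metrics))
X[:26] -= 0.4                            # "experts": ratios closer to 1 (ideal)
y = np.array([1] * 26 + [0] * 26)        # 1 = expert, 0 = novice
subjects = np.repeat(np.arange(26), 2)   # two trials per subject (assumed)

# Leave-one-subject-out: all trials of one subject form the test fold, so
# the classifier never sees the held-out subject during training.
scores = cross_val_score(SVC(kernel="rbf"), X, y,
                         groups=subjects, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.2f}")
```

The grouping step is the part that matters: an ordinary k-fold split would leak trials of the same subject into both train and test sets and inflate accuracy.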

  12. Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    2015-05-01

Fusion of broadband panchromatic data with narrow band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (a qualitative measure) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. 
Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
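The sub-segment Fourier metric just described can be sketched roughly as below. This is a minimal interpretation under stated assumptions: the radial frequency cutoff of 0.25 cycles/pixel and the 32-pixel tile size are placeholders, as the study does not give its parameters here.

```python
import numpy as np

def high_freq_content(img, cutoff=0.25):
    """Fraction of the Fourier power spectrum above a radial frequency cutoff
    (frequencies normalized so that 0.5 is the Nyquist limit)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    return power[r > cutoff].sum() / power.sum()

def spatial_improvement(pansharpened, reference, tile=32):
    """Average per-tile increase in high-frequency content relative to a
    reference image; averaging over tiles damps scene-content dependence."""
    gains = []
    for i in range(0, reference.shape[0] - tile + 1, tile):
        for j in range(0, reference.shape[1] - tile + 1, tile):
            gains.append(high_freq_content(pansharpened[i:i+tile, j:j+tile])
                         - high_freq_content(reference[i:i+tile, j:j+tile]))
    return float(np.mean(gains))
```

A sharpened image should score positively against its blurred counterpart, and the sign flips when the arguments are swapped, which is the basic sanity check for such a metric.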

  13. Effects of metric change on safety in the workplace for selected occupations

    NASA Astrophysics Data System (ADS)

    Lefande, J. M.; Pokorney, J. L.

    1982-04-01

The study assesses the potential safety issues of metric conversion in the workplace. A purposive sample of 35 occupations, selected on the basis of injury and illness indices, was assessed. After an analysis of workforce population, hazard analysis, and measurement sensitivity of the occupations, jobs were analyzed by industrial hygienists, safety engineers, and academics to identify potential safety hazards. The study's major findings were as follows: no metric hazard experience was identified; increased exposure might occur while particular jobs and their job tasks are undergoing the transition from customary measurement to metric measurement; well-planned metric change programs reduce hazard potential; and metric safety issues remain unresolved in the aviation industry.

  14. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric by using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings; learning a separate metric for each pair would account for these differences, but it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on such insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned jointly with a regularization term that alleviates over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
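The two ingredients of the formulation above, a Mahalanobis distance per camera pair and a joint regularizer tying the pair metrics together, can be sketched in a few lines. This is an illustrative skeleton, not the MtMCML algorithm itself; the metric values and the regularization weight are assumptions.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance between feature vectors under metric M
    (a positive semi-definite matrix); M = I recovers squared Euclidean."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ M @ d)

def joint_regularizer(pair_metrics, shared_metric, lam=0.1):
    """Joint regularization term: each camera-pair metric is pulled toward a
    shared metric, so pairs with few labeled images borrow strength from the
    rest of the network instead of over-fitting their own scarce data."""
    return lam * sum(np.linalg.norm(M - shared_metric, "fro") ** 2
                     for M in pair_metrics)

M0 = np.eye(2)                          # shared metric (Euclidean to start)
M_pair = M0 + 0.05 * np.ones((2, 2))    # one camera-pair metric, slightly adapted
print(mahalanobis_sq([0.0, 0.0], [3.0, 4.0], M0))   # 25.0 (squared Euclidean)
print(joint_regularizer([M_pair], M0))
```

In a full learner, the regularizer would be added to the per-pair classification losses and all metrics optimized together.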

  15. Meaningful Assessment of Robotic Surgical Style using the Wisdom of Crowds.

    PubMed

    Ershad, M; Rege, R; Fey, A Majewicz

    2018-07-01

Quantitative assessment of surgical skills is an important aspect of surgical training; however, the proposed metrics are sometimes difficult to interpret and may not capture the stylistic characteristics that define expertise. This study proposes a methodology for evaluating surgical skill based on metrics associated with stylistic adjectives, and evaluates the ability of this method to differentiate expertise levels. We recruited subjects from different expertise levels to perform training tasks on a surgical simulator. A lexicon of contrasting adjective pairs, based on important skills for robotic surgery and inspired by the global evaluative assessment of robotic skills tool, was developed. To validate the use of stylistic adjectives for surgical skill assessment, posture videos of the subjects performing the task, as well as videos of the task itself, were rated by crowd workers. Metrics associated with each adjective were found using kinematic and physiological measurements through correlation with the crowd-sourced adjective assignment ratings. To evaluate the chosen metrics' ability to distinguish expertise levels, two classifiers were trained and tested using these metrics. Crowd-assigned ratings for all adjectives were significantly correlated with expertise levels. The results indicate that the naive Bayes classifier performs the best, with an accuracy of [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] when classifying into four, three, and two levels of expertise, respectively. The proposed method is effective at mapping understandable adjectives of expertise to the stylistic movements and physiological response of trainees.

  16. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

Abstract. Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disk objects and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
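The channelized Hotelling observer's detectability index can be computed from sample statistics of the channel outputs, as sketched below. This is a generic textbook-style sketch, not the paper's implementation: real studies use structured channels (e.g., Gabor or Laguerre-Gauss), and the 1-D "disk" template, random channels, and sample sizes here are placeholder assumptions.

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Detectability index (DI) of a channelized Hotelling observer.
    Images are flattened to shape (n_images, n_pixels); channels is an
    (n_pixels, n_channels) matrix of channel templates."""
    vs = signal_imgs @ channels                 # channel outputs, signal present
    vn = noise_imgs @ channels                  # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)      # mean channel-output difference
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Synthetic check: a crude 1-D "disk" profile embedded in white noise.
rng = np.random.default_rng(0)
n, n_pix, n_ch = 500, 64, 6
template = np.zeros(n_pix)
template[24:40] = 1.0
channels = rng.normal(size=(n_pix, n_ch))       # placeholder random channels
noise_a = rng.normal(size=(n, n_pix))
noise_b = rng.normal(size=(n, n_pix))
di_weak = cho_detectability(noise_a + 0.3 * template, noise_b, channels)
di_strong = cho_detectability(noise_a + 1.0 * template, noise_b, channels)
print(di_weak, di_strong)   # a stronger signal should yield a larger DI
```

The DI grows with signal contrast, which is the linearity with quantum SNR that the record describes, up to the point where system blur breaks it for small objects.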

  17. TOD to TTP calibration

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected from military personnel performing an identification task on a standard 12 target, 12 aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
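The calibration reduces to a single empirical ratio, which makes the conversion trivial to apply. The ratio 6.3 is from the record; the 2.3 m vehicle size below is an illustrative number, not a value from the study.

```python
def tod_pattern_size(target_size, ratio=6.3):
    """Triangle test-pattern size at the TOD threshold corresponding to a
    target of the given characteristic size, using the empirical ratio 6.3
    between target characteristic size and TOD pattern size at threshold."""
    return target_size / ratio

# A tactical vehicle with an (assumed) 2.3 m characteristic size maps to a
# TOD triangle of roughly 0.37 m at threshold.
print(round(tod_pattern_size(2.3), 3))
```

In practice this lets a measured TOD curve for a camera stand in for the perception experiments that the TTP metric was calibrated against.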

  18. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  19. Space Launch System Spacecraft/Payloads Integration and Evolution Office Advanced Development FY 2014 Annual Report

    NASA Technical Reports Server (NTRS)

    Crumbly, C. M.; Bickley, F. P.; Hueter, U.

    2015-01-01

The Advanced Development Office (ADO), part of the Space Launch System (SLS) program, provides SLS with the advanced development needed to evolve the vehicle from an initial Block 1 payload capability of 70 metric tons (t) to an eventual Block 2 capability of 130 t, with intermediary evolution options possible. ADO takes existing technologies and matures them to the point that insertion into the mainline program minimizes risk. The ADO portfolio of tasks covers a broad range of technical developmental activities. The ADO portfolio supports the development of advanced boosters, upper stages, and other advanced development activities benefiting the SLS program. A total of 36 separate tasks were funded by ADO in FY 2014.

  20. Coalition Formation under Uncertainty

    DTIC Science & Technology

    2010-03-01

    world robotics and demonstrate the algorithm’s scalability. This provides a framework well suited to decentralized task allocation in general collectives...impatience and acquiescence to define a robot allocation to a task in a decentralized manner. The tasks are assigned to the entire collective, and one...20] allocates tasks to robots with a first-price auction method [31]. It announces a task with defined metrics, then the robots issue bids. The task
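The first-price auction allocation mentioned in the snippet works roughly as follows: a task is announced with its metrics, each robot issues a bid, and the best bid wins. The robot names, positions, and the Manhattan-distance bid function below are illustrative assumptions, not details from the report.

```python
def first_price_auction(task_pos, robots):
    """Announce a task, collect one bid per robot, and award the task to
    the best bid. In a first-price auction the winner is bound by the
    price it bid (here, its estimated travel cost)."""
    bids = {name: abs(pos[0] - task_pos[0]) + abs(pos[1] - task_pos[1])
            for name, pos in robots.items()}    # bid = Manhattan travel cost
    winner = min(bids, key=bids.get)            # lowest-cost bid wins
    return winner, bids[winner]

robots = {"r1": (0, 0), "r2": (4, 1), "r3": (9, 9)}
print(first_price_auction((5, 1), robots))      # r2 is closest and wins
```

The scheme is decentralized in the sense that each robot computes its own bid from local state; only the award step needs a shared channel.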

  1. An Overview of the NOAA Drought Task Force

    NASA Technical Reports Server (NTRS)

    Schubert, S.; Mo, K.; Peters-Lidard, C.; Wood, A.

    2012-01-01

The charge of the NOAA Drought Task Force is to coordinate and facilitate the various MAPP-funded research efforts with the overall goal of achieving significant advances in understanding and in the ability to monitor and predict drought over North America. In order to achieve this, the task force has developed a Drought Test-bed that individual research groups can use to test/evaluate methods and ideas. Central to this is a focus on three high profile North American droughts (the 1998-2004 western US drought, the 2006-2007 SE US drought, and the 2011-current Tex-Mex drought) to facilitate collaboration among projects, including the development of metrics to assess the quality of monitoring and prediction products, and the development of an experimental drought monitoring and prediction system that incorporates and assesses recent advances. This talk will review the progress and plans of the task force, including efforts to help advance official national drought products, and the development of early warning systems by the National Integrated Drought Information System (NIDIS). Coordination with other relevant national and international efforts such as the emerging NMME capabilities and the international effort to develop a Global Drought Information System (GDIS) will be discussed.

  2. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are determined solely by the systems in the portfolio, independent of mission performance. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. 
A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of system architecture studies of different sizes. This was necessary since system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. 
The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes: the use of portfolio views and top 'n' analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability based analysis. (Abstract shortened by UMI.)

  3. Distance Metric Learning Using Privileged Information for Face Verification and Person Re-Identification.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong

    2015-12-01

    In this paper, we propose a new approach to improve face verification and person re-identification in the RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging faces data sets EUROCOM and CurtinFaces for face verification as well as the BIWI RGBD-ID data set for person re-identification demonstrate the effectiveness of our proposed approach.

  4. The CREST Simulation Development Process: Training the Next Generation.

    PubMed

    Sweet, Robert M

    2017-04-01

The challenges of training and assessing endourologic skill have driven the development of new training systems. The Center for Research in Education and Simulation Technologies (CREST) has developed a team and a methodology to facilitate this development process. Backwards design principles were applied. A panel of experts first defined desired clinical and educational outcomes. Outcomes were subsequently linked to learning objectives. Gross task deconstruction was performed, and the primary domain was classified as primarily involving decision-making, psychomotor skill, or communication. A more detailed cognitive task analysis was performed to elicit and prioritize relevant anatomy/tissues, metrics, and errors. Reference anatomy was created by a digital anatomist and a clinician working from a clinical data set. Three-dimensional printing can facilitate this process. When possible, synthetic or virtual tissue behavior and textures were recreated using data derived from human tissue. Embedded sensors/markers and/or computer-based systems were used to facilitate the collection of objective metrics. Verification and validation occurred throughout the engineering development process. Nine endourology-relevant training systems were created by CREST with this approach. Systems include basic laparoscopic skills (BLUS), vesicourethral anastomosis, pyeloplasty, cystoscopic procedures, stent placement, rigid and flexible ureteroscopy, GreenLight PVP (GL Sim), Percutaneous access with C-arm (CAT), Nephrolithotomy (NLM), and a vascular injury model. Mixed modalities have been used, including "smart" physical models, virtual reality, augmented reality, and video. Substantial validity evidence for training and assessment has been collected on systems. 
An open source manikin-based modular platform is under development by CREST with the Department of Defense that will unify these and other commercial task trainers through the common physiology engine, learning management system, standard data connectors, and standards. Using the CREST process has and will ensure that the systems we create meet the needs of training and assessing endourologic skills.

  5. Can a virtual reality surgical simulation training provide a self-driven and mentor-free skills learning? Investigation of the practical influence of the performance metrics from the virtual reality robotic surgery simulator on the skill learning and associated cognitive workloads.

    PubMed

    Lee, Gyusung I; Lee, Mija R

    2018-01-01

While it is often claimed that a virtual reality (VR) training system can offer self-directed and mentor-free skill learning using the system's performance metrics (PM), no studies have yet provided evidence-based confirmation. This experimental study investigated the extent to which trainees achieved self-directed learning with a current VR simulator and whether additional mentoring improved skill learning, skill transfer, and cognitive workloads in robotic surgery simulation training. Thirty-two surgical trainees were randomly assigned to either the Control-Group (CG) or Experiment-Group (EG). While the CG participants reviewed the PM at their discretion, the EG participants had explanations about PM and instructions on how to improve scores. Each subject completed a 5-week training using four simulation tasks. Pre- and post-training data were collected using both a simulator and robot. Peri-training data were collected after each session. Skill learning, time spent on PM (TPM), and cognitive workloads were compared between groups. After the simulation training, CG showed substantially lower simulation task scores (82.9 ± 6.0) compared with EG (93.2 ± 4.8). Both groups demonstrated improved physical model task performance with the actual robot, but the EG had a greater improvement in two tasks. The EG exhibited lower global mental workload/distress, higher engagement, and a better understanding regarding using PM to improve performance. The EG's TPM was initially long but substantially shortened as the group became familiar with PM. Our study demonstrated that the current VR simulator offered limited self-skill learning and that additional mentoring still played an important role in improving the robotic surgery simulation training.

  6. BioNLP Shared Task--The Bacteria Track.

    PubMed

    Bossy, Robert; Jourde, Julien; Manine, Alain-Pierre; Veber, Philippe; Alphonse, Erick; van de Guchte, Maarten; Bessières, Philippe; Nédellec, Claire

    2012-06-26

We present the BioNLP 2011 Shared Task Bacteria Track, the first Information Extraction challenge entirely dedicated to bacteria. It includes three tasks that cover different levels of biological knowledge. The Bacteria Gene Renaming supporting task is aimed at extracting gene renaming and gene name synonymy in PubMed abstracts. The Bacteria Gene Interaction is a gene/protein interaction extraction task from individual sentences. The interactions have been categorized into ten different sub-types, thus giving a detailed account of genetic regulations at the molecular level. Finally, the Bacteria Biotopes task focuses on the localization and environment of bacteria mentioned in textbook articles. We describe the process of creation for the three corpora, including document acquisition and manual annotation, as well as the metrics used to evaluate the participants' submissions. Three teams submitted to the Bacteria Gene Renaming task; the best team achieved an F-score of 87%. For the Bacteria Gene Interaction task, the only participating team achieved a global F-score of 77%, although the system efficiency varies significantly from one sub-type to another. Three teams submitted to the Bacteria Biotopes task with very different approaches; the best team achieved an F-score of 45%. The detailed study of the participating systems' efficiency reveals the strengths and weaknesses of each system. The three tasks of the Bacteria Track offer participants a chance to address a wide range of issues in Information Extraction, including entity recognition, semantic typing, and coreference resolution. We found common trends in the most efficient systems: the systematic use of syntactic dependencies and machine learning. Nevertheless, the originality of the Bacteria Biotopes task encouraged the use of interesting novel methods and techniques, such as term compositionality and scopes wider than the sentence.

  7. Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking

    NASA Technical Reports Server (NTRS)

    Turgeon, Gregory; Price, Petra

    2010-01-01

A feasibility study was performed on a representative aerospace system to determine the following: (1) the benefits and limitations of using SCADE, a commercially available tool for model checking, in comparison to using a proprietary tool that was studied previously [1] and (2) metrics for performing the model checking and for assessing the findings. This study was performed independently of the development task by a group unfamiliar with the system, providing a fresh, external perspective free from development bias.

  8. Motion generation of robotic surgical tasks: learning from expert demonstrations.

    PubMed

    Reiley, Carol E; Plaku, Erion; Hager, Gregory D

    2010-01-01

Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical's da Vinci Surgical System of a panel of expert surgeons performing three surgical tasks are recorded. The trials are decomposed into subtasks or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory to reproduce a trajectory of the task. The approach is evaluated through an automated skill assessment measurement. Results suggest that this paper presents a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for reproduction of three common medical tasks.
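The temporal-alignment step of the pipeline above can be sketched with a classic dynamic time warping recursion; this is a generic 1-D textbook version, not the paper's implementation, and the toy sequences are assumptions for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D trajectories of possibly
    different lengths, computed by dynamic programming over all monotone
    alignments of the two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # stretch a
                                 D[i, j - 1],       # stretch b
                                 D[i - 1, j - 1])   # match
    return float(D[n, m])

# Two demonstrations of the "same" motion at different speeds align at zero
# cost, unlike a naive point-by-point comparison of unequal-length series.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))   # 0.0
```

After alignment, the pooled demonstration points can be fed to a GMM, with GMR producing the smooth reference trajectory the record describes.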

  9. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    PubMed Central

    Bowen, Chris; Ye, Gu; Alterovitz, Ron

    2015-01-01

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles.
We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
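The Mahalanobis-distance cost the abstract describes can be illustrated in miniature: fit a mean and covariance to demonstration features, then score candidate trajectories by their distance from that distribution. The feature vectors below and their interpretation are invented for illustration:

```python
import numpy as np

# Hypothetical feature vectors extracted from user demonstrations
# (e.g., end-effector orientation components at sampled waypoints).
rng = np.random.default_rng(1)
demo_features = rng.normal([0.0, 1.0, 0.5], [0.05, 0.2, 0.1], size=(20, 3))

mu = demo_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(demo_features, rowvar=False))

def mahalanobis_cost(feature):
    """Distance of a candidate's features from the demonstration distribution."""
    d = feature - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A candidate matching the demonstrations scores low; one violating a
# consistent feature (e.g., tilting the plate) scores high.
consistent = mahalanobis_cost(mu)
violating = mahalanobis_cost(mu + np.array([0.5, 0.0, 0.0]))
```

Because the covariance weights each feature by how tightly the demonstrations agree on it, deviations in consistently demonstrated features are penalized far more than deviations in features the demonstrator varied freely.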

  10. The development of a virtual reality training curriculum for colonoscopy.

    PubMed

    Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara

    2012-07-01

    The development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants, comprising 30 novices (<10 colonoscopies), 10 intermediates (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists, were recruited to participate. Surrogates of proficiency, such as number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups, respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics including the following: "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.

  11. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    NASA Technical Reports Server (NTRS)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  12. The MPC&A Questionnaire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    The questionnaire is the instrument used for recording performance data on the nuclear material protection, control, and accountability (MPC&A) system at a nuclear facility. The performance information provides a basis for evaluating the effectiveness of the MPC&A system. The goal for the questionnaire is to provide an accurate representation of the performance of the MPC&A system as it currently exists in the facility. Performance grades for all basic MPC&A functions should realistically reflect the actual level of performance at the time the survey is conducted. The questionnaire was developed after testing and benchmarking the material control and accountability (MC&A) system effectiveness tool (MSET) in the United States. The benchmarking exercise at the Idaho National Laboratory (INL) proved extremely valuable for improving the content and quality of the early versions of the questionnaire. Members of the INL benchmark team identified many areas of the questionnaire where questions should be clarified and areas where additional questions should be incorporated. The questionnaire addresses all elements of the MC&A system. Specific parts pertain to the foundation for the facility's overall MPC&A system, and other parts pertain to the specific functions of the operational MPC&A system. The questionnaire includes performance metrics for each of the basic functions or tasks performed in the operational MPC&A system. All of those basic functions or tasks are represented as basic events in the MPC&A fault tree. Performance metrics are to be used during completion of the questionnaire to report what is actually being done in relation to what should be done in the performance of MPC&A functions.

  13. Usability testing of a mobile robotic system for in-home telerehabilitation.

    PubMed

    Boissy, Patrick; Brière, Simon; Corriveau, Hélène; Grant, Andrew; Lauria, Michel; Michaud, François

    2011-01-01

    Mobile robots designed to enhance telepresence in the support of telehealth services are being considered for numerous applications. TELEROBOT is a teleoperated mobile robotic platform equipped with videoconferencing capabilities and designed to be used in a home environment. In this study, learnability of the system's teleoperation interface and controls was evaluated with ten rehabilitation professionals during four training sessions in a laboratory environment and in an unknown home environment while executing a standardized evaluation protocol typically used in home care. Results show that the novice teleoperators' performances on two of the four metrics used (number of commands and total time) improved significantly across training sessions (ANOVAs, p < 0.05) and that performance on these metrics in the last training session reflected teleoperation abilities seen in the unknown home environment during navigation tasks (r = 0.77 and 0.60). With only 4 hours of training, rehabilitation professionals were able to learn to teleoperate TELEROBOT successfully. However, teleoperation performances remained significantly less efficient than those of an expert. Under the home task condition (navigating the home environment from one point to the other as fast as possible), this translated to completion times between 350 seconds (best performance) and 850 seconds (worst performance). Improvements in other usability aspects of the system will be needed to meet the requirements of in-home telerehabilitation.

  14. C3 generic workstation: Performance metrics and applications

    NASA Technical Reports Server (NTRS)

    Eddy, Douglas R.

    1988-01-01

    The large number of integrated dependent measures available on a command, control, and communications (C3) generic workstation under development are described. In this system, embedded communications tasks will manipulate workload to assess the effects of performance-enhancing drugs (sleep aids and decongestants), work/rest cycles, biocybernetics, and decision support systems on performance. Task performance accuracy and latency will be event coded for correlation with other measures of voice stress and physiological functioning. Sessions will be videotaped to score non-verbal communications. Physiological recordings include spectral analysis of EEG, ECG, vagal tone, and EOG. Subjective measurements include SWAT, fatigue, POMS and specialized self-report scales. The system will be used primarily to evaluate the effects on performance of drugs, work/rest cycles, and biocybernetic concepts. Performance assessment algorithms will also be developed, including those used with small teams. This system provides a tool for integrating and synchronizing behavioral and psychophysiological measures in a complex decision-making environment.

  15. A compact eyetracked optical see-through head-mounted display

    NASA Astrophysics Data System (ADS)

    Hua, Hong; Gao, Chunyu

    2012-03-01

    An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical HMD does, while additionally tracking the gaze direction of the user. There is ample evidence that a fully-integrated ET-HMD system offers multi-fold benefits, not only to fundamental scientific research but also to emerging applications of such technology. For instance, eye-tracking capability in HMDs adds a very valuable tool and objective metric for scientists to quantitatively assess user interaction with 3D environments and investigate the effectiveness of various 3D visualization technologies for specific tasks including training, education, and augmented cognition tasks. In this paper, we present an innovative optical approach to the design of an optical see-through ET-HMD system based on freeform optical technology and an innovative optical scheme that uniquely combines the display optics with the eye imaging optics. A preliminary design of the described ET-HMD system will be presented.

  16. Design of a virtual reality based adaptive response technology for children with autism.

    PubMed

    Lahiri, Uttama; Bekele, Esubalew; Dohrmann, Elizabeth; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Children with autism spectrum disorder (ASD) demonstrate potent impairments in social communication skills including atypical viewing patterns during social interactions. Recently, several assistive technologies, particularly virtual reality (VR), have been investigated to address specific social deficits in this population. Some studies have coupled eye-gaze monitoring mechanisms to design intervention strategies. However, presently available systems are designed to primarily chain learning via aspects of one's performance only which affords restricted range of individualization. The presented work seeks to bridge this gap by developing a novel VR-based interactive system with Gaze-sensitive adaptive response technology that can seamlessly integrate VR-based tasks with eye-tracking techniques to intelligently facilitate engagement in tasks relevant to advancing social communication skills. Specifically, such a system is capable of objectively identifying and quantifying one's engagement level by measuring real-time viewing patterns, subtle changes in eye physiological responses, as well as performance metrics in order to adaptively respond in an individualized manner to foster improved social communication skills among the participants. The developed system was tested through a usability study with eight adolescents with ASD. The results indicate the potential of the system to promote improved social task performance along with socially-appropriate mechanisms during VR-based social conversation tasks.

  17. Design of a Virtual Reality Based Adaptive Response Technology for Children With Autism

    PubMed Central

    Lahiri, Uttama; Bekele, Esubalew; Dohrmann, Elizabeth; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Children with autism spectrum disorder (ASD) demonstrate potent impairments in social communication skills including atypical viewing patterns during social interactions. Recently, several assistive technologies, particularly virtual reality (VR), have been investigated to address specific social deficits in this population. Some studies have coupled eye-gaze monitoring mechanisms to design intervention strategies. However, presently available systems are designed to primarily chain learning via aspects of one’s performance only which affords restricted range of individualization. The presented work seeks to bridge this gap by developing a novel VR-based interactive system with Gaze-sensitive adaptive response technology that can seamlessly integrate VR-based tasks with eye-tracking techniques to intelligently facilitate engagement in tasks relevant to advancing social communication skills. Specifically, such a system is capable of objectively identifying and quantifying one’s engagement level by measuring real-time viewing patterns, subtle changes in eye physiological responses, as well as performance metrics in order to adaptively respond in an individualized manner to foster improved social communication skills among the participants. The developed system was tested through a usability study with eight adolescents with ASD. The results indicate the potential of the system to promote improved social task performance along with socially-appropriate mechanisms during VR-based social conversation tasks. PMID:23033333

  18. Quantification and visualization of coordination during non-cyclic upper extremity motion.

    PubMed

    Fineman, Richard A; Stirling, Leia A

    2017-10-03

    There are many design challenges in creating at-home tele-monitoring systems that enable quantification and visualization of complex biomechanical behavior. One such challenge is robustly quantifying joint coordination in a way that is intuitive and supports clinical decision-making. This work defines a new measure of coordination called the relative coordination metric (RCM) and its accompanying normalization schemes. RCM enables quantification of coordination during non-constrained discrete motions. Here RCM is applied to a grasping task. Fifteen healthy participants performed a reach, grasp, transport, and release task with a cup and a pen. The measured joint angles were then time-normalized and the RCM time-series were calculated between the shoulder-elbow, shoulder-wrist, and elbow-wrist. RCM was normalized using four differing criteria: the selected joint degree of freedom, angular velocity, angular magnitude, and range of motion. Percent time spent in specified RCM ranges was used as a composite metric and was evaluated for each trial. RCM was found to vary based on: (1) chosen normalization scheme, (2) the stage within the task, (3) the object grasped, and (4) the trajectory of the motion. The RCM addresses some of the limitations of current measures of coordination because it is applicable to discrete motions, does not rely on cyclic repetition, and uses velocity-based measures. Future work will explore clinically relevant differences in the RCM as it is expanded to evaluate different tasks and patient populations. Copyright © 2017. Published by Elsevier Ltd.
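The time-normalization step mentioned above (resampling each joint-angle series onto a common 0-100% of task duration) is commonly done with linear interpolation; the sketch below is a generic illustration, not the authors' code:

```python
import numpy as np

def time_normalize(signal, n_samples=101):
    """Resample a joint-angle time series to a fixed length (0-100% of task)."""
    src = np.linspace(0.0, 1.0, len(signal))   # original sample positions
    dst = np.linspace(0.0, 1.0, n_samples)     # common normalized timeline
    return np.interp(dst, src, signal)

# Hypothetical elbow angles (degrees) over one reach-grasp-release trial.
elbow = time_normalize([10.0, 30.0, 55.0, 40.0, 12.0])
```

Resampling every trial to the same length is what makes time series from trials of different durations directly comparable when computing joint-pair metrics.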

  19. Development and Applications of a Self-Contained, Non-Invasive EVA Joint Angle and Muscle Fatigue Sensor System

    NASA Technical Reports Server (NTRS)

    Ranniger, C. U.; Sorenson, E. A.; Akin, D. L.

    1995-01-01

    The University of Maryland Space Systems Laboratory, as a participant in NASA's INSTEP program, is developing a non-invasive, self-contained sensor system which can provide quantitative measurements of joint angles and muscle fatigue in the hand and forearm. The goal of this project is to develop a system with which hand/forearm motion and fatigue metrics can be determined in various terrestrial and zero-G work environments. A preliminary study of the prototype sensor systems and data reduction techniques for the fatigue measurement system are presented. The sensor systems evaluated include fiberoptics, used to measure joint angle; surface electrodes, which measure the electrical signals created in muscle as it contracts; microphones, which measure the noise made by contracting muscle; and accelerometers, which measure the lateral muscle acceleration during contraction. The prototype sensor systems were used to monitor joint motion of the metacarpophalangeal joint and muscle fatigue in flexor digitorum superficialis and flexor carpi ulnaris in subjects performing gripping tasks. Subjects were asked to sustain a 60-second constant-contraction (isometric) exercise and subsequently to perform a repetitive handgripping task to failure. Comparison of the electrical and mechanical signals of the muscles during the different tasks will be used to evaluate the applicability of muscle signal measurement techniques developed for isometric contraction tasks to fatigue prediction in quasi-dynamic exercises. Potential data reduction schemes are presented.

  20. The data quality analyzer: A quality control program for seismic data

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.

    2015-03-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. 
The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
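The "grade" aggregation the DQA performs can be pictured as a user-adjustable weighted mean of metric values. The metric names, values, and weights below are hypothetical placeholders, not the DQA's actual configuration; in the real system, per-channel values live in the PostgreSQL database and weights are adjusted through the Web interface:

```python
# Hypothetical per-channel metric scores on a 0-100 scale.
metrics = {"availability": 99.2, "timing_quality": 97.5,
           "gap_count": 88.0, "noise_model_deviation": 91.0}
# Hypothetical user-chosen contributions of each metric to the grade.
weights = {"availability": 0.4, "timing_quality": 0.3,
           "gap_count": 0.2, "noise_model_deviation": 0.1}

def station_grade(metrics, weights):
    """Aggregate metric values into one station grade as a weighted mean."""
    total = sum(weights[k] for k in metrics)
    return sum(metrics[k] * weights[k] for k in metrics) / total

grade = station_grade(metrics, weights)
```

Normalizing by the sum of weights lets users drop or re-weight metrics without the grade scale drifting away from 0-100.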

  1. Distance and direction, but not light cues, support response reversal learning.

    PubMed

    Wright, S L; Martin, G M; Thorpe, C M; Haley, K; Skinner, D M

    2018-03-05

    Across three experiments, we examined the cuing properties of metric (distance and direction) and nonmetric (lighting) cues in different tasks. In Experiment 1, rats were trained on a response problem in a T-maze, followed by four reversals. Rats that experienced a change in maze orientation (Direction group) or a change in the length of the start arm (Distance group) across reversals showed facilitation of reversal learning relative to a group that experienced changes in room lighting across reversals. In Experiment 2, rats learned a discrimination task more readily when distance or direction cues were used than when light cues were used as the discriminative stimuli. In Experiment 3, performance on a go/no-go task was equivalent using both direction and lighting cues. The successful use of both metric and nonmetric cues in the go/no-go task indicates that rats are sensitive to both types of cues and that the usefulness of different cues is dependent on the nature of the task.

  2. A relationship between eye movement patterns and performance in a precognitive tracking task

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Hartzell, E. J.

    1977-01-01

    Eye movements made by various subjects in the performance of a precognitive tracking task are studied. The tracking task presented by an antiaircraft artillery (AAA) simulator has an input forcing function represented by a deterministic aircraft fly-by. The performance of subjects is ranked by two metrics. Good, mediocre, and poor trackers are selected for analysis based on performance during the difficult segment of the tracking task and over replications. Using phase planes to characterize both the eye movement patterns and the displayed error signal, a simple metric is developed to study these patterns. Two characterizations of eye movement strategies are defined and quantified. Using these two types of eye strategies, two conclusions are obtained about good, mediocre, and poor trackers. First, the eye tracker who uses a fixed strategy will consistently perform better. Second, the best fixed strategy is defined as a Crosshair Fixator.

  3. Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011.

    PubMed

    Pyysalo, Sampo; Ohta, Tomoko; Rak, Rafal; Sullivan, Dan; Mao, Chunhong; Wang, Chunxia; Sobral, Bruno; Tsujii, Jun'ichi; Ananiadou, Sophia

    2012-06-26

    We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. 
In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we extend on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties.
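The F-scores quoted above are the usual balanced F1: the harmonic mean of precision and recall over extracted events. As a worked illustration (the counts here are invented, not taken from any participating system):

```python
def f_score(tp, fp, fn):
    """Balanced F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A system that correctly recovers 56 of 100 gold events while
# predicting 100 events total scores 56% F-score.
score = f_score(tp=56, fp=44, fn=44)  # 0.56
```

Because F1 is a harmonic mean, a system cannot trade recall for precision (or vice versa) without the score reflecting the weaker of the two.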

  4. Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011

    PubMed Central

    2012-01-01

    We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. 
In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we extend on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties. PMID:22759456

  5. Capacity value assessments of wind power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligan, Michael; Frew, Bethany; Ibanez, Eduardo

    This article describes some of the recent research into the capacity value of wind power. With the worldwide increase in wind power during the past several years, there is increasing interest and significance regarding its capacity value because this has a direct influence on the amount of other (nonwind) capacity that is needed. We build on previous reviews from IEEE and IEA Wind Task 25a and examine recent work that evaluates the impact of multiple-year data sets and the impact of interconnected systems on resource adequacy. We also provide examples that explore the use of alternative reliability metrics for wind capacity value calculations. We show how multiple-year data sets significantly increase the robustness of results compared to single-year assessments. Assumptions regarding the transmission interconnections play a significant role. To date, results regarding which reliability metric to use for probabilistic capacity valuation show little sensitivity to the metric.
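One common reliability-metric-based capacity value calculation is the effective load-carrying capability (ELCC): the extra constant load a system can serve after adding wind while keeping reliability unchanged. The sketch below uses a deliberately crude loss-of-load count on synthetic hourly data (all numbers hypothetical), not the probabilistic outage models used in the studies this article reviews:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 8760
load = 800 + 150 * rng.random(hours)   # MW, hypothetical hourly load
wind = 100 * rng.random(hours)         # MW, hypothetical hourly wind output
conventional = 920.0                   # MW of firm (nonwind) capacity

def lole(load, supply):
    """Toy loss-of-load count: hours where demand exceeds available supply."""
    return int(np.sum(load > supply))

base = lole(load, conventional)        # reliability without wind

# ELCC: largest constant load addition that, served with wind on the
# system, keeps the loss-of-load count at or below the no-wind baseline.
delta = 0.0
while lole(load + delta, conventional + wind) <= base:
    delta += 1.0
elcc = delta - 1.0
```

The ELCC comes out well below the wind fleet's nameplate (100 MW here) because wind output is not coincident with the highest-load hours, which is exactly why capacity value, not installed capacity, drives how much nonwind capacity is still needed.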

  6. Using cognitive task analysis to develop simulation-based training for medical tasks.

    PubMed

    Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette

    2013-10-01

    Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks: cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  7. Unsupervised quality estimation model for English to German translation and its application in extensive supervised evaluation.

    PubMed

    Han, Aaron L-F; Wong, Derek F; Chao, Lidia S; He, Liangye; Lu, Yi

    2014-01-01

    With the rapid development of machine translation (MT), MT evaluation has become very important for tracking whether an MT system is making progress. Conventional MT evaluation methods calculate the similarity between hypothesis translations offered by automatic translation systems and reference translations offered by professional translators. Existing evaluation metrics have several weaknesses. First, the limited sets of factors they consider lead to a language-bias problem: they perform well on some language pairs but poorly on others. Second, they tend to use either no linguistic features or too many; the former draws criticism from linguists, while the latter makes the models difficult to reproduce. Third, reference translations are expensive to produce and sometimes unavailable in practice. In this paper, the authors propose an unsupervised MT evaluation metric that uses a universal part-of-speech tagset without relying on reference translations. The authors also explore the performance of the designed metric on traditional supervised evaluation tasks. Both the supervised and unsupervised experiments show that the designed methods yield higher correlation scores with human judgments.

  8. Usability Evaluations of a Wearable Inertial Sensing System and Quality of Movement Metrics for Stroke Survivors by Care Professionals.

    PubMed

    Klaassen, Bart; van Beijnum, Bert-Jan F; Held, Jeremia P; Reenalda, Jasper; van Meulen, Fokke B; Veltink, Peter H; Hermens, Hermie J

    2017-01-01

    Inertial motion capture systems are used in many applications, such as measuring movement quality in stroke survivors. The absence of evidence for the clinical effectiveness and usability of these assistive technologies in rehabilitation has delayed the transition of research into clinical practice. Recently, a new inertial motion capture system was developed in a project called INTERACTION to objectively measure the quality of movement (QoM) in stroke survivors during daily-life activity. With INTERACTION, we are able to investigate what happens with patients after discharge from the hospital. The resulting QoM metrics, where a metric is defined as a measure of some property, are subsequently presented to care professionals. Metrics include, for example, reaching distance, walking speed, and hand distribution plots; the latter shows a density plot of the hand position in the transversal plane. The objective of this study is to investigate the opinions of care professionals on using these metrics obtained from INTERACTION and on its usability. The metrics were evaluated by means of a semi-structured interview, guided by a presentation of two patient reports. Each report included several QoM metric results (such as reaching distance, hand position density plots, and shoulder abduction) obtained during daily-life measurements and in the clinic, and was evaluated by care professionals not related to the project. The results were compared with those of care professionals involved in the INTERACTION project. Furthermore, two questionnaires (a 5-point Likert scale and an open questionnaire) were handed out to rate the usability of the metrics and to investigate whether the professionals would want such a system in their clinic. Eleven interviews were conducted in Switzerland and The Netherlands, each with a group of either two or three care professionals. Evaluation of the case reports (CRs) by participants and INTERACTION members showed a high correlation for both lower- and upper-extremity metrics. Participants were most in favor of hand distribution plots during daily-life activities. All participants mentioned that visualizing the QoM of stroke survivors over time during daily-life activities offers more possibilities than current clinical assessments. They also mentioned that these metrics could be important for self-evaluation by stroke survivors. The results showed that most participants were able to understand the metrics presented in the CRs. For a few metrics, it remained difficult to assess the underlying cause of the QoM; hence, a combination of metrics is needed to gain better insight into the patient. Furthermore, it remains important to report the patient's state (e.g., how the patient feels), surroundings (outside, inside the house, on a slippery surface), and the details of specific activities (whether the patient grasps a piece of paper or a heavy cooking pan, and whether dual tasks are involved). Altogether, it remains a question how to determine what the patient is doing and where the patient is doing his or her activities.

  9. Assessment of various supervised learning algorithms using different performance metrics

    NASA Astrophysics Data System (ADS)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of supervised machine learning algorithms based on their performance on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as Accuracy, F-Measure, G-Measure, Precision, Misclassification Rate, False Positive Rate, True Positive Rate, Specificity, and Prevalence.
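All of the metrics listed above can be derived from the four cells of a binary confusion matrix. A minimal sketch using the standard definitions (G-Measure is read here as the geometric mean of precision and recall, one common usage of the term):

```python
# Binary-classification metrics computed from confusion-matrix counts:
# tp = true positives, fp = false positives, tn = true negatives,
# fn = false negatives.
import math

def binary_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # true positive rate (sensitivity)
    return {
        "accuracy": (tp + tn) / total,
        "misclassification_rate": (fp + fn) / total,
        "precision": precision,
        "true_positive_rate": recall,
        "false_positive_rate": fp / (fp + tn),
        "specificity": tn / (tn + fp),
        "prevalence": (tp + fn) / total,
        "f_measure": 2 * precision * recall / (precision + recall),
        "g_measure": math.sqrt(precision * recall),
    }

m = binary_metrics(tp=40, fp=10, tn=45, fn=5)
print(m["accuracy"])    # 0.85
print(m["f_measure"])   # ≈ 0.842
```

Note that Misclassification Rate is simply 1 − Accuracy, and False Positive Rate is 1 − Specificity, so the nine metrics are not independent.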

  10. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  11. Facilitating Energy Savings through Enhanced Usability of Thermostats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, Alan; Aragon, Cecilia; Peffer, Therese

    2011-05-23

    Residential thermostats play a key role in controlling heating and cooling systems. Occupants often find the controls of programmable thermostats confusing, sometimes leading to higher heating consumption than when the buildings are controlled manually. A high degree of usability is vital to a programmable thermostat's effectiveness because, unlike a more efficient heating system, occupants must engage in specific actions after installation to obtain energy savings. We developed a procedure for measuring the usability of thermostats and tested this methodology with 31 subjects on five thermostats. The procedure requires first identifying representative tasks associated with the device and then testing the subjects' ability to accomplish those tasks. The procedure was able to demonstrate the subjects' widely varying ability to accomplish tasks and the influence of a device's usability on success rates. A metric based on the time to accomplish the tasks and the fraction of subjects actually completing the tasks captured the key aspects of each thermostat's usability. The procedure was recently adopted by the Energy Star Program for its thermostat specification. The approach appears suitable for quantifying usability of controls in other products, such as heat pump water heaters and commercial lighting.
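The abstract names the two ingredients of the usability metric (task completion time and completion fraction) but not the formula. One hypothetical way to combine them, purely for illustration; the function and the "completions per minute of effort" combination are assumptions, not the paper's actual metric:

```python
# Hypothetical usability score combining the two quantities named in the
# abstract: fraction of subjects completing the task, and time taken.
# This particular combination (completion rate divided by mean task time)
# is illustrative only; the paper's formula is not reproduced here.
def usability_score(times_min, completed):
    """times_min: task times (minutes) for subjects who finished;
    completed: one boolean per subject attempt."""
    frac = sum(completed) / len(completed)   # fraction completing the task
    mean_t = sum(times_min) / len(times_min) # mean time among finishers
    return frac / mean_t                      # higher = more finish, faster

# 3 of 4 subjects finished, taking 2-4 minutes each.
score = usability_score(times_min=[2.0, 3.0, 4.0],
                        completed=[True, True, True, False])
print(score)   # 0.25
```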

  12. Automatic intersection map generation task 10 report.

    DOT National Transportation Integrated Search

    2016-02-29

    This report describes the work conducted in Task 10 of the V2I Safety Applications Development Project. The work was performed by the University of Michigan Transportation Research Institute (UMTRI) under contract to the Crash Avoidance Metrics Partn...

  13. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, the toolbox for spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
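The three components are combined as a Euclidean distance from the ideal point (1, 1, 1). A sketch following the published definition, with α as the Pearson correlation, β as the ratio of coefficients of variation, and γ as the overlap of z-scored histograms; the bin count and shared histogram range are implementation assumptions:

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAEF = 1 - sqrt((a-1)^2 + (b-1)^2 + (g-1)^2); 1 is a perfect match."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    alpha = np.corrcoef(obs, sim)[0, 1]                       # correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # histogram overlap (intersection) of z-scored patterns, shared bins
    zo = (obs - obs.mean()) / obs.std()
    zs = (sim - sim.mean()) / sim.std()
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    gamma = np.minimum(ho, hs).sum() / ho.sum()
    return 1.0 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)
```

Because each component contributes its squared distance from 1, a simulation can only score well by matching the observed pattern's correlation structure, relative variability, and value distribution simultaneously; z-scoring before the histogram step is what makes the metric bias insensitive.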

  14. Mental workload and cognitive task automaticity: an evaluation of subjective and time estimation metrics.

    PubMed

    Liu, Y; Wickens, C D

    1994-11-01

    The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.

  15. The suppression of scale-free fMRI brain dynamics across three different sources of effort: aging, task novelty and task difficulty.

    PubMed

    Churchill, Nathan W; Spring, Robyn; Grady, Cheryl; Cimprich, Bernadine; Askren, Mary K; Reuter-Lorenz, Patricia A; Jung, Mi Sook; Peltier, Scott; Strother, Stephen C; Berman, Marc G

    2016-08-08

    There is growing evidence that fluctuations in brain activity may exhibit scale-free ("fractal") dynamics. Scale-free signals follow a spectral-power curve of the form P(f) ∝ f^(−β), where spectral power decreases in a power-law fashion with increasing frequency. In this study, we demonstrated that fractal scaling of BOLD fMRI signal is consistently suppressed for different sources of cognitive effort. Decreases in the Hurst exponent (H), which quantifies scale-free signal, were related to three different sources of cognitive effort/task engagement: 1) task difficulty, 2) task novelty, and 3) aging effects. These results were consistently observed across multiple datasets and task paradigms. We also demonstrated that estimates of H are robust across a range of time-window sizes. H was also compared to alternative metrics of BOLD variability (SDBOLD) and global connectivity (Gconn), with effort-related decreases in H producing similar decreases in SDBOLD and Gconn. These results indicate a potential global brain phenomenon that unites research from different fields and indicates that fractal scaling may be a highly sensitive metric for indexing cognitive effort/task engagement.
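The spectral exponent β in P(f) ∝ f^(−β) can be estimated by fitting a straight line to the periodogram in log-log space. A simplified sketch of that idea, not the authors' pipeline (which estimates H directly, with H and β linked through the signal's fractal class):

```python
import numpy as np

def spectral_slope(x, fs=1.0):
    """Estimate beta in P(f) ~ f**(-beta) via a log-log periodogram fit."""
    x = np.asarray(x, float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x))**2 / len(x)        # raw periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f, p = freqs[1:], psd[1:]                       # drop the DC bin
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)  # line in log-log space
    return -slope                                   # beta

# White noise has a flat spectrum (beta near 0); its cumulative sum
# ("random walk") is much steeper (larger beta).
rng = np.random.default_rng(1)
white = rng.standard_normal(2**14)
print(spectral_slope(white))
print(spectral_slope(np.cumsum(white)))
```

Suppressed fractal scaling, as reported here, corresponds to a shallower log-log slope, i.e. a spectrum closer to flat.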

  17. The Metrics of Spatial Distance Traversed During Mental Imagery

    ERIC Educational Resources Information Center

    Rinck, Mike; Denis, Michel

    2004-01-01

    The authors conducted 2 experiments to study the metrics of spatial distance in a mental imagery task. In both experiments, participants first memorized the layout of a building containing 10 rooms with 24 objects. Participants then received mental imagery instructions and imagined how they walked through the building from one room to another. The…

  18. Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?

    ERIC Educational Resources Information Center

    Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.

    2006-01-01

    The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…

  19. Metrical Encoding in Adults Who Do and Do Not Stutter

    ERIC Educational Resources Information Center

    Coalson, Geoffrey A.; Byrd, Courtney T.

    2015-01-01

    Purpose: The purpose of this study was to explore metrical aspects of phonological encoding (i.e., stress and syllable boundary assignment) in adults who do and do not stutter (AWS and AWNS, respectively). Method: Participants monitored nonwords for target sounds during silent phoneme monitoring tasks across two distinct experiments. For…

  20. The Warfighter Associate: Decision-Support and Metrics for Mission Command

    DTIC Science & Technology

    2013-01-01

    complex situations can be captured it makes sense to use software to provide this important adjunct to complex human cognitive problems. As a software...tasks that could distract the user from the important events occurring. An Associate System also observes the actions undertaken by a human operator...the Commander's Critical Information Requirements. It is important to note that the Warfighter Associate maintains a human-in-the-loop for decision

  1. Getting it right the first time: predicted performance guarantees from the analysis of emergent behavior in autonomous and semi-autonomous systems

    NASA Astrophysics Data System (ADS)

    Arkin, Ronald C.; Lyons, Damian; Shu, Jiang; Nirmal, Prem; Zafar, Munzir

    2012-06-01

    A crucially important aspect for mission-critical robotic operations is ensuring as best as possible that an autonomous system be able to complete its task. In a project for the Defense Threat Reduction Agency (DTRA) we are developing methods to provide such guidance, specifically for counter-Weapons of Mass Destruction (C-WMD) missions. In this paper, we describe the scenarios under consideration, the performance measures and metrics being developed, and an outline of the mechanisms for providing performance guarantees.

  2. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  3. Force-Sensing Enhanced Simulation Environment (ForSense) for laparoscopic surgery training and assessment.

    PubMed

    Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara

    2015-04-01

    Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Target detection cycle criteria when using the targeting task performance metric

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Jacobs, Eddie L.; Vollmerhausen, Richard H.

    2004-12-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed a new target acquisition metric to better predict the performance of modern electro-optical imagers. The TTP metric replaces the Johnson criteria. One problem with transitioning to the new model is that the difficulty of searching a terrain has traditionally been quantified by an "N50." The N50 is the number of Johnson criteria cycles needed for the observer to detect the target half the time, assuming that the observer is not time limited. In order to make use of this empirical database, a conversion must be found relating Johnson cycles for detection to TTP cycles for detection. This paper describes how that relationship is established. We have found that the relationship between Johnson and TTP cycles is 1:2.7 for the recognition and identification tasks.
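With the reported 1:2.7 ratio, converting an existing empirical Johnson N50 into the equivalent TTP cycle criterion is a direct scaling. A small sketch (the function name and example value are illustrative):

```python
# Convert an empirically derived Johnson-criteria N50 into the equivalent
# TTP-metric cycle criterion, using the 1:2.7 ratio reported here for
# recognition and identification tasks.
JOHNSON_TO_TTP = 2.7

def ttp_cycles(johnson_n50):
    return JOHNSON_TO_TTP * johnson_n50

print(ttp_cycles(4.0))   # a Johnson N50 of 4.0 cycles -> 10.8 TTP cycles
```

This preserves the empirical search-difficulty database: a terrain calibrated in Johnson cycles can be reused with the TTP model without rerunning the field trials.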

  5. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
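Micro- and macro-averaged F1, the primary metrics above, differ in how they pool over classes: micro-averaging sums true/false positive counts across all classes before computing F1, while macro-averaging computes F1 per class and averages the results. A self-contained sketch (the smoking-status labels are illustrative):

```python
# Micro- vs macro-averaged F1 for a single-label classification task.
from collections import Counter

def f1_scores(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted class gets a false positive
            fn[t] += 1   # true class gets a false negative

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro

y_true = ["current", "never", "never", "past", "unknown"]
y_pred = ["current", "never", "past", "past", "unknown"]
micro, macro = f1_scores(y_true, y_pred)
print(micro, macro)   # 0.8 and 5/6: macro weights the rare classes equally
```

For single-label tasks like this one, micro-F1 equals plain accuracy; macro-F1 is the stricter measure when class frequencies are skewed, as they are for smoking status.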

  6. Metrics for Operator Situation Awareness, Workload, and Performance in Automated Separation Assurance Systems

    NASA Technical Reports Server (NTRS)

    Strybel, Thomas Z.; Vu, Kim-Phuong L.; Battiste, Vernol; Dao, Arik-Quang; Dwyer, John P.; Landry, Steven; Johnson, Walter; Ho, Nhut

    2011-01-01

    A research consortium of scientists and engineers from California State University Long Beach (CSULB), San Jose State University Foundation (SJSUF), California State University Northridge (CSUN), Purdue University, and The Boeing Company was assembled to evaluate the impact of changes in roles and responsibilities and new automated technologies, being introduced in the Next Generation Air Transportation System (NextGen), on operator situation awareness (SA) and workload. To meet these goals, consortium members performed systems analyses of NextGen concepts and airspace scenarios, and concurrently evaluated SA, workload, and performance measures to assess their appropriateness for evaluations of NextGen concepts and tools. The following activities and accomplishments were supported by the NRA: a distributed simulation, metric development, systems analysis, part-task simulations, and large-scale simulations. As a result of this NRA, we have gained a greater understanding of situation awareness and its measurement, and have shared our knowledge with the scientific community. This network provides a mechanism for consortium members, colleagues, and students to pursue research on other topics in air traffic management and aviation, thus enabling them to make greater contributions to the field.

  7. Testing the Construct Validity of a Virtual Reality Hip Arthroscopy Simulator.

    PubMed

    Khanduja, Vikas; Lawrence, John E; Audenaert, Emmanuel

    2017-03-01

    To test the construct validity of the hip diagnostics module of a virtual reality hip arthroscopy simulator. Nineteen orthopaedic surgeons performed a simulated arthroscopic examination of a healthy hip joint using a 70° arthroscope in the supine position. Surgeons were categorized as either expert (those who had performed 250 hip arthroscopies or more) or novice (those who had performed fewer than this). Twenty-one specific targets were visualized within the central and peripheral compartments; 9 via the anterior portal, 9 via the anterolateral portal, and 3 via the posterolateral portal. This was immediately followed by a task testing basic probe examination of the joint in which a series of 8 targets were probed via the anterolateral portal. During the tasks, the surgeon's performance was evaluated by the simulator using a set of predefined metrics including task duration, number of soft tissue and bone collisions, and distance travelled by instruments. No repeat attempts at the tasks were permitted. Construct validity was then evaluated by comparing novice and expert group performance metrics over the 2 tasks using the Mann-Whitney test, with a P value of less than .05 considered significant. On the visualization task, the expert group outperformed the novice group on time taken (P = .0003), number of collisions with soft tissue (P = .001), number of collisions with bone (P = .002), and distance travelled by the arthroscope (P = .02). On the probe examination, the 2 groups differed only in the time taken to complete the task (P = .025) with no significant difference in other metrics. Increased experience in hip arthroscopy was reflected by significantly better performance on the virtual reality simulator across 2 tasks, supporting its construct validity. This study validates a virtual reality hip arthroscopy simulator and supports its potential for developing basic arthroscopic skills. Level III. Copyright © 2016 Arthroscopy Association of North America. All rights reserved.

  8. Investigating the Association of Eye Gaze Pattern and Diagnostic Error in Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Pinto, Frank M; Xu, Songhua

    2013-01-01

    The objective of this study was to investigate the association between eye-gaze patterns and the diagnostic accuracy of radiologists for the task of assessing the likelihood of malignancy of mammographic masses. Six radiologists (2 expert breast imagers and 4 Radiology residents of variable training) assessed the likelihood of malignancy of 40 biopsy-proven mammographic masses (20 malignant and 20 benign) on a computer monitor. Eye-gaze data were collected using a commercial remote eye-tracker. Upon reviewing each mass, the radiologists were also asked to provide their assessment regarding the probability of malignancy of the depicted mass as well as a rating regarding the perceived difficulty of the diagnostic task. The collected data were analyzed using established algorithms, and various quantitative metrics were extracted to characterize the recorded gaze patterns. The extracted metrics were correlated with the radiologists' diagnostic decisions and perceived complexity scores. Results showed that the visual gaze pattern of radiologists varies substantially, not only with their experience level but also among individuals. However, some eye-gaze metrics appear to correlate with diagnostic error and perceived complexity more consistently. These results suggest that although gaze patterns are generally associated with diagnostic error and the human-perceived difficulty of the diagnostic task, there are substantial individual differences that are not explained simply by the experience level of the individual performing the diagnostic task.

  9. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

    approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted

  10. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    PubMed

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
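The Lempel-Ziv complexity measure used in this record can be illustrated with a short sketch (not the authors' implementation): the LZ76 phrase count of a symbolized motion sequence, with a hypothetical expert/novice threshold.

```python
# Sketch (not the study's code): Lempel-Ziv (LZ76) complexity of a
# symbolic time series. The threshold below is invented for illustration;
# the study derived an optimal threshold from its training data.
def lz_complexity(s):
    """Number of distinct phrases in the LZ76 parsing of string s."""
    n = len(s)
    i, u, v, vmax, c = 0, 1, 1, 1, 1
    while u + v <= n:
        if s[i + v - 1] == s[u + v - 1]:
            v += 1
        else:
            vmax = max(v, vmax)
            i += 1
            if i == u:          # no earlier substring extends: new phrase
                c += 1
                u += vmax
                v, i, vmax = 1, 0, 1
            else:
                v = 1
    if v != 1:
        c += 1
    return c

regular = "01" * 16              # highly repetitive symbol sequence
irregular = "0110100101101001"   # less regular sequence
print(lz_complexity(regular), lz_complexity(irregular))

THRESHOLD = 4  # hypothetical cutoff separating expert from novice patterns
label = "expert" if lz_complexity(regular) < THRESHOLD else "novice"
print(label)
```

Repetitive (smooth, practiced) symbol sequences yield low complexity; irregular sequences yield higher values, which is the property the threshold classifier exploits.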

  11. Effects of low to moderate acute doses of pramipexole on impulsivity and cognition in healthy volunteers.

    PubMed

    Hamidovic, Ajna; Kang, Un Jung; de Wit, Harriet

    2008-02-01

    The neurotransmitter dopamine is integrally involved in the rewarding effects of drugs, and it has also been thought to mediate impulsive behaviors in animal models. Most of the studies of drug effects on impulsive behaviors in humans have involved drugs with complex actions on different transmitter systems and different receptor subtypes. The present study was designed to characterize the effect of single doses of pramipexole, a D2/D3 agonist, on measures of cognitive and impulsive behavior, as well as on mood in healthy volunteers. Healthy men and women (N = 10) received placebo and 2 doses of pramipexole, 0.25 and 0.50 mg, in a within-subject, double-blinded study. Outcome measures included changes in cognitive performance, assessed by the Automated Neuropsychological Assessment Metrics, several behavioral measures related to impulsive behavior, including the Balloon Analogue Risk Task, Delay Discounting Task, Go/No-Go Task, Card Perseveration Task, and subjective ratings of mood assessed by Addiction Research Center Inventory, Profile of Mood States, and Drug Effects Questionnaire. Pramipexole decreased positive ratings of mood (euphoria, intellectual efficiency, and energy) and increased both subjectively reported sedation and behavioral sedation indicated by impaired cognitive performance on several measures of the Automated Neuropsychological Assessment Metrics. Single low to medium doses of this drug did not produce a decrease in impulsive responding on behavioral measures included in this study. The sedative-like effects observed in this study may reflect presynaptic actions of the drug. Higher doses with postsynaptic actions may be needed to produce either behavioral or subjective stimulant-like effects.

  12. About Using the Metric System.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet contains a brief introduction to the use of the metric system. Topics covered include: (1) what is the metric system; (2) how to think metric; (3) some advantages of the metric system; (4) basics of the metric system; (5) how to measure length, area, volume, mass and temperature the metric way; (6) some simple calculations using…

  13. An Evaluation of Detect and Avoid (DAA) Displays for Unmanned Aircraft Systems: The Effect of Information Level and Display Location on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Fern, Lisa; Rorie, R. Conrad; Pack, Jessica S.; Shively, R. Jay; Draper, Mark H.

    2015-01-01

    A consortium of government, industry and academia is currently working to establish minimum operational performance standards for Detect and Avoid (DAA) and Control and Communications (C2) systems in order to enable broader integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). One subset of these performance standards will need to address the DAA display requirements that support an acceptable level of pilot performance. From a pilot's perspective, the DAA task is the maintenance of self-separation and collision avoidance from other aircraft, utilizing the available information and controls within the Ground Control Station (GCS), including the DAA display. The pilot-in-the-loop DAA task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear from other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two information levels (basic, advanced) were compared across two display locations (standalone, integrated), for a total of four displays. The authors propose an eight-stage pilot-DAA interaction timeline from which several pilot response time metrics can be extracted. These metrics were compared across the four display conditions. The results indicate that the advanced displays yielded faster overall response times than the basic displays; however, there were no significant differences between the standalone and integrated displays.
Implications of the findings on understanding pilot performance on the DAA task, the development of DAA display performance standards, as well as the need for future research are discussed.

  14. Microgravity Science and Applications. Program Tasks and Bibliography for FY 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    An annual report published by the Microgravity Science and Applications Division (MSAD) of NASA is presented. It represents a compilation of the Division's currently-funded ground, flight and Advanced Technology Development tasks. An overview and progress report for these tasks, including progress reports by principal investigators selected from the academic, industry and government communities, are provided. The document includes a listing of new bibliographic data provided by the principal investigators to reflect the dissemination of research data during FY 1993 via publications and presentations. The document also includes division research metrics and an index of the funded investigators. The document contains three sections and three appendices: Section 1 includes an introduction and metrics data, Section 2 is a compilation of the task reports in an order representative of its ground, flight or ATD status and the science discipline it represents, and Section 3 is the bibliography. The three appendices, in the order of presentation, are: Appendix A - a microgravity science acronym list, Appendix B - a list of guest investigators associated with a biotechnology task, and Appendix C - an index of the currently funded principal investigators.

  15. Product evaluation based in the association between intuition and tasks.

    PubMed

    Almeida e Silva, Caio Márcio; Okimoto, Maria Lúcia L R; Albertazzi, Deise; Calixto, Cyntia; Costa, Humberto

    2012-01-01

    This paper explores the importance of researching intuitiveness in product use. It examines the influence of intuitiveness for users who have already had visual experience with the product. Finally, it suggests the use of a table that relates the tasks performed while using a product, the features that support intuitive use, and the performance metric "task success".

  16. Status Report on Activities of the Systems Assessment Task Force, OECD-NEA Expert Group on Accident Tolerant Fuels for LWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bragg-Sitton, Shannon Michelle

    The Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) Nuclear Science Committee approved the formation of an Expert Group on Accident Tolerant Fuel (ATF) for LWRs (EGATFL) in 2014. Chaired by Kemal Pasamehmetoglu, INL Associate Laboratory Director for Nuclear Science and Technology, the mandate for the EGATFL defines work under three task forces: (1) Systems Assessment, (2) Cladding and Core Materials, and (3) Fuel Concepts. Scope for the Systems Assessment task force (TF1) includes definition of evaluation metrics for ATF, technology readiness level definition, definition of illustrative scenarios for ATF evaluation, and identification of fuel performance and system codes applicable to ATF evaluation. The Cladding and Core Materials (TF2) and Fuel Concepts (TF3) task forces will identify gaps and needs for modeling and experimental demonstration; define key properties of interest; identify the data necessary to perform concept evaluation under normal conditions and illustrative scenarios; identify available infrastructure (internationally) to support experimental needs; and make recommendations on priorities. Where possible, considering proprietary and other export restrictions (e.g., International Traffic in Arms Regulations), the Expert Group will facilitate the sharing of data and lessons learned across the international group membership. The Systems Assessment task force is chaired by Shannon Bragg-Sitton (Idaho National Laboratory [INL], U.S.), the Cladding Task Force is chaired by Marie Moatti (Electricite de France [EdF], France), and the Fuels Task Force is chaired by Masaki Kurata (Japan Atomic Energy Agency [JAEA], Japan). The original Expert Group mandate was established for June 2014 to June 2016. 
In April 2016 the Expert Group voted to extend the mandate one additional year to June 2017 in order to complete the task force deliverables; this request was subsequently approved by the Nuclear Science Committee. This report provides an update on the status of Systems Assessment Task Force activities.

  17. Reducing radiation dose to the female breast during CT coronary angiography: A simulation study comparing breast shielding, angular tube current modulation, reduced kV, and partial angle protocols using an unknown-location signal-detectability metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupcich, Franco; Gilat Schmidt, Taly; Badal, Andreu

    2013-08-15

    Purpose: The authors compared the performance of five protocols intended to reduce dose to the breast during computed tomography (CT) coronary angiography scans using a model observer unknown-location signal-detectability metric. Methods: The authors simulated CT images of an anthropomorphic female thorax phantom for a 120 kV reference protocol and five “dose reduction” protocols intended to reduce dose to the breast: 120 kV partial angle (posteriorly centered), 120 kV tube-current modulated (TCM), 120 kV with shielded breasts, 80 kV, and 80 kV partial angle (posteriorly centered). Two image quality tasks were investigated: the detection and localization of 4-mm, 3.25 mg/ml and 1-mm, 6.0 mg/ml iodine contrast signals randomly located in the heart region. For each protocol, the authors plotted the signal detectability, as quantified by the area under the exponentially transformed free response characteristic curve estimator (Â_FE), as well as noise and contrast-to-noise ratio (CNR) versus breast and lung dose. In addition, the authors quantified each protocol's dose performance as the percent difference in dose relative to the reference protocol achieved while maintaining equivalent Â_FE. Results: For the 4-mm signal-size task, the 80 kV full scan and 80 kV partial angle protocols decreased dose to the breast (80.5% and 85.3%, respectively) and lung (80.5% and 76.7%, respectively) with Â_FE = 0.96, but also resulted in an approximate three-fold increase in image noise. The 120 kV partial protocol reduced dose to the breast (17.6%) at the expense of increased lung dose (25.3%). The TCM algorithm decreased dose to the breast (6.0%) and lung (10.4%). Breast shielding increased breast dose (67.8%) and lung dose (103.4%). 
The 80 kV and 80 kV partial protocols demonstrated greater dose reductions for the 4-mm task than for the 1-mm task, and the shielded protocol showed a larger increase in dose for the 4-mm task than for the 1-mm task. In general, the CNR curves indicate a similar relative ranking of protocol performance as the corresponding Â_FE curves; however, the CNR metric overestimated the performance of the shielded protocol for both tasks, leading to corresponding underestimates in the relative dose increases compared to those obtained when using the Â_FE metric. Conclusions: The 80 kV and 80 kV partial angle protocols demonstrated the greatest reduction to breast and lung dose; however, the subsequent increase in image noise may be deemed clinically unacceptable. Tube output for these protocols can be adjusted to achieve a more desirable noise level with lesser breast dose savings. Breast shielding increased breast and lung dose when maintaining equivalent Â_FE. The results demonstrated that comparisons of dose performance depend on both the image quality metric and the specific task, and that CNR may not be a reliable metric of signal detectability.

  18. Timesharing performance as an indicator of pilot mental workload

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.

    1988-01-01

    The research was performed in two simultaneous phases, each intended to identify and manipulate factors related to operator mental workload. The first phase concerned evaluation of attentional deficits (workloads) in a timesharing task. Work in the second phase involved incorporating the results from these and other experiments into an expert system designed to provide workload metric selection advice to nonexperts in the field interested in operator workload. The results of the experiments conducted are summarized.

  19. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
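Two of the motion-based metrics named in this record, path length and average speed, can be computed from tracked tip positions as follows. This is a generic sketch, not EVA's implementation, and the sample points are invented.

```python
# Generic sketch (not the EVA system's code) of two motion-based metrics
# for psychomotor skill assessment: total path length and average speed
# of an instrument tip, from tracked 3-D positions.
import math

def path_length(positions):
    """Total 3-D distance travelled by the tip, in the positions' units."""
    return sum(math.dist(p, q) for p, q in zip(positions, positions[1:]))

def average_speed(positions, dt):
    """Mean tip speed, assuming a fixed sampling interval dt in seconds."""
    return path_length(positions) / (dt * (len(positions) - 1))

tip = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]  # invented sample positions (mm)
print(path_length(tip), average_speed(tip, 0.5))
```

Lower path length and steadier speed typically indicate more economical instrument handling, which is why such metrics separate experience levels.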

  20. Immersive training and mentoring for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Nistor, Vasile; Allen, Brian; Dutson, E.; Faloutsos, P.; Carman, G. P.

    2007-04-01

    We describe in this paper a training system for minimally invasive surgery (MIS) that creates an immersive training simulation by recording the pathways of the instruments from an expert surgeon while performing an actual training task. Instrument spatial pathway data is stored and later accessed at the training station in order to visualize the ergonomic experience of the expert surgeon and trainees. Our system is based on tracking the spatial position and orientation of the instruments on the console for both the expert surgeon and the trainee. The technology is the result of recent developments in miniaturized position sensors that can be integrated seamlessly into the MIS instruments without compromising functionality. In order to continuously monitor the positions of laparoscopic tool tips, DC magnetic tracking sensors are used. A hardware-software interface transforms the coordinate data points into instrument pathways, while an intuitive graphic user interface displays the instruments' spatial position and orientation for the mentor/trainee, and endoscopic video information. These data are recorded and saved in a database for subsequent immersive training and training performance analysis. We use two 6 DOF DC magnetic trackers with a sensor diameter of just 1.3 mm - small enough for insertion into 4 French catheters, embedded in the shaft of an endoscopic grasper and a needle driver. One sensor is located at the distal end of the shaft while the second sensor is located at the proximal end of the shaft. The placement of these sensors does not impede the functionality of the instrument. Since the sensors are located inside the shaft, there are no sealing issues between the valve of the trocar and the instrument. We devised a peg transfer training task in accordance with validated training procedures, and tested our system on its ability to differentiate between the expert surgeon and the novices, based on a set of performance metrics. 
These performance metrics (motion smoothness, total path length, and time to completion) are derived from the kinematics of the instrument. An affine combination of the above-mentioned metrics is provided to give a general score for the training performance. Clear differentiation between the expert surgeons and the novice trainees is visible in the test results. Strictly kinematics-based performance metrics can be used to evaluate the training progress of MIS trainees in the context of UCLA - LTS.

  1. The Effects of Sensor Performance as Modeled by Signal Detection Theory on the Performance of Reinforcement Learning in a Target Acquisition Task

    NASA Astrophysics Data System (ADS)

    Quirion, Nate

    Unmanned Aerial Systems (UASs) today are fulfilling more roles than ever before. There is a general push to have these systems feature more advanced autonomous capabilities in the near future. Achieving autonomous behavior requires some unique approaches to control and decision making. More advanced versions of these approaches are able to adapt their own behavior and examine their past experiences to increase their future mission performance. To achieve adaptive behavior and decision-making capabilities, this study used Reinforcement Learning (RL) algorithms. In this research, the effects of sensor performance, as modeled through Signal Detection Theory (SDT), on the ability of RL algorithms to accomplish a target localization task are examined. Three levels of sensor sensitivity are simulated and compared to the results of the same system using a perfect sensor. To accomplish the target localization task, a hierarchical architecture used two distinct agents. A simulated human operator is assumed to be a perfect decision maker, and is used in the system feedback. An evaluation of the system is performed using multiple metrics, including episodic reward curves and the time taken to locate all targets. Statistical analyses are employed to detect significant differences in the comparison of steady-state behavior of different systems.
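The sensor model from Signal Detection Theory can be sketched as follows, assuming the common equal-variance Gaussian formulation; the sensitivity values and criterion placement here are illustrative, not those used in the study.

```python
# Equal-variance Gaussian SDT sketch (illustrative parameters, not the
# study's): sensitivity d' and decision criterion c determine the
# simulated sensor's hit and false-alarm rates.
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hit_and_fa_rates(d_prime, criterion):
    p_hit = 1.0 - phi(criterion - d_prime)   # signal-present trials
    p_fa = 1.0 - phi(criterion)              # noise-only trials
    return p_hit, p_fa

for d in (1.0, 2.0, 3.0):                    # three sensitivity levels
    print(d, hit_and_fa_rates(d, d / 2.0))   # unbiased criterion at d'/2
```

A more sensitive sensor (larger d') raises the hit rate and lowers the false-alarm rate, which is exactly the knob the study varies when feeding detections to the RL agents.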

  2. Functional Task Test: 3. Skeletal Muscle Performance Adaptations to Space Flight

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Wickwire, P. J.; Buxton, R. E.; Bloomberg, J. J.; Ploutz-Snyder, L.

    2011-01-01

    The functional task test is a multi-disciplinary study investigating how space-flight-induced changes to physiological systems impact functional task performance. Impairment of neuromuscular function would be expected to negatively affect functional performance of crewmembers following exposure to microgravity. This presentation reports the results for muscle performance testing in crewmembers. Functional task performance will be presented in the abstract "Functional Task Test 1: sensory motor adaptations associated with postflight alterations in astronaut functional task performance." METHODS: Muscle performance measures were obtained in crewmembers before and after short-duration space flight aboard the Space Shuttle and long-duration International Space Station (ISS) missions. The battery of muscle performance tests included leg press and bench press measures of isometric force, isotonic power and total work. Knee extension was used for the measurement of central activation and maximal isometric force. Upper and lower body force steadiness control were measured on the bench press and knee extension machine, respectively. Tests were implemented 60 and 30 days before launch, on landing day (Shuttle crew only), and 6, 10 and 30 days after landing. Seven Space Shuttle crew and four ISS crew have completed the muscle performance testing to date. RESULTS: Preliminary results for Space Shuttle crew reveal significant reductions in the leg press performance metrics of maximal isometric force, power and total work on R+0 (p<0.05). Bench press total work was also significantly impaired, although maximal isometric force and power were not significantly affected. No changes were noted for measurements of central activation or force steadiness. Results for ISS crew were not analyzed due to the current small sample size. 
DISCUSSION: Significant reductions in lower body muscle performance metrics were observed in returning Shuttle crew and these adaptations are likely contributors to impaired functional tasks that are ambulatory in nature (See abstract Functional Task Test: 1). Interestingly, no significant changes in central activation capacity were detected. Therefore, impairments in muscle function in response to short-duration space flight are likely myocellular rather than neuromotor in nature.

  3. Metrics to assess ecological condition, change, and impacts in sandy beach ecosystems.

    PubMed

    Schlacher, Thomas A; Schoeman, David S; Jones, Alan R; Dugan, Jenifer E; Hubbard, David M; Defeo, Omar; Peterson, Charles H; Weston, Michael A; Maslo, Brooke; Olds, Andrew D; Scapini, Felicita; Nel, Ronel; Harris, Linda R; Lucrezi, Serena; Lastra, Mariano; Huijbers, Chantal M; Connolly, Rod M

    2014-11-01

    Complexity is increasingly the hallmark in environmental management practices of sandy shorelines. This arises primarily from meeting growing public demands (e.g., real estate, recreation) whilst reconciling economic demands with expectations of coastal users who have modern conservation ethics. Ideally, shoreline management is underpinned by empirical data, but selecting ecologically-meaningful metrics to accurately measure the condition of systems, and the ecological effects of human activities, is a complex task. Here we construct a framework for metric selection, considering six categories of issues that authorities commonly address: erosion; habitat loss; recreation; fishing; pollution (litter and chemical contaminants); and wildlife conservation. Possible metrics were scored in terms of their ability to reflect environmental change, and against criteria that are widely used for judging the performance of ecological indicators (i.e., sensitivity, practicability, costs, and public appeal). From this analysis, four types of broadly applicable metrics that also performed very well against the indicator criteria emerged: 1.) traits of bird populations and assemblages (e.g., abundance, diversity, distributions, habitat use); 2.) breeding/reproductive performance sensu lato (especially relevant for birds and turtles nesting on beaches and in dunes, but equally applicable to invertebrates and plants); 3.) population parameters and distributions of vertebrates associated primarily with dunes and the supralittoral beach zone (traditionally focused on birds and turtles, but expandable to mammals); 4.) compound measurements of the abundance/cover/biomass of biota (plants, invertebrates, vertebrates) at both the population and assemblage level. Local constraints (i.e., the absence of birds in highly degraded urban settings or lack of dunes on bluff-backed beaches) and particular issues may require alternatives. 
Metrics - if selected and applied correctly - provide empirical evidence of environmental condition and change, but often do not reflect deeper environmental values per se. Yet, values remain poorly articulated for many beach systems; this calls for a comprehensive identification of environmental values and the development of targeted programs to conserve these values on sandy shorelines globally. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. A biologically plausible computational model for auditory object recognition.

    PubMed

    Larson, Eric; Billimoria, Cyrus P; Sen, Kamal

    2009-01-01

    Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, the spike similarity metrics can be used to classify the stimuli into the groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
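A toy version of the nearest-prototype scheme the abstract describes might look like the following: spike trains smoothed with an exponential kernel (a van Rossum-style distance) and a test train assigned to the closest prototype. The spike times, kernel width, and labels are all invented for illustration; this is not the authors' model.

```python
# Toy sketch (invented data, not the authors' circuit model): exponential
# smoothing of spike trains plus Euclidean distance gives a van Rossum-
# style spike distance; classification is nearest-prototype.
import math

def smooth(spikes, t_max, dt=1.0, tau=5.0):
    """Exponentially filtered trace of a spike train, sampled every dt."""
    n = int(t_max / dt)
    return [sum(math.exp(-(i * dt - s) / tau) for s in spikes if s <= i * dt)
            for i in range(n)]

def distance(a, b, t_max):
    """Euclidean distance between the smoothed traces of two spike trains."""
    xa, xb = smooth(a, t_max), smooth(b, t_max)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(xa, xb)))

prototypes = {"song_A": [5, 20, 35], "song_B": [10, 12, 40]}  # spike times
test_train = [6, 21, 34]  # a jittered version of song_A's response
label = min(prototypes, key=lambda k: distance(prototypes[k], test_train, 50))
print(label)
```

The kernel width tau sets the timescale on which spike-timing differences matter, the same role the distance metric's time constant plays in the discrimination literature.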

  5. Hypoxic Hypoxia at Moderate Altitudes: State of the Science

    DTIC Science & Technology

    2011-05-01

    neuropsychological metrics (surrogate investigational end points) with actual flight task metrics (desired end points of interest) under moderate hypoxic... conditions, (2) determine efficacy of potential neuropsychological performance-enhancing agents (e.g. tyrosine supplementation) for both acute and chronic... to air hunger; may impact training fidelity Banderet et al. (1985) 4200 and 4700 m H 27 Tyrosine enhanced performance and reduced subjective

  6. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
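One simple member of the blind-metric family discussed above is the variance of the image Laplacian, which rewards sharp edges. The sketch below is a generic, stdlib-only illustration of that idea on synthetic data; it is not necessarily one of the specific metrics the authors collected.

```python
# Generic illustration of a no-reference ("blind") sharpness metric:
# variance of the Laplacian. Images are nested lists of gray levels;
# the two synthetic "images" below are invented examples.
def laplacian_variance(img):
    """Variance of the 4-neighbor Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0] * 4 + [255] * 4 for _ in range(8)]        # hard vertical edge
blurry = [[0, 0, 64, 96, 159, 191, 255, 255]] * 8      # smoothed edge
print(laplacian_variance(sharp), laplacian_variance(blurry))
```

Such a score can rank frames for "lucky imaging" or serve as a stopping criterion for iterative deconvolution, the use cases the record mentions.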

  7. Spatial resolution characterization of differential phase contrast CT systems via modulation transfer function (MTF) measurements

    NASA Astrophysics Data System (ADS)

    Li, Ke; Zambelli, Joseph; Bevins, Nicholas; Ge, Yongshuai; Chen, Guang-Hong

    2013-06-01

    By adding a Talbot-Lau interferometer to a conventional x-ray absorption computed tomography (CT) imaging system, both differential phase contrast (DPC) signal and absorption contrast signal can be simultaneously measured from the same set of CT measurements. The imaging performance of such multi-contrast x-ray CT imaging systems can be characterized with standard metrics such as noise variance, noise power spectrum, contrast-to-noise ratio, modulation transfer function (MTF), and task-based detectability index. Among these metrics, the measurement of the MTF can be challenging in DPC-CT systems due to several confounding factors such as phase wrapping and the difficulty of using fine wires as probes. To address these technical challenges, this paper discusses a viable and reliable method to experimentally measure the MTF of DPC-CT. It has been found that the spatial resolution of DPC-CT is degraded, when compared to that of the corresponding absorption CT, due to the presence of a source grating G0 in the Talbot-Lau interferometer. An effective MTF was introduced and experimentally estimated to describe the impact of the Talbot-Lau interferometer on the system MTF.
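In general, the MTF discussed above is the normalized Fourier magnitude of the system's line spread function (LSF). The following is a generic sketch of that computation with a synthetic Gaussian LSF and a pure-stdlib DFT; it is not the authors' DPC-CT measurement procedure.

```python
# Generic MTF sketch (synthetic Gaussian LSF, not the paper's data):
# MTF(f) = |FT of LSF|(f), normalized so that MTF(0) = 1.
import cmath
import math

def mtf(lsf):
    """Return the first n//2 MTF samples of a sampled LSF via a direct DFT."""
    n = len(lsf)
    spectrum = [abs(sum(lsf[k] * cmath.exp(-2j * math.pi * f * k / n)
                        for k in range(n))) for f in range(n // 2)]
    return [s / spectrum[0] for s in spectrum]  # normalize to MTF(0) = 1

lsf = [math.exp(-((x - 32) / 4.0) ** 2) for x in range(64)]  # synthetic LSF
curve = mtf(lsf)
print(curve[0], curve[1] > curve[5] > curve[10])
```

A broader LSF (worse spatial resolution) produces an MTF curve that falls off faster with frequency, which is how the source grating's blurring effect shows up in the effective MTF the authors introduce.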

  8. A SURVEY OF ASTRONOMICAL RESEARCH: A BASELINE FOR ASTRONOMICAL DEVELOPMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ribeiro, V. A. R. M.; Russo, P.; Cárdenas-Avendaño, A., E-mail: vribeiro@ast.uct.ac.za, E-mail: russo@strw.leidenuniv.nl

    Measuring scientific development is a difficult task. Different metrics have been put forward to evaluate scientific development; in this paper we explore a metric that uses the number of peer-reviewed, and when available non-peer-reviewed, research articles as an indicator of development in the field of astronomy. We analyzed the available publication record, using the Smithsonian Astrophysical Observatory/NASA Astrophysics Database System, by country affiliation in the time span between 1950 and 2011 for countries with a gross national income of less than 14,365 USD in 2010. This represents 149 countries. We propose that this metric identifies countries in "astronomical development" with a culture of research publishing. We also propose that for a country to develop in astronomy, it should invest in outside expert visits, send its staff abroad to study, and establish a culture of scientific publishing. Furthermore, we propose that this paper may be used as a baseline to measure the success of major international projects, such as the International Year of Astronomy 2009.

  9. Building an Evaluation Scale using Item Response Theory.

    PubMed

    Lalor, John P; Wu, Hao; Yu, Hong

    2016-11-01

    Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems against the performance of a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.

  10. Building an Evaluation Scale using Item Response Theory

    PubMed Central

    Lalor, John P.; Wu, Hao; Yu, Hong

    2016-01-01

    Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regard to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems against the performance of a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern. PMID:28004039
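    The item characteristics IRT describes can be made concrete with the two-parameter logistic (2PL) model, in which the probability of a correct response depends on the respondent's ability θ together with the item's difficulty b and discrimination a. A minimal sketch with invented item parameters:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT item characteristic curve: probability that a respondent of
    ability theta answers an item of difficulty b and discrimination a
    correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5                                  # hypothetical ability estimate
easy_item = p_correct(theta, a=1.0, b=-1.0)  # easy, weakly discriminating
hard_item = p_correct(theta, a=2.0, b=1.5)   # hard, strongly discriminating
# the same ability yields very different success probabilities per item,
# which is exactly what a raw accuracy average cannot see
assert easy_item > 0.8 and hard_item < 0.2
```

    This is why a high accuracy score need not imply a high IRT ability estimate: accuracy weights all items equally, while the fitted model weights them by difficulty and discrimination.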

  11. The Metric System--An Overview.

    ERIC Educational Resources Information Center

    Hovey, Larry; Hovey, Kathi

    1983-01-01

    Sections look at: (1) Historical Perspective; (2) Naming the New System; (3) The Metric Units; (4) Measuring Larger and Smaller Amounts; (5) Advantage of Using the Metric System; (6) Metric Symbols; (7) Conversion from Metric to Customary System; (8) General Hints for Helping Children Understand; and (9) Current Status of Metric Conversion. (MP)

  12. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Metric system... COMPETITION AND ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 611.002-70 Metric system implementation. (a... to metric policy to adopt the metric system as the preferred system of weights and measurements for...

  13. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Metric system... COMPETITION AND ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 611.002-70 Metric system implementation. (a... to metric policy to adopt the metric system as the preferred system of weights and measurements for...

  14. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Metric system... COMPETITION AND ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 611.002-70 Metric system implementation. (a... to metric policy to adopt the metric system as the preferred system of weights and measurements for...

  15. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Metric system... COMPETITION AND ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 611.002-70 Metric system implementation. (a... to metric policy to adopt the metric system as the preferred system of weights and measurements for...

  16. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    PubMed

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast, high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). 
    Two spatial locations were considered for the analysis: a posterior region of interest (ROI) located within the noise-streak region and an anterior ROI located farther from the noise-streak region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component, as was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar for the two LSC methods studied in this work. A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. 
© 2018 American Association of Physicists in Medicine.
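    Because d' has no closed form in the filter parameters, the optimization reduces to measuring d' over a densely sampled parameter grid and reading off the maximum. A schematic sketch (the smooth surrogate below merely stands in for an experimentally measured d' map; its shape and optimum are invented):

```python
import numpy as np

def measured_dprime(p1, p2):
    # hypothetical surrogate with a single interior optimum; in practice
    # each grid point would be an experimentally measured detectability index
    return 5.0 * np.exp(-((p1 - 0.3) ** 2 + (p2 - 0.7) ** 2) / 0.05)

p1_grid = np.linspace(0.0, 1.0, 101)
p2_grid = np.linspace(0.0, 1.0, 101)
dmap = measured_dprime(p1_grid[:, None], p2_grid[None, :])

# optimal operating point = argmax of the measured d' map
i, j = np.unravel_index(np.argmax(dmap), dmap.shape)
assert abs(p1_grid[i] - 0.3) < 1e-9 and abs(p2_grid[j] - 0.7) < 1e-9
```

    In the local framework, one such map is measured (and maximized) per imaging task and per spatial location, which is why the optimal parameters can differ between the posterior and anterior ROIs.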

  17. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    PubMed

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

    To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This, however, is a common misconception, as has been shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and quantitative network metrics, this agent-based modelling paper shows how the driver remains an integral part of the driving system, highlighting the need for designers to ensure drivers are provided with the tools necessary to remain actively in the loop despite being given increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.
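    The kind of quantitative network metric used in such analyses can be illustrated with degree centrality on a toy task network (the agents and links below are invented for illustration, not taken from the paper's Cruise Assist analysis):

```python
from collections import Counter

# invented driver-initiated automation network: edges are command or
# information links between agents/subsystems
edges = [
    ("driver", "cruise_assist"), ("driver", "steering"),
    ("driver", "hmi"), ("hmi", "driver"),
    ("cruise_assist", "throttle"), ("cruise_assist", "brakes"),
]
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

nodes = {n for e in edges for n in e}
# degree centrality: a node's degree as a fraction of the other nodes
centrality = {n: deg[n] / (len(nodes) - 1) for n in nodes}
# the driver remains the most connected agent in this toy network
assert max(centrality, key=centrality.get) == "driver"
```

    A high driver centrality in the task network is one quantitative way of expressing "the driver remains an integral part of the driving system".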

  18. Machine Learning Methods for Production Cases Analysis

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
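    The standard statistical metrics mentioned are computed directly from a confusion matrix. A minimal sketch with invented labels (1 = hazard event, 0 = normal):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall and accuracy for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return precision, recall, accuracy

# toy predictions for eight production events
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
p, r, a = classification_metrics(y_true, y_pred)
assert (p, r, a) == (0.75, 0.75, 0.75)
```

    For early-stage hazard identification, recall is usually the metric to watch: a missed hazard (false negative) is costlier than a false alarm.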

  19. Methods for Assessment of Memory Reactivation.

    PubMed

    Liu, Shizhao; Grosmark, Andres D; Chen, Zhe

    2018-04-13

    It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing memory reactivation. To date, several statistical methods have been established for assessing memory reactivation based on bursts of ensemble neural spike activity during offline states. Using population-decoding methods, we propose a new statistical metric, the weighted distance correlation, to assess hippocampal memory reactivation (i.e., spatial memory replay) during quiet wakefulness and slow-wave sleep. The new metric can be combined with an unsupervised population decoding analysis, which is invariant to latent state labeling and allows us to detect statistical dependency beyond linearity in memory traces. We validate the new metric using two rat hippocampal recordings in spatial navigation tasks. Our proposed analysis framework may have a broader impact on assessing memory reactivations in other brain regions under different behavioral tasks.
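    The ability to "detect statistical dependency beyond linearity" comes from the distance correlation itself, which is built from doubly centered pairwise distance matrices. A textbook sketch of the unweighted sample statistic (not the paper's weighted variant):

```python
import numpy as np

def _centered(v):
    d = np.abs(v[:, None] - v[None, :])          # pairwise distances
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation: zero (asymptotically) iff independent,
    so it detects nonlinear dependence that Pearson correlation misses."""
    a = _centered(np.asarray(x, dtype=float))
    b = _centered(np.asarray(y, dtype=float))
    dcov2 = (a * b).mean()
    return np.sqrt(dcov2 / np.sqrt((a * a).mean() * (b * b).mean()))

x = np.linspace(-1.0, 1.0, 200)
y = x ** 2                                # purely nonlinear relationship
assert abs(np.corrcoef(x, y)[0, 1]) < 1e-6   # Pearson misses it
assert distance_correlation(x, y) > 0.3      # distance correlation does not
```

    The weighted variant proposed in the paper modifies this construction; the key property, sensitivity to dependence beyond linearity, is already visible here.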

  20. Mayo clinic NLP system for patient smoking status identification.

    PubMed

    Savova, Guergana K; Ogren, Philip V; Duffy, Patrick H; Buntrock, James D; Chute, Christopher G

    2008-01-01

    This article describes our system entry for the 2006 I2B2 contest "Challenges in Natural Language Processing for Clinical Data" for the task of identifying the smoking status of patients. Our system makes the simplifying assumption that patient-level smoking status determination can be achieved by accurately classifying individual sentences from a patient's record. We created our system with reusable text analysis components built on the Unstructured Information Management Architecture and Weka. This reuse of code minimized the development effort related specifically to our smoking status classifier. We report precision, recall, F-score, and 95% exact confidence intervals for each metric. Recasting the classification task at the sentence level and reusing code from other text analysis projects allowed us to quickly build a classification system that performs with a system F-score of 92.64 based on held-out data tests and of 85.57 on the formal evaluation data. Our general medical natural language engine is easily adaptable to a real-world medical informatics application. Some of the limitations as applied to the use-case are negation detection and temporal resolution.

  1. A New Distance Metric for Unsupervised Learning of Categorical Data.

    PubMed

    Jia, Hong; Cheung, Yiu-Ming; Liu, Jiming

    2016-05-01

    Distance metric is the basis of many learning algorithms, and its effectiveness usually has a significant influence on the learning results. In general, measuring distance for numerical data is a tractable task, but it could be a nontrivial problem for categorical data sets. This paper, therefore, presents a new distance metric for categorical data based on the characteristics of categorical values. In particular, the distance between two values from one attribute measured by this metric is determined by both the frequency probabilities of these two values and the values of other attributes that have high interdependence with the calculated one. Dynamic attribute weight is further designed to adjust the contribution of each attribute-distance to the distance between the whole data objects. Promising experimental results on different real data sets have shown the effectiveness of the proposed distance metric.
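    The paper's central idea, that the distance between two categorical values should reflect how the values co-occur with interdependent attributes, can be sketched with a simplified stand-in: the total variation distance between the conditional distributions of a co-occurring attribute (this is not the paper's exact formulation, which also weights attributes dynamically):

```python
from collections import Counter

def conditional_dist(pairs, value):
    """P(B = b | A = value), estimated from (a, b) observation pairs."""
    matches = [b for a, b in pairs if a == value]
    counts = Counter(matches)
    return {b: c / len(matches) for b, c in counts.items()}

def categorical_distance(pairs, v1, v2):
    """Total variation distance between the conditional distributions of a
    co-occurring attribute -- a simplified frequency-based stand-in."""
    d1, d2 = conditional_dist(pairs, v1), conditional_dist(pairs, v2)
    support = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(b, 0.0) - d2.get(b, 0.0)) for b in support)

# invented data: attribute A in {red, crimson, blue}, attribute B co-occurs
pairs = ([("red", "warm")] * 8 + [("red", "cool")] * 2 +
         [("crimson", "warm")] * 7 + [("crimson", "cool")] * 3 +
         [("blue", "warm")] * 1 + [("blue", "cool")] * 9)
# "red" is closer to "crimson" than to "blue" under this distance
assert (categorical_distance(pairs, "red", "crimson") <
        categorical_distance(pairs, "red", "blue"))
```

    Unlike a naive 0/1 mismatch distance, this makes some pairs of distinct categorical values closer than others, which is what the learning algorithms downstream exploit.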

  2. Numerical aerodynamic simulation facility. Preliminary study extension

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize and describe the function description of the custom hardware; (2) to delineate trade off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate systems performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications to include a user level high level programming language, a correspondence between the programming language and instruction set and outline the operation system requirements.

  3. Comparing masked target transform volume (MTTV) clutter metric to human observer evaluation of visual clutter

    NASA Astrophysics Data System (ADS)

    Camp, H. A.; Moyer, Steven; Moore, Richard K.

    2010-04-01

    The Night Vision and Electronic Sensors Directorate's (NVESD) current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.
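    Agreement between a computed clutter metric and subjective rankings is commonly quantified with a rank correlation such as Spearman's ρ. A minimal sketch with invented scores (the paper does not state which agreement statistic it used):

```python
def spearman_rho(a, b):
    """Spearman rank correlation for two score lists without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical MTTV scores and SME consensus ranks for six scenes
mttv = [0.12, 0.45, 0.30, 0.80, 0.55, 0.20]
sme  = [1,    4,    3,    6,    5,    2]
rho = spearman_rho(mttv, sme)
assert rho == 1.0   # identical orderings -> perfect rank agreement
```

    Values of ρ near 1 would support the claim that the metric "ranked the clutter in a similar manner to the SME evaluation".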

  4. Strategy quantification using body worn inertial sensors in a reactive agility task.

    PubMed

    Eke, Chika U; Cain, Stephen M; Stirling, Leia A

    2017-11-07

    Agility performance is often evaluated using time-based metrics, which provide little information about which factors aid or limit success. The objective of this study was to better understand agility strategy by identifying biomechanical metrics that were sensitive to performance speed, which were calculated with data from an array of body-worn inertial sensors. Five metrics were defined (normalized number of foot contacts, stride length variance, arm swing variance, mean normalized stride frequency, and number of body rotations) that corresponded to agility terms defined by experts working in athletic, clinical, and military environments. Eighteen participants donned 13 sensors to complete a reactive agility task, which involved navigating a set of cones in response to a vocal cue. Participants were grouped into fast, medium, and slow performance based on their completion time. Participants in the fast group had the smallest number of foot contacts (normalizing by height), highest stride length variance (normalizing by height), highest forearm angular velocity variance, and highest stride frequency (normalizing by height). The number of body rotations was not sensitive to speed and may have been determined by hand and foot dominance while completing the agility task. The results of this study have the potential to inform the development of a composite agility score constructed from the list of significant metrics. By quantifying the agility terms previously defined by expert evaluators through an agility score, this study can assist in strategy development for training and rehabilitation across athletic, clinical, and military domains. Copyright © 2017 Elsevier Ltd. All rights reserved.
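    Two of these metrics can be sketched directly from foot-contact timestamps such as those detected from body-worn inertial sensors (invented data; the paper's exact normalizations may differ):

```python
def stride_metrics(contact_times_s, height_m):
    """Height-normalized foot-contact count and mean stride frequency
    from a list of foot-contact timestamps (seconds)."""
    strides = [b - a for a, b in zip(contact_times_s, contact_times_s[1:])]
    mean_stride_s = sum(strides) / len(strides)
    return {
        "contacts_per_height": len(contact_times_s) / height_m,
        "mean_stride_frequency_hz": 1.0 / mean_stride_s,
    }

# hypothetical contacts every 0.5 s for a 1.8 m tall participant
m = stride_metrics([0.0, 0.5, 1.0, 1.5, 2.0], height_m=1.8)
assert abs(m["mean_stride_frequency_hz"] - 2.0) < 1e-12
assert abs(m["contacts_per_height"] - 5 / 1.8) < 1e-12
```

    Under the study's findings, faster performers would show fewer height-normalized contacts and a higher stride frequency over the same course.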

  5. Correlative feature analysis on FFDM

    PubMed Central

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-01-01

    Identifying the corresponding images of a lesion in different views is an essential step in improving the diagnostic ability of both radiologists and computer-aided diagnosis (CAD) systems. Because of the nonrigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this pilot study, we present a computerized framework that differentiates between corresponding images of the same lesion in different views and noncorresponding images, i.e., images of different lesions. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, is applied to extract mass lesions from the surrounding parenchyma. Then various lesion features are automatically extracted from each of the two views of each lesion to quantify the characteristics of density, size, texture and the neighborhood of the lesion, as well as its distance to the nipple. A two-step scheme is employed to estimate the probability that the two lesion images from different mammographic views are of the same physical lesion. In the first step, a correspondence metric for each pairwise feature is estimated by a Bayesian artificial neural network (BANN). Then, these pairwise correspondence metrics are combined using another BANN to yield an overall probability of correspondence. Receiver operating characteristic (ROC) analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing corresponding pairs from noncorresponding pairs. Using an FFDM database with 123 corresponding image pairs and 82 noncorresponding pairs, the distance feature yielded an area under the ROC curve (AUC) of 0.81±0.02 with leave-one-out (by physical lesion) evaluation, and the feature metric subset, which included distance, gradient texture, and ROI-based correlation, yielded an AUC of 0.87±0.02. 
    The improvement from using multiple feature metrics was statistically significant compared to single-feature performance. PMID:19175108
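    The reported AUC has a simple rank interpretation: it is the probability that a randomly chosen corresponding pair receives a higher correspondence score than a randomly chosen noncorresponding pair. A minimal sketch with invented classifier outputs:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) score pairs ranked
    correctly, with ties counting half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical overall correspondence probabilities from the second BANN
corresponding    = [0.9, 0.8, 0.75, 0.6, 0.4]
noncorresponding = [0.7, 0.5, 0.3, 0.2]
assert abs(auc(corresponding, noncorresponding) - 0.85) < 1e-12
```

    An AUC of 0.87, as reported for the combined feature subset, means the framework orders roughly 87% of such corresponding/noncorresponding pairs correctly.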

  6. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  7. Pitch ranking, electrode discrimination, and physiological spread of excitation using current steering in cochlear implants

    PubMed Central

    Goehring, Jenny L.; Neff, Donna L.; Baudhuin, Jacquelyn L.; Hughes, Michelle L.

    2014-01-01

    The first objective of this study was to determine whether adaptive pitch-ranking and electrode-discrimination tasks with cochlear-implant (CI) recipients produce similar results for perceiving intermediate “virtual-channel” pitch percepts using current steering. Previous studies have not examined both behavioral tasks in the same subjects with current steering. A second objective was to determine whether a physiological metric of spatial separation using the electrically evoked compound action potential spread-of-excitation (ECAP SOE) function could predict performance in the behavioral tasks. The metric was the separation index (Σ), defined as the difference in normalized amplitudes between two adjacent ECAP SOE functions, summed across all masker electrodes. Eleven CII or 90 K Advanced Bionics (Valencia, CA) recipients were tested using pairs of electrodes from the basal, middle, and apical portions of the electrode array. The behavioral results, expressed as d′, showed no significant differences across tasks. There was also no significant effect of electrode region for either task. ECAP Σ was not significantly correlated with pitch ranking or electrode discrimination for any of the electrode regions. Therefore, the ECAP separation index is not sensitive enough to predict perceptual resolution of virtual channels. PMID:25480063
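    The separation index Σ defined above can be sketched directly (invented ECAP amplitudes; the study's normalization details may differ):

```python
def separation_index(soe_a, soe_b):
    """Sum over masker electrodes of the absolute difference between two
    normalized ECAP spread-of-excitation (SOE) functions."""
    norm = lambda v: [x / max(v) for x in v]
    return sum(abs(a - b) for a, b in zip(norm(soe_a), norm(soe_b)))

# hypothetical ECAP SOE amplitudes for two adjacent probe electrodes,
# one amplitude per masker electrode
soe_e5 = [1.0, 2.0, 4.0, 2.0, 1.0]
soe_e6 = [1.0, 2.0, 3.0, 4.0, 2.0]
assert abs(separation_index(soe_e5, soe_e6) - 1.0) < 1e-12
```

    A larger Σ indicates less overlap between the two excitation patterns; the study's negative result is that this physiological overlap measure did not predict behavioral pitch-ranking or electrode-discrimination performance.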

  8. Cognitive models of pilot categorization and prioritization of flight-deck information

    NASA Technical Reports Server (NTRS)

    Jonsson, Jon E.; Ricks, Wendell R.

    1995-01-01

    In the past decade, automated systems on modern commercial flight decks have increased dramatically. Pilots now regularly interact and share tasks with these systems. This interaction has led human factors research to direct more attention to the pilot's cognitive processing and mental model of the information flow occurring on the flight deck. The experiment reported herein investigated how pilots mentally represent and process information typically available during flight. Fifty-two commercial pilots participated in tasks that required them to provide similarity ratings for pairs of flight-deck information and to prioritize this information under two contextual conditions. Pilots processed the information along three cognitive dimensions. These dimensions included the flight function and the flight action that the information supported and how frequently pilots refer to the information. Pilots classified the information as aviation, navigation, communications, or systems administration information. Prioritization results indicated a high degree of consensus among pilots, while scaling results revealed two dimensions along which information is prioritized. Pilot cognitive workload for flight-deck tasks and the potential for using these findings to operationalize cognitive metrics are evaluated. Such measures may be useful additions for flight-deck human performance evaluation.

  9. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Metric system of measurement. 435.15 Section..., AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 435.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  10. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Metric system of measurement. 435.15 Section..., AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 435.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  11. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Metric system of measurement. 435.15 Section..., AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 435.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  12. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Metric system of measurement. 435.15 Section..., AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 435.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  13. NERC Policy 10: Measurement of two generation and load balancing IOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spicer, P.J.; Galow, G.G.

    1999-11-01

    Policy 10 will describe specific standards and metrics for most of the reliability functions described in the Interconnected Operations Services Working Group (IOS WG) report. The purpose of this paper is to discuss, in detail, the proposed metrics for two generation and load balancing IOSs: Regulation and Load Following. For the purposes of this paper, metrics include both measurement and performance evaluation. The measurement methods discussed are included in the current draft of the proposed Policy 10. The performance evaluation method discussed is offered by the authors for consideration by the IOS ITF (Implementation Task Force) for inclusion in Policy 10.

  14. Real-Time Performance Feedback for the Manual Control of Spacecraft

    NASA Astrophysics Data System (ADS)

    Karasinski, John Austin

    Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and simple instructor-model visual feedback scheme was developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four degree of freedom manual control task with two secondary tasks.

  15. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes.

    PubMed

    Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest-neighbor relations and the descriptor space.
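    At prediction time, the large-margin approach described above reduces to k-nearest-neighbor classification under a learned Mahalanobis distance d(x, y)² = (x − y)ᵀ M (x − y). A minimal NumPy sketch of that prediction step; the matrix `M` below is a hypothetical learned metric standing in for the output of an LMNN-style optimizer, and the data are toy values:

```python
import numpy as np

def mahalanobis_knn_predict(X_train, y_train, X_test, M, k=3):
    """Classify test points by k-nearest neighbors under a learned
    Mahalanobis metric d(x, y)^2 = (x - y)^T M (x - y).
    M must be symmetric positive semidefinite; metric-learning methods
    such as LMNN optimize M so same-class neighbors are pulled together."""
    preds = []
    for x in X_test:
        diffs = X_train - x                              # (n, d) differences
        d2 = np.einsum('ij,jk,ik->i', diffs, M, diffs)   # squared distances
        nearest = y_train[np.argsort(d2)[:k]]            # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])            # majority vote
    return np.array(preds)

# Toy data: the (hypothetical) learned metric downweights the noisy
# second feature, so class is decided by the informative first feature.
X = np.array([[0.0, 5.0], [0.1, -4.0], [1.0, 3.0], [1.1, -5.0]])
y = np.array([0, 0, 1, 1])
M = np.diag([1.0, 0.01])
print(mahalanobis_knn_predict(X, y, np.array([[0.05, 0.0], [1.05, 0.0]]), M, k=2))
```

    With the identity metric instead of `M`, the noisy second feature would dominate the distances; the learned metric is what makes the nearest-neighbor relations meaningful.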

  16. Impact of distance-based metric learning on classification and visualization model performance and structure-activity landscapes

    NASA Astrophysics Data System (ADS)

    Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.

    2014-02-01

    This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest-neighbor relations and the descriptor space.

  17. It's A Metric World.

    ERIC Educational Resources Information Center

    Alabama State Dept. of Education, Montgomery. Div. of Instructional Services.

    Topics covered in the first part of this document include eight advantages of the metric system; a summary of metric instruction; the International System of Units (SI) style and usage; metric decimal tables; the metric system; and conversion tables. An alphabetized list of organizations which market metric materials for educators is provided with…

  18. 2 CFR 215.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 2 Grants and Agreements 1 2013-01-01 2013-01-01 false Metric system of measurement. 215.15 Section... ORGANIZATIONS (OMB CIRCULAR A-110) Pre-Award Requirements § 215.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  19. 34 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 1 2012-07-01 2012-07-01 false Metric system of measurement. 74.15 Section 74.15... Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  20. 34 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false Metric system of measurement. 74.15 Section 74.15... Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  1. 14 CFR 1260.115 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Metric system of measurement. 1260.115....115 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  2. 34 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Metric system of measurement. 74.15 Section 74.15... Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  3. 2 CFR 215.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 2 Grants and Agreements 1 2012-01-01 2012-01-01 false Metric system of measurement. 215.15 Section... ORGANIZATIONS (OMB CIRCULAR A-110) Pre-Award Requirements § 215.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  4. 34 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 1 2013-07-01 2013-07-01 false Metric system of measurement. 74.15 Section 74.15... Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  5. 14 CFR 1260.115 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Metric system of measurement. 1260.115....115 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  6. 34 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 1 2014-07-01 2014-07-01 false Metric system of measurement. 74.15 Section 74.15... Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  7. 14 CFR 1260.115 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Metric system of measurement. 1260.115....115 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S...

  8. 2 CFR 215.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false Metric system of measurement. 215.15 Section... ORGANIZATIONS (OMB CIRCULAR A-110) Pre-Award Requirements § 215.15 Metric system of measurement. The Metric... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  9. Brain-Inspired Photonic Signal Processor for Generating Periodic Patterns and Emulating Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2017-05-01

    Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
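    The output-feedback idea can be sketched with a small software echo state network rather than the photonic hardware of the paper: a linear readout is trained by ridge regression under teacher forcing, then fed back as the reservoir input so the system generates the periodic signal autonomously. All sizes, scalings, and the regularization value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                       # reservoir size (illustrative)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1
w_in = rng.uniform(-0.5, 0.5, N)              # input weights

t = np.arange(3000)
u = np.sin(2 * np.pi * t / 50)                # target: a periodic time series

# Teacher forcing: drive the reservoir with the true signal.
x = np.zeros(N)
states = []
for ut in u[:-1]:
    x = np.tanh(W @ x + w_in * ut)
    states.append(x.copy())
S = np.array(states[500:])                    # discard the initial transient
Y = u[501:]                                   # one-step-ahead targets
# Ridge-regression readout: (S^T S + lambda I) w = S^T Y
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ Y)

# Output feedback: the readout replaces the external input.
y = x @ w_out
out = []
for _ in range(100):
    x = np.tanh(W @ x + w_in * y)
    y = x @ w_out
    out.append(y)
```

    After the loop, `out` holds the autonomously generated continuation; in the paper's hardware setting, experimental noise in this feedback loop is precisely what degrades long-term generation and motivates the proposed quality metrics.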

  10. Intelligent vehicle control: Opportunities for terrestrial-space system integration

    NASA Technical Reports Server (NTRS)

    Shoemaker, Charles

    1994-01-01

    For 11 years the Department of Defense has cooperated with a diverse array of other Federal agencies, including the National Institute of Standards and Technology, the Jet Propulsion Laboratory, and the Department of Energy, to develop robotics technology for unmanned ground systems. These activities have addressed control system architectures supporting the sharing of tasks between the system operator and various automated subsystems, man-machine interfaces to intelligent vehicle systems, video compression supporting vehicle driving in low-data-rate digital communication environments, multiple simultaneous vehicle control by a single operator, path planning and retrace, and automated obstacle detection and avoidance subsystems. Performance metrics and test facilities for robotic vehicles were developed, permitting objective performance assessment of a variety of operator-automated vehicle control regimes. Progress in these areas will be described in the context of robotic vehicle testbeds specifically developed for automated vehicle research. These initiatives, particularly as regards the data compression, task sharing, and automated mobility topics, also have relevance in the space environment. The intersection of technology development interests between these two communities will be discussed in this paper.

  11. Person Re-Identification via Distance Metric Learning With Latent Variables.

    PubMed

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2017-01-01

    In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem: vertical misalignments, horizontal misalignments, and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and then used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which is solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained based on standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.

  12. 45 CFR 2543.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false Metric system of measurement. 2543.15 Section 2543...-PROFIT ORGANIZATIONS Pre-Award Requirements § 2543.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  13. 45 CFR 2543.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Metric system of measurement. 2543.15 Section 2543...-PROFIT ORGANIZATIONS Pre-Award Requirements § 2543.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  14. 45 CFR 2543.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false Metric system of measurement. 2543.15 Section 2543...-PROFIT ORGANIZATIONS Pre-Award Requirements § 2543.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  15. 15 CFR 14.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Metric system of measurement. 14.15... COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 14.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  16. 45 CFR 2543.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false Metric system of measurement. 2543.15 Section 2543...-PROFIT ORGANIZATIONS Pre-Award Requirements § 2543.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  17. 45 CFR 2543.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Metric system of measurement. 2543.15 Section 2543...-PROFIT ORGANIZATIONS Pre-Award Requirements § 2543.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  18. 15 CFR 14.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Metric system of measurement. 14.15... COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 14.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  19. 15 CFR 14.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 1 2012-01-01 2012-01-01 false Metric system of measurement. 14.15... COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 14.15 Metric system of measurement. The Metric Conversion... activities. Metric implementation may take longer where the use of the system is initially impractical or...

  20. Intrasubject multimodal groupwise registration with the conditional template entropy.

    PubMed

    Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef

    2018-05-01

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
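    The per-image term summed by the proposed metric, the conditional entropy between an image and the template, can be estimated from a joint intensity histogram as H(I, T) − H(T). A sketch under illustrative assumptions: the bin count and data are arbitrary, and the template here is just an array rather than the iteratively built PCA template of the paper:

```python
import numpy as np

def conditional_entropy(img, tmpl, bins=32):
    """Estimate H(img | tmpl) = H(img, tmpl) - H(tmpl) in bits from a
    joint intensity histogram of the two (equally sized) arrays."""
    joint, _, _ = np.histogram2d(img.ravel(), tmpl.ravel(), bins=bins)
    p = joint / joint.sum()                   # joint probability estimate
    p_t = p.sum(axis=0)                       # marginal of the template
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log2(p[nz]))
    nzt = p_t > 0
    h_t = -np.sum(p_t[nzt] * np.log2(p_t[nzt]))
    return h_joint - h_t

# An image is perfectly predictable from itself (conditional entropy 0),
# while independent noise is not; registration drives this term down.
rng = np.random.default_rng(1)
t = rng.normal(size=(64, 64))
print(conditional_entropy(t, t), conditional_entropy(rng.normal(size=(64, 64)), t))
```

    Minimizing the sum of such terms over the group, rather than pairwise mutual information, is what gives the method its transformation consistency.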

  1. Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions

    PubMed Central

    Giaro, Krzysztof

    2017-01-01

    Abstract The ability to quantify the dissimilarity of different phylogenetic trees describing the relationships among the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson–Foulds distance. In this article, we define a new metric for rooted trees: the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding HGT events. PMID:28177699

  2. Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions.

    PubMed

    Bogdanowicz, Damian; Giaro, Krzysztof

    2017-05-01

    The ability to quantify the dissimilarity of different phylogenetic trees describing the relationships among the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson–Foulds distance. In this article, we define a new metric for rooted trees: the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding HGT events.
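    The matching step at the core of such metrics can be sketched generically: build a complete bipartite cost matrix between two collections of partitions and sum a minimum-weight perfect matching, here via `scipy.optimize.linear_sum_assignment`. The symmetric-difference cost below is a simple stand-in for illustration, not the transfer distance used by the actual MP metric:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_weight_matching_distance(parts_a, parts_b, dist):
    """Sum of the minimum-weight perfect matching between two equally
    sized collections of partitions, for any pairwise dissimilarity
    `dist` supplied by the caller."""
    cost = np.array([[dist(a, b) for b in parts_b] for a in parts_a])
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    return cost[rows, cols].sum()

# Toy illustration: partitions encoded as frozensets of leaf labels,
# with symmetric-difference size as the pairwise dissimilarity.
sym_diff = lambda a, b: len(a ^ b)
A = [frozenset({1, 2}), frozenset({3, 4})]
B = [frozenset({1, 2, 3}), frozenset({4})]
print(min_weight_matching_distance(A, B, sym_diff))  # → 2
```

    The matching pairs {1,2}↔{1,2,3} and {3,4}↔{4} each cost 1, so the greedy-looking answer is in fact the optimal matching here.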

  3. Scoring Coreference Partitions of Predicted Mentions: A Reference Implementation.

    PubMed

    Pradhan, Sameer; Luo, Xiaoqiang; Recasens, Marta; Hovy, Eduard; Ng, Vincent; Strube, Michael

    2014-06-01

    The definitions of two coreference scoring metrics, B³ and CEAF, are underspecified with respect to predicted, as opposed to key (or gold), mentions. Several variations have been proposed that manipulate either, or both, the key and predicted mentions in order to get a one-to-one mapping. On the other hand, the metric BLANC was, until recently, limited to scoring partitions of key mentions. In this paper, we (i) argue that mention manipulation for scoring predicted mentions is unnecessary, and potentially harmful as it could produce unintuitive results; (ii) illustrate the application of all these measures to scoring predicted mentions; (iii) make available an open-source, thoroughly tested reference implementation of the main coreference evaluation measures; and (iv) rescore the results of the CoNLL-2011/2012 shared task systems with this implementation. This will help the community accurately measure and compare new end-to-end coreference resolution algorithms.
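    For concreteness, a minimal B³ sketch under the simplifying assumption that the key and response partitions cover the same mention set; the predicted-mention case, where the sets differ, is exactly the underspecified situation the paper addresses:

```python
def b_cubed(key_clusters, response_clusters):
    """B^3 precision/recall, averaged per mention: for each mention m,
    precision is |key(m) & resp(m)| / |resp(m)| and recall is
    |key(m) & resp(m)| / |key(m)|, where key(m)/resp(m) are the
    clusters containing m."""
    key_of = {m: frozenset(c) for c in key_clusters for m in c}
    resp_of = {m: frozenset(c) for c in response_clusters for m in c}
    mentions = [m for m in key_of if m in resp_of]   # shared mentions only
    p = sum(len(key_of[m] & resp_of[m]) / len(resp_of[m]) for m in mentions) / len(mentions)
    r = sum(len(key_of[m] & resp_of[m]) / len(key_of[m]) for m in mentions) / len(mentions)
    return p, r

key = [{'a', 'b', 'c'}, {'d'}]
resp = [{'a', 'b'}, {'c', 'd'}]
p, r = b_cubed(key, resp)
print(round(p, 3), round(r, 3))   # → 0.75 0.667
```

    Splitting the gold cluster {a, b, c} costs recall (each of its mentions sees only part of its cluster recovered) while wrongly merging c with d costs precision, which is the intuition B³ is built to capture.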

  4. The Metric System: America Measures Up. 1979 Edition.

    ERIC Educational Resources Information Center

    Anderson, Glen; Gallagher, Paul

    This training manual is designed to introduce and assist naval personnel in the conversion from the English system of measurement to the metric system of measurement. The book tells what the "move to metrics" is all about, and details why the change to the metric system is necessary. Individual chapters are devoted to how the metric system will…

  5. Face and Construct Validation of a Virtual Peg Transfer Simulator

    PubMed Central

    Arikatla, Venkata S; Sankaranarayanan, Ganesh; Ahn, Woojin; Chellali, Amine; De, Suvranu; Caroline, GL; Hwabejire, John; DeMoya, Marc; Schwaitzberg, Steven; Jones, Daniel B.

    2013-01-01

    Background The Fundamentals of Laparoscopic Surgery (FLS) trainer box is now established as a standard for evaluating minimally invasive surgical skills. A particularly simple task in this trainer box is the peg transfer task, which is aimed at testing the surgeon's bimanual dexterity, hand-eye coordination, speed, and precision. The Virtual Basic Laparoscopic Skill Trainer (VBLaST©) is a virtual version of the FLS tasks which allows automatic scoring and real-time, objective quantification of performance without the need for a human proctor. In this paper we report validation studies of the VBLaST© peg transfer (VBLaST-PT©) simulator. Methods Thirty-five subjects with medical backgrounds were divided into two groups: experts (PGY 4-5, fellows, and practicing surgeons) and novices (PGY 1-3). The subjects were asked to perform the peg transfer task on both the FLS trainer box and the VBLaST-PT© simulator, and their performance was evaluated based on established metrics of error and time. A new length-of-trajectory (LOT) metric was also introduced for offline analysis. A questionnaire was used to rate the realism of the virtual system on a 5-point Likert scale. Results Preliminary face validation of the VBLaST-PT© with 34 subjects, rated on a 5-point Likert scale questionnaire, revealed high scores for all aspects of the simulation, with 3.53 being the lowest mean score across all questions. A two-tailed Mann-Whitney test performed on the total scores showed a significant (p=0.001) difference between the groups. Similar tests performed separately on task time (p=0.002) and length of trajectory (p=0.004) showed statistically significant differences between the expert and novice groups (p<0.05). The experts appear to traverse shorter overall trajectories in less time than the novices. Conclusion VBLaST-PT© showed both face and construct validity and has promise as a substitute for the FLS trainer for teaching peg transfer skills. PMID:23263645

  6. An Investigation of the Relationship Between Automated Machine Translation Evaluation Metrics and User Performance on an Information Extraction Task

    DTIC Science & Technology

    2007-01-01

    The report fits logistic regression models relating automated MT metric scores to the log-odds of correct task performance, log(pHits / (1 − pHits)). Model 1: log(pHits / (1 − pHits)) = α + β1 × MetricScore. Model 3: log(pHits / (1 − pHits)) = −1.15 − 0.418 × I[MT=2] − 0.527 × I[MT=3] + 1.78 × METEOR + 1.28 × METEOR × I[MT=2] + 1.86 × METEOR × I[MT=3], where I[MT=k] indicates which MT system produced the translation.
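    The Model 3 coefficients printed in the excerpt can be turned into predicted probabilities by inverting the logit; a small sketch (the function name and the sample METEOR scores are illustrative, only the coefficients come from the text):

```python
import math

def p_correct(meteor, mt_system):
    """Predicted probability of correct task performance under the
    excerpt's Model 3. I[MT=k] are indicators for the MT system used;
    mt_system = 1 is the reference level with no indicator terms."""
    i2 = 1.0 if mt_system == 2 else 0.0
    i3 = 1.0 if mt_system == 3 else 0.0
    logit = (-1.15 - 0.418 * i2 - 0.527 * i3
             + 1.78 * meteor + 1.28 * meteor * i2 + 1.86 * meteor * i3)
    return 1.0 / (1.0 + math.exp(-logit))    # inverse logit

# A higher METEOR score raises the predicted probability of a correct answer.
print(round(p_correct(0.3, 1), 3), round(p_correct(0.6, 1), 3))
```

    The interaction terms mean the METEOR slope differs per MT system, which is the point of comparing Model 3 against the pooled Model 1.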

  7. Investigation of Two Models to Set and Evaluate Quality Targets for HbA1c: Biological Variation and Sigma-metrics

    PubMed Central

    Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi

    2016-01-01

    Background A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods Two generic models, the Biological Variation model and the Sigma-metrics model, are investigated. Variables in the models were selected for HbA1c, and data from EQA/PT programs were used to evaluate the suitability of the models for setting and evaluating quality targets within and between laboratories. Results In the Biological Variation model, 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with the total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP), 77% of the individual laboratories and 12 of 26 instrument groups met the 2-sigma criterion. Conclusion The Biological Variation and Sigma-metrics models were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible, as both the TAE and the risk of failure can be adjusted to requirements related to, e.g., use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice, with default values of 5 mmol/mol (0.46%) for TAE and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
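    The sigma computation itself is one line: the allowable error minus the observed bias, expressed in analytical standard deviations. In the sketch below only the TAE of 5 mmol/mol comes from the text; the bias and SD values are hypothetical:

```python
def sigma_metric(tae, bias, sd):
    """Sigma-metric on the measurement scale: how many analytical SDs
    fit between the observed bias and the total allowable error (TAE).
    All three arguments must share the same unit (e.g. mmol/mol)."""
    return (tae - abs(bias)) / sd

# Hypothetical laboratory: TAE = 5 mmol/mol (Task Force default),
# bias = 1 mmol/mol, SD = 1.5 mmol/mol.
sigma = sigma_metric(5.0, 1.0, 1.5)
print(round(sigma, 2))   # → 2.67, i.e. this lab would meet the 2-sigma goal
```

    Raising the sigma threshold to 4, as suggested for clinical-trial laboratories, tightens the allowed combination of bias and imprecision without changing the TAE.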

  8. 48 CFR 611.002-70 - Metric system implementation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... information or comparison. Hard metric means the use of only standard metric (SI) measurements in specifications, standards, supplies and services. Hybrid system means the use of both traditional and hard metric... possible. Alternatives to hard metric are soft, dual and hybrid metric terms. The Metric Handbook for...

  9. 22 CFR 518.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Metric system of measurement. 518.15 Section... ORGANIZATIONS Pre-Award Requirements § 518.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is...

  10. 29 CFR 95.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 1 2013-07-01 2013-07-01 false Metric system of measurement. 95.15 Section 95.15 Labor... Requirements § 95.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  11. 49 CFR 19.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Metric system of measurement. 19.15 Section 19.15... Requirements § 19.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  12. 49 CFR 19.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false Metric system of measurement. 19.15 Section 19.15... Requirements § 19.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  13. 29 CFR 95.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 1 2014-07-01 2013-07-01 true Metric system of measurement. 95.15 Section 95.15 Labor... Requirements § 95.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  14. 22 CFR 518.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Metric system of measurement. 518.15 Section 518... Pre-Award Requirements § 518.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the...

  15. 49 CFR 19.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 1 2012-10-01 2012-10-01 false Metric system of measurement. 19.15 Section 19.15... Requirements § 19.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  16. 29 CFR 95.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 1 2012-07-01 2012-07-01 false Metric system of measurement. 95.15 Section 95.15 Labor... Requirements § 95.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  17. 29 CFR 95.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Metric system of measurement. 95.15 Section 95.15 Labor... Requirements § 95.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  18. 49 CFR 19.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 1 2014-10-01 2014-10-01 false Metric system of measurement. 19.15 Section 19.15... Requirements § 19.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  19. 43 CFR 12.915 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Metric system of measurement. 12.915... Requirements § 12.915 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement...

  20. 43 CFR 12.915 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 1 2013-10-01 2013-10-01 false Metric system of measurement. 12.915... Requirements § 12.915 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement...

  1. 43 CFR 12.915 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 1 2012-10-01 2011-10-01 true Metric system of measurement. 12.915... Requirements § 12.915 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement...

  2. 43 CFR 12.915 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 1 2014-10-01 2014-10-01 false Metric system of measurement. 12.915... Requirements § 12.915 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement...

  3. 22 CFR 518.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Metric system of measurement. 518.15 Section 518... Pre-Award Requirements § 518.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the...

  4. 49 CFR 19.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false Metric system of measurement. 19.15 Section 19.15... Requirements § 19.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  5. 22 CFR 518.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Metric system of measurement. 518.15 Section 518... Pre-Award Requirements § 518.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the...

  6. 29 CFR 95.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 1 2011-07-01 2011-07-01 false Metric system of measurement. 95.15 Section 95.15 Labor... Requirements § 95.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the preferred measurement...

  7. 22 CFR 518.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Metric system of measurement. 518.15 Section 518... Pre-Award Requirements § 518.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205), declares that the metric system is the...

  8. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
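The distinction above between 0-1 (string-error) cost and an integer-valued metric cost can be sketched numerically. The toy posterior and helper functions below are our own illustration, not the paper's code: when probability mass is split among near-identical candidates, the MAP decision (0-1 cost) and the minimum-expected-edit-distance decision can disagree.

```python
# Toy sketch (hypothetical posterior): the Bayes decision under 0-1 cost
# (MAP) can differ from the decision minimizing expected Levenshtein cost.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[m][n]

def map_decision(posterior):
    # 0-1 cost: choose the single most probable string.
    return max(posterior, key=posterior.get)

def min_edit_risk_decision(posterior):
    # Metric cost: choose the candidate minimizing expected edit distance.
    return min(posterior,
               key=lambda c: sum(p * levenshtein(c, s)
                                 for s, p in posterior.items()))

# Mass is split between two near-identical strings, so the MAP string
# loses under expected edit-distance cost.
posterior = {"xyz": 0.4, "abc": 0.3, "abd": 0.3}
print(map_decision(posterior))            # xyz
print(min_edit_risk_decision(posterior))  # abc
```

Here the two rules disagree; the paper derives simple conditions, testable in time linear in the number of classes, under which they are guaranteed to coincide.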

  9. Area of Concern: a new paradigm in life cycle assessment for ...

    EPA Pesticide Factsheets

    Purpose: As a class of environmental metrics, footprints have been poorly defined, have shared an unclear relationship to life cycle assessment (LCA), and the variety of approaches to quantification have sometimes resulted in confusing and contradictory messages in the marketplace. In response, a task force operating under the auspices of the UNEP/SETAC Life Cycle Initiative project on environmental life cycle impact assessment (LCIA) has been working to develop generic guidance for developers of footprint metrics. The purpose of this paper is to introduce a universal footprint definition and related terminology as well as to discuss modelling implications. Methods: The task force has worked from the perspective that footprints should be based on LCA methodology, underpinned by the same data systems and models as used in LCA. However, there are important differences in purpose and orientation relative to LCA impact category indicators. Footprints have a primary orientation toward society and nontechnical stakeholders. They are also typically of narrow scope, having the purpose of reporting only in relation to specific topics. In comparison, LCA has a primary orientation toward stakeholders interested in comprehensive evaluation of overall environmental performance and trade-offs among impact categories. These differences create tension between footprints, the existing LCIA framework based on the area of protection paradigm, and the core LCA standards ISO14040/44. ...

  10. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective which needs to be aligned with objective machine metrics.

  11. The Fundamentals of Laparoscopic Surgery and LapVR evaluation metrics may not correlate with operative performance in a novice cohort

    PubMed Central

    Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.

    2015-01-01

    Background Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate whether the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results Overall, FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071

  12. Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.

    PubMed

    Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C

    2015-06-01

    Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess whether our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric, which would calculate the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III® simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective blinded-observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.
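The in-zone/out-zone idea lends itself to a compact sketch. The function and spherical zone below are our own assumptions for illustration (the authors' software and zone geometry are not described in detail here): given instrument-tip samples recorded at a fixed rate, score the fraction of samples falling inside a pre-defined zone.

```python
# Minimal sketch (assumed spherical zone): percentage of trajectory
# samples that fall within `radius` of a zone `center`.

def in_zone_score(samples, center, radius):
    """Percentage of trajectory samples within `radius` of `center`."""
    def inside(p):
        return sum((a - b) ** 2 for a, b in zip(p, center)) <= radius ** 2
    hits = sum(1 for p in samples if inside(p))
    return 100.0 * hits / len(samples)

# Four hypothetical tip positions sampled during a suturing task.
trajectory = [(0, 0, 0), (1, 0, 0), (3, 0, 0), (0.5, 0.5, 0)]
score = in_zone_score(trajectory, center=(0, 0, 0), radius=2.0)
print(score)  # 75.0: three of four samples lie inside the zone
```

Because samples arrive at a fixed rate, the fraction of samples in a zone is the fraction of time spent there; an out-zone score is simply the remainder.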

  13. 14 CFR § 1274.206 - Metric Conversion Act.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... metric system is the preferred measurement system for U.S. trade and commerce. NASA's policy with respect to the metric measurement system is stated in NPD 8010.2, Use of the Metric System of Measurement in... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Metric Conversion Act. § 1274.206 Section...

  14. Quantitative assessment based on kinematic measures of functional impairments during upper extremity movements: A review.

    PubMed

    de los Reyes-Guzmán, Ana; Dimbwadyo-Terrer, Iris; Trincado-Alonso, Fernando; Monasterio-Huelin, Félix; Torricelli, Diego; Gil-Agudo, Angel

    2014-08-01

    Quantitative measures of human movement quality are important for discriminating healthy and pathological conditions and for expressing the outcomes and clinically important changes in subjects' functional state. However, the instruments most frequently used for upper extremity functional assessment are clinical scales, which have been standardized and validated but retain a strong subjective component that depends on the observer who scores the test. They are not sufficient to assess the motor strategies used during movements, and their use in combination with other, more objective measures is necessary. The objective of the present review is to provide an overview of objective metrics found in the literature that quantify upper extremity performance during functional tasks, regardless of the equipment or system used for registering kinematic data. A search of the Medline, Google Scholar and IEEE Xplore databases was performed using a combination of keywords. Full scientific papers that fulfilled the inclusion criteria were included in the review. A set of kinematic metrics was found in the literature relating to joint displacements, analysis of hand trajectories, and velocity profiles. These metrics were classified into different categories according to the movement characteristic being measured. They provide the starting point for proposed objective metrics for the functional assessment of the upper extremity in people with movement disorders resulting from neurological injuries. Potential areas of future and further research are presented in the Discussion section. Copyright © 2014 Elsevier Ltd. All rights reserved.
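Two of the commonly reviewed metric families, trajectory length and jerk-based smoothness, can be illustrated directly from sampled kinematic data. These are standard motor-control definitions sketched with invented sample data, not formulas taken from the review itself:

```python
# Illustrative kinematic metrics (standard definitions, invented data):
# path length of a sampled trajectory and a jerk-based smoothness index.

def path_length(points):
    """Sum of Euclidean segment lengths along a sampled trajectory."""
    total = 0.0
    for p, q in zip(points, points[1:]):
        total += sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return total

def mean_squared_jerk(positions, dt):
    """Mean squared third derivative of 1-D position, estimated by
    finite differences; lower values indicate smoother movement."""
    v = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(v, v[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(j * j for j in jerk) / len(jerk)

pts = [(0, 0), (3, 4), (3, 4)]
print(path_length(pts))  # 5.0

# Constant-velocity motion has zero jerk.
print(mean_squared_jerk([0, 1, 2, 3], dt=1.0))  # 0.0
```

In practice such metrics are computed per trial from motion-capture or robot-encoder data and then compared against clinical-scale scores.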

  15. Say "Yes" to Metric Measure.

    ERIC Educational Resources Information Center

    Monroe, Eula Ewing; Nelson, Marvin N.

    2000-01-01

    Provides a brief history of the metric system. Discusses the infrequent use of the metric measurement system in the United States, why conversion from the customary system to the metric system is difficult, and the need for change. (Contains 14 resources.) (ASK)

  16. A Quantitative Relationship between Signal Detection in Attention and Approach/Avoidance Behavior

    PubMed Central

    Viswanathan, Vijay; Sheppard, John P.; Kim, Byoung W.; Plantz, Christopher L.; Ying, Hao; Lee, Myung J.; Raman, Kalyan; Mulhern, Frank J.; Block, Martin P.; Calder, Bobby; Lee, Sang; Mortensen, Dale T.; Blood, Anne J.; Breiter, Hans C.

    2017-01-01

    This study examines how the domains of reward and attention, which are often studied as independent processes, in fact interact at a systems level. We operationalize divided attention with a continuous performance task and variables from signal detection theory (SDT), and reward/aversion with a keypress task measuring approach/avoidance in the framework of relative preference theory (RPT). Independent experiments with the same subjects showed a significant association between one SDT and two RPT variables, visualized as a three-dimensional structure. Holding one of these three variables constant, further showed a significant relationship between a loss aversion-like metric from the approach/avoidance task, and the response bias observed during the divided attention task. These results indicate that a more liberal response bias under signal detection (i.e., a higher tolerance for noise, resulting in a greater proportion of false alarms) is associated with higher “loss aversion.” Furthermore, our functional model suggests a mechanism for processing constraints with divided attention and reward/aversion. Together, our results argue for a systematic relationship between divided attention and reward/aversion processing in humans. PMID:28270776
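The signal-detection variables involved can be made concrete with the textbook Gaussian-model formulas for sensitivity (d′) and criterion (c); the hit and false-alarm rates below are illustrative, not the study's data. A more liberal response bias (a greater proportion of false alarms) corresponds to a lower, typically negative, criterion:

```python
# Textbook SDT computations (illustrative rates, not the study's data):
# sensitivity d' and response bias (criterion c) from hit/false-alarm rates.

from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    z = NormalDist().inv_cdf           # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)  # separation of signal/noise means
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # criterion: negative = liberal
    return d_prime, c

d_lib, c_lib = dprime_and_bias(0.90, 0.40)   # many false alarms: liberal
d_str, c_str = dprime_and_bias(0.60, 0.05)   # few false alarms: strict
print(c_lib < c_str)  # True: the liberal responder has the lower criterion
```

The study's analysis relates such a bias measure from the divided-attention task to a loss-aversion-like metric from the keypress task.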

  17. A Quantitative Relationship between Signal Detection in Attention and Approach/Avoidance Behavior.

    PubMed

    Viswanathan, Vijay; Sheppard, John P; Kim, Byoung W; Plantz, Christopher L; Ying, Hao; Lee, Myung J; Raman, Kalyan; Mulhern, Frank J; Block, Martin P; Calder, Bobby; Lee, Sang; Mortensen, Dale T; Blood, Anne J; Breiter, Hans C

    2017-01-01

    This study examines how the domains of reward and attention, which are often studied as independent processes, in fact interact at a systems level. We operationalize divided attention with a continuous performance task and variables from signal detection theory (SDT), and reward/aversion with a keypress task measuring approach/avoidance in the framework of relative preference theory (RPT). Independent experiments with the same subjects showed a significant association between one SDT and two RPT variables, visualized as a three-dimensional structure. Holding one of these three variables constant, further showed a significant relationship between a loss aversion-like metric from the approach/avoidance task, and the response bias observed during the divided attention task. These results indicate that a more liberal response bias under signal detection (i.e., a higher tolerance for noise, resulting in a greater proportion of false alarms) is associated with higher "loss aversion." Furthermore, our functional model suggests a mechanism for processing constraints with divided attention and reward/aversion. Together, our results argue for a systematic relationship between divided attention and reward/aversion processing in humans.

  18. Automated Neuropsychological Assessment Metrics (ANAM) Traumatic Brain Injury (TBI): Human Factors Assessment

    DTIC Science & Technology

    2011-07-01

    Valerie J. Rice, Petra E. Alfred, Gary L. Boykin, Cory Overby, Angela Jeter, Carita DeVilbiss, and Raymond Bateman. ARL-TN-0440, July 2011.

  19. Metrics for TRUST in Integrated Circuits

    DTIC Science & Technology

    2008-06-01

    Keywords: metrics; Trojan; detection. Introduction: In the Defense Science Board report, "DSB Task Force on High Performance Microchip Supply" [1], several... A lower bound Ptd|lower on the Trojan-detection probability Ptd, at confidence C, is computed via the inverse beta distribution (BETAINV), where m is the number of detected Trojan transistors and M is the total number of Trojan transistors. From this relationship, in order to establish Ptd = 90% at 90% confidence on a single test article, we must...

  20. Can teenage novel users perform as well as General Surgery residents upon initial exposure to a robotic surgical system simulator?

    PubMed

    Mehta, A; Patel, S; Robison, W; Senkowski, T; Allen, J; Shaw, E; Senkowski, C

    2018-03-01

    New techniques in minimally invasive and robotic surgical platforms require staged curricula to ensure proficiency. Scant literature exists as to how much simulation should play a role in training those who have skills in advanced surgical technology. The abilities of novel users may help discriminate whether surgically experienced users should start at a higher simulation level or whether the tasks are too rudimentary. The study's purpose is to explore the ability of General Surgery residents to gain proficiency on the dVSS as compared to novel users. The hypothesis is that Surgery residents will have increased proficiency in skills acquisition as compared to naive users. Six General Surgery residents at a single institution were compared with six teenagers using metrics measured by the dVSS. Participants were given two 1-h sessions to achieve an MScore™ in the 90th percentile on each of the five simulations. MScore™ software compiles a variety of metrics including total time, number of attempts, and high score. Statistical analysis was run using Student's t test. Significance was set at p < 0.05. Total time, attempts, and high score were compared between the two groups. The General Surgery residents took significantly less total time to complete Pegboard 1 (PB1) (p = 0.043). No significant difference was evident between the two groups in the other four simulations across the same MScore™ metrics. A focused look at the energy dissection task revealed that overall score might not be discriminant enough. Our findings indicate that prior medical knowledge or surgical experience does not significantly impact one's ability to acquire new skills on the dVSS. It is recommended that residency-training programs begin to include exposure to robotic technology.
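The group comparison described above uses Student's t test; a minimal pooled-variance sketch with invented scores (not the study's data) looks like this:

```python
# Two-sample pooled-variance Student's t statistic (standard formula).
# The completion-time samples are invented for illustration only.

from math import sqrt

def pooled_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

residents = [310, 295, 280, 305, 290, 300]  # hypothetical task times (s)
teens = [360, 340, 355, 345, 365, 350]
print(round(pooled_t(residents, teens), 2))  # -9.57
```

The resulting statistic is compared against the t distribution with na + nb − 2 degrees of freedom to obtain the p value; in practice a library routine such as a two-sample t-test function would be used rather than hand-rolled code.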

  1. Brain processing of meter and rhythm in music. Electrophysiological evidence of a common network.

    PubMed

    Kuck, Helen; Grossbach, Michael; Bangert, Marc; Altenmüller, Eckart

    2003-11-01

    To determine cortical structures involved in "global" meter and "local" rhythm processing, slow brain potentials (DC potentials) were recorded from the scalp of 18 musically trained subjects while listening to pairs of monophonic sequences with both metric structure and rhythmic variations. The second sequence could be either identical to or different from the first one. Differences were either of a metric or a rhythmic nature. The subjects' task was to judge whether the sequences were identical or not. During processing of the auditory tasks, brain activation patterns along with the subjects' performance were assessed using 32-channel DC electroencephalography. Data were statistically analyzed using MANOVA. Processing of both meter and rhythm produced sustained cortical activation over bilateral frontal and temporal brain regions. A shift towards right hemispheric activation was pronounced during presentation of the second stimulus. Processing of rhythmic differences yielded a more centroparietal activation compared to metric processing. These results do not support Lerdahl and Jackendoff's two-component model, predicting a dissociation of left hemispheric rhythm and right hemispheric meter processing. We suggest that the uniform right temporofrontal predominance reflects auditory working memory and a pattern recognition module, which participates in both rhythm and meter processing. More pronounced parietal activation during rhythm processing may be related to switching of task-solving strategies towards mental imagination of the score.

  2. Shape detection of Gaborized outline versions of everyday objects

    PubMed Central

    Sassi, Michaël; Machilsen, Bart; Wagemans, Johan

    2012-01-01

    We previously tested the identifiability of six versions of Gaborized outlines of everyday objects, differing in the orientations assigned to elements inside and outside the outline. We found significant differences in identifiability between the versions, and related a number of stimulus metrics to identifiability [Sassi, M., Vancleef, K., Machilsen, B., Panis, S., & Wagemans, J. (2010). Identification of everyday objects on the basis of Gaborized outline versions. i-Perception, 1(3), 121–142]. In this study, after retesting the identifiability of new variants of three of the stimulus versions, we tested their robustness to local orientation jitter in a detection experiment. In general, our results replicated the key findings from the previous study, and allowed us to substantiate our earlier interpretations of the effects of our stimulus metrics and of the performance differences between the different stimulus versions. The results of the detection task revealed a different ranking order of stimulus versions than the identification task. By examining the parallels and differences between the effects of our stimulus metrics in the two tasks, we found evidence for a trade-off between shape detectability and identifiability. The generally simple and smooth shapes that yield the strongest contour integration and most robust detectability tend to lack the distinguishing features necessary for clear-cut identification. Conversely, contours that do contain such identifying features tend to be inherently more complex and, therefore, yield weaker integration and less robust detectability. PMID:23483752

  3. Changing to the Metric System.

    ERIC Educational Resources Information Center

    Chambers, Donald L.; Dowling, Kenneth W.

    This report examines educational aspects of the conversion to the metric system of measurement in the United States. Statements of positions on metrication and basic mathematical skills are given from various groups. Base units, symbols, prefixes, and style of the metric system are outlined. Guidelines for teaching metric concepts are given,…

  4. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    PubMed

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximize multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < 0.05) in all metrics, validating the predictability of the methodology. The methodology is validated in two scenarios (with and without modeling the risk of a human-robot collision) and the differences in the lexicons are analyzed.

  5. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  6. Using Multi-Core Systems for Rover Autonomy

    NASA Technical Reports Server (NTRS)

    Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.

    2010-01-01

    Task objectives are: (1) develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on an SOA multi-core processor; (b) illustrate the performance improvements achieved; (c) demonstrate the adapted capabilities with rover hardware; (2) target three high-level autonomy technologies: (a) two for onboard data analysis; (b) one for onboard command sequencing/planning; (3) the technologies are identified as enabling for future missions; (4) benefits will be measured along several metrics: (a) execution time / power requirements; (b) number of data products processed per unit time; (c) solution quality.

  7. Evaluation of Visual Analytics Environments: The Road to the Visual Analytics Science and Technology Challenge Evaluation Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.

    The evaluation of visual analytics environments was a topic in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation not only involves assessing the visualizations, interactions or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that the software can be incorporated into the end-user's infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found in [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is even more difficult, as one aspect of the test methodology is access to representative end-users to participate in the evaluation.
In many cases, the sensitive nature of data and tasks and difficult access to busy analysts put even more of a burden on researchers to complete this type of evaluation. User-centered design goes beyond evaluation and starts with the user [Beyer 1997, Shneiderman 2009]. Having some knowledge of the type of data, tasks, and work practices helps researchers and developers know the correct paths to pursue in their work. When access to the end-users is problematic at best and impossible at worst, user-centered design becomes difficult. Researchers are unlikely to go to work on the type of problems faced by inaccessible users. Commercial vendors have difficulties evaluating and improving their products when they cannot observe real users working with their products. In well-established fields such as web site design or office software design, user-interface guidelines have been developed based on the results of empirical studies or the experience of experts. Guidelines can speed up the design process and replace some of the need for observation of actual users [heuristics review references]. In 2006, when the visual analytics community was initially getting organized, no such guidelines existed. Therefore, we were faced with the problem of developing an evaluation framework for the field of visual analytics that would provide representative situations and datasets, representative tasks and utility metrics, and finally a test methodology which would include a surrogate for representative users, increase interest in conducting research in the field, and provide sufficient feedback to the researchers so that they could improve their systems.

  8. Cues to viewing distance for stereoscopic depth constancy.

    PubMed

    Glennerster, A; Rogers, B J; Bradshaw, M F

    1998-01-01

    A veridical estimate of viewing distance is required in order to determine the metric structure of objects from binocular stereopsis. One example of a judgment of metric structure, which we used in our experiment, is the apparently circular cylinder task (E B Johnston, 1991 Vision Research 31 1351-1360). Most studies report underconstancy in this task when the stimulus is defined purely by binocular disparities. We examined the effect of two factors on performance: (i) the richness of the cues to viewing distance (using either a naturalistic setting with many cues to viewing distance or a condition in which the room and the monitors were obscured from view), and (ii) the range of stimulus disparities (cylinder depths) presented during an experimental run. We tested both experienced subjects (who had performed the task many times before under full-cue conditions) and naïve subjects. Depth constancy was reduced for the naïve subjects (from 62% to 46%) when the position of the monitors was obscured. Under similar conditions, the experienced subjects showed no reduction in constancy. In a second experiment, using a forced-choice method of constant stimuli, we found that depth constancy was reduced from 64% to 23% in naïve subjects and from 77% to 55% in experienced subjects when the same set of images was presented at all viewing distances rather than using a set of stimulus disparities proportional to the correct setting. One possible explanation of these results is that, under reduced-cue conditions, the range of disparities presented is used by the visual system as a cue to viewing distance.

  9. 10 CFR 600.306 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Metric system of measurement. 600.306 Section 600.306... system of measurement. (a) The Metric Conversion Act of 1975, as amended by the Omnibus Trade and... system is the preferred measurement system for U.S. trade and commerce. (2) The metric system of...

  10. 10 CFR 600.306 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Metric system of measurement. 600.306 Section 600.306... system of measurement. (a) The Metric Conversion Act of 1975, as amended by the Omnibus Trade and... system is the preferred measurement system for U.S. trade and commerce. (2) The metric system of...

  11. 10 CFR 600.306 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Metric system of measurement. 600.306 Section 600.306... system of measurement. (a) The Metric Conversion Act of 1975, as amended by the Omnibus Trade and... system is the preferred measurement system for U.S. trade and commerce. (2) The metric system of...

  12. 10 CFR 600.306 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Metric system of measurement. 600.306 Section 600.306... system of measurement. (a) The Metric Conversion Act of 1975, as amended by the Omnibus Trade and... system is the preferred measurement system for U.S. trade and commerce. (2) The metric system of...

  13. 10 CFR 600.306 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Metric system of measurement. 600.306 Section 600.306... system of measurement. (a) The Metric Conversion Act of 1975, as amended by the Omnibus Trade and... system is the preferred measurement system for U.S. trade and commerce. (2) The metric system of...

  14. Information on the metric system and related fields

    NASA Technical Reports Server (NTRS)

    Lange, E.

    1976-01-01

    This document contains about 7,600 references on the metric system and conversion to the metric system. These references include all known documents on the metric system as of December 1975, the month of enactment of the Metric Conversion Act of 1975. This bibliography includes books, reports, articles, presentations, periodicals, legislation, motion pictures, TV series, film strips, slides, posters, wall charts, education and training courses, addresses for information, and sources for metric materials and services. A comprehensive index is provided.

  15. 28 CFR 70.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Metric system of measurement. 70.15... AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 70.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  16. 22 CFR 226.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Metric system of measurement. 226.15 Section....S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. (b...

  17. 22 CFR 226.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Metric system of measurement. 226.15 Section....S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. (b...

  18. 28 CFR 70.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Metric system of measurement. 70.15... AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 70.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  19. 38 CFR 49.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Metric system of... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 49.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  20. 48 CFR 711.002-70 - Metric system waivers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Metric system waivers. 711... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 711.002-70 Metric system waivers. (a) Criteria. The FAR 11.002(b) requirement to use the metric system of measurement for specifications and quantitative data that are...

  1. 38 CFR 49.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Metric system of... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 49.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  2. 38 CFR 49.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Metric system of... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 49.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  3. 38 CFR 49.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2013-07-01 2013-07-01 false Metric system of... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 49.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  4. 48 CFR 711.002-70 - Metric system waivers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Metric system waivers. 711... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 711.002-70 Metric system waivers. (a) Criteria. The FAR 11.002(b) requirement to use the metric system of measurement for specifications and quantitative data that are...

  5. 48 CFR 711.002-70 - Metric system waivers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Metric system waivers. 711... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 711.002-70 Metric system waivers. (a) Criteria. The FAR 11.002(b) requirement to use the metric system of measurement for specifications and quantitative data that are...

  6. 22 CFR 226.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Metric system of measurement. 226.15 Section....S. NON-GOVERNMENTAL ORGANIZATIONS Pre-award Requirements § 226.15 Metric system of measurement. (a...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. (b...

  7. 38 CFR 49.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Metric system of... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 49.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  8. 28 CFR 70.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Metric system of measurement. 70.15... AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 70.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  9. 28 CFR 70.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Metric system of measurement. 70.15... AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 70.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  10. 48 CFR 711.002-70 - Metric system waivers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Metric system waivers. 711... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS 711.002-70 Metric system waivers. (a) Criteria. The FAR 11.002(b) requirement to use the metric system of measurement for specifications and quantitative data that are...

  11. 28 CFR 70.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Metric system of measurement. 70.15... AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 70.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  12. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. 
When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
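The Monte Carlo dispersion idea described above can be illustrated with the paper's simplest case, a single lumped spring-mass oscillator: sample model properties from their dispersions, compute the natural frequency of each sampled model, and evaluate a limit-state against a requirement. The dispersions and the ±5% frequency requirement below are invented for illustration and are not SLS program values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of Monte Carlo model realizations

# Assumed dispersions for a single spring-mass oscillator (an
# illustrative stand-in for thousands of perturbed full models).
k = rng.normal(1.0e6, 5.0e4, N)   # stiffness [N/m], ~5% scatter (assumed)
m = rng.normal(100.0, 3.0, N)     # mass [kg], ~3% scatter (assumed)

# Natural frequency of each sampled model, f = sqrt(k/m) / (2*pi)
f = np.sqrt(k / m) / (2 * np.pi)

# Limit-state: the requirement (assumed) that frequency stay
# within +/-5% of the nominal model's prediction.
f_nom = np.sqrt(1.0e6 / 100.0) / (2 * np.pi)
p_ok = np.mean(np.abs(f - f_nom) / f_nom <= 0.05)
```

The fraction `p_ok` plays the role of the quantified probability of meeting the requirement; with a real model the propagation would also carry the fixed-boundary uncertainty into the free-free configuration, as the abstract describes.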

  13. 14 CFR 1274.206 - Metric Conversion Act.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... metric measurement system is stated in NPD 8010.2, Use of the Metric System of Measurement in NASA... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Metric Conversion Act. 1274.206 Section... WITH COMMERCIAL FIRMS Pre-Award Requirements § 1274.206 Metric Conversion Act. The Metric Conversion...

  14. 14 CFR 1274.206 - Metric Conversion Act.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... metric measurement system is stated in NPD 8010.2, Use of the Metric System of Measurement in NASA... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Metric Conversion Act. 1274.206 Section... WITH COMMERCIAL FIRMS Pre-Award Requirements § 1274.206 Metric Conversion Act. The Metric Conversion...

  15. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  16. MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathiaseelan, V; Thomadsen, B

Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify a QA metric to help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery processes.
In this symposium the usefulness of workflows and QA metrics to assure safe, high-quality patient care will be explored. Two presentations will be given: "Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures" and "Strategies and Metrics for Quality Management in the TG-100 Era". Learning Objectives: Provide an overview of, and the need for, QA usability metrics, including how different cultures and practices affect the effectiveness of methods and metrics. Show examples of quality assurance workflows, such as statistical process control, that monitor the treatment planning and delivery process to identify errors. Learn to identify and prioritize risks and QA procedures in radiation oncology. Try to answer the questions: Can a quality assurance program aided by quality assurance metrics help minimize errors and ensure safe treatment delivery? Should such metrics be institution specific?
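As a toy illustration of the statistical process control mentioned in the learning objectives, a Shewhart individuals chart flags a QA reading that drifts outside 3-sigma limits estimated from the moving range of baseline data. The readings and limits below are invented, not clinical values:

```python
import numpy as np

# Hypothetical daily QA metric readings (e.g., output constancy in %);
# the final reading simulates a process drift.
readings = np.array([100.2, 99.8, 100.1, 99.9, 100.3, 100.0, 99.7,
                     100.1, 100.4, 99.9, 100.2, 101.9])

center = readings[:-1].mean()            # baseline center line
mr = np.abs(np.diff(readings[:-1]))      # moving ranges of baseline data
sigma = mr.mean() / 1.128                # d2 constant for subgroup size 2
ucl = center + 3 * sigma                 # upper control limit
lcl = center - 3 * sigma                 # lower control limit

out_of_control = (readings > ucl) | (readings < lcl)
```

Only the drifted final reading falls outside the control limits, which is the kind of automated signal a QA workflow could use to trigger investigation before an error propagates to treatment delivery.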

  17. The metric system: An introduction

    NASA Astrophysics Data System (ADS)

    Lumley, Susan M.

On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system, which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost-effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770, which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first the reasons behind the nation's conversion to the metric system are examined. The second part of this report addresses applying the metric system.

  18. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
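The isometry constraint at the heart of this approach can be checked on a toy example: if the learned metric matrix M satisfies PᵀMP = M for a feature-space permutation P induced by a geometry-preserving image transformation, then Mahalanobis distances are unchanged when both inputs are permuted. The sketch below is illustrative and is not the authors' IC-LDML algorithm; the permutation and data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Toy permutation (e.g., induced by a horizontal image flip): swaps
# features (0,1) and (2,3); P is its own inverse here.
P = np.eye(d)[[1, 0, 3, 2]]

# Build a symmetric positive semidefinite metric matrix that is
# invariant under P by symmetrizing over the permutation.
A = rng.normal(size=(d, d))
M0 = A @ A.T
M = 0.5 * (M0 + P.T @ M0 @ P)   # satisfies P.T @ M @ P == M

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance under metric matrix M."""
    diff = x - y
    return float(diff @ M @ diff)

x, y = rng.normal(size=d), rng.normal(size=d)
d_orig = mahalanobis_sq(x, y, M)
d_perm = mahalanobis_sq(P @ x, P @ y, M)   # equal to d_orig
```

Because (Px − Py)ᵀM(Px − Py) = (x − y)ᵀPᵀMP(x − y) = (x − y)ᵀM(x − y), the permuted pair gets exactly the same distance, which is the consistency property the paper enforces during learning.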

  19. The Effects of Automation on Battle Manager Workload and Performance

    DTIC Science & Technology

    2008-01-01

such as the National Aeronautics and Space Administration (NASA) Task Load Index (TLX) (Hart & Staveland, 1988), the Subjective Workload Assessment...Factor Metric Experience Demographic questionnaire Stress level NASA TLX SWAT Assessment Observer reports Confidence Logged performance data...Mahwah, New Jersey: Lawrence Erlbaum Associates. Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of

  20. Prostate Cancer Biorepository Network

    DTIC Science & Technology

    2017-10-01

Department of the Army position, policy or decision unless so designated by other documentation. REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704...clinical data including pathology and outcome data are annotated with the biospecimens. Specialized processing consists of tissue microarray design...Months 1-6): Completed in 1st quarter Task 5. Report on performance metrics: Ongoing (accrual reports are provided on quarterly basis) Task 6

  1. An overview of the BioCreative 2012 Workshop Track III: interactive text mining task.

    PubMed

    Arighi, Cecilia N; Carterette, Ben; Cohen, K Bretonnel; Krallinger, Martin; Wilbur, W John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy

    2013-01-01

    In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. 
The analysis of this survey highlights how important task completion is to the biocurators' overall experience of a system, regardless of the system's high score on design, learnability and usability. In addition, strategies to refine the annotation guidelines and systems documentation, to adapt the tools to the needs and query types the end user might have and to evaluate performance in terms of efficiency, user interface, result export and traditional evaluation metrics have been analyzed during this task. This analysis will help to plan for a more intense study in BioCreative IV.

  2. 41 CFR 105-72.205 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Metric system of... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  3. 40 CFR 30.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Metric system of measurement. 30.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 30.15 Metric system of...), declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  4. 45 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Metric system of measurement. 74.15 Section 74.15... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 74.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  5. 32 CFR 22.530 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Metric system of measurement. 22.530 Section 22... REGULATIONS DoD GRANTS AND AGREEMENTS-AWARD AND ADMINISTRATION National Policy Matters § 22.530 Metric system... CFR, 1991 Comp., p. 343), states that: (1) The metric system is the preferred measurement system for U...

  6. 32 CFR 32.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Metric system of measurement. 32.15 Section 32..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 32.15 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce, and for...

  7. 10 CFR 600.115 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Metric system of measurement. 600.115 Section 600.115..., Hospitals, and Other Nonprofit Organizations Pre-Award Requirements § 600.115 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  8. 32 CFR 22.530 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Metric system of measurement. 22.530 Section 22... REGULATIONS DoD GRANTS AND AGREEMENTS-AWARD AND ADMINISTRATION National Policy Matters § 22.530 Metric system... CFR, 1991 Comp., p. 343), states that: (1) The metric system is the preferred measurement system for U...

  9. 32 CFR 32.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Metric system of measurement. 32.15 Section 32..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 32.15 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce, and for...

  10. 22 CFR 145.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Metric system of measurement. 145.15 Section... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  11. 40 CFR 30.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Metric system of measurement. 30.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 30.15 Metric system of...), declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  12. 36 CFR 1210.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Metric system of measurement... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  13. 2 CFR 215.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Metric system of measurement. 215.15 Section... NON-PROFIT ORGANIZATIONS (OMB CIRCULAR A-110) Pre-Award Requirements § 215.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  14. 32 CFR 22.530 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Metric system of measurement. 22.530 Section 22... REGULATIONS DoD GRANTS AND AGREEMENTS-AWARD AND ADMINISTRATION National Policy Matters § 22.530 Metric system... CFR, 1991 Comp., p. 343), states that: (1) The metric system is the preferred measurement system for U...

  15. 10 CFR 600.115 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Metric system of measurement. 600.115 Section 600.115..., Hospitals, and Other Nonprofit Organizations Pre-Award Requirements § 600.115 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  16. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  17. 45 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Metric system of measurement. 74.15 Section 74.15... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 74.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  18. 32 CFR 32.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Metric system of measurement. 32.15 Section 32..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 32.15 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce, and for...

  19. 45 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Metric system of measurement. 74.15 Section 74.15... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 74.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  20. 22 CFR 145.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Metric system of measurement. 145.15 Section... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  1. 10 CFR 600.115 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Metric system of measurement. 600.115 Section 600.115..., Hospitals, and Other Nonprofit Organizations Pre-Award Requirements § 600.115 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  2. 32 CFR 32.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Metric system of measurement. 32.15 Section 32..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 32.15 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce, and for...

  3. 22 CFR 145.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Metric system of measurement. 145.15 Section... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  4. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  5. 32 CFR 32.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Metric system of measurement. 32.15 Section 32..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 32.15 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce, and for...

  6. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  7. 36 CFR 1210.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Metric system of measurement... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  8. 41 CFR 105-72.205 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Metric system of... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  9. 22 CFR 145.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Metric system of measurement. 145.15 Section... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  10. 40 CFR 30.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Metric system of measurement. 30.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 30.15 Metric system of...), declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  11. 40 CFR 30.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Metric system of measurement. 30.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 30.15 Metric system of...), declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  12. 45 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Metric system of measurement. 74.15 Section 74.15... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 74.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  13. 36 CFR 1210.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Metric system of measurement... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  14. 22 CFR 145.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Metric system of measurement. 145.15 Section... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  15. 10 CFR 600.115 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Metric system of measurement. 600.115 Section 600.115..., Hospitals, and Other Nonprofit Organizations Pre-Award Requirements § 600.115 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  16. 36 CFR 1210.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Metric system of measurement... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  17. 40 CFR 30.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Metric system of measurement. 30.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 30.15 Metric system of...), declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  18. 41 CFR 105-72.205 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Metric system of... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  19. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  20. 10 CFR 600.115 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Metric system of measurement. 600.115 Section 600.115..., Hospitals, and Other Nonprofit Organizations Pre-Award Requirements § 600.115 Metric system of measurement...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  1. 41 CFR 105-72.205 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Metric system of... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  2. 36 CFR 1210.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Metric system of measurement... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  3. 45 CFR 74.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Metric system of measurement. 74.15 Section 74.15... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Pre-Award Requirements § 74.15 Metric system of measurement. The... that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  4. 41 CFR 105-72.205 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Metric system of... system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and...

  5. 32 CFR 22.530 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Metric system of measurement. 22.530 Section 22... REGULATIONS DoD GRANTS AND AGREEMENTS-AWARD AND ADMINISTRATION National Policy Matters § 22.530 Metric system... CFR, 1991 Comp., p. 343), states that: (1) The metric system is the preferred measurement system for U...

  6. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of...) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The Act...

  7. 32 CFR 22.530 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Metric system of measurement. 22.530 Section 22... REGULATIONS DoD GRANTS AND AGREEMENTS-AWARD AND ADMINISTRATION National Policy Matters § 22.530 Metric system... CFR, 1991 Comp., p. 343), states that: (1) The metric system is the preferred measurement system for U...

  8. Clustervision: Visual Supervision of Unsupervised Clustering.

    PubMed

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
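    Clustervision's central idea, ranking alternative clusterings by quality metrics, can be sketched in a few lines. The tool itself uses five metrics; this minimal sketch (not the authors' code, and with made-up data) ranks three hypothetical candidate labelings of a toy 1-D dataset by a single metric, the silhouette score:

```python
from statistics import mean

def silhouette(points, labels):
    """Mean silhouette score: (b - a) / max(a, b) per point, where a is the
    mean distance to the point's own cluster and b is the mean distance to
    the nearest other cluster. Higher is better (maximum 1.0)."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        own = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
               if l == lab and j != i]
        if not own:                     # singleton cluster: conventionally 0
            scores.append(0.0)
            continue
        a = mean(own)
        b = min(mean(abs(p - q) for q, l in zip(points, labels) if l == other)
                for other in set(labels) if other != lab)
        scores.append((b - a) / max(a, b))
    return mean(scores)

points = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2]      # two obvious groups
candidates = {                               # hypothetical clustering results
    "k=2": [0, 0, 0, 1, 1, 1],
    "k=3": [0, 0, 1, 1, 2, 2],
    "alternating": [0, 1, 0, 1, 0, 1],
}
ranked = sorted(candidates,
                key=lambda name: silhouette(points, candidates[name]),
                reverse=True)                # best clustering first
```

    The well-separated two-cluster labeling ranks first; a real system would do the same over many algorithms and parameter settings.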

  9. Metrication study for large space telescope

    NASA Technical Reports Server (NTRS)

    Creswick, F. A.; Weller, A. E.

    1973-01-01

    Various approaches which could be taken in developing a metric-system design for the Large Space Telescope, considering potential penalties on development cost and time, commonality with other satellite programs, and contribution to national goals for conversion to the metric system of units were investigated. Information on the problems, potential approaches, and impacts of metrication was collected from published reports on previous aerospace-industry metrication-impact studies and through numerous telephone interviews. The recommended approach to LST metrication formulated in this study calls for new components and subsystems to be designed in metric-module dimensions, but U.S. customary practice is allowed where U.S. metric standards and metric components are not available or would be unsuitable. Electrical/electronic-system design, which is presently largely metric, is considered exempt from further metrication. An important guideline is that metric design and fabrication should in no way compromise the effectiveness of the LST equipment.

  10. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper will investigate energy-efficiency for various real-world industrial computed-tomography reconstruction algorithms, both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and improvements in the other metrics were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
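    The three metrics named above are simple functions of runtime and power draw. A minimal sketch, using made-up figures rather than the paper's measurements, comparing a hypothetical CPU and GPU reconstruction of the same problem size:

```python
def energy_joules(power_watts, runtime_s):
    # total energy consumed = average power * runtime
    return power_watts * runtime_s

def perf_per_watt(work_units, power_watts, runtime_s):
    # throughput per watt: (work / time) / power; higher is better
    return (work_units / runtime_s) / power_watts

def energy_delay_product(power_watts, runtime_s):
    # EDP = energy * delay; lower is better
    return energy_joules(power_watts, runtime_s) * runtime_s

# Hypothetical figures for one reconstruction (1 "job") of fixed size
cpu = {"power": 150.0, "time": 100.0}   # watts, seconds
gpu = {"power": 250.0, "time": 10.0}

cpu_edp = energy_delay_product(cpu["power"], cpu["time"])   # 1.5e6 J*s
gpu_edp = energy_delay_product(gpu["power"], gpu["time"])   # 2.5e4 J*s
```

    Even though the hypothetical GPU draws more power, its much shorter runtime wins on all three metrics, which mirrors the paper's qualitative finding.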

  11. Colonoscopy Quality: Metrics and Implementation

    PubMed Central

    Calderwood, Audrey H.; Jacobson, Brian C.

    2013-01-01

    Synopsis Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society of Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
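    The adenoma detection rate mentioned above is a simple proportion. A minimal sketch, with illustrative numbers only:

```python
def adenoma_detection_rate(procedures_with_adenoma, screening_colonoscopies):
    """ADR: fraction of screening colonoscopies in which at least one
    adenoma is detected."""
    if screening_colonoscopies == 0:
        raise ValueError("no screening procedures recorded")
    return procedures_with_adenoma / screening_colonoscopies

# Hypothetical registry extract: 62 of 250 screening exams found >= 1 adenoma
adr = adenoma_detection_rate(62, 250)   # 0.248, i.e. 24.8%
```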

  12. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  13. How Soon Will We Measure in Metric?

    ERIC Educational Resources Information Center

    Weaver, Kenneth F.

    1977-01-01

    A brief history of measurement systems beginning with the Egyptians and Babylonians is given, ending with a discussion of the metric system and its adoption by the United States. Tables of metric prefixes, metric units, and common metric conversions are included. (MN)

  14. A novel onset detection technique for brain-computer interfaces using sound-production related cognitive tasks in simulated-online system

    NASA Astrophysics Data System (ADS)

    Song, YoungJae; Sepulveda, Francisco

    2017-02-01

    Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called the true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which the best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.
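    The abstract does not give the TFP formula, so the sketch below is purely hypothetical: it illustrates the general idea of a single 0-100 score that rewards onsets detected within a timing tolerance and penalizes spurious detections. The function name, the tolerance value, and the combining rule are all assumptions, not the paper's definition:

```python
def tfp_score(true_onsets, detected_onsets, tolerance_s=0.3):
    """Hypothetical true-false-positive style score in [0, 100].

    A detection is a true positive if it falls within `tolerance_s` of an
    as-yet-unmatched true onset; every other detection is a false positive.
    """
    unmatched = list(true_onsets)
    tp = 0
    for d in detected_onsets:
        hit = next((t for t in unmatched if abs(d - t) <= tolerance_s), None)
        if hit is not None:
            unmatched.remove(hit)       # each true onset matched at most once
            tp += 1
    fp = len(detected_onsets) - tp
    tp_rate = tp / len(true_onsets) if true_onsets else 0.0
    fp_rate = fp / len(detected_onsets) if detected_onsets else 0.0
    # one possible combination: perfect detection with no false alarms -> 100
    return 100.0 * tp_rate * (1.0 - fp_rate)

# Onset times in seconds: one detection lands inside the tolerance window,
# one is late, and one matches nothing -> 1 TP, 2 FPs
score = tfp_score([2.0, 5.0, 9.0], [2.1, 5.6, 7.0])
```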

  15. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W

    2016-11-07

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQ m ) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQ m and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
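    SNR and CNR, two of the metrics above, reduce to simple region-of-interest statistics. A minimal sketch with made-up pixel values (not the study's data); here SNR is taken as ROI mean over ROI standard deviation, and CNR as the ROI-mean difference over the background standard deviation, which is one common convention among several:

```python
from statistics import mean, stdev

def snr(roi):
    # signal-to-noise ratio of a nominally uniform region of interest
    return mean(roi) / stdev(roi)

def cnr(roi, background):
    # contrast-to-noise ratio between a feature ROI and background
    return (mean(roi) - mean(background)) / stdev(background)

signal_roi = [100, 102, 98, 101, 99]   # hypothetical pixel values
background = [20, 22, 18, 21, 19]

snr_val = snr(signal_roi)              # ~63.2
cnr_val = cnr(signal_roi, background)  # ~50.6
```

    A calibration like the one described would adjust detector air kerma at each tube voltage until such a metric, measured on the uniform phantom, stays constant.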

  16. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.

    2016-11-01

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  17. The influence of time management skill on the curvilinear relationship between organizational citizenship behavior and task performance.

    PubMed

    Rapp, Adam A; Bachrach, Daniel G; Rapp, Tammy L

    2013-07-01

    In this research we integrate resource allocation and social exchange perspectives to build and test theory focusing on the moderating role of time management skill in the nonmonotonic relationship between organizational citizenship behavior (OCB) and task performance. Results from matching survey data collected from 212 employees and 41 supervisors and from task performance metrics collected several months later indicate that the curvilinear association between OCB and task performance is significantly moderated by employees' time management skill. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  18. NASA education briefs for the classroom. Metrics in space

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.

  19. NASA education briefs for the classroom. Metrics in space

    NASA Astrophysics Data System (ADS)

    The use of metric measurement in space is summarized for classroom use. Advantages of the metric system over the English measurement system are described. Some common metric units are defined, as are special units for astronomical study. International system unit prefixes and a conversion table of metric/English units are presented. Questions and activities for the classroom are recommended.

  20. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different system designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross-cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
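    The suggested benefit/cost ratio can be written down directly. In this minimal sketch the SVM and ESM values and the TRL cost function are invented placeholders; the abstract only specifies the form SVM / [ESM + f(TRL)], with f decreasing in TRL:

```python
def trl_cost(trl, weight=2.0):
    # Hypothetical cost function: lower technology readiness -> higher cost.
    # TRL runs from 1 (basic research) to 9 (flight proven).
    return weight * (9 - trl)

def overall_als_metric(svm, esm, trl):
    """Benefit/cost ratio SVM / [ESM + f(TRL)]; higher is better."""
    return svm / (esm + trl_cost(trl))

# Two hypothetical life-support system candidates: a mature, heavier design
# versus a novel, lighter but less-ready one
mature = overall_als_metric(svm=0.7, esm=12.0, trl=8)   # 0.7 / 14 = 0.05
novel  = overall_als_metric(svm=0.9, esm=8.0,  trl=4)   # 0.9 / 18 = 0.05
```

    With these particular placeholder numbers the two candidates tie, illustrating how the metric trades system value against mass, power, and immaturity in a single figure.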

  1. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.
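    The benefit of normalizing energies without jammer state information can be seen in a toy soft-decision example. The sketch below is an illustration, not the paper's exact metric: it divides each hop's per-tone energies by their sum before combining across hops, so a jammed hop with huge energies cannot dominate the decision:

```python
def combine(hops, normalize):
    """Sum per-tone metrics across frequency hops for one coded symbol.

    hops: list of per-hop energy lists, one noncoherent energy per MFSK tone.
    Returns the index of the tone (symbol) with the largest combined metric.
    """
    n_tones = len(hops[0])
    totals = [0.0] * n_tones
    for energies in hops:
        scale = sum(energies) if normalize else 1.0
        for k, e in enumerate(energies):
            totals[k] += e / scale
    return totals.index(max(totals))

# Tone 0 is transmitted. Hop 1 is clean; hop 2 is hit by a partial-band
# jammer, so both tone energies are large and the wrong tone is larger.
hops = [[9.0, 1.0],     # clean hop: correct tone clearly strongest
        [50.0, 60.0]]   # jammed hop: noise swamps the signal

raw_decision  = combine(hops, normalize=False)  # jammed hop dominates -> tone 1
norm_decision = combine(hops, normalize=True)   # clean hop still counts -> tone 0
```

    This mirrors the qualitative point above: without JSI, per-hop normalization keeps a single jammed hop from overwhelming the soft-decision combiner.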

  2. An overview of the BioCreative 2012 Workshop Track III: interactive text mining task

    PubMed Central

    Arighi, Cecilia N.; Carterette, Ben; Cohen, K. Bretonnel; Krallinger, Martin; Wilbur, W. John; Fey, Petra; Dodson, Robert; Cooper, Laurel; Van Slyke, Ceri E.; Dahdul, Wasila; Mabee, Paula; Li, Donghui; Harris, Bethany; Gillespie, Marc; Jimenez, Silvia; Roberts, Phoebe; Matthews, Lisa; Becker, Kevin; Drabkin, Harold; Bello, Susan; Licata, Luana; Chatr-aryamontri, Andrew; Schaeffer, Mary L.; Park, Julie; Haendel, Melissa; Van Auken, Kimberly; Li, Yuling; Chan, Juancarlos; Muller, Hans-Michael; Cui, Hong; Balhoff, James P.; Chi-Yang Wu, Johnny; Lu, Zhiyong; Wei, Chih-Hsuan; Tudor, Catalina O.; Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar; Cejuela, Juan Miguel; Dubey, Pratibha; Wu, Cathy

    2013-01-01

    In many databases, biocuration primarily involves literature curation, which usually involves retrieving relevant articles, extracting information that will translate into annotations and identifying new incoming literature. As the volume of biological literature increases, the use of text mining to assist in biocuration becomes increasingly relevant. A number of groups have developed tools for text mining from a computer science/linguistics perspective, and there are many initiatives to curate some aspect of biology from the literature. Some biocuration efforts already make use of a text mining tool, but there have not been many broad-based systematic efforts to study which aspects of a text mining tool contribute to its usefulness for a curation task. Here, we report on an effort to bring together text mining tool developers and database biocurators to test the utility and usability of tools. Six text mining systems presenting diverse biocuration tasks participated in a formal evaluation, and appropriate biocurators were recruited for testing. The performance results from this evaluation indicate that some of the systems were able to improve efficiency of curation by speeding up the curation task significantly (∼1.7- to 2.5-fold) over manual curation. In addition, some of the systems were able to improve annotation accuracy when compared with the performance on the manually curated set. In terms of inter-annotator agreement, the factors that contributed to significant differences for some of the systems included the expertise of the biocurator on the given curation task, the inherent difficulty of the curation and attention to annotation guidelines. After the task, annotators were asked to complete a survey to help identify strengths and weaknesses of the various systems. 
The analysis of this survey highlights how important task completion is to the biocurators’ overall experience of a system, regardless of the system’s high score on design, learnability and usability. In addition, strategies to refine the annotation guidelines and systems documentation, to adapt the tools to the needs and query types the end user might have and to evaluate performance in terms of efficiency, user interface, result export and traditional evaluation metrics have been analyzed during this task. This analysis will help to plan for a more intense study in BioCreative IV. PMID:23327936

  3. Space shuttle flying qualities and criteria assessment

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.

    1987-01-01

    Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved: review of the applicability of existing flying quality and flight control system specifications and criteria to the Shuttle; identification of potentially crucial flying quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics; and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities for future space shuttle craft in general. The report covers analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing; modeling of pilot control strategies; verification of vehicle dynamics and pilot control strategy from flight data; a review of various existing or proposed aircraft flying quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; and finally a summary of conclusions and recommendations for developing flying quality criteria and design guides for future Shuttle craft.

  4. Functional MRI of Handwriting Tasks: A Study of Healthy Young Adults Interacting with a Novel Touch-Sensitive Tablet

    PubMed Central

    Karimpoor, Mahta; Churchill, Nathan W.; Tam, Fred; Fischer, Corinne E.; Schweizer, Tom A.; Graham, Simon J.

    2018-01-01

    Handwriting is a complex human activity that engages a blend of cognitive and visual motor skills. Current understanding of the neural correlates of handwriting has largely come from lesion studies of patients with impaired handwriting. Task-based fMRI studies would be useful to supplement this work. To address concerns over ecological validity, we previously developed an fMRI-compatible, computerized tablet system for writing and drawing that includes visual feedback of hand position and an augmented reality display. The purpose of the present work is to use the tablet system in a proof-of-concept study to characterize brain activity associated with clinically relevant handwriting tasks, originally developed to characterize handwriting impairments in Alzheimer's disease patients. As a prelude to undertaking fMRI studies of patients, imaging was performed on twelve young healthy subjects who copied sentences, phone numbers, and grocery lists using the fMRI-compatible tablet. Activation maps for all handwriting tasks consisted of a distributed network of regions in reasonable agreement with previous studies of handwriting performance. In addition, differences in brain activity were observed between the test subcomponents, consistent with the different demands of neural processing for successful task performance, as identified by investigating three quantitative behavioral metrics (writing speed, stylus contact force and stylus in-air time). This study provides baseline behavioral and brain activity results for fMRI studies that adopt this handwriting test to characterize patients with brain impairments. PMID:29487511
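
    A minimal sketch of how the three behavioral metrics named in the abstract (writing speed, stylus contact force, stylus in-air time) could be derived from a stream of tablet samples. The sample format, field order, and contact-force threshold are illustrative assumptions, not the authors' implementation.

```python
import math

def handwriting_metrics(samples, contact_threshold=0.05):
    """samples: list of (t, x, y, force) tuples from a hypothetical tablet stream."""
    path_len = 0.0   # distance written while the stylus is on the surface
    air_time = 0.0   # total time the stylus is lifted
    forces = []
    for (t0, x0, y0, f0), (t1, x1, y1, f1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if f1 > contact_threshold:           # stylus in contact
            path_len += math.hypot(x1 - x0, y1 - y0)
            forces.append(f1)
        else:                                # stylus in the air
            air_time += dt
    total_time = samples[-1][0] - samples[0][0]
    speed = path_len / total_time if total_time else 0.0
    mean_force = sum(forces) / len(forces) if forces else 0.0
    return speed, mean_force, air_time

# toy trace: write a 5-unit stroke in 1 s, then hover for 2 s
samples = [(0, 0, 0, 1.0), (1, 3, 4, 1.0), (2, 3, 4, 0.0), (3, 3, 4, 0.0)]
speed, mean_force, air_time = handwriting_metrics(samples)
```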

  5. Functional MRI of Handwriting Tasks: A Study of Healthy Young Adults Interacting with a Novel Touch-Sensitive Tablet.

    PubMed

    Karimpoor, Mahta; Churchill, Nathan W; Tam, Fred; Fischer, Corinne E; Schweizer, Tom A; Graham, Simon J

    2018-01-01

    Handwriting is a complex human activity that engages a blend of cognitive and visual motor skills. Current understanding of the neural correlates of handwriting has largely come from lesion studies of patients with impaired handwriting. Task-based fMRI studies would be useful to supplement this work. To address concerns over ecological validity, we previously developed an fMRI-compatible, computerized tablet system for writing and drawing that includes visual feedback of hand position and an augmented reality display. The purpose of the present work is to use the tablet system in a proof-of-concept study to characterize brain activity associated with clinically relevant handwriting tasks, originally developed to characterize handwriting impairments in Alzheimer's disease patients. As a prelude to undertaking fMRI studies of patients, imaging was performed on twelve young healthy subjects who copied sentences, phone numbers, and grocery lists using the fMRI-compatible tablet. Activation maps for all handwriting tasks consisted of a distributed network of regions in reasonable agreement with previous studies of handwriting performance. In addition, differences in brain activity were observed between the test subcomponents, consistent with the different demands of neural processing for successful task performance, as identified by investigating three quantitative behavioral metrics (writing speed, stylus contact force and stylus in-air time). This study provides baseline behavioral and brain activity results for fMRI studies that adopt this handwriting test to characterize patients with brain impairments.

  6. 15 CFR 1170.4 - Guidelines.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... agency shall: (a) Establish plans and dates for use of the metric system in procurements, grants and... barriers to transition to the metric system; (f) Consider cost effects of metric use in setting agency... metric system of measurement through educational information and guidance and in agency publications; (i...

  7. 15 CFR 1170.4 - Guidelines.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... agency shall: (a) Establish plans and dates for use of the metric system in procurements, grants and... barriers to transition to the metric system; (f) Consider cost effects of metric use in setting agency... metric system of measurement through educational information and guidance and in agency publications; (i...

  8. 15 CFR 1170.4 - Guidelines.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... agency shall: (a) Establish plans and dates for use of the metric system in procurements, grants and... barriers to transition to the metric system; (f) Consider cost effects of metric use in setting agency... metric system of measurement through educational information and guidance and in agency publications; (i...

  9. Adapting to the 30-degree visual perspective by emulating the angled laparoscope: a simple and low-cost solution for basic surgical training.

    PubMed

    Daniel, Lorias Espinoza; Tapia, Fernando Montes; Arturo, Minor Martínez; Ricardo, Ordorica Flores

    2014-12-01

    The ability to handle and adapt to the visual perspectives generated by angled laparoscopes is crucial for skilled laparoscopic surgery. However, control of the visual work space depends on the ability of the camera operator, who is often not the most experienced member of the surgical team. Here, we present a simple, low-cost option for surgical training that challenges the learner with static and dynamic visual perspectives at 30 degrees using a system that emulates the angled laparoscope. The system was developed using a low-cost camera and readily available materials. Nine participants undertook 3 tasks to test spatial adaptation to the static and dynamic visual perspectives at 30 degrees. Completing each task to a predefined satisfactory level ensured precision of execution. Associated metrics (time and error rate) were recorded, and the performance of participants was determined. A total of 450 repetitions were performed by 9 residents at various stages of training. All the tasks were performed with a visual perspective of 30 degrees using the system. Junior residents were more proficient than senior residents. This system is a viable and low-cost alternative for developing, in junior residents, the basic psychomotor skills necessary for handling and adapting to visual perspectives of 30 degrees, without depending on a laparoscopic tower. More advanced skills may then be acquired by other means, such as in the operating theater or through clinical experience.

  10. Leveraging Paraphrase Labels to Extract Synonyms from Twitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoniak, Maria A.; Bell, Eric B.; Xia, Fei

    2015-05-18

    We present an approach for automatically learning synonyms from a paraphrase corpus of tweets. This work shows improvement on the task of paraphrase detection when we substitute our extracted synonyms into the training set. The synonyms are learned by using chunks from a shallow parse to create candidate synonyms and their context windows, and the synonyms are incorporated into a paraphrase detection system that uses machine translation metrics as features for a classifier. We demonstrate a 2.29% improvement in F1 when we train and test on the paraphrase training set, providing better coverage than previous systems, which shows the potential power of synonyms that are representative of a specific topic.
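
    The shape of the pipeline described in the abstract can be sketched as follows: substitute learned synonyms into one tweet, then compute simple overlap features (unigram precision, recall, F1) in the spirit of machine translation metrics, for consumption by a downstream classifier. The toy synonym table and the reduced feature set are illustrative assumptions; the actual system uses full MT metrics.

```python
def substitute(tokens, synonyms):
    """Replace each token by its learned synonym, if one exists."""
    return [synonyms.get(tok, tok) for tok in tokens]

def overlap_features(ref, hyp):
    """Unigram precision, recall, and F1 between two token lists."""
    ref_set, hyp_set = set(ref), set(hyp)
    common = len(ref_set & hyp_set)
    p = common / len(hyp_set) if hyp_set else 0.0
    r = common / len(ref_set) if ref_set else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

synonyms = {"auto": "car"}          # toy, topic-specific synonym entry
a = "the car broke down".split()
b = "the auto broke down".split()
before = overlap_features(a, b)                      # raw tweets
after = overlap_features(a, substitute(b, synonyms)) # after substitution
```

    Substitution raises the overlap scores for true paraphrases whose surface wording differs, which is the mechanism behind the reported F1 gain.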

  11. Feasibility of Turing-Style Tests for Autonomous Aerial Vehicle "Intelligence"

    NASA Technical Reports Server (NTRS)

    Young, Larry A.

    2007-01-01

    A new approach is suggested to define and evaluate key metrics of autonomous aerial vehicle performance. This approach entails the conceptual definition of a "Turing Test" for UAVs. Such a "UAV Turing test" would be conducted by means of mission simulations and/or tailored flight demonstrations of vehicles under the guidance of their autonomous system software. These autonomous vehicle mission simulations and flight demonstrations would also have to be benchmarked against missions "flown" with pilots/human-operators in the loop. In turn, scoring criteria for such testing could be based upon both quantitative mission success metrics (unique to each mission) and analog "handling quality" metrics similar to the well-known Cooper-Harper pilot ratings used for manned aircraft. Autonomous aerial vehicles would be considered to have successfully passed this "UAV Turing Test" if the aggregate mission success metrics and handling qualities for the autonomous aerial vehicle matched or exceeded the equivalent metrics for missions conducted with pilots/human-operators in the loop. Alternatively, an independent, knowledgeable observer could provide the "UAV Turing Test" ratings of whether a vehicle is autonomous or "piloted." This observer ideally would, in the more sophisticated mission simulations, also have the enhanced capability of being able to override the scripted mission scenario and instigate failure modes and changes of flight profile/plans. If a majority of mission tasks are rated as "piloted" by the observer, when in reality the vehicle/simulation is fully- or semi-autonomously controlled, then the vehicle/simulation "passes" the "UAV Turing Test." In this regard, the second "UAV Turing Test" approach is more consistent with Turing's original "imitation game" proposal.
The overall feasibility, and important considerations and limitations, of such an approach for judging/evaluating autonomous aerial vehicle "intelligence" will be discussed from a theoretical perspective.
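
    The second scoring rule described above reduces to a simple majority vote, sketched below. The labels and data structure are illustrative assumptions.

```python
def uav_turing_pass(observer_ratings, actually_autonomous=True):
    """observer_ratings: one 'piloted' / 'autonomous' rating per mission task.

    The vehicle 'passes' only if it was actually under autonomous control
    yet the observer rated a majority of tasks as 'piloted'.
    """
    if not actually_autonomous:
        return False                 # the test only applies to autonomous runs
    piloted = sum(1 for r in observer_ratings if r == "piloted")
    return piloted > len(observer_ratings) / 2

ratings = ["piloted", "piloted", "autonomous", "piloted"]
```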

  12. Usefulness of virtual reality in assessment of medical student laparoscopic skill.

    PubMed

    Matzke, Josh; Ziegler, Craig; Martin, Kevin; Crawford, Stuart; Sutton, Erica

    2017-05-01

    This study evaluates whether undergraduate medical trainees' laparoscopic skills acquisition could be assessed using a virtual reality (VR) simulator and how the resultant metrics correlate with performance of Fundamentals of Laparoscopic Surgery (FLS) tasks. Our hypothesis is that the VR simulator metrics will correlate with passing results in a competency-based curriculum (FLS). Twenty-eight fourth-year medical students applying for surgical residency were recruited to participate in a VR training curriculum comprising camera navigation, hand-eye coordination, and FLS tasks: circle cutting (CC), ligating loop (LL), peg transfer (PT), and intracorporeal knot tying (IKT). Students were given 8 wk to achieve proficiency goals, after which they were observed performing FLS tasks. The ability of the VR simulator to detect penalties in each of the FLS tasks and correlations of time taken to complete tasks are reported. Twenty-five students trained in all components of the curriculum. All students were proficient in camera navigation and hand-eye coordination tasks. Proficiency was achieved in CC, LL, PT, and IKT by 21, 19, 23, and one student, respectively. VR simulation showed high specificity for predicting zero penalties on the observed CC, LL, and PT tasks (80%, 75%, and 80%, respectively). VR can be used to assess medical students' acquisition of laparoscopic skills. The absence of penalties in the simulator reasonably predicts the absence of penalties in all FLS skills, except IKT. The skills acquired by trainees can be used in residency for further monitoring of progress toward proficiency. Copyright © 2016 Elsevier Inc. All rights reserved.
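
    The specificity figures quoted above can be read as follows: among students with zero penalties on the observed FLS task (the "negatives"), the fraction the simulator also flagged as penalty-free. A minimal sketch with illustrative counts, chosen only to reproduce an 80% figure:

```python
def specificity(sim_flagged_penalty, observed_penalty):
    """Parallel boolean lists, one entry per student: True = penalty."""
    tn = sum(1 for s, o in zip(sim_flagged_penalty, observed_penalty)
             if not s and not o)   # simulator and observation both clean
    fp = sum(1 for s, o in zip(sim_flagged_penalty, observed_penalty)
             if s and not o)       # simulator flagged, observation clean
    return tn / (tn + fp) if (tn + fp) else 0.0

sim = [False, False, False, False, True]    # simulator: penalty predicted?
obs = [False, False, False, False, False]   # observed FLS task: penalty?
spec = specificity(sim, obs)
```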

  13. USSR Report, Agriculture, No. 1392

    DTIC Science & Technology

    1983-07-26

    from more than 200,000 hectares: 60,000 metric tons of hay and 130,000 metric tons of haylage have been procured. (1100 GMT) Kuban farmers have...total of about 1 million tons of hay, almost 2.5 million tons of haylage has been laid in and 200,000 tons of vitaminwsf [as printed] grass meal has...is being given to deliveries of haylage . In Krasnodar Kray the percentage of task fulfill- ment for this kind of fodder is now twice as high as for

  14. Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2018-05-01

    Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
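
    The first step above can be sketched concretely for the statistical case: estimate pairwise mutual information between discrete random variables from samples and use it as the weighted adjacency matrix of the information graph. The plug-in estimator below is a simplifying assumption; identical variables get 1 bit of mutual information, independent ones get 0, so edges weight the flow of information between subsystems.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2(p_joint / (p_x * p_y)), with counts folded in
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

def information_graph(variables):
    """Weighted adjacency matrix with MI between each pair of subsystems."""
    k = len(variables)
    adj = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            adj[i][j] = adj[j][i] = mutual_information(variables[i],
                                                       variables[j])
    return adj
```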

  15. Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2018-06-01

    Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.

  16. Information architecture for a patient-specific dashboard in head and neck tumor boards.

    PubMed

    Oeser, Alexander; Gaebel, Jan; Dietz, Andreas; Wiegand, Susanne; Oeltze-Jafra, Steffen

    2018-03-28

    Overcoming the flaws of current data management conditions in head and neck oncology could enable integrated information systems specifically tailored to the needs of medical experts in a tumor board meeting. Clinical dashboards are a promising method to assist various aspects of the decision-making process in such cognitively demanding scenarios. However, in order to provide extensive and intuitive assistance to the participating physicians, the design and development of such a system have to be user-centric. To accomplish this task, conceptual methods need to be performed prior to the technical development and integration stages. We conducted a qualitative survey including eight clinical experts with different levels of expertise in the field of head and neck oncology. Following the principles of information architecture, the survey focused on the identification and causal interconnection of the metrics necessary for information assessment in the tumor board. Based on the feedback from the clinical experts, we constructed a detailed map of the required information items for a tumor board dashboard in head and neck oncology. Furthermore, we identified three distinct groups of metrics (patient, disease and therapy metrics) as well as specific recommendations for their structural and graphical implementation. By using information architecture, we were able to gather valuable feedback about the requirements and cognitive processes of the tumor board members. Those insights have helped us to develop a dashboard application that closely adapts to the specified needs and characteristics, and thus is primarily user-centric.

  17. Recommendations of the wwPDB NMR Validation Task Force

    PubMed Central

    Montelione, Gaetano T.; Nilges, Michael; Bax, Ad; Güntert, Peter; Herrmann, Torsten; Richardson, Jane S.; Schwieters, Charles; Vranken, Wim F.; Vuister, Geerten W.; Wishart, David S.; Berman, Helen M.; Kleywegt, Gerard J.; Markley, John L.

    2013-01-01

    As methods for analysis of biomolecular structure and dynamics using nuclear magnetic resonance spectroscopy (NMR) continue to advance, the resulting 3D structures, chemical shifts, and other NMR data are broadly impacting biology, chemistry, and medicine. Structure model assessment is a critical area of NMR methods development, and is an essential component of the process of making these structures accessible and useful to the wider scientific community. For these reasons, the Worldwide Protein Data Bank (wwPDB) has convened an NMR Validation Task Force (NMR-VTF) to work with the wwPDB partners in developing metrics and policies for biomolecular NMR data harvesting, structure representation, and structure quality assessment. This paper summarizes the recommendations of the NMR-VTF, and lays the groundwork for future work in developing standards and metrics for biomolecular NMR structure quality assessment. PMID:24010715

  18. Prenatal ethanol exposure impairs temporal ordering behaviours in young adult rats.

    PubMed

    Patten, Anna R; Sawchuk, Scott; Wortman, Ryan C; Brocardo, Patricia S; Gil-Mohapel, Joana; Christie, Brian R

    2016-02-15

    Prenatal ethanol exposure (PNEE) causes significant deficits in functional (i.e., synaptic) plasticity in the dentate gyrus (DG) and cornu ammonis (CA) hippocampal sub-regions of young adult male rats. Previous research has shown that in the DG, these deficits are not apparent in age-matched PNEE females. This study aimed to expand these findings and determine if PNEE induces deficits in hippocampal-dependent behaviours in both male and female young adult rats (PND 60). The metric change behavioural test examines DG-dependent deficits by determining whether an animal can detect a metric change between two identical objects. The temporal order behavioural test is thought to rely in part on the CA sub-region of the hippocampus and determines whether an animal will spend more time exploring an object that it has not seen for a larger temporal window as compared to an object that it has seen more recently. Using the liquid diet model of FASD (where 6.6% (v/v) ethanol is provided through a liquid diet consumed ad libitum throughout the entire gestation), we found that PNEE causes a significant impairment in the temporal order task, while no deficits in the DG-dependent metric change task were observed. There were no significant differences between males and females for either task. These results indicate that behaviours relying partially on the CA-region may be more affected by PNEE than those that rely on the DG. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization was introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  20. Metric Measures and the Consumer. Reprint from FDA CONSUMER, Dec. 1975-Jan. 1976.

    ERIC Educational Resources Information Center

    Food and Drug Administration (DHEW), Washington, DC.

    Advantages of the metric system for the consumer are discussed. Basic metric units are described, then methods of comparison shopping when items are marked in metric units are explained. The effect of the change to the metric system on packaging and labelling requirements is discussed. (DT)

  1. Behavioral and Neural Correlates of Executive Function: Interplay between Inhibition and Updating Processes.

    PubMed

    Kim, Na Young; Wittenberg, Ellen; Nam, Chang S

    2017-01-01

    This study investigated the interaction between two executive function processes, inhibition and updating, through analyses of behavioral, neurophysiological, and effective connectivity metrics. Although many studies have focused on behavioral effects of executive function processes individually, few studies have examined the dynamic causal interactions between these two functions. A total of twenty participants from a local university performed a dual task combining flanker and n-back experimental paradigms, and completed the Operation Span Task designed to measure working memory capacity. We found that both behavioral (accuracy and reaction time) and neurophysiological (P300 amplitude and alpha band power) metrics on the inhibition task (i.e., flanker task) were influenced by the updating load (n-back level) and modulated by working memory capacity. Using independent component analysis, source localization (DIPFIT), and Granger causality analysis of the EEG time-series data, the present study demonstrated that manipulation of cognitive demand in a dual executive function task influenced the causal neural network. We compared connectivity across three updating loads (n-back levels) and found that experimental manipulation of working memory load enhanced causal connectivity of a large-scale neurocognitive network. This network contains the prefrontal and parietal cortices, which are associated with the inhibition and updating executive function processes. This study has potential applications in human performance modeling and the assessment of mental workload, such as the design of training materials and interfaces for those performing complex multitasking under stress.

  2. The three-class ideal observer for univariate normal data: Decision variable and ROC surface properties

    PubMed Central

    Edwards, Darrin C.; Metz, Charles E.

    2012-01-01

    Although a fully general extension of ROC analysis to classification tasks with more than two classes has yet to be developed, the potential benefits to be gained from a practical performance evaluation methodology for classification tasks with three classes have motivated a number of research groups to propose methods based on constrained or simplified observer or data models. Here we consider an ideal observer in a task with underlying data drawn from three univariate normal distributions. We investigate the behavior of the resulting ideal observer’s decision variables and ROC surface. In particular, we show that the pair of ideal observer decision variables is constrained to a parametric curve in two-dimensional likelihood ratio space, and that the decision boundary line segments used by the ideal observer can intersect this curve in at most six places. From this, we further show that the resulting ROC surface has at most four degrees of freedom at any point, and not the five that would be required, in general, for a surface in a six-dimensional space to be non-degenerate. In light of the difficulties we have previously pointed out in generalizing the well-known area under the ROC curve performance metric to tasks with three or more classes, the problem of developing a suitable and fully general performance metric for classification tasks with three or more classes remains unsolved. PMID:23162165
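
    The constraint on the decision-variable pair can be illustrated numerically: for univariate normal class data, both likelihood ratios (p2/p1, p3/p1) are functions of the single observation x, so the pair sweeps out a one-dimensional parametric curve in the two-dimensional likelihood-ratio plane. The class means and variances below are illustrative; in the equal-variance case the curve satisfies an exact algebraic relation between the two ratios.

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def lr_pair(x, params):
    """Decision-variable pair (p2/p1, p3/p1) for one observation x.

    params: [(mu1, s1), (mu2, s2), (mu3, s3)] for the three classes.
    """
    p1 = normal_pdf(x, *params[0])
    p2 = normal_pdf(x, *params[1])
    p3 = normal_pdf(x, *params[2])
    return p2 / p1, p3 / p1

# equal-variance example: means 0, 1, 2; the pair traces a curve as x varies
params = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
curve = [lr_pair(x / 10.0, params) for x in range(-30, 31)]
```

    For these parameters LR2 = exp(x - 1/2) and LR3 = exp(2x - 2), so every point on the curve obeys LR3 = LR2² / e, a one-degree-of-freedom locus rather than a free point in the plane.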

  3. Metrication report to the Congress

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The major NASA metrication activity of 1988 concerned the Space Station. Although the metric system was the baseline measurement system for preliminary design studies, solicitations for final design and development of the Space Station Freedom requested use of the inch-pound system because of concerns with cost impact and potential safety hazards. Under that policy, however, use of the metric system would be permitted through waivers where its use was appropriate. Late in 1987, several Department of Defense decisions were made to increase commitment to the metric system, thereby broadening the potential base of metric involvement in U.S. industry. A re-evaluation of Space Station Freedom units-of-measure policy was therefore initiated in January 1988.

  4. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and thus ostensibly of visual attention distribution in the cockpit, for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.
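
    A minimal sketch of the saccade-length index described above: segment a gaze trace into saccades with a simple velocity threshold and average their amplitudes. The sampling interval, threshold value, and units are illustrative assumptions, not the study's algorithm.

```python
import math

def mean_saccade_length(gaze, dt=0.01, velocity_threshold=100.0):
    """gaze: list of (x, y) positions in degrees, sampled every dt seconds."""
    lengths, current = [], 0.0
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        if step / dt > velocity_threshold:   # fast movement: inside a saccade
            current += step
        elif current > 0.0:                  # slow again: saccade just ended
            lengths.append(current)
            current = 0.0
    if current > 0.0:                        # trace ended mid-saccade
        lengths.append(current)
    return sum(lengths) / len(lengths) if lengths else 0.0

# toy trace: two fixations separated by a 5-degree and a 3-degree saccade
gaze = [(0, 0), (0, 0), (5, 0), (5, 0), (5, 3), (5, 3)]
```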

  5. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
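
    A worked sketch of the suggested benefit/cost ratio [SVM + TRL]/ESM, with simple weighting and scaling. All weights, scales, and candidate numbers are illustrative assumptions, not values from the ALS program.

```python
def system_value(svm_scores, weights):
    """Weighted sum of SVM factor scores, each on an assumed 0-10 scale."""
    return sum(w * s for w, s in zip(weights, svm_scores))

def als_metric(svm_scores, trl, esm_kg, weights, esm_scale=1000.0):
    """Benefit/cost ratio: (SVM + TRL) over scaled equivalent system mass."""
    benefit = system_value(svm_scores, weights) + trl
    return benefit / (esm_kg / esm_scale)

# factor order: safety, maintainability, reliability, performance,
#               cross-cutting technology, commercialization potential
weights = [0.3, 0.15, 0.2, 0.2, 0.1, 0.05]
candidate_a = als_metric([8, 6, 7, 9, 5, 4], trl=6, esm_kg=4000,
                         weights=weights)
candidate_b = als_metric([6, 6, 6, 6, 6, 6], trl=8, esm_kg=6000,
                         weights=weights)
```

    Candidate A scores (7.2 + 6)/4 = 3.3 against candidate B's (6 + 8)/6 ≈ 2.33, so the lighter, higher-value design wins despite its lower TRL, which is the trade-off the multi-parameter metric is meant to expose.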

  6. Graph Theory-Based Brain Connectivity for Automatic Classification of Multiple Sclerosis Clinical Courses.

    PubMed

    Kocevar, Gabriel; Stamile, Claudio; Hannoun, Salem; Cotton, François; Vukusic, Sandra; Durand-Dubief, Françoise; Sappey-Marinier, Dominique

    2016-01-01

    Purpose: In this work, we introduce a method to classify Multiple Sclerosis (MS) patients into four clinical profiles using structural connectivity information. For the first time, we try to solve this question in a fully automated way using a computer-based method. The main goal is to show how the combination of graph-derived metrics with machine learning techniques constitutes a powerful tool for a better characterization and classification of MS clinical profiles. Materials and Methods: Sixty-four MS patients [12 Clinically Isolated Syndrome (CIS), 24 Relapsing Remitting (RR), 24 Secondary Progressive (SP), and 17 Primary Progressive (PP)] along with 26 healthy controls (HC) underwent MR examination. T1 and diffusion tensor imaging (DTI) were used to obtain structural connectivity matrices for each subject. Global graph metrics, such as density and modularity, were estimated and compared between subjects' groups. These metrics were further used to classify patients using a tuned Support Vector Machine (SVM) combined with a Radial Basis Function (RBF) kernel. Results: When comparing MS patients to HC subjects, a greater assortativity, transitivity, and characteristic path length as well as a lower global efficiency were found. Using all graph metrics, the best F-Measures (91.8, 91.8, 75.6, and 70.6%) were obtained for the binary (HC-CIS, CIS-RR, RR-PP) and multi-class (CIS-RR-SP) classification tasks, respectively. When using only one graph metric, the best F-Measures (83.6, 88.9, and 70.7%) were achieved for modularity with the previous binary classification tasks. Conclusion: Based on a simple DTI acquisition associated with structural brain connectivity analysis, this automatic method allowed an accurate classification of different MS patients' clinical profiles.
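
    A minimal sketch of the classification step described above: global graph metrics as features, an RBF kernel for nonlinear separation. To stay dependency-free, a kernel perceptron stands in for the study's tuned SVM (it shares the RBF-kernel idea but is not the authors' method), and the metric values are synthetic, not the study's data.

    ```python
    # Sketch: classify subjects from global graph metrics with an RBF kernel.
    # A kernel perceptron substitutes for the tuned SVM; data are synthetic.
    import numpy as np

    def rbf_kernel(A, B, gamma=0.3):
        # K[i, j] = exp(-gamma * ||a_i - b_j||^2)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def train_kernel_perceptron(X, y, gamma=0.3, epochs=50):
        # y in {-1, +1}; alpha[i] counts the mistakes made on point i.
        K = rbf_kernel(X, X, gamma)
        alpha = np.zeros(len(X))
        for _ in range(epochs):
            for i in range(len(X)):
                if np.sign((alpha * y) @ K[:, i] + 1e-12) != y[i]:
                    alpha[i] += 1.0
        return alpha

    def predict(X_train, y, alpha, X_new, gamma=0.3):
        return np.sign((alpha * y) @ rbf_kernel(X_train, X_new, gamma))

    rng = np.random.default_rng(0)
    # Columns: density, modularity, global efficiency (illustrative metrics).
    hc  = rng.normal([0.30, 0.45, 0.60], 0.02, size=(26, 3))  # healthy controls
    cis = rng.normal([0.27, 0.50, 0.55], 0.02, size=(12, 3))  # CIS patients
    X = np.vstack([hc, cis])
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each metric
    y = np.array([-1] * 26 + [1] * 12)

    alpha = train_kernel_perceptron(X, y)
    accuracy = (predict(X, y, alpha, X) == y).mean()
    print("training accuracy:", accuracy)
    ```

    A real replication would use a grid-searched SVM (C, gamma) with cross-validation, as the abstract's "tuned" qualifier implies.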

  7. Millimeter wave sensor requirements for maritime small craft identification

    NASA Astrophysics Data System (ADS)

    Krapels, Keith; Driggers, Ronald G.; Garcia, Jose; Boettcher, Evelyn; Prather, Dennis; Schuetz, Chrisopher; Samluk, Jesse; Stein, Lee; Kiser, William; Visnansky, Andrew; Grata, Jeremy; Wikner, David; Harris, Russ

    2009-09-01

    Passive millimeter wave (mmW) imagers have improved in terms of resolution, sensitivity, and frame rate. Currently, the Office of Naval Research (ONR), along with the US Army Research, Development and Engineering Command, Communications Electronics Research Development and Engineering Center (RDECOM CERDEC) Night Vision and Electronic Sensor Directorate (NVESD), are investigating the current state of the art of mmW imaging systems. The focus of this study was the field performance of mmW imaging systems for the task of small watercraft/boat identification. First, mmW signatures were collected for a set of eight small watercraft at five different aspects, during daylight hours, over a 48-hour period in the spring of 2008. Target characteristics were measured, and the characteristic dimension, signatures, and root sum squared of the target's temperature difference (RSSΔT) were tabulated. Then an eight-alternative forced-choice (8AFC) human perception experiment was developed and conducted at NVESD. The ability of observers to discriminate between small watercraft was quantified. Next, the task difficulty criterion, V50, was quantified by applying these data to NVESD's target acquisition models using the Targeting Task Performance (TTP) metric. These parameters can be used to evaluate sensor field performance for Anti-Terrorism/Force Protection (AT/FP) and navigation tasks for the U.S. Navy, as well as for the design and evaluation of passive mmW imaging sensors for both the U.S. Navy and U.S. Coast Guard.

  8. Going Metric: Is It for You? A Planning Model for Small Manufacturing Companies.

    ERIC Educational Resources Information Center

    Beek, C.; And Others

    This booklet is designed to aid small manufacturing companies in ascertaining the meaning of going metric for their unique circumstances and to guide them in making a smooth conversion to the metric system. First is a brief discussion of what the law says about metrics and what the metric system is. Then what is involved in going metric is…

  9. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection.
Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
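
    The core idea of a discrimination-based quality metric can be sketched as follows: pool the pixel differences between images A and B into a detectability index, then map that index to a predicted probability that an observer reports a difference. The pooling rule and psychometric function below are generic textbook choices, not the calibrated NASA Ames models.

    ```python
    # Sketch of a discrimination-style image quality metric: predicted
    # probability that an observer reports that image A differs from image B.
    # Pooling exponent, threshold, and slope are illustrative assumptions.
    import numpy as np

    def discrimination_probability(a, b, beta=2.0, threshold=0.05, slope=2.0):
        diff = np.abs(a - b)
        # Minkowski pooling of pixel differences (beta=2 gives the RMS error).
        d = (diff ** beta).mean() ** (1.0 / beta)
        # Weibull-style psychometric function: 0 for identical images, -> 1 as d grows.
        return 1.0 - np.exp(-((d / threshold) ** slope))

    a = np.zeros((8, 8))
    b = a.copy()
    b[4, 4] = 0.5                 # a small, localized "intrusion"

    print(discrimination_probability(a, a))   # identical images -> 0.0
    print(discrimination_probability(a, b))
    ```

    Note how this simple version embodies the limitation the abstract raises: pooling over the whole image implicitly assumes the observer knows where to look, so spatial uncertainty about the intrusion's location is not penalized.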

  10. Human Performance Optimization Metrics: Consensus Findings, Gaps, and Recommendations for Future Research.

    PubMed

    Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A

    2015-11-01

    Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus for operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the needed research to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics).
These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts, and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.

  11. Planning Following Stroke: A Relational Complexity Approach Using the Tower of London

    PubMed Central

    Andrews, Glenda; Halford, Graeme S.; Chappell, Mark; Maujean, Annick; Shum, David H. K.

    2014-01-01

    Planning on the 4-disk version of the Tower of London (TOL4) was examined in stroke patients and unimpaired controls. Overall TOL4 solution scores indicated impaired planning in the frontal stroke but not non-frontal stroke patients. Consistent with the claim that processing the relations between current states, intermediate states, and goal states is a key process in planning, the domain-general relational complexity metric was a good indicator of the experienced difficulty of TOL4 problems. The relational complexity metric shared variance with task-specific metrics of moves to solution and search depth. Frontal stroke patients showed impaired planning compared to controls on problems at all three complexity levels, but at only two of the three levels of moves to solution, search depth and goal ambiguity. Non-frontal stroke patients showed impaired planning only on the most difficult quaternary-relational and high search depth problems. An independent measure of relational processing (viz., Latin square task) predicted TOL4 solution scores after controlling for stroke status and location, and executive processing (Trail Making Test). The findings suggest that planning involves a domain-general capacity for relational processing that depends on the frontal brain regions. PMID:25566042

  12. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    NASA Astrophysics Data System (ADS)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims to enrich the content of spatial entities and to attach spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding descriptive text chunks is of broad significance. However, traditional matching methods often rely entirely on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese neural network that learns a similarity metric directly from the textual attributes of the spatial entity and the text chunk. Low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model pulls matching pair vectors as close together as possible while, through supervised learning, pushing mismatched pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
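
    The matching idea can be sketched without the deep encoders: given embeddings of a text chunk and a spatial entity, cosine similarity scores the pair, and a contrastive-style objective pulls matches together and pushes mismatches apart. The vectors and margin below are illustrative assumptions, not the DSMLM architecture.

    ```python
    # Sketch of the Siamese matching step: cosine similarity between a text-chunk
    # embedding and a spatial-entity embedding, with a contrastive-style loss.
    # The toy vectors and margin are hypothetical; a real DSMLM learns the
    # encoders end to end.
    import numpy as np

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def contrastive_loss(sim, is_match, margin=0.5):
        # Pull matching pairs toward similarity 1; push mismatches below the margin.
        return (1.0 - sim) if is_match else max(0.0, sim - margin)

    text_vec   = np.array([0.9, 0.1, 0.0])   # embedding of a text chunk
    entity_pos = np.array([0.8, 0.2, 0.1])   # matching spatial entity
    entity_neg = np.array([0.0, 0.1, 0.9])   # unrelated spatial entity

    s_pos = cosine_similarity(text_vec, entity_pos)
    s_neg = cosine_similarity(text_vec, entity_neg)
    print(f"match sim={s_pos:.3f} loss={contrastive_loss(s_pos, True):.3f}")
    print(f"mismatch sim={s_neg:.3f} loss={contrastive_loss(s_neg, False):.3f}")
    ```

    During training, gradients of this loss with respect to the encoder weights are what move matching pairs closer and mismatched pairs apart.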

  13. Economic Metrics for Commercial Reusable Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)

    2000-01-01

    The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." 
In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing baseline. Still, economic metrics for technology development in these Programs and projects remain fairly straightforward, being based on reductions in acquisition and operating costs of the Systems. One of the most challenging requirements that NASA levies on its Programs is to plan for the commercialization of the developed technology. Some NASA Programs are created for the express purpose of developing technology for a particular industrial sector, such as aviation or space transportation, in financial partnership with that sector. With industrial investment, another set of goals, constraints and expectations are levied on the technology program. Economic benefit metrics then expand beyond cost and cost savings to include the marketability, profit, and investment return requirements of the private sector. Commercial investment criteria include low risk, potential for high return, and strategic alignment with existing product lines. These corporate criteria derive from top-level strategic plans and investment goals, which rank high among the most proprietary types of information in any business. As a result, top-level economic goals and objectives that industry partners bring to cooperative programs cannot usually be brought into technical processes, such as systems engineering, that are worked collaboratively between Industry and Government. In spite of these handicaps, the top-level economic goals and objectives of a joint technology program can be crafted in such a way that they accurately reflect the fiscal benefits from both Industry and Government perspectives. Valid economic metrics can then be designed that can track progress toward these goals and objectives, while maintaining the confidentiality necessary for the competitive process.

  14. Integrated primary care: an inclusive three-world view through process metrics and empirical discrimination.

    PubMed

    Miller, Benjamin F; Mendenhall, Tai J; Malik, Alan D

    2009-03-01

    Integrating behavioral health services within the primary care setting drives higher levels of collaborative care, and is proving to be an essential part of the solution for our struggling American healthcare system. However, justification for implementing and sustaining integrated and collaborative care has shown to be a formidable task. In an attempt to move beyond conflicting terminology found in the literature, we delineate terms and suggest a standardized nomenclature. Further, we maintain that addressing the three principal worlds of healthcare (clinical, operational, financial) is requisite in making sense of the spectrum of available implementations and ultimately transitioning collaborative care into the mainstream. Using a model that deconstructs process metrics into factors/barriers and generalizes behavioral health provider roles into major categories provides a framework to empirically discriminate between implementations across specific settings. This approach offers practical guidelines for care sites implementing integrated and collaborative care and defines a research framework to produce the evidence required for the aforementioned clinical, operational and financial worlds of this important movement.

  15. Worldwide Protein Data Bank validation information: usage and trends.

    PubMed

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  16. Worldwide Protein Data Bank validation information: usage and trends

    PubMed Central

    Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika

    2018-01-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics. PMID:29533231

  17. Measuring Nursing Value from the Electronic Health Record.

    PubMed

    Welton, John M; Harper, Ellen M

    2016-01-01

    We report the findings of a big data nursing value expert group made up of 14 members of the nursing informatics, leadership, academic and research communities within the United States tasked with 1. Defining nursing value, 2. Developing a common data model and metrics for nursing care value, and 3. Developing nursing business intelligence tools using the nursing value data set. This work is a component of the Big Data and Nursing Knowledge Development conference series sponsored by the University of Minnesota School of Nursing. The panel met by conference calls for fourteen 1.5-hour sessions, a total of 21 hours of interaction, from August 2014 through May 2015. Primary deliverables from the big data expert group were: development and publication of definitions and metrics for nursing value; construction of a common data model to extract key data from electronic health records; and measures of nursing costs and finance to provide a basis for developing nursing business intelligence and analysis systems.

  18. A region-based segmentation of tumour from brain CT images using nonlinear support vector machine classifier.

    PubMed

    Nanthagopal, A Padma; Rajamony, R Sukanesh

    2012-07-01

    The proposed system provides new textural information for segmenting tumours efficiently, accurately, and with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracies are evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the Dice metric. From the analysis and performance measures such as segmentation accuracy and Dice metric, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
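
    The Dice overlap measure reported above is straightforward to compute from two binary masks, 2|A∩B| / (|A| + |B|); the masks below are toy examples, not the study's CT data.

    ```python
    # Dice overlap between a segmented mask and the ground-truth mask:
    # dice = 2 * |A intersect B| / (|A| + |B|).  Toy masks for illustration.
    import numpy as np

    def dice(seg, truth):
        seg, truth = seg.astype(bool), truth.astype(bool)
        intersection = np.logical_and(seg, truth).sum()
        return 2.0 * intersection / (seg.sum() + truth.sum())

    truth = np.zeros((10, 10), dtype=int)
    truth[2:6, 2:6] = 1            # 16-pixel ground-truth "tumour"
    seg = np.zeros((10, 10), dtype=int)
    seg[3:7, 2:6] = 1              # segmentation shifted one row down

    print(dice(seg, truth))        # 2*12 / (16 + 16) = 0.75
    ```

    A Dice score of 1.0 means perfect overlap with the radiologist's labelling; 0.0 means the masks are disjoint.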

  19. Coal upgrading program for Usti nad Labem, Czech Republic: Task 8.3. Topical report, October 1994--August 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, B.C.; Musich, M.A.

    1995-10-01

    Coal has been a major energy source in the Czech Republic given its large coal reserves, especially brown coal and lignite (almost 4000 million metric tons) and smaller reserves of hard, mainly bituminous, coal (over 800 million tons). Political changes since 1989 have led to the reassessment of the role of coal in the future economy as increasing environmental regulations affect the use of the high-sulfur and high-ash brown coal and lignite as well as the high-ash hard coal. Already, the production of brown coal has declined from 87 million metric tons per year in 1989 to 67 million metric tons in 1993 and is projected to decrease further to 50 million metric tons of brown coal per year by the year 2000. As a means of effectively utilizing its indigenous coal resources, the Czech Republic is upgrading various technologies, and these are available at different stages of development, demonstration, and commercialization. The purpose of this review is to provide a database of information on applicable technologies that reduce the impact of gaseous (SO{sub 2}, NO{sub x}, volatile organic compounds) and particulate emissions from the combustion of coal in district and residential heating systems.

  20. Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications.

    PubMed

    Le, Duc V; Nguyen, Thuong; Scholten, Hans; Havinga, Paul J M

    2017-11-29

    Energy consumption is a critical performance and user experience metric when developing mobile sensing applications, especially with the significantly growing number of sensing applications in recent years. As proposed a decade ago when mobile applications were still not popular and most mobile operating systems were single-tasking, conventional sensing paradigms such as opportunistic sensing and participatory sensing do not explore the relationship among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy, while maintaining equivalent performance to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models in terms of the completion probability and estimated energy consumption. The quantitative evaluation results using various parameters obtained from real datasets indicate that symbiotic sensing performs better than opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring.
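
    The energy argument above can be illustrated with a toy model: if n applications each need the same sensing task, conventional paradigms pay the acquisition cost n times, while symbiotic sensing pays it once plus a small per-application sharing overhead. The cost values and overhead below are hypothetical units, not the paper's measured models.

    ```python
    # Toy energy comparison between sensing paradigms.  One shared acquisition
    # versus n independent acquisitions; all costs are hypothetical units.

    def conventional_energy(n_apps, e_acquire):
        # Opportunistic/participatory sensing: each app acquires on its own.
        return n_apps * e_acquire

    def symbiotic_energy(n_apps, e_acquire, e_share=0.1):
        # Symbiotic sensing: acquire once, share the result with each app.
        return e_acquire + n_apps * e_share

    e_gps = 10.0  # hypothetical cost of one GPS fix
    for n in (1, 5, 10):
        print(n, conventional_energy(n, e_gps), symbiotic_energy(n, e_gps))
    ```

    The gap widens with the number of concurrent applications, which is why the paper finds the largest benefit in large-scale applications such as city-wide monitoring.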

  1. Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications

    PubMed Central

    Scholten, Hans; Havinga, Paul J. M.

    2017-01-01

    Energy consumption is a critical performance and user experience metric when developing mobile sensing applications, especially with the significantly growing number of sensing applications in recent years. As proposed a decade ago when mobile applications were still not popular and most mobile operating systems were single-tasking, conventional sensing paradigms such as opportunistic sensing and participatory sensing do not explore the relationship among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy, while maintaining equivalent performance to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models in terms of the completion probability and estimated energy consumption. The quantitative evaluation results using various parameters obtained from real datasets indicate that symbiotic sensing performs better than opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring. PMID:29186037

  2. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Anderson, M. R.

    1985-01-01

    Optimal-control-theoretic modeling and frequency-domain analysis constitute the methodology proposed to evaluate analytically the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability are then evaluated using frequency-domain techniques. When these techniques were applied to flight-test data for thirty-two highly augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.

  3. Demand Response Resource Quantification with Detailed Building Energy Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Elaine; Horsey, Henry; Merket, Noel

    Demand response is a broad suite of technologies that enables changes in electrical load operations in support of power system reliability and efficiency. Although demand response is not a new concept, there is new appetite for comprehensively evaluating its technical potential in the context of renewable energy integration. The complexity of demand response makes this task difficult: we present new methods for capturing the heterogeneity of potential responses from buildings, their time-varying nature, and metrics such as thermal comfort that help quantify the likely acceptability of specific demand response actions. Computed with an automated software framework, the methods are scalable.

  4. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. 
The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  5. Going Metric...PAL (Programmed Assigned Learning).

    ERIC Educational Resources Information Center

    Wallace, Jesse D.

    This 41-page programed booklet is intended for use by students and adults. It introduces the metric units for length, area, volume, and temperature through a series of questions and answers. The advantages of the metric system over the English system are discussed. Conversion factors are introduced and several applications of the metric system in…

  6. The Metric System: Preparing Your Students for the Big Change

    ERIC Educational Resources Information Center

    Social Education, 1974

    1974-01-01

    Information sources to assist teachers and students in making a smooth and efficient transition to the metric system include an annotated bibliography of literature dealing with the history and present state of measurement in the U.S., with the metric system generally, and with problems involved in metric conversion. (Author/KM)

  7. What Research Says to the Teacher: Metric Education.

    ERIC Educational Resources Information Center

    Goldbecker, Sheralyn S.

    How measurement systems developed is briefly reviewed, followed by comments on the international conversion to the metric system and a lengthier discussion of the history of the metric controversy in the U.S. Statements made by supporters of the customary and metric systems are listed. The role of education is detailed in terms of teacher…

  8. Rice by Weight, Other Produce by Bulk, and Snared Iguanas at So Much Per One. A Talk on Measurement Standards and on Metric Conversion.

    ERIC Educational Resources Information Center

    Allen, Harold Don

    This script for a short radio broadcast on measurement standards and metric conversion begins by tracing the rise of the metric system in the international marketplace. Metric units are identified and briefly explained. Arguments for conversion to metric measures are presented. The history of the development and acceptance of the metric system is…

  9. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  10. Visualizing curved spacetime

    NASA Astrophysics Data System (ADS)

    Jonsson, Rickard M.

    2005-03-01

    I present a way to visualize the concept of curved spacetime. The result is a curved surface with local coordinate systems (Minkowski systems) living on it, giving the local directions of space and time. Relative to these systems, special relativity holds. The method can be used to visualize gravitational time dilation, the horizon of black holes, and cosmological models. The idea underlying the illustrations is first to specify a field of timelike four-velocities uμ. Then, at every point, one performs a coordinate transformation to a local Minkowski system comoving with the given four-velocity. In the local system, the sign of the spatial part of the metric is flipped to create a new metric of Euclidean signature. The new positive definite metric, called the absolute metric, can be covariantly related to the original Lorentzian metric. For the special case of a two-dimensional original metric, the absolute metric may be embedded in three-dimensional Euclidean space as a curved surface.
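The covariant relation between the absolute metric and the original Lorentzian metric can be made explicit. Assuming signature (+,-,-,-) and a normalized timelike four-velocity field (these conventions are assumptions here, not quoted from the record), flipping the sign of the spatial part of the metric in the comoving Minkowski frame corresponds to:

```latex
% Assuming signature (+,-,-,-) and u^\mu u_\mu = 1, the absolute
% (positive definite) metric is covariantly related to the original
% Lorentzian metric by
\bar{g}_{\mu\nu} = 2\,u_\mu u_\nu - g_{\mu\nu}
% Check in the comoving local Minkowski frame: g = \mathrm{diag}(1,-1,-1,-1)
% and u = (1,0,0,0) give \bar{g} = \mathrm{diag}(1,1,1,1),
% i.e. the spatial signs are flipped and the result is Euclidean.
```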

  11. MO-D-213-06: Quantitative Image Quality Metrics Are for Physicists, Not Radiologists: How to Communicate to Your Radiologists Using Their Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szczykutowicz, T; Rubert, N; Ranallo, F

    Purpose: A framework for explaining differences in image quality to non-technical audiences in medical imaging is needed. Currently, this task is something that is learned “on the job.” The lack of a formal methodology for communicating optimal acquisition parameters into the clinic effectively undermines many technological advances. As a community, medical physicists need to be held responsible not only for advancing image science, but also for ensuring its proper use in the clinic. This work outlines a framework that bridges the gap between the results of quantitative image quality metrics like detectability, MTF, and NPS and their effect on specific anatomical structures present in diagnostic imaging tasks. Methods: Specific structures of clinical importance were identified for a body, an extremity, a chest, and a temporal bone protocol. Using these structures, quantitative metrics were used to identify the parameter space that should yield optimal image quality, constrained within the confines of clinical logistics and dose considerations. The reading room workflow for presenting the proposed changes for imaging each of these structures is presented. The workflow consists of displaying images for physician review consisting of different combinations of acquisition parameters guided by quantitative metrics. Examples of using detectability index, MTF, NPS, noise, and noise non-uniformity are provided. During review, the physician was forced to judge the image quality solely on those features they need for diagnosis, not on the overall “look” of the image. Results: We found that in many cases, use of this framework settled disagreements between physicians. Once forced to judge images on the ability to detect specific structures, inter-reader agreement was obtained. Conclusion: This framework will provide consulting, research/industrial, or in-house physicists with clinically relevant imaging tasks to guide reading room image review. This framework avoids use of the overall “look” or “feel” to dictate acquisition parameter selection. Equipment grants: GE Healthcare.

  12. Value of Frequency Domain Resting-State Functional Magnetic Resonance Imaging Metrics Amplitude of Low-Frequency Fluctuation and Fractional Amplitude of Low-Frequency Fluctuation in the Assessment of Brain Tumor-Induced Neurovascular Uncoupling.

    PubMed

    Agarwal, Shruti; Lu, Hanzhang; Pillai, Jay J

    2017-08-01

    The aim of this study was to explore whether the phenomenon of brain tumor-related neurovascular uncoupling (NVU) may also affect the resting-state blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (rsfMRI) frequency domain metrics amplitude of low-frequency fluctuation (ALFF) and fractional ALFF (fALFF). Twelve de novo brain tumor patients, who underwent clinical fMRI examinations, including task-based fMRI (tbfMRI) and rsfMRI, were included in this Institutional Review Board-approved study. Each patient displayed decreased/absent tbfMRI activation in the primary ipsilesional (IL) sensorimotor cortex in the absence of a corresponding motor deficit or suboptimal task performance, consistent with NVU. Z-score maps for the motor tasks were obtained from general linear model analysis (reflecting motor activation vs. rest). Seed-based correlation analysis (SCA) maps of the sensorimotor network, ALFF, and fALFF were calculated from rsfMRI data. Precentral and postcentral gyri in the contralesional (CL) and IL hemispheres were parcellated using an automated anatomical labeling template for each patient. Region of interest (ROI) analysis was performed on four maps: tbfMRI, SCA, ALFF, and fALFF. Voxel values in the CL and IL ROIs of each map were divided by the corresponding global mean of ALFF and fALFF in the cortical brain tissue. Group analysis revealed significantly decreased IL ALFF (p = 0.02) and fALFF (p = 0.03) metrics compared with CL ROIs, consistent with similar findings of significantly decreased IL BOLD signal for tbfMRI (p = 0.0005) and SCA maps (p = 0.0004). The frequency domain metrics ALFF and fALFF may be markers of lesion-induced NVU in rsfMRI, similar to previously reported alterations in tbfMRI activation and SCA-derived resting-state functional connectivity maps.
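The frequency-domain metrics above have standard definitions: ALFF is the mean amplitude of the BOLD spectrum within a low-frequency band, conventionally 0.01-0.08 Hz, and fALFF is that band amplitude as a fraction of the amplitude over the whole measurable spectrum. A minimal single-voxel sketch follows; the band limits and normalization are common conventions, not values taken from this study.

```python
import numpy as np

def alff_falff(ts, tr, band=(0.01, 0.08)):
    """ALFF and fALFF for one voxel time series.

    ts   : 1-D BOLD time series
    tr   : repetition time in seconds
    band : low-frequency band of interest in Hz (conventional default)
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                      # remove the DC component
    amp = np.abs(np.fft.rfft(ts))            # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].mean()               # mean amplitude in the band
    falff = amp[in_band].sum() / amp.sum()   # fraction of total amplitude
    return alff, falff
```

A slow oscillation inside the band yields a fALFF near 1, while a faster one (still below the Nyquist rate 1/(2·TR)) yields a fALFF near 0.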

  13. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
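The record above notes that SPARQL queries over RDF annotations compute the performance metrics, so no programming is needed. For intuition, the same precision/recall/F1 computation can be sketched as plain set arithmetic over (document, mutation) annotation pairs; this is an illustrative equivalent, not the project's actual queries, and the mutation names used below are hypothetical examples.

```python
def prf(gold, predicted):
    """Precision, recall, and F1 over sets of (document, mutation) pairs.

    gold      : manually curated annotations
    predicted : annotations emitted by a text mining system
    """
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```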

  14. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  15. Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures

    DTIC Science & Technology

    2016-12-01

    …2017 was approved in August 2016. The supplemental project has two primary objectives: • Recommend cognitive assessment tools/approaches (toolkit) from… strategies for use in future military-relevant environments. The supplemental project has two primary deliverables: • Proposed Toolkit of cognitive… Task 6: Vet final report and cognitive performance recommendations through Steering Committee. Task 7: Provide Toolkit Report (Months 8-12). Task 8…

  16. Eye Metrics: An Alternative Vigilance Detector for Military Cyber Operators

    DTIC Science & Technology

    2013-10-01

    …cyber operator task. The significant change of oculometric measurements indicates that oculometrics could be used to detect changes in sustained… a significant change over time (p < .05) during the vigilance task. The significant change of oculometric measurements indicates that oculometrics… percentage of eye closure); it is the most widely used measure of real-time alertness in this industry (Dinges & Grace, 1998; Mallis et al., 1999)…

  17. Metric Education; A Position Paper for Vocational, Technical and Adult Education.

    ERIC Educational Resources Information Center

    Cooper, Gloria S.; And Others

    Part of an Office of Education three-year project on metric education, the position paper is intended to alert and prepare teachers, curriculum developers, and administrators in vocational, technical, and adult education to the change over to the metric system. The five chapters cover issues in metric education, what the metric system is all…

  18. Orion Flight Performance Design Trades

    NASA Technical Reports Server (NTRS)

    Jackson, Mark C.; Straube, Timothy

    2010-01-01

    A significant portion of the Orion pre-PDR design effort has focused on balancing mass with performance. High level performance metrics include abort success rates, lunar surface coverage, landing accuracy and touchdown loads. These metrics may be converted to parameters that affect mass, such as ballast for stabilizing the abort vehicle, propellant to achieve increased lunar coverage or extended missions, or ballast to increase the lift-to-drag ratio to improve entry and landing performance. The Orion Flight Dynamics team was tasked to perform analyses to evaluate many of these trades. These analyses not only provide insight into the physics of each particular trade but, in aggregate, they illustrate the processes used by Orion to balance performance and mass margins, and thereby make design decisions. Lessons learned can be gleaned from a review of these studies which will be useful to other spacecraft system designers. These lessons fall into several categories, including: appropriate application of Monte Carlo analysis in design trades, managing margin in a highly mass-constrained environment, and the use of requirements to balance margin between subsystems and components. This paper provides a review of some of the trades and analyses conducted by the Flight Dynamics team, as well as systems engineering lessons learned.

  19. Discrimination of acoustic communication signals by grasshoppers (Chorthippus biguttulus): temporal resolution, temporal integration, and the impact of intrinsic noise.

    PubMed

    Ronacher, Bernhard; Wohlgemuth, Sandra; Vogel, Astrid; Krahe, Rüdiger

    2008-08-01

    A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated. (c) 2008 APA, all rights reserved
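The abstract does not name the spike-train metric used. A standard choice for this kind of discrimination analysis is the Victor-Purpura distance, in which a cost parameter q sets the temporal resolution: small q effectively counts spikes, large q demands near-coincidence. The sketch below illustrates that general approach, not necessarily the authors' exact metric.

```python
def victor_purpura(s1, s2, q):
    """Victor-Purpura spike-train distance (edit distance on spike times).

    s1, s2 : sorted sequences of spike times
    q      : cost per unit time of shifting a spike; inserting or
             deleting a spike costs 1.
    """
    n, m = len(s1), len(s2)
    # d[i][j] = distance between the first i spikes of s1 and first j of s2
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)          # delete all i spikes
    for j in range(1, m + 1):
        d[0][j] = float(j)          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                   # delete
                d[i][j - 1] + 1,                                   # insert
                d[i - 1][j - 1] + q * abs(s1[i - 1] - s2[j - 1]),  # shift
            )
    return d[n][m]
```

Sweeping q and asking at which value classification of the responses is best gives exactly the "optimal temporal resolution" clue mentioned in the abstract.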

  20. Metric System versus Anthropomorphic Units. A Bicentennial Coup?

    ERIC Educational Resources Information Center

    Dalke, John L.

    1976-01-01

    A brief history of the use of the English system of measurement is provided together with a discussion of the United States conversion to the metric system. United States industries which now use the metric system are listed. (SD)

  1. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  2. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  3. Real-time performance monitoring and management system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2007-06-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  4. Cognitive Styles, Demographic Attributes, Task Performance and Affective Experiences: An Empirical Investigation into Astrophysics Data System (ADS) Core Users

    NASA Astrophysics Data System (ADS)

    Tong, Rong

    As a primary digital library portal for astrophysics researchers, the SAO/NASA ADS (Astrophysics Data System) 2.0 interface features several visualization tools such as Author Network and Metrics. This research study involves 20 ADS long-term users who participated in a usability and eye tracking research session. Participants first completed a cognitive test, and then performed five tasks in ADS 2.0 where they explored its multiple visualization tools. Results show that over half of the participants were Imagers and half of the participants were Analytic. Cognitive styles were found to have significant impacts on several efficiency-based measures. Analytic-oriented participants were observed to spend less time on web pages and apps and to make fewer web page changes than less-Analytic participants in performing common tasks, whereas AI (Analytic-Imagery) participants also completed their five tasks faster than non-AI participants. Meanwhile, self-identified Imagery participants were found to be more efficient in their task completion through multiple measures including total time on task, number of mouse clicks, and number of query revisions made. Imagery scores were negatively associated with frequency of confusion and the observed counts of being surprised. Compared to those who did not claim to be a visual person, self-identified Imagery participants were observed to show significantly less frequent frustration and hesitation during their task performance. Both demographic variables and past user experiences were found to correlate with task performance; query revision also correlated with multiple time-based measurements. Considered an indicator of efficiency, query revisions were found to correlate negatively with the rate of completing tasks with ease, and positively with several time-based efficiency measures, the rate of completing tasks with some difficulty, and the frequency of frustration. 
    These results provide rich insights into the cognitive styles of ADS' core users, the impact of such styles and demographic attributes on their task performance, their affective and cognitive experiences, and their interaction behaviors while using the visualization component of ADS 2.0, and can subsequently contribute to the design of bibliographic retrieval systems for scientists.

  5. Gestures for Picture Archiving and Communication Systems (PACS) operation in the operating room: Is there any standard?

    PubMed

    Madapana, Naveen; Gonzalez, Glebys; Rodgers, Richard; Zhang, Lingsong; Wachs, Juan P

    2018-01-01

    Gestural interfaces allow accessing and manipulating Electronic Medical Records (EMR) in hospitals while maintaining a completely sterile environment. Particularly, in the Operating Room (OR), these interfaces enable surgeons to browse a Picture Archiving and Communication System (PACS) without the need of delegating functions to the surgical staff. Existing gesture-based medical interfaces rely on a suboptimal, arbitrarily small set of gestures that are mapped to a few commands available in PACS software. The objective of this work is to discuss a method to determine the most suitable set of gestures based on surgeons' acceptability. To achieve this goal, the paper introduces two key innovations: (a) a novel methodology to incorporate gestures' semantic properties into the agreement analysis, and (b) a new agreement metric to determine the most suitable gesture set for a PACS. Three neurosurgical diagnostic tasks were conducted by nine neurosurgeons. The set of commands and gesture lexicons were determined using a Wizard of Oz paradigm. The gestures were decomposed into a set of 55 semantic properties based on the motion trajectory, orientation, and pose of the surgeons' hands, and their ground-truth values were manually annotated. Finally, a new agreement metric was developed, using the known Jaccard similarity to measure consensus between users over a gesture set. A set of 34 PACS commands was found to be a sufficient set of actions for PACS manipulation. In addition, it was found that there is a level of agreement of 0.29 among the surgeons over the gestures found. Two statistical tests, a paired t-test and a Mann-Whitney-Wilcoxon test, were conducted between the proposed metric and the traditional agreement metric. It was found that the agreement values computed using the former metric are significantly higher (p < 0.001) for both tests. 
    This study reveals that the level of agreement among surgeons over the best gestures for PACS operation is higher than previously reported (0.29 vs. 0.13). This observation is based on the fact that the agreement focuses on main features of the gestures rather than the gestures themselves. The level of agreement is not very high, yet it indicates a majority preference and is better than using gestures based on authoritarian or arbitrary approaches. The methods described in this paper provide a guiding framework for the design of future gesture-based PACS systems for the OR.
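As an illustration of the kind of agreement analysis described above (not the authors' exact formula), a command's agreement can be scored as the mean pairwise Jaccard similarity of the semantic-property sets into which different users' proposed gestures decompose:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity |a & b| / |a | b| of two property sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def command_agreement(property_sets):
    """Mean pairwise Jaccard similarity across users for one command.

    property_sets : one set of semantic properties per user's gesture
    (an illustrative reconstruction of the agreement idea above).
    """
    pairs = list(combinations(property_sets, 2))
    if not pairs:
        return 1.0          # a single user trivially agrees with itself
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Scoring agreement over shared properties rather than over identical gestures is what lets partially similar gestures contribute, which matches the paper's observation that property-level agreement (0.29) exceeds gesture-level agreement (0.13).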

  6. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore, and through a P300-based BCI to let the surrogate robot recognize the subject's favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  7. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore, and through a P300-based BCI to let the surrogate robot recognize the subject's favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.

  8. d-Neighborhood system and generalized F-contraction in dislocated metric space.

    PubMed

    Kumari, P Sumati; Zoto, Kastriot; Panthi, Dinesh

    2015-01-01

    This paper gives an answer to Question 1.1 posed by Hitzler (Generalized metrics and topology in logic programming semantics, 2001) by means of “topological aspects of d-metric space with d-neighborhood system”. We have investigated the topological aspects of a d-neighborhood system obtained from a dislocated metric space (simply, d-metric space), which has useful applications in the semantic analysis of logic programming. Furthermore, we have generalized the notion of F-contraction in the view of d-metric spaces and investigated the uniqueness of fixed points and coincidence points of such mappings.

  9. Skin exposure to aliphatic polyisocyanates in the auto body repair and refinishing industry: III. A personal exposure algorithm.

    PubMed

    Liu, Youcheng; Stowe, Meredith H; Bello, Dhimiter; Sparer, Judy; Gore, Rebecca J; Cullen, Mark R; Redlich, Carrie A; Woskie, Susan R

    2009-01-01

    Isocyanate skin exposure may play an important role in sensitization and the development of isocyanate asthma, but such exposures are frequently intermittent and difficult to assess. Exposure metrics are needed to better estimate isocyanate skin exposures. The goal of this study was to develop a semiquantitative algorithm to estimate personal skin exposures in auto body shop workers using task-based skin exposure data and daily work diaries. The relationship between skin and respiratory exposure metrics was also evaluated. The development and results of respiratory exposure metrics were previously reported. Using the task-based data obtained with a colorimetric skin exposure indicator and a daily work diary, we developed a skin exposure algorithm to estimate a skin exposure index (SEI) for each worker. This algorithm considered the type of personal protective equipment (PPE) used, the percentage of skin area covered by PPE and skin exposures without and underneath the PPE. The SEI was summed across the day (daily SEI) and survey week (weekly average SEI) for each worker, compared among the job title categories and also compared with the respiratory exposure metrics. A total of 893 person-days was calculated for 232 workers (49 painters, 118 technicians and 65 office workers) from 33 auto body shops. The median (10th-90th percentile, maximum) daily SEI was 0 (0-0, 1.0), 0 (0-1.9, 4.8) and 1.6 (0-3.5, 6.1) and weekly average SEI was 0 (0-0.0, 0.7), 0.3 (0-1.6, 4.2) and 1.9 (0.4-3.0, 3.6) for office workers, technicians and painters, respectively, which were significantly different (P < 0.0001). The median (10th-90th percentile, maximum) daily SEI was 0 (0-2.4, 6.1) and weekly average SEI was 0.2 (0-2.3, 4.2) for all workers. 
A relatively weak positive Spearman correlation was found between daily SEI and time-weighted average (TWA) respiratory exposure metrics (microg NCO m(-3)) (r = 0.380, n = 893, P < 0.0001) and between weekly SEI and TWA respiratory exposure metrics (r = 0.482, n = 232, P < 0.0001). The skin exposure algorithm developed in this study provides task-based personal daily and weekly average skin exposure indices that are adjusted for the use of PPE. These skin exposure indices can be used to assess isocyanate exposure-response relationships.
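The PPE-adjusted index described above can be sketched as a small function; the weighting scheme, field names, and default values below are illustrative assumptions, not the algorithm from the paper.

```python
def task_sei(exposure_score, ppe_coverage, under_ppe_score=0.0):
    """Hypothetical per-task skin exposure index.

    exposure_score:  colorimetric indicator score on uncovered skin
    ppe_coverage:    fraction of relevant skin area covered by PPE (0..1)
    under_ppe_score: indicator score for skin underneath the PPE
    """
    return exposure_score * (1.0 - ppe_coverage) + under_ppe_score * ppe_coverage

def daily_sei(tasks):
    """Sum task-level indices across one work day (one dict per task)."""
    return sum(task_sei(**t) for t in tasks)
```

A weekly average SEI would then be the mean of the daily values over the survey week.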

  10. Design of a Physiology-Sensitive VR-Based Social Communication Platform for Children With Autism.

    PubMed

    Kuriakose, Selvia; Lahiri, Uttama

    2017-08-01

Individuals with autism are often characterized by impairments in communication, reciprocal social interaction and explicit expression of their affective states. In conventional techniques, a therapist adjusts the intervention paradigm by monitoring the affective state (e.g., anxiety) of these individuals for effective floor-time therapy. Conventional techniques, though powerful, are observation-based and face resource limitations. Technology-assisted systems can provide a quantitative, individualized rehabilitation platform. Presently available systems, however, adapt primarily to aspects of one's task performance alone, which restricts individualization. Specifically, these systems are not sensitive to one's anxiety. Our work seeks to bridge this gap by developing a novel VR-based interactive system with Anxiety-Sensitive adaptive technology. Specifically, such a system is capable of objectively identifying and quantifying one's anxiety level from real-time biomarkers, along with performance metrics. In turn, it can adaptively respond in an individualized manner to foster improved social communication skills. In our present research, we have used Virtual Reality (VR) to design a proof-of-concept application that exposes participants to social tasks of varying challenge. Results of a preliminary usability study indicate the potential of our VR-based Anxiety-Sensitive system to foster improved task performance, thereby serving as a potent complementary tool in the hands of therapists.

  11. Toward an optimisation technique for dynamically monitored environment

    NASA Astrophysics Data System (ADS)

    Shurrab, Orabi M.

    2016-10-01

The data fusion community has introduced multiple procedures for situational assessment to facilitate timely responses to emerging situations. More directly, the process refinement level of the Joint Directors of Laboratories (JDL) model is a meta-process that assesses and improves the data fusion task during real-time operation. In other words, it is an optimisation technique to verify overall data fusion performance and enhance it toward the top goals of the decision-making resources. This paper discusses the theoretical concept of prioritisation, where the analyst team is required to keep up to date with a dynamically changing environment spanning domains such as air, sea, land, space and cyberspace. Furthermore, it demonstrates with an illustrative example how various tracking activities are ranked simultaneously into a predetermined order. Specifically, it presents a modelling scheme for a case-study-based scenario in which a real-time system reports different classes of prioritised events, followed by a performance metric for evaluating the prioritisation process in the situational awareness (SWA) domain. The proposed performance metric has been designed and evaluated using an analytical approach. The modelling scheme represents the situational awareness system outputs mathematically, in the form of a list of activities. This allowed the evaluation process to conduct a rigorous analysis of the prioritisation process, despite any constraints related to a domain-specific configuration. 
After conducting three levels of assessment over three separate scenarios, the Prioritisation Capability Score (PCS) provided an appropriate scoring scheme for different ranking instances. Indeed, from the data fusion perspective, the proposed metric assessed real-time system performance adequately, and it is capable of conducting a verification process to direct the operator's attention to any issue concerning the prioritisation capability of the situational awareness domain.
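The abstract does not specify the PCS formula; as a stand-in, a generic rank-agreement score between a reported ordering and an ideal priority ordering can be sketched. The pairwise-concordance definition below is an assumption for illustration only.

```python
from itertools import combinations

def prioritisation_score(reported, ideal):
    """Fraction of activity pairs whose relative order matches the ideal ranking."""
    pos = {a: i for i, a in enumerate(reported)}
    ipos = {a: i for i, a in enumerate(ideal)}
    pairs = list(combinations(ideal, 2))
    concordant = sum((pos[a] < pos[b]) == (ipos[a] < ipos[b]) for a, b in pairs)
    return concordant / len(pairs)
```

A perfect ranking scores 1.0, a fully reversed ranking 0.0, with partial agreement in between.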

  12. Quantifying MCMC exploration of phylogenetic tree space.

    PubMed

    Whidden, Chris; Matsen, Frederick A

    2015-05-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
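The SPR-based graph analysis requires specialized tree machinery, but the underlying question — how close the empirical topology distribution of a chain is to a reference distribution — can be illustrated with a simpler diagnostic than the authors' SPR approach: the total variation distance between two sets of sampled topologies.

```python
from collections import Counter

def topology_tv_distance(samples_a, samples_b):
    """Total variation distance between two empirical topology distributions.

    samples_a, samples_b: lists of hashable topology identifiers
    (e.g., canonical newick strings) drawn from two MCMC chains.
    """
    ca, cb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    keys = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[k] / na - cb[k] / nb) for k in keys)
```

A value near 0 suggests the chains sample similar topology distributions; a value near 1 suggests they are stuck on different peaks.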

  13. Electro-Optic Identification Research Program

    DTIC Science & Technology

    2002-04-01

Electro-optic identification (EOID) sensors provide photographic quality images that can be used to identify mine-like contacts provided by long...tasks such as validating existing electro-optic models, development of performance metrics, and development of computer aided identification and

  14. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detecting and estimating system parameters, such as a baggage imaging system performing threat detection while generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric that considers both detection and estimation information and then presents the user with the collection of possible optimal outcomes. In this paper, a graphical analysis called the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves that allow a system-optimization decision to be made based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user can choose any desired figure of merit for detection information and estimation information; the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated by evaluating a wide variety of systems and finding the optimal system that maximizes a figure of merit, or found as an upper bound of the information from a collection of systems. These two methods allow the user to choose the method of calculation that best fits the constraints of their actual system.
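Taking an upper bound over a collection of systems amounts to keeping the non-dominated points in the (detection information, estimation information) plane. A minimal sketch, assuming each system is summarized by a single pair of figures of merit:

```python
def edit_frontier(systems):
    """Keep the (detection_info, estimation_info) pairs that no other
    system dominates in both measures simultaneously."""
    frontier = []
    for i, (d, e) in enumerate(systems):
        dominated = any(d2 >= d and e2 >= e and (d2, e2) != (d, e)
                        for j, (d2, e2) in enumerate(systems) if j != i)
        if not dominated:
            frontier.append((d, e))
    return sorted(set(frontier))
```

The returned points trace an EDIT-style trade-off curve: moving along it trades detection information against estimation information.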

  15. 15 CFR 273.3 - General policy.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other... the use of the metric system in their procurements, grants and other business-related activities... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE METRIC CONVERSION POLICY FOR FEDERAL AGENCIES METRIC...

  16. Evaluation of a Metric Booklet as a Supplement to Teaching the Metric System to Undergraduate Non-Science Majors.

    ERIC Educational Resources Information Center

    Exum, Kenith Gene

    Examined is the effectiveness of a method of teaching the metric system using the booklet, Metric Supplement to Mathematics, in combination with a physical science textbook. The participants in the study were randomly selected undergraduates in a non-science oriented program of study. Instruments used included the Metric Supplement to Mathematics…

  17. What is the private sector? Understanding private provision in the health systems of low-income and middle-income countries.

    PubMed

    Mackintosh, Maureen; Channon, Amos; Karan, Anup; Selvaraj, Sakthivel; Cavagnero, Eleonora; Zhao, Hongwen

    2016-08-06

    Private health care in low-income and middle-income countries is very extensive and very heterogeneous, ranging from itinerant medicine sellers, through millions of independent practitioners-both unlicensed and licensed-to corporate hospital chains and large private insurers. Policies for universal health coverage (UHC) must address this complex private sector. However, no agreed measures exist to assess the scale and scope of the private health sector in these countries, and policy makers tasked with managing and regulating mixed health systems struggle to identify the key features of their private sectors. In this report, we propose a set of metrics, drawn from existing data that can form a starting point for policy makers to identify the structure and dynamics of private provision in their particular mixed health systems; that is, to identify the consequences of specific structures, the drivers of change, and levers available to improve efficiency and outcomes. The central message is that private sectors cannot be understood except within their context of mixed health systems since private and public sectors interact. We develop an illustrative and partial country typology, using the metrics and other country information, to illustrate how the scale and operation of the public sector can shape the private sector's structure and behaviour, and vice versa. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Metrication: An economic wake-up call for US industry

    NASA Astrophysics Data System (ADS)

    Carver, G. P.

    1993-03-01

    As the international standard of measurement, the metric system is one key to success in the global marketplace. International standards have become an important factor in international economic competition. Non-metric products are becoming increasingly unacceptable in world markets that favor metric products. Procurement is the primary federal tool for encouraging and helping U.S. industry to convert voluntarily to the metric system. Besides the perceived unwillingness of the customer, certain regulatory language, and certain legal definitions in some states, there are no major impediments to conversion of the remaining non-metric industries to metric usage. Instead, there are good reasons for changing, including an opportunity to rethink many industry standards and to take advantage of size standardization. Also, when the remaining industries adopt the metric system, they will come into conformance with federal agencies engaged in similar activities.

  19. Characterization of medical students recall of factual knowledge using learning objects and repeated testing in a novel e-learning system.

    PubMed

    Taveira-Gomes, Tiago; Prado-Costa, Rui; Severo, Milton; Ferreira, Maria Amélia

    2015-01-24

Spaced repetition and test-enhanced learning are two methodologies that boost knowledge retention. ALERT STUDENT is a platform that allows creation and distribution of Learning Objects named flashcards, and provides insight into student judgments-of-learning through a metric called 'recall accuracy'. This study aims to understand how the spaced-repetition and test-enhanced learning features provided by the platform affect recall accuracy, and to characterize the effect that students, flashcards and repetitions exert on this measurement. Three spaced laboratory sessions (s0, s1 and s2) were conducted with n=96 medical students. The intervention employed a study task and a quiz task that consisted of mentally answering open-ended questions about each flashcard and grading recall accuracy. Students were randomized into study-quiz and quiz groups. On s0 both groups performed the quiz task. On s1 and s2, the study-quiz group performed the study task followed by the quiz task, whereas the quiz group only performed the quiz task. We measured differences in recall accuracy between groups/sessions, its variance components, and the G-coefficients for the flashcard component. At s0 there were no differences in recall accuracy between groups. The study-quiz group achieved a significant increase in recall accuracy that was superior to the quiz group in s1 and s2. In the study-quiz group, increases in recall accuracy were mainly due to session factors, followed by flashcard factors and student factors. In the quiz group, increases in recall accuracy were mainly accounted for by flashcard factors, followed by student and session factors. The flashcard G-coefficient indicated an agreement on recall accuracy of 91% in the quiz group, and of 47% in the study-quiz group. Recall accuracy is an easily collectible measurement that increases the educational value of Learning Objects and open-ended questions. 
This metric seems to vary in a way consistent with knowledge retention, but further investigation is necessary to ascertain the nature of this relationship. Recall accuracy has educational implications for students and educators, and may contribute to deliver tailored learning experiences, assess the effectiveness of instruction, and facilitate research comparing blended-learning interventions.

  20. Spatial frequency dependence of target signature for infrared performance modeling

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd; Olson, Jeffrey

    2011-05-01

The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally, the function of task difficulty and its relationship to a target set is discussed.

  1. Managing Space Situational Awareness Using the Space Surveillance Network

    DTIC Science & Technology

    2013-11-14

This report examines the use of utility metrics from two forms of expected information gain for each object-sensor pair as well as the approximated stability of the...estimation errors in order to work towards a tasking strategy. The information theoretic approaches use the calculation of Fisher information gain

  2. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  3. Math Roots: The Beginnings of the Metric System

    ERIC Educational Resources Information Center

Johnson, Art; Norris, Kit; Adams, Thomasina Lott, Ed.

    2007-01-01

    This article reviews the history of the metric system, from a proposal of a sixteenth-century mathematician to its implementation in Revolutionary France some 200 years later. Recent developments in the metric system are also discussed.

  4. An Inverse Optimal Control Approach to Explain Human Arm Reaching Control Based on Multiple Internal Models.

    PubMed

    Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk

    2018-04-03

Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple models, how the central nervous system (CNS) controls this combination is not fully addressed. In this study, we utilize an Inverse Optimal Control (IOC) framework in order to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment where participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics- and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of different inverse internal models. This formulation together with our analysis not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.
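The trade-off between kinematic and dynamic internal models can be caricatured as a single mixing weight recovered by grid search; the cost names and the scalar-cost simplification below are assumptions for illustration, not the paper's IOC formulation.

```python
def combined_cost(costs, w):
    """Weighted mix of a kinematic-model cost and a dynamic-model cost (0 <= w <= 1)."""
    return w * costs["kinematic"] + (1.0 - w) * costs["dynamic"]

def best_weight(observed_cost, costs, weights):
    """Grid-search the mixing weight whose combined cost best matches the observation."""
    return min(weights, key=lambda w: abs(combined_cost(costs, w) - observed_cost))
```

In an IOC setting, the recovered weight (and how it shifts across reaching tasks) is the quantity of interest, rather than the costs themselves.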

  5. Classification of Hamilton-Jacobi separation in orthogonal coordinates with diagonal curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajaratnam, Krishan, E-mail: k2rajara@uwaterloo.ca; McLenaghan, Raymond G., E-mail: rgmclenaghan@uwaterloo.ca

    2014-08-15

We find all orthogonal metrics where the geodesic Hamilton-Jacobi equation separates and the Riemann curvature tensor satisfies a certain equation (called the diagonal curvature condition). All orthogonal metrics of constant curvature satisfy the diagonal curvature condition. The metrics we find either correspond to a Benenti system or are warped product metrics where the induced metric on the base manifold corresponds to a Benenti system. Furthermore, we show that most metrics we find are characterized by concircular tensors; these metrics, called Kalnins-Eisenhart-Miller metrics, have an intrinsic characterization which can be used to obtain them on a given space. In conjunction with other results, we show that the metrics we found constitute all separable metrics for Riemannian spaces of constant curvature and de Sitter space.

  6. 48 CFR 2811.001 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... metric systems. For example, an item is designed, produced and described in inch-pound values with soft metric values also shown for information or comparison purposes. Hybrid systems means the use of both... dimensions. Metric system means the International System of Units established by the General Conference of...

  7. 15 CFR 273.4 - Guidelines.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... for use of the metric system in procurements, grants and other business-related activities; (b... predominant influence, consistent with the legal status of the metric system as the preferred system of... system; (f) Consider cost effects of metric use in setting agency policies, programs and actions and...

  8. 48 CFR 2811.001 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... metric systems. For example, an item is designed, produced and described in inch-pound values with soft metric values also shown for information or comparison purposes. Hybrid systems means the use of both... dimensions. Metric system means the International System of Units established by the General Conference of...

  9. 48 CFR 2811.001 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... metric systems. For example, an item is designed, produced and described in inch-pound values with soft metric values also shown for information or comparison purposes. Hybrid systems means the use of both... dimensions. Metric system means the International System of Units established by the General Conference of...

  10. 48 CFR 2811.001 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... metric systems. For example, an item is designed, produced and described in inch-pound values with soft metric values also shown for information or comparison purposes. Hybrid systems means the use of both... dimensions. Metric system means the International System of Units established by the General Conference of...

  11. 48 CFR 2811.001 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... metric systems. For example, an item is designed, produced and described in inch-pound values with soft metric values also shown for information or comparison purposes. Hybrid systems means the use of both... dimensions. Metric system means the International System of Units established by the General Conference of...

  12. Metrication, American Style. Fastback 41.

    ERIC Educational Resources Information Center

    Izzi, John

    The purpose of this pamphlet is to provide a starting point of information on the metric system for any concerned or interested reader. The material is organized into five brief chapters: Man and Measurement; Learning the Metric System; Progress Report: Education; Recommended Sources; and Metrication, American Style. Appendixes include an…

  13. Deforestation and benthic indicators: how much vegetation cover is needed to sustain healthy Andean streams?

    PubMed

    Iñiguez-Armijos, Carlos; Leiva, Adrián; Frede, Hans-Georg; Hampel, Henrietta; Breuer, Lutz

    2014-01-01

Deforestation in the tropical Andes is affecting the ecological condition of streams, and determining how much forest should be retained is a pressing task for conservation, restoration and management strategies. We calculated and analyzed eight benthic metrics (structural, compositional and water quality indices) and a physical-chemical composite index along gradients of vegetation cover to assess the effects of deforestation on macroinvertebrate communities and water quality of 23 streams in the southern Ecuadorian Andes. Using a geographical information system (GIS), we quantified vegetation cover at three spatial scales: the entire catchment, the riparian buffer of 30 m width extending the entire stream length, and the local scale, defined as a stream reach of 100 m in length with a similar buffer width. Macroinvertebrate and water quality metrics had the strongest relationships with vegetation cover at the catchment and riparian scales, while vegetation cover showed no association with the macroinvertebrate metrics at the local scale. At the catchment scale, the water quality metrics indicate that the ecological condition of Andean streams is good when vegetation cover exceeds 70%. Further, macroinvertebrate community assemblages were more diverse and related in catchments largely covered by native vegetation (>70%). Our results suggest that retaining a substantial amount of native vegetation cover within catchments, together with the linkage between headwater and riparian forests, helps to maintain and improve stream biodiversity and water quality in Andean streams affected by deforestation. This research suggests that strong regulation focused on the management of riparian buffers can succeed when decision making is directed toward the conservation and restoration of Andean catchments.

  14. Deforestation and Benthic Indicators: How Much Vegetation Cover Is Needed to Sustain Healthy Andean Streams?

    PubMed Central

    Iñiguez–Armijos, Carlos; Leiva, Adrián; Frede, Hans–Georg; Hampel, Henrietta; Breuer, Lutz

    2014-01-01

Deforestation in the tropical Andes is affecting the ecological condition of streams, and determining how much forest should be retained is a pressing task for conservation, restoration and management strategies. We calculated and analyzed eight benthic metrics (structural, compositional and water quality indices) and a physical-chemical composite index along gradients of vegetation cover to assess the effects of deforestation on macroinvertebrate communities and water quality of 23 streams in the southern Ecuadorian Andes. Using a geographical information system (GIS), we quantified vegetation cover at three spatial scales: the entire catchment, the riparian buffer of 30 m width extending the entire stream length, and the local scale, defined as a stream reach of 100 m in length with a similar buffer width. Macroinvertebrate and water quality metrics had the strongest relationships with vegetation cover at the catchment and riparian scales, while vegetation cover showed no association with the macroinvertebrate metrics at the local scale. At the catchment scale, the water quality metrics indicate that the ecological condition of Andean streams is good when vegetation cover exceeds 70%. Further, macroinvertebrate community assemblages were more diverse and related in catchments largely covered by native vegetation (>70%). Our results suggest that retaining a substantial amount of native vegetation cover within catchments, together with the linkage between headwater and riparian forests, helps to maintain and improve stream biodiversity and water quality in Andean streams affected by deforestation. This research suggests that strong regulation focused on the management of riparian buffers can succeed when decision making is directed toward the conservation and restoration of Andean catchments. PMID:25147941

  15. Variability in spatio-temporal pattern of trapezius activity and coordination of hand-arm muscles during a sustained repetitive dynamic task.

    PubMed

    Samani, Afshin; Srinivasan, Divya; Mathiassen, Svend Erik; Madeleine, Pascal

    2017-02-01

The spatio-temporal distribution of muscle activity has been suggested to be a determinant of fatigue development. Pursuing this hypothesis, we investigated the pattern of muscular activity in the shoulder and arm during a repetitive dynamic task performed until participants' rating of perceived exertion reached 8 on Borg's CR-10 scale. We collected high-density surface electromyogram (HD-EMG) over the upper trapezius, as well as bipolar EMG from biceps brachii, triceps brachii, deltoideus anterior, serratus anterior, upper and lower trapezius from 21 healthy women. Root-mean-square (RMS) and mean power frequency (MNF) were calculated for all EMG signals. The barycenter of RMS values over the HD-EMG grid was also determined, as well as normalized mutual information (NMI) for each pair of muscles. Cycle-to-cycle variability of these metrics was also assessed. With time, EMG RMS increased for most of the muscles, and MNF decreased. Trapezius activity became higher on the lateral side than on the medial side of the HD-EMG grid and the barycenter moved in a lateral direction. NMI between muscle pairs increased with time while its variability decreased. The variability of the metrics during the initial 10% of task performance was not associated with the time to task termination. Our results suggest that the considerable variability in force and posture contained in the dynamic task per se masks any possible effects of differences between subjects in initial motor variability on the rate of fatigue development.
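The two per-channel features used above have standard definitions (RMS amplitude and the spectral centroid of the power spectrum); a minimal NumPy sketch:

```python
import numpy as np

def emg_rms(x):
    """Root-mean-square amplitude of an EMG segment."""
    return np.sqrt(np.mean(np.square(x)))

def emg_mnf(x, fs):
    """Mean power frequency: power-weighted average frequency of the segment.

    x:  1-D EMG samples
    fs: sampling rate in Hz
    """
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)
```

With fatigue, RMS typically rises and MNF falls, which is the pattern reported in the abstract.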

  16. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    PubMed

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and a constrained movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.

  17. Enhanced timing abilities in percussionists generalize to rhythms without a musical beat.

    PubMed

    Cameron, Daniel J; Grahn, Jessica A

    2014-01-01

    The ability to entrain movements to music is arguably universal, but it is unclear how specialized training may influence this. Previous research suggests that percussionists have superior temporal precision in perception and production tasks. Such superiority may be limited to temporal sequences that resemble real music or, alternatively, may generalize to musically implausible sequences. To test this, percussionists and nonpercussionists completed two tasks that used rhythmic sequences varying in musical plausibility. In the beat tapping task, participants tapped with the beat of a rhythmic sequence over 3 stages: finding the beat (as an initial sequence played), continuation of the beat (as a second sequence was introduced and played simultaneously), and switching to a second beat (the initial sequence finished, leaving only the second). The meters of the two sequences were either congruent or incongruent, as were their tempi (minimum inter-onset intervals). In the rhythm reproduction task, participants reproduced rhythms of four types, ranging from high to low musical plausibility: Metric simple rhythms induced a strong sense of the beat, metric complex rhythms induced a weaker sense of the beat, nonmetric rhythms had no beat, and jittered nonmetric rhythms also had no beat as well as low temporal predictability. For both tasks, percussionists performed more accurately than nonpercussionists. In addition, both groups were better with musically plausible than implausible conditions. Overall, the percussionists' superior abilities to entrain to, and reproduce, rhythms generalized to musically implausible sequences.

  18. 20 CFR 435.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Metric system of measurement. 435.15 Section 435.15 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR... metric system is the preferred measurement system for U.S. trade and commerce. The Act requires each...

  19. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    NASA Astrophysics Data System (ADS)

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; Kang, In-Sik; Maloney, Eric; Waliser, Duane; Hendon, Harry

    2017-12-01

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.
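
    The reported link between process-oriented metrics and MJO skill metrics is established by correlating the two quantities across the model ensemble. A minimal sketch of that step, using entirely hypothetical per-model values (not CMIP5 output), is:

    ```python
    import numpy as np

    # Hypothetical per-model values for illustration only: an NGMS-like
    # process metric and an MJO eastward-propagation skill score for six
    # fictitious models.
    ngms = np.array([0.12, 0.25, 0.08, 0.30, 0.18, 0.22])
    skill = np.array([0.71, 0.42, 0.80, 0.35, 0.55, 0.48])

    # Pearson correlation across models; a strong (here negative)
    # correlation is what flags a process metric as skill-relevant.
    r = np.corrcoef(ngms, skill)[0, 1]
    ```

    In this toy data, lower gross moist stability goes with higher skill, so r is strongly negative; the sign and magnitude in the actual study are determined by the real ensemble.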

  20. MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis

    DOE PAGES

    Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; ...

    2017-03-23

    The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit too fast phase speed while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.
