Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and the limit of detection. We introduce a new framework to compare the performance ...
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-locked loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator's estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with the proposed metric. The augmented linear model allows more accurate calculation of the tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) than traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable and robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up to this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
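The metric described above can be illustrated with a minimal sketch (not the paper's implementation): compute the standard deviation of the arctangent discriminator output from prompt correlator samples and compare it to half the pull-in region. The correlator variable names, noise levels, and threshold value are assumptions for demonstration only.

```python
import numpy as np

# The arctangent discriminator atan(Q/I) has a pull-in region of
# (-pi/2, pi/2), so half of it is pi/4 (illustrative threshold).
PULL_IN_HALF = np.pi / 4

def tracking_error_std(i_prompt, q_prompt):
    """Standard deviation of the discriminator's phase-error estimate (rad)."""
    phase_err = np.arctan(q_prompt / i_prompt)
    return float(np.std(phase_err))

def pll_locked(i_prompt, q_prompt, threshold=PULL_IN_HALF):
    """Declare the loop healthy while the tracking-error std stays under threshold."""
    return tracking_error_std(i_prompt, q_prompt) < threshold

# Example: strong signal with mild AWGN on both correlator arms.
rng = np.random.default_rng(0)
i_p = 1.0 + 0.05 * rng.standard_normal(1000)
q_p = 0.05 * rng.standard_normal(1000)
print(pll_locked(i_p, q_p))  # a well-tracking loop stays under the threshold
```

In practice the threshold comparison would run over a sliding window of coherent-averaging intervals rather than a whole batch.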
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators might easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides feedback on current system performance to users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
Moacdieh, Nadine; Sarter, Nadine
2015-06-01
The objective was to use eye tracking to trace the underlying changes in attention allocation associated with the performance effects of clutter, stress, and task difficulty in visual search and noticing tasks. Clutter can degrade performance in complex domains, yet more needs to be known about the associated changes in attention allocation, particularly in the presence of stress and for different tasks. Frequently used and relatively simple eye tracking metrics do not effectively capture the various effects of clutter, which is critical for comprehensively analyzing clutter and developing targeted, real-time countermeasures. Electronic medical records (EMRs) were chosen as the application domain for this research. Clutter, stress, and task difficulty were manipulated, and physicians' performance on search and noticing tasks was recorded. Several eye tracking metrics were used to trace attention allocation throughout those tasks, and subjective data were gathered via a debriefing questionnaire. Clutter degraded performance in terms of response time and noticing accuracy. These decrements were largely accentuated by high stress and task difficulty. Eye tracking revealed the underlying attentional mechanisms, and several display-independent metrics were shown to be significant indicators of the effects of clutter. Eye tracking provides a promising means to understand in detail (offline) and prevent (in real time) major performance breakdowns due to clutter. Display designers need to be aware of the risks of clutter in EMRs and other complex displays and can use the identified eye tracking metrics to evaluate and/or adjust their display. © 2015, Human Factors and Ergonomics Society.
A relationship between eye movement patterns and performance in a precognitive tracking task
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Hartzell, E. J.
1977-01-01
Eye movements made by various subjects in the performance of a precognitive tracking task are studied. The tracking task, presented by an antiaircraft artillery (AAA) simulator, has an input forcing function representing a deterministic aircraft fly-by. The performance of subjects is ranked by two metrics. Good, mediocre, and poor trackers are selected for analysis based on performance during the difficult segment of the tracking task and over replications. Using phase planes to characterize both the eye movement patterns and the displayed error signal, a simple metric is developed to study these patterns. Two characterizations of eye movement strategies are defined and quantified. Using these two types of eye strategies, two conclusions are obtained about good, mediocre, and poor trackers. First, trackers who use a fixed strategy consistently perform better. Second, the best fixed strategy is defined as the Crosshair Fixator.
NASA Astrophysics Data System (ADS)
Mohrfeld-Halterman, J. A.; Uddin, M.
2016-07-01
In this paper we describe the development of a high fidelity vehicle aerodynamic model that fits wind tunnel test data over a wide range of vehicle orientations. We also present a comparison between the effects of this proposed model and a conventional quasi-steady-state aerodynamic model on race vehicle simulation results. This is done by implementing both models independently in multi-body quasi-steady-state simulations to determine the effects of the high fidelity aerodynamic model on race vehicle performance metrics. The quasi-steady-state vehicle simulation is developed with a multi-body NASCAR Truck vehicle model, and simulations are conducted for three different types of NASCAR race tracks: a short track, a one-and-a-half-mile intermediate track, and a higher speed, two-mile intermediate race track. For each track simulation, the effects of the aerodynamic model on handling, maximum corner speed, and drive force metrics are analysed. The high-fidelity model is shown to reduce the aerodynamic model error relative to the conventional aerodynamic model, and its increased accuracy is found to have realisable effects on the performance metric predictions from the quasi-steady-state simulations on the intermediate tracks.
Improving Department of Defense Global Distribution Performance Through Network Analysis
2016-06-01
...network performance increase. Subject terms: supply chain metrics, distribution networks, requisition shipping time, strategic distribution database. ...peace and war" (p. 4). USTRANSCOM's Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes-based supply chain metrics to... (2014a, p. 8). The Joint Staff defines a TDD standard as the maximum number of days the supply chain can take to deliver requisitioned materiel.
Tracking occupational hearing loss across global industries: A comparative analysis of metrics
Rabinowitz, Peter M.; Galusha, Deron; McTague, Michael F.; Slade, Martin D.; Wesdock, James C.; Dixon-Ernst, Christine
2013-01-01
Occupational hearing loss is one of the most prevalent occupational conditions; yet, there is no acknowledged international metric that allows comparisons of risk between different industries and regions. In order to make recommendations for an international standard of occupational hearing loss, members of an international industry group (the International Aluminium Association) submitted details of the different hearing loss metrics currently in use by members. We compared the performance of these metrics using an audiometric data set for over 6000 individuals working in 10 locations of one member company. We calculated rates for each metric at each location from 2002 to 2006. For comparison, we calculated the difference of observed–expected (for age) binaural high frequency hearing loss (in dB/year) for each location over the same time period. We performed linear regression to determine the correlation between each metric and the observed–expected rate of hearing loss. The different metrics produced discrepant results, with annual rates ranging from 0.0% for a less sensitive metric to more than 10% for a highly sensitive one. At least two metrics, a 10 dB age-corrected threshold shift from baseline and a 15 dB non-age-corrected shift, correlated well with the difference of observed–expected high-frequency hearing loss. This study suggests that it is feasible to develop an international standard for tracking occupational hearing loss in industrial working populations. PMID:22387709
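The two best-correlating metrics above can be sketched as follows. This is a hypothetical illustration: the averaging frequencies and the age-correction value are placeholders, not the metrics' exact definitions.

```python
# Audiograms are dicts of frequency (Hz) -> hearing threshold level (dB HL).
STS_FREQS = (2000, 3000, 4000)  # OSHA-style shift frequencies (illustrative)

def avg_threshold(audiogram, freqs=STS_FREQS):
    """Mean hearing threshold level over the shift frequencies."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

def shift_flag(baseline, current, limit_db, age_correction_db=0.0):
    """True if the average threshold worsened by at least limit_db,
    after subtracting any age correction (in dB)."""
    shift = avg_threshold(current) - avg_threshold(baseline) - age_correction_db
    return shift >= limit_db

baseline = {2000: 10, 3000: 15, 4000: 20}
current = {2000: 25, 3000: 30, 4000: 35}   # 15 dB average worsening
print(shift_flag(baseline, current, 15))                         # 15 dB, no age correction
print(shift_flag(baseline, current, 10, age_correction_db=3.0))  # 10 dB, age-corrected
```

A real implementation would apply a per-ear comparison and a published age-correction table rather than a single scalar.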
A Methodology to Analyze Photovoltaic Tracker Uptime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew T; Ruth, Dan
A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that, overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on 2 different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
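A minimal sketch of the daily check described above, with illustrative thresholds rather than the paper's: correlate the day's measured power with simulated plane-of-array (POA) irradiance for a tracking surface and for a stuck (fixed-tilt) surface, and classify the tracker by which correlates better.

```python
import numpy as np

def classify_tracker_day(power, poa_tracking, poa_stuck, min_corr=0.9):
    """Classify one day of operation from shape correlations (illustrative logic)."""
    r_track = np.corrcoef(power, poa_tracking)[0, 1]
    r_stuck = np.corrcoef(power, poa_stuck)[0, 1]
    if max(r_track, r_stuck) < min_corr:
        return "unclassified"  # e.g. cloudy day, curve shape uninformative
    return "tracking" if r_track >= r_stuck else "stuck"

# Clear-sky-like example: a tracker produces a flat-topped power curve,
# a stuck array a single midday peak.
t = np.linspace(0, np.pi, 97)                 # 15-minute samples over daylight
poa_track = np.clip(1.2 * np.sin(t), 0, 1.0)  # flat-topped profile
poa_stuck = np.sin(t)                         # bell-shaped profile
power = 0.98 * poa_track + 0.01               # measured output follows the tracker
print(classify_tracker_day(power, poa_track, poa_stuck))
```

The paper's full logic sequence also handles ambiguous days, which this sketch simply labels "unclassified".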
A concept for performance management for Federal science programs
Whalen, Kevin G.
2017-11-06
The demonstration of clear linkages between planning, funding, outcomes, and performance management has created unique challenges for U.S. Federal science programs. An approach is presented here that characterizes science program strategic objectives by one of five “activity types”: (1) knowledge discovery, (2) knowledge development and delivery, (3) science support, (4) inventory and monitoring, and (5) knowledge synthesis and assessment. The activity types relate to performance measurement tools for tracking outcomes of research funded under the objective. The result is a multi-time scale, integrated performance measure that tracks individual performance metrics synthetically while also measuring progress toward long-term outcomes. Tracking performance on individual metrics provides explicit linkages to root causes of potentially suboptimal performance and captures both internal and external program drivers, such as customer relations and science support for managers. Functionally connecting strategic planning objectives with performance measurement tools is a practical approach for publicly funded science agencies that links planning, outcomes, and performance management—an enterprise that has created unique challenges for public-sector research and development programs.
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated using performance metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, many of them have notable shortcomings. In this work, we discuss the shortcomings of existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density and overcomes these flaws. To compare our proposed metric against existing metrics at assessing the quality of saliency prediction, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance to the corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide insight into which existing and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
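The idea of weighting by local fixation density can be sketched as a weighted correlation between a predicted saliency map and a ground-truth fixation density map, so that errors in heavily fixated regions count more. This is a hedged illustration of the concept, not the paper's exact metric.

```python
import numpy as np

def weighted_corr(pred, fix_density):
    """Pearson-style correlation with per-pixel weights from fixation density."""
    w = fix_density / fix_density.sum()          # local weights
    mp = (w * pred).sum()                        # weighted means
    mf = (w * fix_density).sum()
    cov = (w * (pred - mp) * (fix_density - mf)).sum()
    sp = np.sqrt((w * (pred - mp) ** 2).sum())
    sf = np.sqrt((w * (fix_density - mf) ** 2).sum())
    return cov / (sp * sf)

rng = np.random.default_rng(1)
fix = rng.random((32, 32))                           # ground-truth fixation density
good = fix + 0.05 * rng.standard_normal((32, 32))    # prediction close to ground truth
bad = rng.random((32, 32))                           # unrelated prediction
print(weighted_corr(good, fix) > weighted_corr(bad, fix))
```

With uniform weights this reduces to the ordinary correlation-coefficient saliency metric; the density weighting is what penalizes misses at salient locations.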
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrissey, Elmer; O'Donnell, James; Keane, Marcus
2004-03-29
Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for the storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on the storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.
Poisson, Sharon N.; Josephson, S. Andrew
2011-01-01
Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840
Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip
2017-06-01
Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Program. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers were unable to completely collect the orthopaedic trauma care metrics. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.
Oculomotor Behavior Metrics Change According to Circadian Phase and Time Awake
NASA Technical Reports Server (NTRS)
Flynn-Evans, Erin E.; Tyson, Terence L.; Cravalho, Patrick; Feick, Nathan; Stone, Leland S.
2017-01-01
There is a need for non-invasive, objective measures to forecast performance impairment arising from sleep loss and circadian misalignment, particularly in safety-sensitive occupations. Eye-tracking devices have been used in some operational scenarios, but such devices typically focus on eyelid closures and slow rolling eye movements and are susceptible to the intrusion of head movement artifacts. We hypothesized that an expanded suite of oculomotor behavior metrics, collected during a visual tracking task, would change according to circadian phase and time awake, and could be used as a marker of performance impairment.
Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J
2013-03-01
The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information about the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validity of EVA has been obtained for seven motion-based metrics. Concurrent validation revealed a strong correlation between the results obtained by EVA and the TrEndo for metrics such as path length (ρ = 0.97), average speed (ρ = 0.94), and economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
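Three of the motion-based metrics named above can be computed from a 3-D instrument-tip trajectory sampled at a fixed rate. The formulas below are common definitions from the skills-assessment literature, not necessarily EVA's exact ones; the economy-of-volume definition in particular varies between studies.

```python
import numpy as np

def motion_metrics(xyz, dt):
    """Path length, average speed, and economy of volume for an (N, 3) tip trajectory."""
    steps = np.diff(xyz, axis=0)
    seg = np.linalg.norm(steps, axis=1)      # per-sample segment lengths
    path_length = seg.sum()
    avg_speed = path_length / (dt * len(seg))
    # Economy of volume: cube root of the bounding-box volume swept by the
    # tip, relative to path length (one common definition; assumed here).
    extent = xyz.max(axis=0) - xyz.min(axis=0)
    economy_of_volume = np.cbrt(np.prod(extent)) / path_length
    return path_length, avg_speed, economy_of_volume

# Straight 10 cm move sampled at 100 Hz over 1 s (units: metres, seconds).
traj = np.linspace([0.0, 0.0, 0.0], [0.1, 0.0, 0.0], 101)
pl, v, eov = motion_metrics(traj, dt=0.01)
print(round(pl, 3), round(v, 3))
```

For a perfectly straight move the bounding box is degenerate, so economy of volume collapses to zero; real trajectories sweep a nonzero volume.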
DOT National Transportation Integrated Search
2012-03-01
This study was undertaken to: 1) apply a benchmarking process to identify best practices within four areas of Wisconsin Department of Transportation (WisDOT) construction management and 2) analyze two performance metrics, % Cost vs. % Time, tracked by t...
Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A
2016-02-01
To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. However, only time and instrument path distance demonstrated concurrent validity with an existing tracking method. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.
Robust tracking control of a magnetically suspended rigid body
NASA Technical Reports Server (NTRS)
Lim, Kyong B.; Cox, David E.
1994-01-01
This study is an application of H-infinity and μ-synthesis for designing robust tracking controllers for the Large Angle Magnetic Suspension Test Facility. The modeling, design, analysis, simulation, and testing of a control law that guarantees tracking performance under external disturbances and model uncertainties is investigated. The types of uncertainties considered and the tracking performance metric used are discussed. This study demonstrates the tradeoff between tracking performance at low frequencies and robustness at high frequencies. Two sets of controllers were designed and tested. The first set emphasized performance over robustness, while the second set traded off performance for robustness. Comparisons of simulation and test results are also included. Current simulation and experimental results indicate that reasonably good robust tracking performance can be attained for this system using a multivariable robust control approach.
Kolecki, Radek; Dammavalam, Vikalpa; Bin Zahid, Abdullah; Hubbard, Molly; Choudhry, Osamah; Reyes, Marleen; Han, ByoungJun; Wang, Tom; Papas, Paraskevi Vivian; Adem, Aylin; North, Emily; Gilbertson, David T; Kondziolka, Douglas; Huang, Jason H; Huang, Paul P; Samadani, Uzma
2018-03-01
OBJECTIVE The precise threshold differentiating normal and elevated intracranial pressure (ICP) is variable among individuals. In the context of several pathophysiological conditions, elevated ICP leads to abnormalities in global cerebral functioning and impacts the function of cranial nerves (CNs), either or both of which may contribute to ocular dysmotility. The purpose of this study was to assess the impact of elevated ICP on eye-tracking performed while patients were watching a short film clip. METHODS Awake patients requiring placement of an ICP monitor for clinical purposes underwent eye tracking while watching a 220-second continuously playing video moving around the perimeter of a viewing monitor. Pupil position was recorded at 500 Hz and metrics associated with each eye individually and both eyes together were calculated. Linear regression with generalized estimating equations was performed to test the association of eye-tracking metrics with changes in ICP. RESULTS Eye tracking was performed at ICP levels ranging from -3 to 30 mm Hg in 23 patients (12 women, 11 men, mean age 46.8 years) on 55 separate occasions. Eye-tracking measures correlating with CN function linearly decreased with increasing ICP (p < 0.001). Measures for CN VI were most prominently affected. The area under the curve (AUC) for eye-tracking metrics to discriminate between ICP < 12 and ≥ 12 mm Hg was 0.798. To discriminate an ICP < 15 from ≥ 15 mm Hg the AUC was 0.833, and to discriminate ICP < 20 from ≥ 20 mm Hg the AUC was 0.889. CONCLUSIONS Increasingly elevated ICP was associated with increasingly abnormal eye tracking detected while patients were watching a short film clip. These results suggest that eye tracking may be used as a noninvasive, automatable means to quantitate the physiological impact of elevated ICP, which has clinical application for assessment of shunt malfunction, pseudotumor cerebri, concussion, and prevention of second-impact syndrome.
State of the States 2009. Renewable Energy Development and the Role of Policy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doris, Elizabeth; McLaren, Joyce; Healey, Victoria
2009-10-01
This report tracks the progress of U.S. renewable energy development at the state level, with metrics on development status and reviews of relevant policies. The analysis offers state-by-state policy suggestions and develops performance-based evaluation metrics to accelerate and improve renewable energy development.
Technical Note: Gray tracking in medical color displays-A report of Task Group 196.
Badano, Aldo; Wang, Joel; Boynton, Paul; Le Callet, Patrick; Cheng, Wei-Chung; Deroo, Danny; Flynn, Michael J; Matsui, Takashi; Penczek, John; Revie, Craig; Samei, Ehsan; Steven, Peter M; Swiderski, Stan; Van Hoey, Gert; Yamaguchi, Matsuhiro; Hasegawa, Mikio; Nagy, Balázs Vince
2016-07-01
The authors discuss measurement methods and instrumentation useful for the characterization of the gray tracking performance of medical color monitors for diagnostic applications. The authors define gray tracking as the variability in the chromaticity of the gray levels in a color monitor. The authors present data regarding the capability of color measurement instruments with respect to their abilities to measure a target white point corresponding to the CIE Standard Illuminant D65 at different luminance values within the grayscale palette of a medical display. The authors then discuss evidence of significant differences in performance among color measurement instruments currently available for medical physicists to perform calibrations and image quality checks for the consistent representation of color in medical displays. In addition, the authors introduce two metrics for quantifying the grayscale chromaticity consistency of gray tracking. The authors' findings show that there is an order of magnitude difference in the accuracy of field and reference instruments. The gray tracking metrics quantify how close the grayscale chromaticity is to the chromaticity of the full white point (equal amounts of red, green, and blue at maximum level) or to consecutive levels (equal values for red, green, and blue), with a lower value representing improved grayscale tracking performance. An illustrative example of how to calculate and report the gray tracking performance according to the Task Group definitions is provided. The authors propose a methodology for characterizing the grayscale degradation in chromaticity for color monitors that can be used to establish standards and procedures aiding in the quality control testing of color displays and color measurement instrumentation.
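A gray-tracking calculation in the spirit of the first metric above can be sketched as the chromaticity distance of each gray level from the full-white point in the CIE 1976 (u′, v′) diagram; lower values mean better grayscale chromaticity consistency. The Task Group's exact formula is in the report; this version is illustrative.

```python
import math

def uv_prime(X, Y, Z):
    """CIE 1931 XYZ -> CIE 1976 (u', v') chromaticity."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d

def gray_tracking(levels_xyz, white_xyz):
    """Per-level (u', v') distance from the full-white chromaticity."""
    uw, vw = uv_prime(*white_xyz)
    return [math.hypot(u - uw, v - vw)
            for u, v in (uv_prime(*xyz) for xyz in levels_xyz)]

# D65-like white and two measured gray levels, the second with a blue shift.
white = (95.047, 100.0, 108.883)
grays = [(47.5, 50.0, 54.4), (46.0, 50.0, 60.0)]
d = gray_tracking(grays, white)
print(d[0] < d[1])  # the neutral gray sits closer to the white point
```

The second metric described above would instead take distances between consecutive gray levels rather than against the white point.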
Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing
2009-02-01
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
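A minimal frame-level detection score in the spirit of such evaluation frameworks (not the paper's exact metrics) matches detected and ground-truth boxes by intersection-over-union (IoU) and reports precision and recall. Box format and the 0.5 threshold are assumptions for illustration.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    w = max(0, min(ax2, bx2) - max(ax1, bx1))
    h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = w * h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def precision_recall(dets, gts, thresh=0.5):
    """Greedy one-to-one matching of detections to ground truth by IoU."""
    matched, tp = set(), 0
    for d in dets:
        for i, g in enumerate(gts):
            if i not in matched and iou(d, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    prec = tp / len(dets) if dets else 0.0
    rec = tp / len(gts) if gts else 0.0
    return prec, rec

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 11, 11), (50, 50, 60, 60)]  # one good match, one false alarm
print(precision_recall(dets, gts))
```

Full tracking evaluation additionally scores identity consistency over time, which a per-frame measure like this cannot capture.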
Cross-sectional evaluation of visuomotor tracking performance following subconcussive head impacts.
Brokaw, E B; Fine, M S; Kindschi, K E; Santago II, A C; Lum, P S; Higgins, M
2018-01-01
Repeated mild traumatic brain injury (mTBI) has been associated with increased risk of degenerative neurological disorders. While the effects of mTBI and repeated injury are known, studies have only recently started examining repeated subconcussive impacts, impacts that do not result in a clinically diagnosed mTBI. In these studies, repeated subconcussive impacts have been connected to cognitive performance and brain imaging changes. Recent research suggests that performance on a visuomotor tracking (VMT) task may help improve the identification of mTBI. The goal of this study was to investigate if VMT performance is sensitive to the cumulative effect of repeated subconcussive head impacts in collegiate men's lacrosse players. A cross-sectional, prospective study was completed with eleven collegiate men's lacrosse players. Participants wore helmet-mounted sensors and completed VMT and reaction time assessments. The relationship between cumulative impact metrics and VMT metrics were investigated. In this study, VMT performance correlated with repeated subconcussive head impacts; individuals approached clinically diagnosed mTBI-like performance as the cumulative rotational velocity they experienced increased. This suggests that repeated subconcussive impacts can result in measurable impairments and indicates that visuomotor tracking performance may be a useful tool for monitoring the effects of repeated subconcussive impacts.
Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross
Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking technology to continuously measure an operator's eye movements, which correlate with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.
Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living
Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.
2015-01-01
A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612
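The rank-correlation analysis described above is straightforward to reproduce. Below is a minimal, self-contained sketch of Spearman's rho (assuming no tied values); the smoothness scores and WMFT times are hypothetical illustrations, not data from the study.

```python
from math import sqrt

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Assumes no ties, which keeps the rank transform trivial."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical per-patient movement-smoothness scores vs. WMFT completion times:
# smoother movement should accompany faster (lower) WMFT times.
smoothness = [0.91, 0.75, 0.62, 0.84, 0.55, 0.70]
wmft_time = [12.0, 25.0, 41.0, 18.0, 55.0, 30.0]
rho = spearman_rho(smoothness, wmft_time)  # → -1.0 (perfectly monotone here)
```

In practice, library routines (e.g., `scipy.stats.spearmanr`) also handle ties and report significance; the hand-rolled version above only shows the rank transform behind the statistic.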
Calderon, Lindsay E; Kavanagh, Kevin T; Rice, Mara K
2015-10-01
Catheter-associated urinary tract infections (CAUTIs) occur in 290,000 US hospital patients annually, with an estimated cost of $290 million. Two different measurement systems are being used to track the US health care system's performance in lowering the rate of CAUTIs. Since 2010, the Agency for Healthcare Research and Quality (AHRQ) metric has shown a 28.2% decrease in CAUTI, whereas the Centers for Disease Control and Prevention metric has shown a 3%-6% increase in CAUTI since 2009. Differences in data acquisition and the definition of the denominator may explain this discrepancy. The AHRQ metric analyzes chart-audited data and reflects both catheter use and care. The Centers for Disease Control and Prevention metric analyzes self-reported data and primarily reflects catheter care. Because analysis of the AHRQ metric showed a progressive change in performance over time and the scientific literature supports the importance of catheter use in the prevention of CAUTI, it is suggested that risk-adjusted catheter-use data be incorporated into metrics that are used for determining facility performance and for value-based purchasing initiatives.
Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery
Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack
2015-01-01
Objectives/Hypothesis: To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design: Prospective cohort study. Methods: Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results: Significant decreases over time were observed for path length (P < .001), depth perception (P < .001), and task outcome (P < .001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P < .001), depth perception (P < .001), and motion smoothness (P < .001). Conclusions: Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
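The three motion metrics named above can be computed directly from sampled 3-D instrument-tip positions. The sketch below uses common definitions from the motion-analysis literature (path length as cumulative tip travel, depth perception as travel along the viewing axis, smoothness as mean squared jerk); the exact formulas used by the tracking system are not given in the abstract, so these definitions are assumptions.

```python
from math import sqrt

def path_length(pts):
    """Cumulative 3-D distance traveled by the instrument tip."""
    return sum(sqrt(sum((q[k] - p[k]) ** 2 for k in range(3)))
               for p, q in zip(pts, pts[1:]))

def depth_perception(pts):
    """Total travel along the viewing (z) axis; one common definition."""
    return sum(abs(q[2] - p[2]) for p, q in zip(pts, pts[1:]))

def mean_squared_jerk(pts, dt):
    """Mean squared jerk via third-order finite differences; lower = smoother."""
    terms = []
    for i in range(len(pts) - 3):
        j = [(pts[i + 3][k] - 3 * pts[i + 2][k] + 3 * pts[i + 1][k] - pts[i][k]) / dt ** 3
             for k in range(3)]
        terms.append(sum(c * c for c in j))
    return sum(terms) / len(terms)

# A straight, constant-velocity trajectory: maximally smooth, no depth motion.
traj = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0),
        (3.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
```

For the constant-velocity trajectory, `path_length(traj)` is 4.0 while both `depth_perception` and `mean_squared_jerk` are zero, matching the intuition that straight, steady motion is the smoothest possible trace.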
NASA Astrophysics Data System (ADS)
Choi, J.; Jo, J.
2016-09-01
The optical satellite tracking data obtained by the first Korean optical satellite tracking system, Optical Wide-field patrol Network (OWL-Net), were examined for precision orbit determination. During test observations at the Israel site, we successfully observed a satellite equipped with a laser retroreflector (LRR) to calibrate the angle-only metric data. The OWL observation system uses a chopper to obtain dense observation data, more than 100 points in a single shot, for low Earth orbit objects. After several corrections, the orbit determination process was carried out with the validated metric data. The TLE with the same epoch as the end of the first arc was used for the initial orbital parameters. Orbit Determination Tool Kit (ODTK) was used to analyze the performance of orbit estimation using the angle-only measurements. We have also been developing a batch-style orbit estimator.
The Publications Tracking and Metrics Program at NOAO: Challenges and Opportunities
NASA Astrophysics Data System (ADS)
Hunt, Sharon
2015-08-01
The National Optical Astronomy Observatory (NOAO) is the U.S. national research and development center for ground-based nighttime astronomy. The NOAO librarian manages the organization's publications tracking and metrics program, which consists of three components: identifying publications, organizing citation data, and disseminating publications information. We are developing methods to streamline these tasks, better organize our data, provide greater accessibility to publications data, and add value to our services. Our publications tracking process is complex, as we track refereed publications citing data from several sources: NOAO telescopes at two observatory sites, telescopes of consortia in which NOAO participates, the NOAO Science Archive, and NOAO-granted community-access time on non-NOAO telescopes. We also identify and document our scientific staff publications. In addition, several individuals contribute publications data. In the past year, we made several changes in our publications tracking and metrics program. To better organize our data and streamline the creation of reports and metrics, we created a MySQL publications database. When designing this relational database, we considered ease of use, the ability to incorporate data from various sources, efficiency of data entry and sorting, and potential for growth. We also considered the types of metrics we wished to generate from our publications data, based on our target audiences and the messages we wanted to convey. To increase accessibility and dissemination of publications information, we developed a publications section on the library's website, with citation lists, acknowledgements guidelines, and metrics. We are now developing a searchable online database for our website using PHP. The publications tracking and metrics program has provided many opportunities for the library to market its services and contribute to the organization's mission.
As we make decisions on collecting, organizing, and disseminating publications information and metrics, we add to the visibility of the library, gain professional recognition, and produce a value-added service.
Tracking and Data Relay Satellite System (TDRSS) navigation with DSN radio metric data
NASA Technical Reports Server (NTRS)
Ellis, J.
1981-01-01
The use of DSN radiometric data for enhancing the orbit determination capability for TDRS is examined. Results of a formal covariance analysis are presented which establish the nominal TDRS navigation performance and assess the performance improvement based on augmenting the nominal TDRS data strategy with radiometric data from DSN sites.
WISE: Automated support for software project management and measurement. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ramakrishnan, Sudhakar
1995-01-01
One important aspect of software development and IV&V is measurement. Unless a software development effort is measured in some way, it is difficult to judge the effectiveness of current efforts and predict future performance. Collection of metrics and adherence to a process are difficult tasks in a software project. Change activity is a powerful indicator of project status. Automated systems that can handle change requests, issues, and other process documents provide an excellent platform for tracking the status of the project. A World Wide Web based architecture is developed for (a) making metrics collection an implicit part of the software process, (b) providing metric analysis dynamically, (c) supporting automated tools that can complement current practices of in-process improvement, and (d) overcoming geographical barriers. An operational system (WISE) instantiates this architecture, allowing for the improvement of the software process in a realistic environment. The tool tracks issues in the software development process, provides informal communication among users with different roles, supports to-do lists (TDL), and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis, and captures software change data. Automated tools like WISE focus on understanding and managing the software process. The goal is improvement through measurement.
Quantifying Pilot Visual Attention in Low Visibility Terminal Operations
NASA Technical Reports Server (NTRS)
Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.
2012-01-01
Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, in particular the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
Correlation Filter Learning Toward Peak Strength for Visual Tracking.
Sui, Yao; Wang, Guanghui; Zhang, Li
2018-04-01
This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
Relevance of motion-related assessment metrics in laparoscopic surgery.
Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J
2013-06-01
Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
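Two of the metrics validated above, average speed and idle time, can be sketched from sampled instrument positions. The definitions below (idle time as total time the tip speed stays under a threshold) are common in the literature but are assumptions here, since the abstract does not give the TrEndo system's exact formulas.

```python
from math import sqrt

def _dist(p, q):
    """Euclidean distance between two 3-D points."""
    return sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

def average_speed(pts, dt):
    """Mean tip speed: total path length over total task time."""
    total = sum(_dist(p, q) for p, q in zip(pts, pts[1:]))
    return total / (dt * (len(pts) - 1))

def idle_time(pts, dt, v_thresh):
    """Total time the tip speed stays below v_thresh (one common definition)."""
    return sum(dt for p, q in zip(pts, pts[1:]) if _dist(p, q) / dt < v_thresh)

# Hypothetical trace sampled at 10 Hz: movement, a two-sample pause, movement.
trace = [(0.0, 0, 0), (1.0, 0, 0), (1.0, 0, 0), (1.0, 0, 0), (2.0, 0, 0)]
```

On this trace, `average_speed(trace, 0.1)` is 5.0 units/s and `idle_time(trace, 0.1, 0.5)` is 0.2 s, i.e., exactly the paused samples.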
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the ability to identify adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique that ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored but removed when their information was found to be largely duplicative of other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context and will be briefly reported.
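A weighted rank tally of the kind described can be sketched as follows. The metric names, values, and weights are placeholders, ties are ignored for brevity, and the paper's exact consolidation rule may differ.

```python
def rank_models(scores, weights=None, lower_is_better=()):
    """Rank models by a weighted tally of per-metric ranks (1 = best per metric);
    the lowest weighted rank sum wins. `scores` maps model -> {metric: value}."""
    metrics = list(next(iter(scores.values())))
    weights = weights or {m: 1.0 for m in metrics}
    tally = {model: 0.0 for model in scores}
    for metric in metrics:
        # Sort descending unless a lower value is better for this metric.
        ordered = sorted(scores, key=lambda mod: scores[mod][metric],
                         reverse=metric not in lower_is_better)
        for rank, model in enumerate(ordered, start=1):
            tally[model] += weights[metric] * rank
    return sorted(tally, key=tally.get)

# Hypothetical skill table: correlation (higher better), absolute error (lower better).
table = {
    "model_a": {"correlation": 0.92, "abs_error": 1.1},
    "model_b": {"correlation": 0.85, "abs_error": 1.9},
    "model_c": {"correlation": 0.88, "abs_error": 1.4},
}
ranking = rank_models(table, lower_is_better=("abs_error",))
# → ['model_a', 'model_c', 'model_b']
```

Changing the `weights` dictionary lets a forecaster emphasize, say, extreme-event metrics without altering the tally machinery.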
GEO Optical Data Association with Concurrent Metric and Photometric Information
NASA Astrophysics Data System (ADS)
Dao, P.; Monet, D.
Data association in a congested area of the GEO belt with occasional visits by non-resident objects can be treated as a multi-target tracking (MTT) problem. For a stationary sensor surveilling the GEO belt, geosynchronous and near-GEO objects are not completely motionless in the earth-fixed frame and can be observed as moving targets. In some clusters, metric (positional) information is insufficiently accurate or up-to-date to associate the measurements. In the presence of measurements with uncertain origin, star tracks (residuals), and other sensor artifacts, heuristic techniques based on hard-decision assignment do not perform adequately. In the MTT community, Bar-Shalom [2009] was the first to introduce the use of such measurements to update the state of the target of interest in the tracking filter, e.g., the Kalman filter. Following Bar-Shalom's idea, we use the Probabilistic Data Association Filter (PDAF), but to make use of all information obtainable in measurements of three-axis-stabilized GEO satellites, we combine photometric with metric measurements to update the filter. Our technique, Concurrent Spatio-Temporal and Brightness (COSTB), therefore has the stand-alone ability to associate a track with its identity for resident objects. That is possible because the light curve of a stabilized GEO satellite changes minimally from night to night. We exercised COSTB on camera-cadence data to associate measurements, correct mistags, and detect non-residents in a simulated near-real-time cadence. Data on GEO clusters were used.
Statistical analysis of the surface figure of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John
2012-09-01
The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in the metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), tertiary mirror (TM), and fine steering mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The surface figure error of these mirrors has been analyzed statistically. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.
Ellerbe, Laura S; Manfredi, Luisa; Gupta, Shalini; Phelps, Tyler E; Bowe, Thomas R; Rubinsky, Anna D; Burden, Jennifer L; Harris, Alex H S
2017-04-04
In the U.S. Department of Veterans Affairs (VA), residential treatment programs are an important part of the continuum of care for patients with a substance use disorder (SUD). However, a limited number of program-specific measures to identify quality gaps in SUD residential programs exist. This study aimed to: (1) Develop metrics for two pre-admission processes: Wait Time and Engagement While Waiting, and (2) Interview program management and staff about program structures and processes that may contribute to performance on these metrics. The first aim sought to supplement the VA's existing facility-level performance metrics with SUD program-level metrics in order to identify high-value targets for quality improvement. The second aim recognized that not all key processes are reflected in the administrative data, and even when they are, new insight may be gained from viewing these data in the context of day-to-day clinical practice. VA administrative data from fiscal year 2012 were used to calculate pre-admission metrics for 97 programs (63 SUD Residential Rehabilitation Treatment Programs (SUD RRTPs); 34 Mental Health Residential Rehabilitation Treatment Programs (MH RRTPs) with a SUD track). Interviews were then conducted with management and front-line staff to learn what factors may have contributed to high or low performance, relative to the national average for their program type. We hypothesized that speaking directly to residential program staff may reveal innovative practices, areas for improvement, and factors that may explain system-wide variability in performance. Average wait time for admission was 16 days (SUD RRTPs: 17 days; MH RRTPs with a SUD track: 11 days), with 60% of Veterans waiting longer than 7 days. For these Veterans, engagement while waiting occurred in an average of 54% of the waiting weeks (range 3-100% across programs). 
Fifty-nine interviews representing 44 programs revealed factors perceived to potentially impact performance in these domains. Efficient screening processes, effective patient flow, and available beds were perceived to facilitate shorter wait times, while lack of beds, poor staffing levels, and lengths of stay of existing patients were thought to lengthen wait times. Accessible outpatient services, strong patient outreach, and strong encouragement of pre-admission outpatient treatment emerged as facilitators of engagement while waiting; poor staffing levels, socioeconomic barriers, and low patient motivation were viewed as barriers. Metrics for pre-admission processes can be helpful for monitoring residential SUD treatment programs. Interviewing program management and staff about drivers of performance metrics can play a complementary role by identifying innovative and other strong practices, as well as high-value targets for quality improvement. Key facilitators of high-performing facilities may offer programs with lower performance useful strategies to improve specific pre-admission processes.
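The two pre-admission metrics lend themselves to a simple computation from administrative records. The sketch below assumes each record carries a referral date, an admission date, and the set of waiting weeks in which any outpatient contact occurred; the field shapes and numbers are illustrative, not the VA's actual data model.

```python
from datetime import date

def wait_days(referral, admission):
    """Wait Time metric: days from referral to residential admission."""
    return (admission - referral).days

def engagement_rate(waiting_weeks, engaged_weeks):
    """Engagement While Waiting: share of waiting weeks with outpatient contact."""
    return len(engaged_weeks & waiting_weeks) / len(waiting_weeks)

# Illustrative record: a 21-day wait, with contact in 2 of the 3 waiting weeks.
wait = wait_days(date(2012, 3, 1), date(2012, 3, 22))   # → 21
rate = engagement_rate({1, 2, 3}, {1, 3})               # → 0.666...
```

Aggregating `wait` over admissions and `rate` over waiting weeks per program would reproduce the kind of program-level figures reported above (e.g., a 16-day average wait, engagement in 54% of waiting weeks).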
Comparison of information theoretic divergences for sensor management
NASA Astrophysics Data System (ADS)
Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael
2011-06-01
In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they afford comparisons between distributions resulting from different types of sensors under different actions. The difference in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e., Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and design parameter.
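For the Gaussian state densities produced by a Kalman filter, both divergences have closed forms. The univariate sketch below follows the standard formulas (the Renyi expression requires (1-alpha)*s1^2 + alpha*s2^2 > 0) and illustrates how the design parameter alpha reshapes the metric; it is a simplification of the multivariate case typically used in sensor management.

```python
from math import log

def kl_gauss(m1, s1, m2, s2):
    """KL divergence D(p||q) between N(m1, s1^2) and N(m2, s2^2)."""
    return log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def renyi_gauss(alpha, m1, s1, m2, s2):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1) between
    N(m1, s1^2) and N(m2, s2^2)."""
    sa2 = (1 - alpha) * s1**2 + alpha * s2**2  # must be positive
    return (log(s2 / s1)
            + log(s2**2 / sa2) / (2 * (alpha - 1))
            + alpha * (m1 - m2)**2 / (2 * sa2))
```

As alpha approaches 1, the Renyi divergence recovers the K-L divergence, which is why the comparison in the paper can be framed as a function of the single design parameter alpha.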
Scoring sensor observations to facilitate the exchange of space surveillance data
NASA Astrophysics Data System (ADS)
Weigel, M.; Fiedler, H.; Schildknecht, T.
2017-08-01
In this paper, a scoring metric for space surveillance sensor observations is introduced. A scoring metric allows for direct comparison of data quantity and data quality, and makes transparent the effort made by different sensor operators. The concept can be applied to various sensor types, such as tracking and surveillance radar, active optical laser tracking, or passive optical telescopes, as well as combinations of different measurement types. For each measurement type, a polynomial least-squares fit is performed on the measurement values contained in the track. The track score is the average of the polynomial coefficient uncertainties, scaled by a reference measurement accuracy. Based on the newly developed scoring metric, an accounting model and a rating model are introduced. Both models facilitate the exchange of observation data within a network of space surveillance sensor operators. In this paper, optical observations are taken as an example for analysis purposes, but both models can also be utilized for any other type of observation. The rating model has the capability to distinguish between network participants with major and minor data contributions to the network. The level of sanction on data reception is defined by the participants themselves, enabling high flexibility. The more elaborate accounting model translates the track score into credit points, earned for data provision and spent for data reception. In this model, data reception is automatically limited for participants with low contributions to the network. The introduced method for observation scoring is first applied for transparent data exchange within the Small Aperture Robotic Telescope Network (SMARTnet). A detailed mathematical description is therefore presented for line-of-sight measurements from optical telescopes, together with numerical simulations for different network setups.
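The scoring rule can be illustrated with the simplest case, a first-order (straight-line) fit to one measurement channel. The sketch below computes the coefficient standard errors from the fit residuals and scores the track as their mean, scaled by a reference accuracy. This is one plausible reading of the rule described above, not the paper's exact formulation.

```python
from math import sqrt

def track_score(times, values, sigma_ref):
    """Score a track as the mean straight-line-fit coefficient uncertainty,
    scaled by a reference measurement accuracy (lower = better).
    Requires at least 3 points so the residual variance is defined."""
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(values) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    b = sum((t - tbar) * (y - ybar) for t, y in zip(times, values)) / sxx
    a = ybar - b * tbar
    resid2 = sum((y - (a + b * t)) ** 2 for t, y in zip(times, values))
    s2 = resid2 / (n - 2)                       # residual variance
    sig_b = sqrt(s2 / sxx)                      # slope uncertainty
    sig_a = sqrt(s2 * (1.0 / n + tbar ** 2 / sxx))  # intercept uncertainty
    return (sig_a + sig_b) / (2 * sigma_ref)
```

A noiseless track scores 0, and noisier or sparser tracks score higher, which is the property the accounting and rating models need in order to weigh contributions.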
Video redaction: a survey and comparison of enabling technologies
NASA Astrophysics Data System (ADS)
Sah, Shagan; Shringi, Ameya; Ptucha, Raymond; Burry, Aaron; Loce, Robert
2017-09-01
With the prevalence of video recordings from smart phones, dash cams, body cams, and conventional surveillance cameras, privacy protection has become a major concern, especially in light of legislation such as the Freedom of Information Act. Video redaction is used to obfuscate sensitive and personally identifiable information. Today's typical workflow involves simple detection, tracking, and manual intervention. Automated methods rely on accurate detection mechanisms paired with robust tracking methods across the video sequence to ensure the redaction of all sensitive information while minimizing spurious obfuscations. Recent studies have explored the use of convolutional neural networks and recurrent neural networks for object detection and tracking. The present paper reviews the redaction problem and compares a few state-of-the-art detection, tracking, and obfuscation methods as they relate to redaction. The comparison introduces an evaluation metric that is specific to video redaction performance. The metric can be evaluated in a manner that balances the penalty for false negatives and false positives according to the needs of a particular application, thereby assisting in the selection of component methods and their associated hyperparameters so that the redacted video has fewer frames requiring manual review.
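The abstract does not specify the metric's formula, but the idea, weighting per-frame false negatives (missed sensitive regions) against false positives (spurious obfuscations) and counting the frames whose penalty forces manual review, can be sketched as follows. Weights and threshold are illustrative placeholders.

```python
def frames_needing_review(frames, w_fn=2.0, w_fp=1.0, threshold=0.0):
    """frames: per-frame (missed, spurious) counts. A frame needs manual review
    when its weighted penalty w_fn*missed + w_fp*spurious exceeds the threshold."""
    return [i for i, (fn, fp) in enumerate(frames)
            if w_fn * fn + w_fp * fp > threshold]

# Three frames: fully redacted, one missed face, three spurious blurs.
flagged = frames_needing_review([(0, 0), (1, 0), (0, 3)])  # → [1, 2]
```

Raising `w_fn` relative to `w_fp` encodes the usual redaction priority that a privacy leak is costlier than an unnecessary blur.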
Consumer sleep tracking devices: a critical review.
Lee, Jeon; Finkelstein, Joseph
2015-01-01
Consumer sleep tracking devices are widely advertised as effective means to monitor and manage sleep quality and to provide positive effects on overall health. However, objective evidence supporting these claims is not always readily available. The goal of this study was to perform a comprehensive review of available information on six representative sleep tracking devices: BodyMedia FIT, Fitbit Flex, Jawbone UP, Basis Band, Innovative Sleep Solutions SleepTracker, and Zeo Sleep Manager Pro. The review was conducted along the following dimensions: output metrics, theoretical frameworks, systematic evaluation, and FDA clearance. The review identified a critical lack of basic information about the devices: five out of six devices provided no supporting information on their sensor accuracy, and four out of six devices provided no information on their output metrics accuracy. Only three devices were found to have related peer-reviewed articles. However, in these articles wake detection accuracy was revealed to be quite low and to vary widely (BodyMedia, 49.9±3.6%; Fitbit, 19.8%; Zeo, 78.9% to 83.5%). No supporting evidence was provided on how well tracking devices can help mitigate sleep loss and manage sleep disturbances in everyday life.
Standard metrics for a plug-and-play tracker
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Young, Darrell
2012-06-01
The Motion Imagery Standards Board (MISB) has previously established a metadata "micro-architecture" for standards-based tracking. The intent of this work is to facilitate both the collaborative development of competent tracking systems and the potentially distributed and dispersed execution of tracker system components in real-world execution environments. The approach standardizes a set of five quasi-sequential modules in image-based tracking. However, in order to make the plug-and-play architecture truly useful, we need metrics associated with each module (so that, for instance, a researcher who "plugs in" a new component can ascertain whether it performed better or worse). This paper proposes a new, unifying set of metrics based on an information-theoretic approach to tracking, which the MISB is nominating as DoD/IC/NATO standards.
NASA Technical Reports Server (NTRS)
Shim, J. S.; Kuznetsova, M.; Rastatter, L.; Hesse, M.; Bilitza, D.; Butala, M.; Codrescu, M.; Emery, B.; Foster, B.; Fuller-Rowell, T.;
2011-01-01
Objective quantification of model performance based on metrics helps us evaluate the current state of space physics modeling capability, address differences among various modeling approaches, and track model improvements over time. The Coupling, Energetics, and Dynamics of Atmospheric Regions (CEDAR) Electrodynamics Thermosphere Ionosphere (ETI) Challenge was initiated in 2009 to assess the accuracy of various ionosphere/thermosphere models in reproducing ionosphere and thermosphere parameters. A total of nine events and five physical parameters were selected for comparison between model outputs and observations. The nine events included two strong and one moderate geomagnetic storm events from GEM Challenge events, and three moderate storms and three quiet periods from the first half of the International Polar Year (IPY) campaign, which lasted for 2 years, from March 2007 to March 2009. The five physical parameters selected were NmF2 and hmF2 from ISRs and LEO satellites such as CHAMP and COSMIC, vertical drifts at Jicamarca, and electron and neutral densities along the track of the CHAMP satellite. For this study, four different metrics and up to 10 models were used. In this paper, we focus on preliminary results of the study using ground-based measurements, which include NmF2 and hmF2 from incoherent scatter radars (ISRs) and vertical drifts at Jicamarca. The results show that model performance strongly depends on the type of metric used, and thus no single model ranks first on all of the metrics. The analysis further indicates that model performance also varies with latitude and geomagnetic activity level.
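The metric dependence noted above is easy to demonstrate: a model with a constant offset from the observations can score perfectly on correlation yet poorly on bias and RMSE. The sketch below computes three standard skill metrics on synthetic data (not CEDAR Challenge values).

```python
from math import sqrt

def rmse(model, obs):
    """Root mean squared error between model output and observations."""
    return sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def bias(model, obs):
    """Mean model-minus-observation difference."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def pearson(model, obs):
    """Pearson correlation coefficient."""
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((a - mm) * (b - mo) for a, b in zip(model, obs))
    return cov / sqrt(sum((a - mm) ** 2 for a in model)
                      * sum((b - mo) ** 2 for b in obs))

obs = [1.0, 2.0, 3.0, 4.0]
offset_model = [3.0, 4.0, 5.0, 6.0]  # perfectly correlated, badly biased
```

Here `pearson(offset_model, obs)` is 1.0 while `bias` and `rmse` are both 2.0, so a correlation-based ranking and an error-based ranking can disagree, which is exactly why no single model tops every metric in the Challenge results.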
Integrated Resilient Aircraft Control Project Full Scale Flight Validation
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2009-01-01
Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and establish adaptive control as a viable design solution for increased aircraft resilience.
The psychometrics of mental workload: multiple measures are sensitive but divergent.
Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian
2015-02-01
A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
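The true-positive and false-negative rates described above can be sketched for binary masks as follows. This is an illustrative Python fragment, not the framework's actual 3D Slicer module API; the rate definitions are one plausible reading of the metrics.

```python
import numpy as np

def segmentation_rates(ground_truth, segmented):
    """Per-image rates for a binary bone mask versus manual ground truth.

    Both inputs are boolean arrays of the same shape. The names and exact
    normalizations here are illustrative, not the framework's API.
    """
    gt = ground_truth.astype(bool)
    seg = segmented.astype(bool)
    tp = np.logical_and(gt, seg).sum()    # bone correctly segmented
    fn = np.logical_and(gt, ~seg).sum()   # bone missed by the algorithm
    tn = np.logical_and(~gt, ~seg).sum()  # boneless region left unsegmented
    tp_rate = tp / max(gt.sum(), 1)
    fn_rate = fn / max(gt.sum(), 1)
    tn_rate = tn / max((~gt).sum(), 1)
    return tp_rate, fn_rate, tn_rate

gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
seg = np.zeros((4, 4), dtype=bool); seg[1:3, 1:4] = True
print(segmentation_rates(gt, seg))
```

Averaging these per-slice rates along a volume, and taking their standard deviation, gives the kind of per-volume summary the framework reports.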
Eye Tracking Metrics for Workload Estimation in Flight Deck Operation
NASA Technical Reports Server (NTRS)
Ellis, Kyle; Schnell, Thomas
2010-01-01
Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and thus, ostensibly, of visual attention distribution in the cockpit during approach-to-land tasks at various levels of automation, by means of the same metrics used for workload algorithm development.
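A saccade-length proxy of the kind described can be computed from successive gaze coordinates. The sketch below is illustrative only and is not the study's processing pipeline, which would first segment raw gaze samples into fixations and saccades.

```python
import math

def mean_saccade_length(fixations):
    """Mean Euclidean distance between consecutive fixation centroids.

    `fixations` is a list of (x, y) gaze coordinates; this is a simple
    proxy for the saccade-length metric, not the study's exact method.
    """
    if len(fixations) < 2:
        return 0.0
    steps = [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]
    return sum(steps) / len(steps)

# Larger mean saccades would be expected under full automation.
print(mean_saccade_length([(0, 0), (3, 4), (3, 4), (6, 8)]))
```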
The Value of Metrics for Science Data Center Management
NASA Astrophysics Data System (ADS)
Moses, J.; Behnke, J.; Watts, T. H.; Lu, Y.
2005-12-01
The Earth Observing System Data and Information System (EOSDIS) has been collecting and analyzing records of science data archive, processing and product distribution for more than 10 years. The types of information collected and the analysis performed have matured and progressed to become an integral and necessary part of the system management and planning functions. Science data center managers are realizing the importance that metrics can play in influencing and validating their business model. New efforts focus on better understanding of users and their methods. Examples include tracking user web site interactions and conducting user surveys such as the government authorized American Customer Satisfaction Index survey. This paper discusses the metrics methodology, processes and applications that are growing in EOSDIS, the driving requirements and compelling events, and the future envisioned for metrics as an integral part of earth science data systems.
Partridge, Roland W; Hughes, Mark A; Brennan, Paul M; Hennessey, Iain A M
2014-08-01
Objective performance feedback has potential to maximize the training benefit of laparoscopic simulators. Instrument movement metrics are, however, currently the preserve of complex and expensive systems. We aimed to develop and validate affordable, user-ready software that provides objective feedback by tracking instrument movement in a "take-home" laparoscopic simulator. Computer-vision processing tracks the movement of colored bands placed around the distal instrument shafts. The position of each instrument is logged from the simulator camera feed and movement metrics calculated in real time. Ten novices (junior doctors) and 13 general surgery trainees (StR) (training years 3-7) performed a standardized task (threading string through hoops) on the eoSim (eoSurgical™ Ltd., Edinburgh, Scotland, United Kingdom) take-home laparoscopic simulator. Statistical analysis was performed using unpaired t tests with Welch's correction. The software was able to track the instrument tips reliably and effectively. Significant differences between the two groups were observed in time to complete task (StR versus novice, 2 minutes 33 seconds versus 9 minutes 53 seconds; P=.01), total distance traveled by instruments (3.29 m versus 11.38 m, respectively; P=.01), average instrument motion smoothness (0.15 mm/s³ versus 0.06 mm/s³, respectively; P<.01), and handedness (mean difference between dominant and nondominant hand) (0.55 m versus 2.43 m, respectively; P=.03). There was no significant difference seen in the distance between instrument tips, acceleration, speed of instruments, or time off-screen. We have developed software that brings objective performance feedback to the portable laparoscopic box simulator. Construct validity has been demonstrated. Removing the need for additional motion-tracking hardware makes it affordable and accessible.
It is user-ready and has the potential to enhance the training benefit of portable simulators both in the workplace and at home.
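Two of the reported motion metrics, path length and handedness, can be sketched directly from per-frame tip positions. This is illustrative Python; the function names are ours, not the eoSim software's API, and real input would come from the color-band tracker.

```python
import numpy as np

def path_length(positions):
    """Total distance travelled by an instrument tip.

    `positions` is an (N, 2) sequence of per-frame tip coordinates
    extracted from the simulator camera feed.
    """
    diffs = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

def handedness(dominant, nondominant):
    """Difference in distance travelled between dominant and nondominant hands."""
    return path_length(dominant) - path_length(nondominant)

dom = [(0, 0), (3, 4), (6, 8)]
nondom = [(0, 0), (0, 1)]
print(path_length(dom), handedness(dom, nondom))  # 10.0 9.0
```

A smaller handedness value, as seen in the trainee group, indicates more balanced use of both instruments.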
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean
2015-12-01
Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Advanced Navigation Strategies For Asteroid Sample Return Missions
NASA Technical Reports Server (NTRS)
Getzandanner, K.; Bauman, J.; Williams, B.; Carpenter, J.
2010-01-01
Flyby and rendezvous missions to asteroids have been accomplished using navigation techniques derived from experience gained in planetary exploration. This paper presents analysis of advanced navigation techniques required to meet unique challenges for precision navigation to acquire a sample from an asteroid and return it to Earth. These techniques rely on tracking data types such as spacecraft-based laser ranging and optical landmark tracking in addition to the traditional Earth-based Deep Space Network radio metric tracking. A systematic study of navigation strategy, including the navigation event timeline and reduction in spacecraft-asteroid relative errors, has been performed using simulation and covariance analysis on a representative mission.
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat
de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred
2015-01-01
The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. 
This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations in behaviour at the individual level. PMID:26107643
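The step distances and turning angles fed to such classification trees can be sketched for planar fixes as follows. This is a minimal illustration, not the authors' implementation; real GPS fixes would first be projected from latitude/longitude to a planar coordinate system.

```python
import math

def movement_metrics(fixes):
    """Step lengths and turning angles from consecutive GPS fixes.

    `fixes` is a list of planar (x, y) coordinates. The turning angle is
    the change in heading between successive steps, in degrees, wrapped
    to (-180, 180].
    """
    steps = [math.dist(a, b) for a, b in zip(fixes, fixes[1:])]
    headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                for a, b in zip(fixes, fixes[1:])]
    turns = [(math.degrees(h2 - h1) + 180) % 360 - 180
             for h1, h2 in zip(headings, headings[1:])]
    return steps, turns

# Three unit steps tracing a U-shape: two 90-degree turns.
steps, turns = movement_metrics([(0, 0), (1, 0), (1, 1), (0, 1)])
print(steps, turns)
```

Short steps with large, erratic turning angles would suggest Foraging, while long steps with small turning angles would suggest Walking; the decision tree formalizes such thresholds.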
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, significantly increases 2-D, as well as 3-D, tracking performance. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively via the rendered scenes.
Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J
2017-02-01
The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing. 
However, this method does not take into account the quality of the suture. Thus, future works will focus on developing new methods combining motion analysis and qualitative outcome evaluation to provide a complete performance assessment to trainees.
Jiang, Jingfeng; Hall, Timothy J
2011-04-01
A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands. © 2011 IEEE
An experimental comparison of online object-tracking algorithms
NASA Astrophysics Data System (ADS)
Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan
2011-09-01
This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods, including the IVT, VRT, FragT, BoostT, SemiT, BeSemiT, L1T, MILT, VTD and TLD algorithms, on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.
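Typical per-frame measures used in such comparisons, bounding-box overlap and center location error, can be sketched as follows. This is generic Python, not tied to the paper's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes.

    A standard per-frame overlap score between a tracker's output box
    and the ground-truth box; 1.0 is a perfect match.
    """
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def center_error(box_a, box_b):
    """Euclidean distance between box centers (center location error)."""
    ca = ((box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2)
    cb = ((box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2)
    return ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5

print(iou((0, 0, 2, 2), (1, 1, 3, 3)), center_error((0, 0, 2, 2), (1, 1, 3, 3)))
```

Averaging either quantity over a sequence (or thresholding overlap to get a success rate) yields the kind of per-sequence scores used to rank trackers.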
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
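The OSPA metric mentioned above can be sketched by brute force for small point sets. The fragment below follows the standard cutoff-based definition, with `c` the cutoff and `p` the order; it is an illustration, not an optimized solver (real implementations use an assignment algorithm rather than enumerating permutations).

```python
import math
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between two point sets, brute force over assignments.

    Distances are capped at cutoff `c`, and each cardinality-mismatch
    element contributes the full penalty c**p. Suitable for small sets.
    """
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    best = min(
        sum(min(math.dist(x, y), c) ** p for x, y in zip(X, perm))
        for perm in permutations(Y, m)
    )
    return ((best + c ** p * (n - m)) / n) ** (1 / p)

# One true target matched exactly, plus one false track far away.
print(ospa([(0, 0)], [(0, 0), (100, 100)], c=10.0, p=2))
```

The cardinality penalty is why a filter that suppresses false tracks (as the ILH aims to do) improves its OSPA score even when localization accuracy is unchanged.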
Tracking and data system support for the Viking 1975 mission to Mars. Volume 3: Planetary operations
NASA Technical Reports Server (NTRS)
Mudgway, D. J.
1977-01-01
The support provided by the Deep Space Network to the 1975 Viking Mission from the first landing on Mars July 1976 to the end of the Prime Mission on November 15, 1976 is described and evaluated. Tracking and data acquisition support required the continuous operation of a worldwide network of tracking stations with 64-meter and 26-meter diameter antennas, together with a global communications system for the transfer of commands, telemetry, and radio metric data between the stations and the Network Operations Control Center in Pasadena, California. Performance of the deep-space communications links between Earth and Mars, and innovative new management techniques for operations and data handling are included.
JPSS-1 VIIRS Pre-Launch Radiometric Performance
NASA Technical Reports Server (NTRS)
Oudrari, Hassan; Mcintire, Jeffrey; Xiong, Xiaoxiong; Butler, James; Ji, Qiang; Schwarting, Tom; Zeng, Jinan
2015-01-01
The first Joint Polar Satellite System (JPSS-1 or J1) mission is scheduled to launch in January 2017, and will be very similar to the Suomi-National Polar-orbiting Partnership (SNPP) mission. The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the J1 spacecraft completed its sensor level performance testing in December 2014. The VIIRS instrument is expected to provide valuable information about the Earth environment and properties on a daily basis, using a wide-swath (3,040 km) cross-track scanning radiometer. The design covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands, from 0.412 µm to 12.01 µm, and has spatial resolutions of 370 m and 740 m at nadir for imaging and moderate bands, respectively. This paper will provide an overview of pre-launch J1 VIIRS performance testing and methodologies, describing the at-launch baseline radiometric performance as well as the metrics needed to calibrate the instrument once on orbit. Key sensor performance metrics include the sensor signal to noise ratios (SNRs), dynamic range, reflective and emissive bands calibration performance, polarization sensitivity, bands spectral performance, response-vs-scan (RVS), near field response, and stray light rejection. A set of performance metrics generated during the pre-launch testing program will be compared to the sensor requirements and to SNPP VIIRS pre-launch performance.
Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty
Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.
Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.
2016-01-01
Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. 
We discuss the advantages and limitations of our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance. PMID:27152838
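The covariate-adjusted comparison described above can be sketched in miniature: regress a citation metric on a covariate, standardize the residuals, and percentile-rank faculty on what remains. The data, the single covariate, and the ordinary-least-squares form below are hypothetical stand-ins for the paper's multi-covariate deviance-residual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical faculty data: academic age (years since Ph.D.) and
# log-transformed citation counts that grow with age plus noise.
age = rng.uniform(3, 35, size=200)
log_cites = 0.08 * age + rng.normal(0, 0.6, size=200)

# Ordinary least squares of log-citations on academic age.
slope, intercept = np.polyfit(age, log_cites, 1)
resid = log_cites - (slope * age + intercept)

# Standardized residuals: performance after accounting for the covariate.
std_resid = resid / resid.std(ddof=1)

# Percentile rank of each faculty member among covariate-adjusted peers.
pct = 100.0 * std_resid.argsort().argsort() / (len(std_resid) - 1)
```

Aggregating such residuals by institution, as the authors do, then compares universities on the same adjusted scale.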
Weissman, David E; Morrison, R Sean; Meier, Diane E
2010-02-01
Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.
Intubation Success in Critical Care Transport: A Multicenter Study.
Reichert, Ryan J; Gothard, Megan; Gothard, M David; Schwartz, Hamilton P; Bigham, Michael T
2018-02-21
Tracheal intubation (TI) is a lifesaving critical care skill. Failed TI attempts, however, can harm patients. Critical care transport (CCT) teams function as the first point of critical care contact for patients being transported to tertiary medical centers for specialized surgical, medical, and trauma care. The Ground and Air Medical qUality in Transport (GAMUT) Quality Improvement Collaborative uses a quality metric database to track CCT quality metric performance, including TI. We sought to describe TI among GAMUT participants with the hypothesis that CCT would perform better than other prehospital TI reports and similarly to hospital TI success. The GAMUT Database is a global, voluntary database for tracking consensus quality metric performance among CCT programs performing neonatal, pediatric, and adult transports. The TI-specific quality metrics are "first attempt TI success" and "definitive airway sans hypoxia/hypotension on first attempt (DASH-1A)." The 2015 GAMUT Database was queried and analysis included patient age, program type, and intubation success rate. Analysis included simple statistics and Pearson chi-square with Bonferroni-adjusted post hoc z tests (significance = p < 0.05 via two-sided testing). Overall, 85,704 patient contacts (neonatal n [%] = 12,664 [14.8%], pediatric n [%] = 28,992 [33.8%], adult n [%] = 44,048 [51.4%]) were included, with 4,036 (4.7%) TI attempts. First attempt TI success was lowest in neonates (59.3%, 617 attempts), better in pediatrics (81.7%, 519 attempts), and best in adults (87%, 2900 attempts), p < 0.001. Adult-focused CCT teams had higher overall first attempt TI success versus pediatric- and neonatal-focused teams (86.9% vs. 63.5%, p < 0.001) and also in pediatric first attempt TI success (86.5% vs. 75.3%, p < 0.001). DASH-1A rates were lower across all patient types (neonatal = 51.9%, pediatric = 74.3%, adult = 79.8%). 
CCT TI is not uncommon, and rates of TI and DASH-1A success are higher in adult patients and adult-focused CCT teams. TI success rates are higher in CCT than in other prehospital settings, but lower than in-hospital TI success rates. Identifying factors that influence TI success among high performers should inform best-practice strategies for TI.
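The group comparison reported above rests on a Pearson chi-square test over a success/failure contingency table. A minimal sketch, reconstructing approximate counts from the reported rates and attempt numbers (the exact cell counts are not given in the abstract):

```python
import numpy as np
from scipy.stats import chi2_contingency

# First-attempt intubation attempts by age group: neonatal, pediatric, adult.
attempts = np.array([617, 519, 2900])
rates = np.array([0.593, 0.817, 0.870])  # reported success rates

# Approximate successes and failures per group.
success = np.round(attempts * rates).astype(int)
table = np.vstack([success, attempts - success])  # 2 x 3 contingency table

chi2, p, dof, expected = chi2_contingency(table)
```

With counts this size and rates this far apart, the test reproduces the abstract's p < 0.001; the Bonferroni-adjusted post hoc z tests would then localize which pairs of groups differ.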
Rispin, Karen; Wee, Joy
2015-07-01
This study was conducted to compare the performance of three types of wheelchairs in a low-resource setting. The larger goal was to provide information which will enable more effective use of limited funds by wheelchair manufacturers and suppliers in low-resource settings. The Motivation Rough Terrain and Whirlwind Rough Rider were compared in six skills tests which participants completed in one wheelchair type and then a day later in the other. A hospital-style folding transport wheelchair was also included in one test. For all skills, participants rated the ease or difficulty on a visual analogue scale. For all tracks, distance traveled and the physiological cost index were recorded. Data were analyzed using repeated measures analysis of variance. The Motivation wheelchair outperformed the Whirlwind wheelchair on rough and smooth tracks, and in some metrics on the tight spaces track. Motivation and Whirlwind wheelchairs significantly outperformed the hospital transport wheelchair in all metrics on the rough track skills test. This comparative study provides data that are valuable for manufacturers and for those who provide wheelchairs to users. The comparison with the hospital-style transport chair confirms the cost to users of inappropriate wheelchair provision. Implications for Rehabilitation For those with compromised lower limb function, wheelchairs are essential to enable full participation and improved quality of life. Therefore, provision of wheelchairs which effectively enable mobility in the cultures and environments in which people with disabilities live is crucial. This includes low-resource settings where the need for appropriate seating is especially urgent. A repeated measures study to measure wheelchair performance in everyday skills in the setting where wheelchairs are used gives information on the quality of mobility provided by those wheelchairs. 
This study highlights differences in the performance of three types of wheelchairs often distributed in low-resource settings. This information can improve mobility for wheelchair users in those settings by enabling wheelchair manufacturers to optimize wheelchair design and providers to optimize the use of limited funds.
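The physiological cost index (PCI) recorded on each track is conventionally computed as the heart-rate increase of propulsion divided by travel speed, giving beats per metre. A minimal sketch with hypothetical numbers (the study's actual measurement protocol and values are not given in the abstract):

```python
def physiological_cost_index(hr_work, hr_rest, speed_m_per_min):
    """PCI in beats per metre: heart-rate cost of movement normalised by
    speed, so a lower value indicates more efficient mobility."""
    return (hr_work - hr_rest) / speed_m_per_min

# Hypothetical trial: resting HR 70 bpm, propulsion HR 102 bpm, 40 m/min.
pci = physiological_cost_index(hr_work=102, hr_rest=70, speed_m_per_min=40)
```

Comparing PCI across wheelchair types on the same track then isolates the energetic cost attributable to the chair rather than the user.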
The five traps of performance measurement.
Likierman, Andrew
2009-10-01
Evaluating a company's performance often entails wading through a thicket of numbers produced by a few simple metrics, writes the author, and senior executives leave measurement to those whose specialty is spreadsheets. To take ownership of performance assessment, those executives should find qualitative, forward-looking measures that will help them avoid five common traps: Measuring against yourself. Find data from outside the company, and reward relative, rather than absolute, performance. Enterprise Rent-A-Car uses a service quality index to measure customers' repeat purchase intentions. Looking backward. Use measures that lead rather than lag the profits in your business. Humana, a health insurer, found that the sickest 10% of its patients account for 80% of its costs; now it offers customers incentives for early screening. Putting your faith in numbers. The soft drinks company Britvic evaluates its executive coaching program not by trying to assign it an ROI number but by tracking participants' careers for a year. Gaming your metrics. The law firm Clifford Chance replaced its single, easy-to-game metric of billable hours with seven criteria on which to base bonuses. Sticking to your numbers too long. Be precise about what you want to assess and explicit about what metrics are assessing it. Such clarity would have helped investors interpret the AAA ratings involved in the financial meltdown. Really good assessment will combine finance managers' relative independence with line managers' expertise.
Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard
2015-08-01
In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
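The distinction the authors draw between phase-based and amplitude-based connectivity can be illustrated on synthetic signals: coherence captures a consistent phase relationship, while amplitude correlation tracks co-varying envelopes. This is a toy sketch, not the authors' pipeline; the sampling rate, band limits, and signals are hypothetical.

```python
import numpy as np
from scipy.signal import coherence, hilbert

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# Two toy "sensors" sharing an amplitude-modulated 20 Hz (beta) component.
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
shared = envelope * np.sin(2 * np.pi * 20 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

# Phase-based metric: magnitude-squared coherence at the 20 Hz bin.
f, Cxy = coherence(x, y, fs=fs, nperseg=512)
beta_coh = Cxy[np.abs(f - 20).argmin()]

# Amplitude-based metric: correlation of Hilbert envelopes.
env_corr = np.corrcoef(np.abs(hilbert(x)), np.abs(hilbert(y)))[0, 1]
```

On real data the two metrics can dissociate, which is exactly the contrast the study exploits between auditory and visual areas.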
Track reconstruction at LHC as a collaborative data challenge use case with RAMP
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina; Braun, Nils; Calafiura, Paolo; Farrell, Steven; Gemmler, Jochen; Germain, Cécile; Gligorov, Vladimir Vava; Golling, Tobias; Gray, Heather; Guyon, Isabelle; Hushchyn, Mikhail; Innocente, Vincenzo; Kégl, Balázs; Neuhaus, Sara; Rousseau, David; Salzburger, Andreas; Ustyuzhanin, Andrei; Vlimant, Jean-Roch; Wessel, Christian; Yilmaz, Yetkin
2017-08-01
Charged particle track reconstruction is a major component of data processing in high-energy physics experiments such as those at the Large Hadron Collider (LHC), and is expected to become increasingly challenging as collision rates rise. A simplified two-dimensional version of the track reconstruction problem was set up on a collaborative platform, RAMP, so that developers could prototype and test new ideas. A small-scale competition was held during the Connecting The Dots / Intelligent Trackers 2017 (CTDWIT 2017) workshop. Despite the short time scale, a number of different approaches were developed and compared along a single score metric, which was kept generic enough to summarize performance in terms of both efficiency and fake rate.
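A single score combining efficiency and fake rate might look like the sketch below. This is an illustrative stand-in, not the actual RAMP metric, whose exact form is not given in the abstract.

```python
def tracking_score(matched, true_tracks, reco_tracks, fakes):
    """Toy combined score: reconstruction efficiency discounted by the
    fraction of reconstructed tracks that are fakes. 1.0 is perfect."""
    efficiency = matched / true_tracks        # true tracks found
    fake_rate = fakes / reco_tracks           # reconstructed tracks that are fake
    return efficiency * (1.0 - fake_rate)

# Hypothetical event: 90 of 100 true tracks matched, 5 fakes among 95 reco.
score = tracking_score(matched=90, true_tracks=100, reco_tracks=95, fakes=5)
```

Collapsing the efficiency/fake-rate trade-off into one number is what lets heterogeneous competition entries be ranked on a single leaderboard.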
ERIC Educational Resources Information Center
Ryan, C. Anthony; Higgs, Bettie; Kilcommins, Shane
2009-01-01
Background: The resources, needs and implementation activities of educational projects are often straightforward to document, especially if objectives are clear. However, developing appropriate metrics and indicators of outcomes and performance is not only challenging but is often overlooked in the excitement of project design and implementation.…
Developing the Systems Engineering Experience Accelerator (SEEA) Prototype and Roadmap
2012-10-24
system attributes. These metrics track non-requirements performance, typically relate to production cost per unit, maintenance costs, training costs … immediately implement lessons learned from the training experience to the job, assuming the culture allows this. 1.3 MANAGEMENT PLAN/TECHNICAL OVERVIEW … resolving potential conflicts as they arise. Incrementally implement and continuously integrate capability in priority order, to ensure that final system
Presson, Nora; Beers, Sue R; Morrow, Lisa; Wagener, Lauren M; Bird, William A; Van Eman, Gina; Krishnaswamy, Deepa; Penderville, Joshua; Borrasso, Allison J; Benso, Steven; Puccio, Ava; Fissell, Catherine; Okonkwo, David O; Schneider, Walter
2015-09-01
To realize the potential value of tractography in traumatic brain injury (TBI), we must identify metrics that provide meaningful information about functional outcomes. The current study explores quantitative metrics describing the spatial properties of tractography from advanced diffusion imaging (High Definition Fiber Tracking, HDFT). In a small number of right-handed males from military TBI (N = 7) and civilian control (N = 6) samples, both tract homologue symmetry and tract spread (proportion of brain mask voxels contacted) differed for several tracts among civilian controls and extreme groups in the TBI sample (high scorers and low scorers) for verbal recall, serial reaction time, processing speed index, and trail-making. Notably, proportion of voxels contacted in the arcuate fasciculus distinguished high and low performers on the CVLT-II and PSI, potentially reflecting linguistic task demands, and GFA in the left corticospinal tract distinguished high and low performers in PSI and Trail Making Test Part A, potentially reflecting right hand motor response demands. The results suggest that, for advanced diffusion imaging, spatial properties of tractography may add analytic value to measures of tract anisotropy.
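The "tract spread" measure described above, the proportion of brain-mask voxels a tract contacts, reduces to a masked voxel count. A minimal sketch on toy volumes (the HDFT pipeline's actual masks and streamline handling are assumed, not shown):

```python
import numpy as np

def tract_spread(tract_mask, brain_mask):
    """Proportion of brain-mask voxels contacted by a tract's streamlines."""
    contacted = np.logical_and(tract_mask, brain_mask).sum()
    return contacted / brain_mask.sum()

# Toy volumes: a 10x10x10 "brain" with a tract occupying 50 of 1000 voxels.
brain = np.ones((10, 10, 10), dtype=bool)
tract = np.zeros_like(brain)
tract[4:6, 4:9, 4:9] = True  # 2 * 5 * 5 = 50 voxels
spread = tract_spread(tract, brain)
```

Comparing this fraction between hemispheric homologues gives the tract symmetry measure the study also examines.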
Upside-down: Perceived space affects object-based attention.
Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus
2017-07-01
Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Recommended metric for tracking visibility progress in the Regional Haze Rule.
Gantt, Brett; Beaver, Melinda; Timin, Brian; Lorang, Phil
2018-05-01
For many national parks and wilderness areas with special air quality protections (Class I areas) in the western United States (U.S.), wildfire smoke and dust events can have a large impact on visibility. The U.S. Environmental Protection Agency's (EPA) 1999 Regional Haze Rule used the 20% haziest days to track visibility changes over time even if they are dominated by smoke or dust. Visibility on the 20% haziest days has remained constant or degraded over the last 16 yr at some Class I areas despite widespread emission reductions from anthropogenic sources. To better track visibility changes specifically associated with anthropogenic pollution sources rather than natural sources, the EPA has revised the Regional Haze Rule to track visibility on the 20% most anthropogenically impaired (hereafter, most impaired) days rather than the haziest days. To support the implementation of this revised requirement, the EPA has proposed (but not finalized) a recommended metric for characterizing the anthropogenic and natural portions of the daily extinction budget at each site. This metric selects the 20% most impaired days based on these portions using a "delta deciview" approach to quantify the deciview scale impact of anthropogenic light extinction. Using this metric, sulfate and nitrate make up the majority of the anthropogenic extinction in 2015 on these days, with natural extinction largely made up of organic carbon mass in the eastern U.S. and a combination of organic carbon mass, dust components, and sea salt in the western U.S. For sites in the western U.S., the seasonality of days selected as the 20% most impaired is different than the seasonality of the 20% haziest days, with many more winter and spring days selected. Applying this new metric to the 2000-2015 period across sites representing Class I areas results in substantial changes in the calculated visibility trend for the northern Rockies and southwest U.S., but little change for the eastern U.S. 
Changing the approach for tracking visibility in the Regional Haze Rule allows the EPA, states, and the public to track visibility on days when reductions in anthropogenic emissions have the greatest potential to improve the view. The calculations involved with the recommended metric can be incorporated into the routine IMPROVE (Interagency Monitoring of Protected Visual Environments) data processing, enabling rapid analysis of current and future visibility trends. Natural visibility conditions are important in the calculations for the recommended metric, necessitating additional analysis and potential refinement of their values.
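The "delta deciview" quantity underlying the metric follows from the standard haze index, dv = 10 ln(b_ext / 10), with extinction b_ext in inverse megameters and 10 Mm⁻¹ as the particle-free Rayleigh baseline. A sketch with hypothetical extinction values (the EPA's full procedure also apportions species-level extinction, which is not shown here):

```python
import math

def deciview(b_ext):
    """Haze index in deciviews for total light extinction b_ext (Mm^-1)."""
    return 10.0 * math.log(b_ext / 10.0)

def delta_deciview(natural_ext, anthro_ext):
    """Deciview-scale impact of anthropogenic extinction: haze with both
    contributions minus haze from the natural portion alone."""
    return deciview(natural_ext + anthro_ext) - deciview(natural_ext)

# Hypothetical day: 30 Mm^-1 natural and 20 Mm^-1 anthropogenic extinction.
dd = delta_deciview(natural_ext=30.0, anthro_ext=20.0)
```

Ranking days by this delta, rather than by total haze, is what lets smoke- and dust-dominated days drop out of the 20% most impaired set.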
Effect of an emergency department fast track on Press-Ganey patient satisfaction scores.
Hwang, Calvin E; Lipman, Grant S; Kane, Marlena
2015-01-01
Mandated patient surveys have become an integral part of Medicare remuneration, putting hundreds of millions of dollars in funding at risk. The Centers for Medicare & Medicaid Services (CMS) recently announced a patient experience survey for the emergency department (ED). Development of an ED Fast Track, where lower-acuity patients are rapidly seen, has been shown to improve many of the metrics that CMS examines. This is the first study examining whether ED Fast Track implementation affects Press-Ganey scores of patient satisfaction. We analyzed returned Press-Ganey questionnaires from all ESI 4 and 5 patients seen 11 AM to 1 PM, August-December 2011 (pre-fast track), and during the identical hours of fast track, August-December 2012. Raw ordinal scores were converted to continuous scores for paired Student's t-test analysis. We calculated an odds ratio with 100% satisfaction considered a positive response. An academic ED with 52,000 annual visits had 140 pre-fast track and 85 fast track respondents. Implementation of a fast track significantly increased patient satisfaction with the following: wait times (68% satisfaction to 88%, OR 4.13, 95% CI [2.32-7.33]), doctor courtesy (90% to 95%, OR 1.97, 95% CI [1.04-3.73]), nurse courtesy (87% to 95%, OR 2.75, 95% CI [1.46-5.15]), pain control (79% to 87%, OR 2.13, 95% CI [1.16-3.92]), likelihood to recommend (81% to 90%, OR 2.62, 95% CI [1.42-4.83]), staff caring (82% to 91%, OR 2.82, 95% CI [1.54-5.19]), and staying informed about delays (66% to 83%, OR 3.00, 95% CI [1.65-5.44]). Implementation of an ED Fast Track more than doubled the odds of significant improvements in Press-Ganey patient satisfaction metrics and may play an important role in improving ED performance on CMS benchmarks.
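An odds ratio with a Wald 95% confidence interval, as reported above, comes from a 2x2 table of satisfied vs. not-satisfied respondents. The sketch below reconstructs approximate wait-time counts from the reported percentages; the study's exact cell counts (and hence its exact OR of 4.13) are not recoverable from the abstract.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a, b = satisfied / not satisfied with fast track;
    c, d = satisfied / not satisfied pre-fast track."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Approximate wait-time satisfaction: 88% of 85 vs. 68% of 140 respondents.
a = round(0.88 * 85);  b = 85 - a
c = round(0.68 * 140); d = 140 - c
or_, lo, hi = odds_ratio_ci(a, b, c, d)
```

An interval excluding 1 corresponds to the significant improvements the authors report.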
Investigating emergency room service quality using lean manufacturing.
Abdelhadi, Abdelhakim
2015-01-01
The purpose of this paper is to investigate a lean manufacturing metric called Takt time as a benchmark measure of a public hospital's service quality. Lean manufacturing is an established managerial philosophy with a proven track record in industry. Takt time is applied to compare the relative efficiency of two emergency departments (EDs) belonging to the same public hospital, one serving male and the other female patients, with patient treatment lead time as the study's focus. Outcomes guide managers to improve patient services and hospital performance. Findings show that Takt time can be used as an effective measure of service efficiency, revealing relative efficiency and identifying bottlenecks in different departments providing the same services. The paper presents a new procedure for comparing relative efficiency between two EDs that can be applied to any healthcare facility.
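In lean terms, Takt time is available working time divided by customer demand, here the staffed minutes per arriving patient. Comparing it against the observed treatment lead time flags the slower department. The numbers below are hypothetical; the paper's actual ED data are not given in the abstract.

```python
def takt_time(available_minutes, patient_demand):
    """Takt time: available working time divided by units of demand,
    i.e., the pace at which each patient must be served to meet demand."""
    return available_minutes / patient_demand

# Hypothetical shift: 720 staffed minutes and 60 arrivals in each ED.
takt = takt_time(available_minutes=720, patient_demand=60)  # 12 min/patient

# If ED A's mean treatment lead time is 10 min and ED B's is 18 min,
# ED B is running slower than its Takt time and is the bottleneck.
lead_time_a, lead_time_b = 10.0, 18.0
bottleneck = "B" if lead_time_b > takt >= lead_time_a else "A"
```

The ratio of lead time to Takt time gives the relative-efficiency comparison the paper describes.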
Schultz, Elise V; Schultz, Christopher J; Carey, Lawrence D; Cecil, Daniel J; Bateman, Monte
2016-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
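The POD and FAR statistics used to evaluate the lightning jump system reduce to simple ratios over a warning/event contingency table. A minimal sketch with hypothetical verification counts:

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events that were
    preceded by a lightning jump warning."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of issued warnings not followed by
    a verifying event."""
    return false_alarms / (hits + false_alarms)

# Hypothetical season: 40 warned events, 10 missed events, 20 false alarms.
p = pod(hits=40, misses=10)          # 0.8
f = far(hits=40, false_alarms=20)    # ~0.33
```

As the abstract notes, small changes in what counts as a "hit" (the verification methodology) move both ratios, which is why that choice dominates the system's apparent skill.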
Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M
2017-09-01
Measuring quality outcomes is an important prerequisite for improving quality of care. Rhinosinusitis represents a high-value target for quality improvement because of its high prevalence, large economic burden, and wide practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.
Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.
Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M
2015-10-01
The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. 
We propose that transport teams across the country use these metrics to benchmark and guide their quality improvement activities.
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
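The core computation here, mapping covariance matrices into a flat vector space via the matrix logarithm, can be sketched in a few lines. This is a toy illustration with made-up matrices, not the authors' implementation:

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a symmetric positive definite (SPD) matrix
    # via eigendecomposition: log(M) = V diag(log w) V^T.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    # Log-Euclidean Riemannian distance between SPD matrices: map both
    # into the (flat) log domain, then take the Frobenius norm of the
    # difference, turning covariance comparison into vector-space math.
    return np.linalg.norm(spd_log(A) - spd_log(B), "fro")

# Two small feature-covariance matrices as a toy appearance model.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.5, 0.2], [0.2, 1.2]])
d_AB = log_euclidean_dist(A, B)
```

Because distances live in the log domain, incremental subspace learning reduces to ordinary Euclidean operations on the log-mapped matrices.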
Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul
2015-04-15
Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
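As a rough illustration of how a disconjugacy metric can be derived from paired pupil coordinates, the sketch below scores the variance of the left-right difference. The paper's five horizontal measures are not specified above, so this particular metric and the synthetic gaze data are assumptions:

```python
import numpy as np

def horizontal_disconjugacy(x_left, x_right):
    # Hypothetical summary metric: variance of the difference between
    # left and right pupil x-coordinates over the tracking window.
    # Conjugate gaze keeps the difference nearly constant; disconjugate
    # gaze inflates its variance.
    d = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return float(np.var(d))

t = np.linspace(0.0, 200.0, 2000)   # 200 s of pupil samples
base = np.sin(t / 10.0)             # stimulus-following gaze trace
conjugate = horizontal_disconjugacy(base, base + 0.1)  # fixed offset only
rng = np.random.default_rng(0)
disconjugate = horizontal_disconjugacy(base, base + rng.normal(0.0, 0.5, t.size))
```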
Immersive training and mentoring for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Nistor, Vasile; Allen, Brian; Dutson, E.; Faloutsos, P.; Carman, G. P.
2007-04-01
We describe in this paper a training system for minimally invasive surgery (MIS) that creates an immersive training simulation by recording the pathways of the instruments from an expert surgeon while performing an actual training task. Instrument spatial pathway data is stored and later accessed at the training station in order to visualize the ergonomic experience of the expert surgeon and trainees. Our system is based on tracking the spatial position and orientation of the instruments on the console for both the expert surgeon and the trainee. The technology is the result of recent developments in miniaturized position sensors that can be integrated seamlessly into the MIS instruments without compromising functionality. In order to continuously monitor the positions of laparoscopic tool tips, DC magnetic tracking sensors are used. A hardware-software interface transforms the coordinate data points into instrument pathways, while an intuitive graphic user interface displays the instruments' spatial position and orientation, along with endoscopic video information, for the mentor/trainee. These data are recorded and saved in a database for subsequent immersive training and training performance analysis. We use two 6 DOF DC magnetic trackers with a sensor diameter of just 1.3 mm - small enough for insertion into 4 French catheters - embedded in the shaft of an endoscopic grasper and a needle driver. One sensor is located at the distal end of the shaft while the second sensor is located at the proximal end of the shaft. The placement of these sensors does not impede the functionality of the instrument. Since the sensors are located inside the shaft, there are no sealing issues between the valve of the trocar and the instrument. We devised a peg transfer training task in accordance with validated training procedures, and tested our system on its ability to differentiate between the expert surgeon and the novices, based on a set of performance metrics. 
These performance metrics (motion smoothness, total path length, and time to completion) are derived from the kinematics of the instrument. An affine combination of these metrics provides a general score for training performance. Clear differentiation between the expert surgeons and the novice trainees is visible in the test results. Strictly kinematics-based performance metrics can be used to evaluate the training progress of MIS trainees in the context of UCLA - LTS.
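A minimal sketch of how such kinematic metrics can be computed from tracked tip positions follows. The mean-jerk smoothness proxy and the sample trajectory are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

def kinematic_metrics(positions, dt):
    # positions: (N, 3) tracked tip coordinates sampled every dt seconds.
    # Returns total path length, time to completion, and a smoothness
    # proxy (mean jerk magnitude, a common choice; the paper's exact
    # smoothness definition may differ).
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    total_time = (len(p) - 1) * dt
    jerk = np.diff(p, n=3, axis=0) / dt**3          # third finite difference
    mean_jerk = float(np.linalg.norm(jerk, axis=1).mean()) if len(jerk) else 0.0
    return path_length, total_time, mean_jerk

# A perfectly smooth straight-line motion: 1 unit traversed in 1 second.
straight = np.column_stack([np.linspace(0, 1, 11), np.zeros(11), np.zeros(11)])
length, duration, jerk_score = kinematic_metrics(straight, dt=0.1)
```

An affine combination (weighted sum) of such per-metric scores then yields a single training score, as the abstract describes.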
76 FR 18073 - Track Safety Standards; Concrete Crossties
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-01
... metrics would be undesirable and restrict certain fastener assembly designs and capabilities to control... Track Safety Standards Working Group IV. FRA's Approach to Concrete Crossties A. Rail Cant B. Automated... and non-compliant track geometry can cause high-concentrated non-uniform dynamic loading, usually...
Department of Defense Plan to Establish Public Access to the Results of Federally Funded Research
2015-02-01
journal articles, data management plans, and will track metrics on compliance and public usage. The current DTIC infrastructure will be modified to...tokens to authors who submit digitally formatted scientific data and articles. DTIC will establish compliance metrics in FY15. DoD will explore...15 10. METRICS, COMPLIANCE and EVALUATION
NASA Technical Reports Server (NTRS)
Renzetti, N. A.; Siegmeth, A. J.
1973-01-01
The Tracking and Data System supported the deep space phases of the Pioneer 6, 7, 8, and 9 missions, with two spacecraft in an inward trajectory and two spacecraft in an outward trajectory from the earth in heliocentric orbits. Scientific instruments aboard each of the spacecraft continued to register information relative to interplanetary particles and fields, and radio metric data generated by the network continued to improve our knowledge of the celestial mechanics of the solar system. In addition to network support activity detail, network performance and special support activities are covered.
EUV process improvement with novel litho track hardware
NASA Astrophysics Data System (ADS)
Stokes, Harold; Harumoto, Masahiko; Tanaka, Yuji; Kaneyama, Koji; Pieczulewski, Charles; Asai, Masaya
2017-03-01
Currently, there are many developments in the field of EUV lithography that are helping to move it towards increased HVM feasibility. Targeted improvements in hardware design for advanced lithography are of interest to our group, specifically for metrics such as CD uniformity, LWR, and defect density. Of course, our work is focused on EUV process steps that are specifically affected by litho track performance and, consequently, can be improved by litho track design improvement and optimization. In this study we are building on our experience to provide continual improvement for LWR, CDU, and defects as applied to a standard EUV process by employing novel hardware solutions on our SOKUDO DUO coat develop track system. Although it is preferable to achieve such improvements post-etch, we feel, as many do, that improvements after patterning are a precursor to improvements after etching. We hereby present our work utilizing the SOKUDO DUO coat develop track system with an ASML NXE:3300 in the IMEC (Leuven, Belgium) cleanroom environment to improve aggressive dense L/S patterns.
Prediction of user preference over shared-control paradigms for a robotic wheelchair.
Erdogan, Ahmetcan; Argall, Brenna D
2017-07-01
The design of intelligent powered wheelchairs has traditionally focused heavily on providing effective and efficient navigation assistance. Significantly less attention has been given to the end-user's preference between different assistance paradigms. It is possible to include these subjective evaluations in the design process, for example by soliciting feedback in post-experiment questionnaires. However, constantly querying the user for feedback during real-world operation is not practical. In this paper, we present a model that correlates objective performance metrics and subjective evaluations of autonomous wheelchair control paradigms. Using off-the-shelf machine learning techniques, we show that it is possible to build a model that can predict the most preferred shared-control method from task execution metrics such as effort, safety, performance and utilization. We further characterize the relative contributions of each of these metrics to the individual choice of most preferred assistance paradigm. Our evaluation includes Spinal Cord Injured (SCI) and uninjured subject groups. The results show that our proposed correlation model enables the continuous tracking of user preference and offers the possibility of autonomy that is customized to each user.
Metrix Matrix: A Cloud-Based System for Tracking Non-Relative Value Unit Value-Added Work Metrics.
Kovacs, Mark D; Sheafor, Douglas H; Thacker, Paul G; Hardie, Andrew D; Costello, Philip
2018-03-01
In the era of value-based medicine, it will become increasingly important for radiologists to provide metrics that demonstrate their value beyond clinical productivity. In this article the authors describe their institution's development of an easy-to-use system for tracking value-added but non-relative value unit (RVU)-based activities. Metrix Matrix is an efficient cloud-based system for tracking value-added work. A password-protected home page contains links to web-based forms created using Google Forms, with collected data populating Google Sheets spreadsheets. Value-added work metrics selected for tracking included interdisciplinary conferences, hospital committee meetings, consulting on nonbilled outside studies, and practice-based quality improvement. Over a period of 4 months, value-added work data were collected for all clinical attending faculty members in a university-based radiology department (n = 39). Time required for data entry was analyzed for 2 faculty members over the same time period. Thirty-nine faculty members (equivalent to 36.4 full-time equivalents) reported a total of 1,223.5 hours of value-added work time (VAWT). A formula was used to calculate "value-added RVUs" (vRVUs) from VAWT. VAWT amounted to 5,793.6 vRVUs or 6.0% of total work performed (vRVUs plus work RVUs [wRVUs]). Were vRVUs considered equivalent to wRVUs for staffing purposes, this would require an additional 2.3 full-time equivalents, on the basis of average wRVU calculations. Mean data entry time was 56.1 seconds per day per faculty member. As health care reimbursement evolves with an emphasis on value-based medicine, it is imperative that radiologists demonstrate the value they add to patient care beyond wRVUs. This free and easy-to-use cloud-based system allows the efficient quantification of value-added work activities. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Classification of Animal Movement Behavior through Residence in Space and Time.
Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R
2017-01-01
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. 
Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad- scale understanding of animal movement ecology.
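The partitioning idea behind RST can be illustrated with a toy classifier over normalized residence values. The thresholds and labels below are illustrative, not the authors' published decision rules:

```python
def rst_classify(norm_res_time, norm_res_dist, eps=0.0):
    # Sketch of the RST partition: with residence time and residence
    # distance normalized (e.g. z-scored) within a constant search
    # radius, the sign pattern separates behaviour states.
    if norm_res_time > eps and norm_res_dist <= eps:
        return "rest"                        # time-intensive only
    if norm_res_time > eps and norm_res_dist > eps:
        return "area-restricted search"      # time & distance intensive
    return "transit"                         # short time and distance
```

In practice the normalization adapts to the scale and individuality of each track, which is what lets one rule set apply across taxa.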
Evaluation Metrics for Biostatistical and Epidemiological Collaborations
Rubio, Doris McGartland; del Junco, Deborah J.; Bhore, Rafia; Lindsell, Christopher J.; Oster, Robert A.; Wittkowski, Knut M.; Welty, Leah J.; Li, Yi-Ju; DeMets, Dave
2011-01-01
Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. PMID:21284015
NASA Astrophysics Data System (ADS)
Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.
2018-02-01
A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of the deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and numerical robustness of the deformation (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r ⩾ 0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.
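Of the feasibility metrics listed, the Jacobian determinant is the most direct to compute from a displacement field. A minimal 2-D sketch (grid size, spacing, and the test field are assumptions made for illustration):

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=1.0):
    # Per-pixel Jacobian determinant of the mapping x -> x + u(x) for a
    # 2-D displacement field stored as arrays indexed [y, x]:
    #   det J = (1 + dux/dx) * (1 + duy/dy) - (dux/dy) * (duy/dx)
    # Values near 1 indicate volume preservation; det J <= 0 flags
    # physically implausible folding of the deformation.
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# A uniform 10% stretch along x: ux = 0.1 * x, uy = 0.
Y, X = np.mgrid[0:16, 0:16].astype(float)
det = jacobian_determinant_2d(0.1 * X, np.zeros_like(X))
```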
Metrics for measuring performance of market transformation initiatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, F.; Schlegel, J.; Grabner, K.
1998-07-01
Regulators have traditionally rewarded utility efficiency programs based on energy and demand savings. Now, many regulators are encouraging utilities and other program administrators to save energy by transforming markets. Prior to achieving sustainable market transformation, the program administrators often must take actions to understand the markets, establish baselines for success, reduce market barriers, build alliances, and build market momentum. Because these activities often precede savings, year-by-year measurement of savings can be an inappropriate measure of near-term success. Because ultimate success in transforming markets is defined in terms of sustainable changes in market structure and practice, traditional measures of success can also be misleading as initiatives reach maturity. This paper reviews early efforts in Massachusetts to develop metrics, or yardsticks, to gauge regulatory rewards for utility market transformation initiatives. From experience in multiparty negotiations, the authors review options for metrics based alternatively on market effects, outcomes, and good faith implementation. Additionally, alternative approaches are explored, based on end-results, interim results, and initial results. The political and practical constraints are described which have thus far led to a preference for one-year metrics, based primarily on good faith implementation. Strategies are offered for developing useful metrics which might be acceptable to regulators, advocates, and program administrators. Finally, they emphasize that the use of market transformation performance metrics is in its infancy. Both regulators and program administrators are encouraged to advance into this area with an experimental mind-set; don't put all the money on one horse until there's more of a track record.
Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.
Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M
2018-06-01
This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.
2013-08-01
earplug and earmuff showing HPD simulator elements for energy flow paths...unprotected or protected ear traditionally start with analysis of energy flow through schematic diagrams based on electroacoustic (EA) analogies between...Schröter, 1983; Schröter and Pösselt, 1986; Shaw and Thiessen, 1958, 1962; Zwislocki, 1957). The analysis method tracks energy flow through fluid and
Computer-enhanced laparoscopic training system (CELTS): bridging the gap.
Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeye, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L
2004-05-01
There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. 
This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
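The z-score idea behind a task-independent composite can be sketched as follows. The metric list, reference values, and equal weighting here are illustrative assumptions, not CELTS's actual calibration:

```python
import numpy as np

def celts_style_score(metrics, expert_mean, expert_std, higher_is_better):
    # Composite in the spirit of the CELTS z-score statistic: z-score
    # each metric against the expert reference distribution, flip the
    # sign where lower raw values are better (e.g. completion time),
    # and average into one standardized score. Experts then cluster
    # near 0 while trainees fall below.
    z = (np.asarray(metrics, float) - np.asarray(expert_mean, float)) \
        / np.asarray(expert_std, float)
    signs = np.where(np.asarray(higher_is_better), 1.0, -1.0)
    return float(np.mean(signs * z))

# Metrics: [smoothness (higher better), completion time in s (lower better)]
expert_like = celts_style_score([0.9, 60.0], [0.9, 60.0], [0.1, 10.0], [True, False])
novice_like = celts_style_score([0.5, 90.0], [0.9, 60.0], [0.1, 10.0], [True, False])
```

Standardizing per metric is what makes the score comparable across different tasks and training conditions.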
Brusa, Jamie L
2017-12-30
Successful recruiting for collegiate track & field athletes has become a more competitive and essential component of coaching. This study aims to determine the relationship between race performances of distance runners at the United States high school and National Collegiate Athletic Association (NCAA) levels. Conditional inference classification tree models were built and analysed to predict the probability that runners would qualify for the NCAA Division I National Cross Country Meet and/or the East or West NCAA Division I Outdoor Track & Field Preliminary Round based on their high school race times in the 800 m, 1600 m, and 3200 m. Prediction accuracies of the classification trees ranged from 60.0 to 76.6 percent. The models produced the most reliable estimates for predicting qualifiers in cross country, the 1500 m, and the 800 m for females and cross country, the 5000 m, and the 800 m for males. NCAA track & field coaches can use the results from this study as a guideline for recruiting decisions. Additionally, future studies can apply the methodological foundations of this research to predicting race performances under different qualification standards, such as national meets in other countries or Olympic qualifications, from previous race data.
Pirsiavash, Ali; Broumandan, Ali; Lachapelle, Gérard
2017-07-05
The performance of Signal Quality Monitoring (SQM) techniques under different multipath scenarios is analyzed. First, SQM variation profiles are investigated as critical requirements in evaluating the theoretical performance of SQM metrics. The sensitivity and effectiveness of SQM approaches for multipath detection and mitigation are then defined and analyzed by comparing SQM profiles and multipath error envelopes for different discriminators. Analytical discussion includes two discriminator strategies, namely narrow and high resolution correlator techniques, for BPSK(1) and BOC(1,1) signaling schemes. Data analysis is also carried out for static and kinematic scenarios to validate the SQM profiles and examine SQM performance in actual multipath environments. Results show that although SQM is sensitive to medium and long-delay multipath, its effectiveness in mitigating these ranges of multipath errors varies based on tracking strategy and signaling scheme. For short-delay multipath scenarios, the multipath effect on pseudorange measurements remains mostly undetected due to the low sensitivity of SQM metrics.
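For readers unfamiliar with SQM test statistics, a commonly used "delta" metric built from correlator amplitudes looks like the sketch below. The example amplitudes are invented, and the paper's exact metric definitions are not reproduced here:

```python
def sqm_delta_metric(early, late, prompt):
    # Classic "delta" SQM test statistic: the early/late correlator
    # amplitude asymmetry normalized by the prompt amplitude. An
    # undistorted, symmetric correlation peak yields ~0; multipath
    # skews the peak and pushes the metric away from zero. In a real
    # receiver a detection threshold would be set from this metric's
    # noise-only variance.
    return (early - late) / prompt

symmetric = sqm_delta_metric(0.5, 0.5, 1.0)    # clean, symmetric peak
multipath = sqm_delta_metric(0.62, 0.45, 1.0)  # skewed by a reflection
```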
2018-01-01
The data collection and reporting approaches of four major altmetric data aggregators are studied. The main aim of this study is to understand how differences in social media tracking and data collection methodologies can have effects on the analytical use of altmetric data. For this purpose, discrepancies in the metrics across aggregators have been studied in order to understand how the methodological choices adopted by these aggregators can explain the discrepancies found. Our results show that different forms of accessing the data from diverse social media platforms, together with different approaches of collecting, processing, summarizing, and updating social media metrics cause substantial differences in the data and metrics offered by these aggregators. These results highlight the importance that methodological choices in the tracking, collecting, and reporting of altmetric data can have in the analytical value of the data. Some recommendations for altmetric users and data aggregators are proposed and discussed. PMID:29772003
Application of online measures to monitor and evaluate multiplatform fusion performance
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.
1999-07-01
A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is again computed and the agent assesses its estimation process and refines it accordingly.
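For a Gaussian track state, the entropy in question has a closed form, so the utility of an offboard measurement can be scored as the entropy reduction it produces in the fused estimate. A minimal sketch (the covariances are illustrative):

```python
import numpy as np

def gaussian_track_entropy(P):
    # Differential entropy of a Gaussian track state with covariance P:
    #   H = 0.5 * ln((2 * pi * e)^n * det(P))
    # Lower entropy means less uncertainty about the target, matching
    # the quality metric described above.
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

before = gaussian_track_entropy(np.eye(2))         # prior localization
after = gaussian_track_entropy(0.25 * np.eye(2))   # after fusing a measurement
information_gain = before - after
```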
Kesler, Kyle; Dillon, Neal P; Fichera, Loris; Labadie, Robert F
2017-09-01
Objectives: Document human motions associated with cochlear implant electrode insertion at different speeds and determine the lower limit of continuous insertion speed by a human. Study Design: Observational. Setting: Academic medical center. Subjects and Methods: Cochlear implant forceps were coupled to a frame containing reflective fiducials, which enabled optical tracking of the forceps' tip position in real time. Otolaryngologists (n = 14) performed mock electrode insertions at different speeds based on recommendations from the literature: "fast" (96 mm/min), "stable" (as slow as possible without stopping), and "slow" (15 mm/min). For each insertion, the following metrics were calculated from the tracked position data: percentage of time at prescribed speed, percentage of time the surgeon stopped moving forward, and number of direction reversals (ie, going from forward to backward motion). Results: Fast insertion trials resulted in better adherence to the prescribed speed (45.4% of the overall time), no motion interruptions, and no reversals, as compared with slow insertions (18.6% of time at prescribed speed, 15.7% stopped time, and an average of 18.6 reversals per trial). These differences were statistically significant for all metrics (P < .01). The metrics for the fast and stable insertions were comparable; however, stable insertions were performed 44% slower on average. The mean stable insertion speed was 52 ± 19.3 mm/min. Conclusion: Results indicate that continuous insertion of a cochlear implant electrode at 15 mm/min is not feasible for human operators. The lower limit of continuous forward insertion is 52 mm/min on average. Guidelines on manual insertion kinematics should consider this practical limit of human motion.
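The three tracked-motion metrics can be reproduced from a 1-D insertion-depth trace along these lines. The sampling rate, tolerance band, and example traces are assumptions, not the study's exact processing:

```python
import numpy as np

def insertion_metrics(depth_mm, dt_min, target_speed, tol=0.2):
    # depth_mm: 1-D insertion depth samples taken every dt_min minutes,
    # i.e. the optically tracked tip position along the insertion axis.
    # Returns the fraction of time within +/- tol of the prescribed
    # speed (mm/min), the fraction of time stopped or retracting, and
    # the number of forward-to-backward reversals.
    v = np.diff(np.asarray(depth_mm, dtype=float)) / dt_min
    at_speed = float(np.mean(np.abs(v - target_speed) <= tol * target_speed))
    stopped = float(np.mean(v <= 0.0))
    s = np.sign(v)
    reversals = int(np.sum((s[:-1] > 0) & (s[1:] < 0)))
    return at_speed, stopped, reversals

# A steady "fast" insertion: 96 mm/min sampled once per second.
steady = np.arange(0, 21) * 96.0 / 60.0
at_speed, stopped, reversals = insertion_metrics(steady, 1.0 / 60.0, 96.0)
```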
2016-03-02
some closeness constant and dissimilar pairs be more distant than some larger constant. Online and non-linear extensions to the ITML methodology are...is obtained, instead of solving an objective function formed from the entire dataset. Many online learning methods have regret guarantees, that is...Metric learning seeks to learn a metric that encourages data points marked as similar to be close and data points marked as different to be far
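The snippet above concerns learned Mahalanobis metrics of the kind ITML produces, where a positive semidefinite matrix M defines the distance and is fit so similar pairs fall below a closeness constant and dissimilar pairs exceed a larger one. A minimal sketch of the distance itself (not the ITML optimization; the constants and pairs below are hypothetical):

```python
def mahalanobis(x, y, M):
    """Distance under a PSD matrix M: sqrt((x-y)^T M (x-y)). With M = I this
    reduces to Euclidean distance; ITML instead learns M from pair constraints."""
    d = [xi - yi for xi, yi in zip(x, y)]
    q = sum(d[i] * M[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))
    return q ** 0.5

I = [[1.0, 0.0], [0.0, 1.0]]
u, l = 1.0, 2.0                              # closeness / separation constants
d_sim = mahalanobis([0, 0], [0.5, 0.5], I)   # a "similar" pair: want d_sim <= u
d_dis = mahalanobis([0, 0], [3.0, 0.0], I)   # a "dissimilar" pair: want d_dis >= l
```

An online variant would update M one constrained pair at a time rather than solving over the entire dataset, which is where the regret guarantees mentioned above apply.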
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m² segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds 0.5 and 0.1. The third assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3%, 83.1%, and 75.6% of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet, and Inception-Resnet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
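The IoU metric used for the first two thresholds is the standard box-overlap ratio; a detection counts as correct when its IoU with a ground-truth box clears the threshold (0.5 or 0.1 here). A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by the area of the union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

Lowering the threshold from 0.5 to 0.1, as the paper does, credits detections that localize the track only loosely.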
Segre, Paolo S; Dakin, Roslyn; Zordan, Victor B; Dickinson, Michael H; Straw, Andrew D; Altshuler, Douglas L
2015-01-01
Despite recent advances in the study of animal flight, the biomechanical determinants of maneuverability are poorly understood. It is thought that maneuverability may be influenced by intrinsic body mass and wing morphology, and by physiological muscle capacity, but this hypothesis has not yet been evaluated because it requires tracking a large number of free flight maneuvers from known individuals. We used an automated tracking system to record flight sequences from 20 Anna's hummingbirds flying solo and in competition in a large chamber. We found that burst muscle capacity predicted most performance metrics. Hummingbirds with higher burst capacity flew with faster velocities, accelerations, and rotations, and they used more demanding complex turns. In contrast, body mass did not predict variation in maneuvering performance, and wing morphology predicted only the use of arcing turns and high centripetal accelerations. Collectively, our results indicate that burst muscle capacity is a key predictor of maneuverability. DOI: http://dx.doi.org/10.7554/eLife.11159.001 PMID:26583753
Evaluation metrics for biostatistical and epidemiological collaborations.
Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave
2011-10-15
Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. Copyright © 2011 John Wiley & Sons, Ltd.
3-D model-based vehicle tracking.
Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J
2005-10-01
This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved extended Kalman filter (EKF) is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
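The building block of the proposed similarity metric is the distance from an image point to a projected model line segment. A minimal sketch (the full metric aggregates such distances over all model segments under a candidate pose):

```python
import math

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the line segment ab, by projecting
    p onto the segment's supporting line and clamping to the endpoints."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:                  # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    cx, cy = ax + t * dx, ay + t * dy   # closest point on the segment
    return math.hypot(px - cx, py - cy)
```

Summing (or robustly averaging) this distance over edge points scores how well a pose hypothesis explains the image, which the pose refinement then minimizes.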
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
Defining quality metrics and improving safety and outcome in allergy care.
Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J
2014-04-01
The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompasses the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3), and dosing errors (n = 2). There were 7 episodes of anaphylaxis of which 2 were secondary to dosing errors for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.
Reconstructing the flight kinematics of swarming and mating in wild mosquitoes
Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.
2012-01-01
We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s⁻¹, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
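Because a fast-moving mosquito smears into a streak during each exposure, velocity can be recovered from the streak's endpoints and the exposure time, as the likelihood function above exploits. A minimal sketch (the exposure time and example coordinates are assumptions, not values from the paper):

```python
def streak_velocity(p_start, p_end, exposure_s):
    """Approximate 3-D velocity (m/s) from the two endpoints of a motion
    streak captured during a single camera exposure."""
    return tuple((e - s) / exposure_s for s, e in zip(p_start, p_end))

def speed(v):
    """Magnitude of a velocity vector."""
    return sum(c * c for c in v) ** 0.5

# A streak 0.06 m long recorded during an assumed 0.02 s exposure implies
# roughly 3 m/s, within the 1-4 m/s range reported for flying A. gambiae.
v = streak_velocity((0.0, 0.0, 1.2), (0.06, 0.0, 1.2), 0.02)
```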
Problem formulation, metrics, open government, and on-line collaboration
NASA Astrophysics Data System (ADS)
Ziegler, C. R.; Schofield, K.; Young, S.; Shaw, D.
2010-12-01
Problem formulation leading to effective environmental management, including synthesis and application of science by government agencies, may benefit from collaborative on-line environments. This is illustrated by two interconnected projects: 1) literature-based evidence tools that support causal assessment and problem formulation, and 2) development of output, outcome, and sustainability metrics for tracking environmental conditions. Specifically, peer-production mechanisms allow for global contribution to science-based causal evidence databases, and subsequent crowd-sourced development of causal networks supported by that evidence. In turn, science-based causal networks may inform problem formulation and selection of metrics or indicators to track environmental condition (or problem status). Selecting and developing metrics in a collaborative on-line environment may improve stakeholder buy-in, the explicit relevance of metrics to planning, and the ability to approach problem apportionment or accountability, and to define success or sustainability. Challenges include contribution governance, data-sharing incentives, linking on-line interfaces to data service providers, and the intersection of environmental science and social science. Degree of framework access and confidentiality may vary by group and/or individual, but may ultimately be geared at demonstrating connections between science and decision making and supporting a culture of open government, by fostering transparency, public engagement, and collaboration.
Mallon, William T; Jones, Robert F
2002-02-01
The authors describe their findings from a study that (1) identified 41 medical schools or medical school departments that used metric systems to quantify faculty activity and productivity in teaching and (2) analyzed the purposes and progress of those systems. Among the reasons articulated for developing these systems, the most common was to identify a "rational" method for distributing funds to departments. More generally, institutions wanted to emphasize the importance of the school's educational mission. The schools varied in the types of information they tracked, ranging from a selective focus on medical school education to a comprehensive assessment of teaching activity and educational administration, committee work, and advising. Schools were almost evenly split between those that used a relative-value-unit method of tracking activity and those that used a contact-hour method. This study also identified six challenges that the institutions encountered with these metric systems: (1) the lack of a culture of data in management; (2) skepticism of faculty and chairs; (3) the misguided search for one perfect metric; (4) the expectation that a metric system will erase ambiguity regarding faculty teaching contributions; (5) the lack of, and difficulty with developing, measures of quality; and (6) the tendency to become overly complex. Because of the concern about the teaching mission at medical schools, the number of institutions developing educational metric systems will likely increase in the coming years. By documenting and accounting financially for teaching, medical schools can ensure that the educational mission is valued and appropriately supported.
Virtual reality-based assessment of basic laparoscopic skills using the Leap Motion controller.
Lahanas, Vasileios; Loukas, Constantinos; Georgiou, Konstantinos; Lababidi, Hani; Al-Jaroudi, Dania
2017-12-01
The majority of the current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, path length, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectory revealed that experts performed more delicate movements compared to novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. This study provides evidence regarding the potential use of the Leap Motion controller for assessment of basic laparoscopic skills. The proposed system allowed the evaluation of dexterity of the hand movements.
Future work will involve comparison studies with validated simulators and development of advanced training scenarios on current Leap Motion controller.
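Path length, one of the assessment metrics reported above, is simply the cumulative distance travelled by the tracked instrument tip. A minimal sketch (the exact computation in the simulator is not specified in the abstract, so this is the conventional definition):

```python
import math

def path_length(points):
    """Total distance travelled by a tracked instrument tip, summed over
    consecutive position samples. Experts typically produce shorter,
    smoother paths than novices on the same task."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical tip positions (in arbitrary units) sampled during a task
tip = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 2)]
length = path_length(tip)   # 1 + 1 + 2
```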
The Response Dynamics of Recognition Memory: Sensitivity and Bias
ERIC Educational Resources Information Center
Koop, Gregory J.; Criss, Amy H.
2016-01-01
Advances in theories of memory are hampered by insufficient metrics for measuring memory. The goal of this paper is to further the development of model-independent, sensitive empirical measures of the recognition decision process. We evaluate whether metrics from continuous mouse tracking, or response dynamics, uniquely identify response bias and…
Objective assessment of operator performance during ultrasound-guided procedures.
Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R
2011-09-01
Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. 
This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.
Development of ecological indicator guilds for land management
Krzysik, A.J.; Balbach, H.E.; Duda, J.J.; Emlen, J.M.; Freeman, D.C.; Graham, J.H.; Kovacic, D.A.; Smith, L.M.; Zak, J.C.
2005-01-01
Agency land-use must be efficiently and cost-effectively monitored to assess conditions and trends in ecosystem processes and natural resources relevant to mission requirements and legal mandates. Ecological Indicators represent important land management tools for tracking ecological changes and preventing irreversible environmental damage in disturbed landscapes. The overall objective of the research was to develop both individual and integrated sets (i.e., statistically derived guilds) of Ecological Indicators to: quantify habitat conditions and trends, track and monitor ecological changes, provide early warning or threshold detection, and provide guidance for land managers. The derivation of Ecological Indicators was based on statistical criteria, ecosystem relevance, reliability and robustness, economy and ease of use for land managers, multi-scale performance, and stress response criteria. The basis for the development of statistically based Ecological Indicators was the identification of ecosystem metrics that analytically tracked a landscape disturbance gradient.
Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A
2011-01-01
We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.
2014-12-01
management structure set up for Study 4 – COMPLETED. Task 17 (Months 37-48): Operationalize database for Study 4 analysis scheme – COMPLETED. Task...Heaton, K.J., Laufer, A.S., Maule, A., Vincent, A.S. (abstract submitted). Effects of acute sleep deprivation on ANAM4 TBI Battery performance in...and visual tracking degradation during acute sleep deprivation in a military sample. Aviat Space Environ Med 2014; 85:497-503. Background: Fatigue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John E.; English, Christine M.; Gesick, Joshua C.
This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.
Sensor trustworthiness in uncertain time varying stochastic environments
NASA Astrophysics Data System (ADS)
Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan
2011-06-01
Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulant of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may get disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theory-based metric to determine sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. In this paper we show an information-theoretic determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline model that is used to compare and analyze the observed sensor output. We present an approach in which relative entropy is used for reference model adaptation and determination of divergence of the sensor signal from the estimated reference baseline. We show that KL divergence is a useful metric that can be successfully used in the determination of sensor failures or sensor malice of various types.
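The relative entropy (KL divergence) at the core of the approach can be sketched for discretized sensor statistics. This is an illustrative sketch; the distributions below are hypothetical, not the paper's reference model.

```python
import math

def kl_divergence(p, q):
    """D(p || q) in bits between two discrete distributions: the divergence of
    observed sensor output statistics p from the reference model's baseline q.
    Large values flag a sensor that no longer tracks the reference."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

reference = [0.25, 0.25, 0.25, 0.25]   # baseline predicted by the reference model
healthy   = [0.24, 0.26, 0.25, 0.25]   # close to baseline -> small divergence
corrupted = [0.70, 0.10, 0.10, 0.10]   # skewed output -> large divergence
d_ok = kl_divergence(healthy, reference)
d_bad = kl_divergence(corrupted, reference)
```

Thresholding the divergence (or tracking its growth over time) then gives a concrete trustworthiness test for each unattended sensor.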
Guiding Principles and Checklist for Population-Based Quality Metrics
Brunelli, Steven M.; Maddux, Franklin W.; Parker, Thomas F.; Johnson, Douglas; Nissenson, Allen R.; Collins, Allan; Lacson, Eduardo
2014-01-01
The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers.
Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect what represents the best care to improve patient outcomes. PMID:24558050
Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.
Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N
2017-05-01
Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm.
In the context of inefficiencies inherent to traditional consultation models, novel productivity metrics are proposed. Further research is needed to determine optimal metrics for monitoring productivity within PPC teams. Innovative approaches should be studied with the goal of improving efficiency of care without compromising value. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Fault Tolerance Analysis of L1 Adaptive Control System for Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Kiruthika
Trajectory tracking is a critical element for the better functionality of autonomous vehicles. The main objective of this research study was to implement and analyze L1 adaptive control laws for autonomous flight under normal and upset flight conditions. The West Virginia University (WVU) Unmanned Aerial Vehicle flight simulation environment was used for this purpose. A comparison study between the L1 adaptive controller and a baseline conventional controller, which relies on position, proportional, and integral compensation, has been performed for a reduced-size jet aircraft, the WVU YF-22. Special attention was given to the performance of the proposed control laws in the presence of abnormal conditions. The abnormal conditions considered are locked actuators (stabilator, aileron, and rudder) and excessive turbulence. Several levels of abnormal condition severity have been considered. The performance of the control laws was assessed over commanded trajectories of different shapes. A set of comprehensive evaluation metrics was defined and used to analyze the performance of autonomous flight control laws in terms of control activity and trajectory tracking errors. The developed L1 adaptive control laws are supported by theoretical stability guarantees. The simulation results show that the L1 adaptive output feedback controller achieves better trajectory tracking with a lower level of control actuation as compared to the baseline linear controller under nominal and abnormal conditions.
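Trajectory-tracking error metrics of the kind used to compare the two controllers can be computed from time-aligned samples of the commanded and actual paths. The study's exact metric definitions are not given here, so the sketch below uses conventional choices (mean, RMS, and maximum deviation):

```python
import math

def tracking_error_metrics(reference, actual):
    """Mean, RMS, and maximum deviation between a commanded trajectory and the
    flown trajectory, with positions sampled at matching times."""
    errs = [math.dist(r, a) for r, a in zip(reference, actual)]
    n = len(errs)
    return {"mean": sum(errs) / n,
            "rms": math.sqrt(sum(e * e for e in errs) / n),
            "max": max(errs)}

# Hypothetical commanded vs. flown positions in the horizontal plane
ref = [(0, 0), (1, 0), (2, 0), (3, 0)]
act = [(0, 0.1), (1, -0.1), (2, 0.2), (3, 0.0)]
m = tracking_error_metrics(ref, act)
```

Pairing such error metrics with a control-activity measure (e.g., integrated actuator deflection) gives the two-sided comparison described in the abstract.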
The Completion Arch: Measuring Community College Student Success--2012
ERIC Educational Resources Information Center
Horn, Laura; Radwin, David
2012-01-01
Essential to tracking student success at community colleges is the availability of solid data and commonly defined metrics that go beyond measuring the traditional (and limited) enrollment and graduation rates that these colleges report to the federal government. In particular, what is needed are metrics that illuminate what happens to students…
An Examination of Selected Software Testing Tools: 1992
1992-12-01
Figure 27-17. Metrics Manager Database Full Report...historical test database, the test management and problem reporting tools were examined using the sample test database provided by each supplier...track the impact of new methods, organizational structures, and technologies. Metrics Manager is supported by an industry database that allows
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. 
The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
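The user-adjustable aggregation of metrics into a station grade could be sketched as a weighted average (weights and score scale are hypothetical, not the ASL's actual scheme):

```python
def station_grade(metric_scores, weights):
    """Aggregate per-metric scores (0-100) into a weighted overall grade.

    Metrics absent from `metric_scores` are skipped, mirroring the DQA Web
    interface's ability to adjust each metric's contribution to the grade.
    """
    total_w = sum(w for m, w in weights.items() if m in metric_scores)
    if total_w == 0:
        return None
    return sum(metric_scores[m] * w
               for m, w in weights.items() if m in metric_scores) / total_w

# Hypothetical daily scores for one channel
scores = {"availability": 98.0, "timing_quality": 90.0, "noise_deviation": 80.0}
weights = {"availability": 1.0, "timing_quality": 2.0, "noise_deviation": 1.0}
grade = station_grade(scores, weights)
```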
Creating a dashboard to track progress toward IOM recommendations for the future of nursing.
Spetz, Joanne; Bates, Timothy; Chu, Lela; Lin, Jessica; Fishman, Nancy W; Melichar, Lori
2013-01-01
This article explains the process used to identify and develop a set of data used to track national progress toward the recommendations of the Institute of Medicine Committee for the Future of Nursing. The data are presented in a dashboard format to visually summarize information and quickly measure progress. The approach selected by the research team is outlined, the criteria for selecting candidate metrics are detailed, the process for seeking external guidance is described, and the final dashboard measures are presented. Finally, the methods for data collection for each metric are explicated, to guide states and local regions in the collection of their own data.
NASA Astrophysics Data System (ADS)
Shields, C. A.; Rutz, J. J.; Wehner, M. F.; Ralph, F. M.; Leung, L. R.
2017-12-01
The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is a community effort whose purpose is to quantify the uncertainties in atmospheric river (AR) research that are due solely to different identification and tracking techniques. Atmospheric rivers transport significant amounts of moisture in long, narrow filamentary bands, typically travelling from the subtropics to the mid-latitudes. They are an important source of regional precipitation impacting local hydroclimate and, in extreme cases, cause severe flooding and infrastructure damage in local communities. Our understanding of ARs, from forecast skill to future climate projections, hinges on how we define them. By comparing a diverse set of detection algorithms, the uncertainty in our definition of ARs (including statistics and climatology), and the implications of those uncertainties, can be analyzed and quantified. ARTMIP is divided into two broad phases that aim to answer science questions impacted by the choice of detection algorithm. How robust are AR metrics such as climatology, storm duration, and relationship to extreme precipitation? How are AR metrics in future climate projections impacted by the choice of algorithm? Some algorithms rely on threshold values for water vapor; in a warmer world the background state is, by definition, moister due to the Clausius-Clapeyron relationship, which could potentially skew results. Can uncertainty bounds be accurately placed on each metric? Tier 1 participants will apply their algorithms to a high-resolution common dataset (MERRA-2) and provide the broader group with AR metrics (frequency, location, duration, etc.). Tier 2 research will encompass sensitivity studies regarding resolution, reanalysis choice, and future climate change scenarios. ARTMIP is currently in the Tier 1 phase and will begin Tier 2 in 2018. Preliminary metrics and analysis from Tier 1 will be presented.
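As the simplest illustration of why threshold choice matters, a purely threshold-based detector might flag candidate AR grid cells as follows (the 250 kg m^-1 s^-1 IVT threshold is a commonly cited value, but each ARTMIP algorithm differs and adds geometric tests):

```python
def flag_ar_candidates(ivt_field, threshold=250.0):
    """Flag grid cells whose integrated vapor transport (IVT, kg m^-1 s^-1)
    exceeds a fixed threshold: the simplest class of AR detection criteria.
    Real algorithms add length, narrowness, and orientation requirements."""
    return [[v >= threshold for v in row] for row in ivt_field]

# Tiny 2x2 illustrative IVT field
field = [[100.0, 300.0], [260.0, 240.0]]
mask = flag_ar_candidates(field)
```

In a moister future climate, the same fixed threshold flags more cells, which is exactly the sensitivity ARTMIP's Tier 2 experiments are designed to quantify.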
Gahm, Jin Kyu; Shi, Yonggang
2018-01-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399
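The mathematical foundation described above can be summarized in standard notation (a generic formulation of the pullback construction and LB eigen-system, not copied from the paper):

```latex
% Given a diffeomorphism between two surfaces,
\phi : (M_1, g_1) \to (M_2, g_2),
% the pullback metric makes \phi an isometry:
\tilde{g}_1 = \phi^{*} g_2 .
% The Laplace-Beltrami eigen-system on a surface with metric g,
\Delta_{g} f_i = -\lambda_i f_i , \qquad i = 1, 2, \ldots ,
% yields the LB embedding
I_{g}(x) = \left( \tfrac{f_1(x)}{\sqrt{\lambda_1}},
                  \tfrac{f_2(x)}{\sqrt{\lambda_2}}, \ldots \right),
% which is identical for isometric surfaces. RMOS iteratively optimizes the
% edge-based Riemannian metric so the two embeddings align under a
% feature-driven energy.
```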
Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria
2015-01-01
Eye-tracking has been extensively used to quantify audience preferences in marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli, where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with heart rate variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli: it suggests that audience preferences for a video are modulated by lapses in viewers' attention allocation. The proposed metric can be calculated on any narrative-based video stimulus (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence. PMID:26029135
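The paper's exact dispersion formula is not given in the abstract; one common way to operationalize eye-gaze dispersion across viewers is the mean pairwise distance of time-aligned gaze samples:

```python
import math

def gaze_dispersion(points):
    """Mean pairwise Euclidean distance between gaze samples (normalized
    screen coordinates). Low values mean tightly clustered gaze, i.e. high
    shared attention; the authors' actual formula may differ."""
    n = len(points)
    if n < 2:
        return 0.0
    total = sum(math.dist(p, q)
                for i, p in enumerate(points) for q in points[i + 1:])
    return total / (n * (n - 1) / 2)

# Across-viewer dispersion at one frame: tight clustering gives a low value
frame = [(0.50, 0.50), (0.52, 0.50), (0.50, 0.53)]
d = gaze_dispersion(frame)
```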
Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E
2018-05-01
The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with the three-degree-of-freedom (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI, respectively.
Patient-specific control charts using NCC evaluated daily variation and identified statistically significant deviations. This study also showed that subjective evaluations of the images were not always consistent. Population control charts identified a patient whose tracking metrics were significantly lower than those of other patients. The patient-specific action limits identified registrations that warranted immediate evaluation by an expert. When effective displacements in the anterior-posterior direction were compared to 3DoF couch displacements, the agreement was ±1 mm for seven of 10 patients for both C-spine and mandible RTVs. Qualitative review alone of IGRT images can result in inconsistent feedback to the IGRT process. Registration tracking using NCC objectively identifies statistically significant deviations. When used in conjunction with the current image review process, this tool can assist in improving the safety and consistency of the IGRT process. © 2018 American Association of Physicists in Medicine.
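A minimal sketch of the tracking metric itself, normalized cross-correlation over the flattened RTV voxels (a textbook NCC, not the authors' implementation), might look like:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity vectors
    (e.g., flattened RTV voxels); 1.0 means identical up to gain/offset."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Reference CBCT intensities vs. a slightly perturbed daily registration
ref = [1.0, 2.0, 3.0, 4.0]
day = [1.1, 2.0, 2.9, 4.2]
val = ncc(ref, day)
```

Tracked over fractions, these values feed the SPC control charts: values falling below the patient-specific lower control limit flag registrations for expert review.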
Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets
NASA Astrophysics Data System (ADS)
Kim, S. K.; Prabhat, M.; Williams, D. N.
2017-12-01
Deep neural networks have been successfully applied to the problem of detecting extreme weather events in large-scale climate datasets, attaining performance that overshadows all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture is able to localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning method based on variational autoencoders (VAEs) and Long Short-Term Memory (LSTM) networks to track extreme weather events in spatio-temporal datasets. We cast spatio-temporal object tracking as learning the probabilistic distribution of the continuous latent features of an autoencoder using stochastic variational inference. For this, we assume that our datasets are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed method, we first train a VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. Then, we predict the bounding box, location, and class of extreme climate events using convolutional layers whose input concatenates three features: the embedding, the sampled mean, and the standard deviation. Lastly, we train an LSTM on the concatenated input to learn the temporal structure of the dataset by recurrently feeding the output back into the next time step's VAE input. Our contribution is two-fold. First, we present the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, which can be applied to massive unlabeled climate datasets. Second, the temporal movement of events is incorporated into bounding box prediction through the LSTM, which can improve localization accuracy. To our knowledge, this technique has not been explored in either the climate or the machine learning community.
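At the heart of training any VAE is the reparameterization trick, which keeps the sampling of Gaussian latent features differentiable; a dependency-free sketch (illustrative, not the authors' code):

```python
import math
import random

def sample_latent(mu, log_var, rng):
    """Reparameterization trick used when training a VAE:
    z = mu + sigma * eps, with eps ~ N(0, 1), so gradients can flow
    through mu and log_var while randomness stays in eps."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

rng = random.Random(0)
z = sample_latent([0.0, 1.0], [0.0, 0.0], rng)  # sigma = 1 for both dims
```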
A Metric to Quantify Shared Visual Attention in Two-Person Teams
NASA Technical Reports Server (NTRS)
Gontar, Patrick; Mulligan, Jeffrey B.
2015-01-01
Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking.
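One simple way to operationalize shared visual attention from two gaze streams (the threshold and coordinates are illustrative assumptions, not the authors' metric) is the fraction of time-aligned samples that fall close together:

```python
import math

def shared_attention_fraction(gaze_a, gaze_b, radius=0.1):
    """Fraction of time-aligned samples where two viewers' gaze points fall
    within `radius` of each other (screen coordinates normalized to [0, 1]).
    High values suggest joint problem-solving; low values suggest divided work."""
    hits = sum(1 for p, q in zip(gaze_a, gaze_b) if math.dist(p, q) <= radius)
    return hits / len(gaze_a)

# Three time-aligned gaze samples for two team members
a = [(0.2, 0.2), (0.5, 0.5), (0.8, 0.1)]
b = [(0.22, 0.21), (0.9, 0.5), (0.81, 0.12)]
frac = shared_attention_fraction(a, b)
```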
NASA Astrophysics Data System (ADS)
Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.
Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification to aid in our ability to correlate data to specific RSOs. In this work, we focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible, with the information change bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of one's risk tolerance: a small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or are even semantic statements. To this end, we explore the use of sequential probability ratio testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred.
In this work, we explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information-theoretic content.
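For Gaussian track states, the KL divergence discussed above has a closed form; a univariate sketch (illustrative, not the authors' multivariate implementation):

```python
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence D(N0 || N1) between two univariate Gaussians: the
    information gained when new evidence revises a track-state belief.
    Closed form: 0.5 * (var0/var1 + (mu1-mu0)^2/var1 - 1 + ln(var1/var0))."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0
                  + math.log(var1 / var0))

d = kl_gauss(0.0, 1.0, 1.0, 1.0)  # mean shift of one sigma
```

A change-detection rule of the kind described would compare `d` against the tolerance ɛ: small ɛ raises alarms more often, larger ɛ admits more revision before flagging a change.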
Multiple hypothesis tracking for the cyber domain
NASA Astrophysics Data System (ADS)
Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.
2011-09-01
This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities subject to non-kinematic constraints, such as those imposed by cyber attacks, in a potentially dense false-alarm background. MHT is widely recognized as the premier method for avoiding the corruption of tracks with spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-scan pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that, while no tracker is perfect, MHETA-INFERD captures advanced non-kinematic tracks in an automated way, performs better than non-MHT approaches, and decreases analyst response time to cyber threats.
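Track maintenance in a kinematic MHT typically begins with gating; a one-dimensional chi-square gate (a textbook sketch, not MHETA code) looks like:

```python
def gate(innovation, innovation_var, threshold=9.0):
    """Chi-square gating test used in MHT track maintenance: accept an
    observation-to-track pairing only if the normalized innovation squared
    falls inside the gate (threshold 9.0 corresponds to ~3 sigma in 1-D)."""
    return innovation ** 2 / innovation_var <= threshold

# A 2-sigma innovation passes the gate; a 4-sigma innovation does not
assert gate(2.0, 1.0)
assert not gate(4.0, 1.0)
```

The paper's point is that this kinematic machinery is the domain-specific piece: for cyber tracks, gating and filtering are replaced (here by INFERD) while the hypothesis management layer is reused unchanged.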
A Computable Definition of Sepsis Facilitates Screening and Performance Improvement Tracking.
Alessi, Lauren J; Warmus, Holly R; Schaffner, Erin K; Kantawala, Sajel; Carcillo, Joseph; Rosen, Johanna; Horvat, Christopher M
2018-03-01
Sepsis kills almost 5,000 children annually, accounting for 16% of pediatric health care spending in the United States. We sought to identify sepsis within the electronic health record (EHR) of a quaternary children's hospital to characterize disease incidence, improve recognition and response, and track performance metrics. Methods were organized in a plan-do-study-act cycle. During the "plan" phase, electronic definitions of sepsis (blood culture and antibiotic within 24 hours) and septic shock (sepsis plus vasoactive medication) were created to establish benchmark data and track progress with statistical process control. The performance of a screening tool was evaluated in the emergency department. During the "do" phase, a novel inpatient workflow is being piloted, which involves regular sepsis screening by nurses using the tool and a regimented response to high-risk patients. Screening tool use in the emergency department reduced time to antibiotics (Fig. 1). Of 6,159 admissions between July and December 2016, the EHR definitions identified 1,433 (23.3%) with sepsis, of which 159 (11.1%) had septic shock. Hospital mortality was 2.2% for all sepsis patients and 15.7% for septic shock (Table 1). These findings approximate epidemiologic studies of sepsis and severe sepsis, which report a prevalence range of 0.45-8.2% and a mortality range of 8.2-25% (Table 2).1-5 Implementation of a sepsis screening tool is associated with improved performance. The prevalence of sepsis conditions identified with electronic definitions approximates the epidemiologic landscape characterized by other point-prevalence and administrative studies, providing face validity to this approach and proving useful for tracking performance improvement.
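The computable definitions above translate almost directly into code; a sketch with hypothetical record fields, interpreting "within 24 hours" as the culture-to-antibiotic interval:

```python
from datetime import datetime, timedelta

def classify(encounter):
    """Electronic definitions from the abstract: sepsis = blood culture and
    antibiotic within 24 hours of each other (one plausible reading);
    septic shock = sepsis plus a vasoactive medication.
    All record field names here are hypothetical."""
    culture = encounter.get("blood_culture_time")
    abx = encounter.get("antibiotic_time")
    sepsis = (culture is not None and abx is not None
              and abs(abx - culture) <= timedelta(hours=24))
    shock = sepsis and bool(encounter.get("vasoactive", False))
    return sepsis, shock

enc = {"blood_culture_time": datetime(2016, 7, 1, 8, 0),
       "antibiotic_time": datetime(2016, 7, 1, 12, 0),
       "vasoactive": True}
```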
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Lynn; Murtishaw, Scott; Worrell, Ernst
2003-06-01
Executive Summary: The California Climate Action Registry, which was initially established in 2000 and began operation in Fall 2002, is a voluntary registry for recording annual greenhouse gas (GHG) emissions. The purpose of the Registry is to assist California businesses and organizations in their efforts to inventory and document emissions in order to establish a baseline and to document early actions to increase energy efficiency and decrease GHG emissions. The State of California has committed to use its "best efforts" to ensure that entities that establish GHG emissions baselines and register their emissions will receive "appropriate consideration under any future international, federal, or state regulatory scheme relating to greenhouse gas emissions." Reporting of GHG emissions involves documentation of both "direct" emissions from sources that are under the entity's control and indirect emissions controlled by others. Electricity generated by an off-site power source is considered to be an indirect GHG emission and is required to be included in the entity's report. Registry participants include businesses, non-profit organizations, municipalities, state agencies, and other entities. Participants are required to register the GHG emissions of all operations in California, and are encouraged to report nationwide. For the first three years of participation, the Registry only requires the reporting of carbon dioxide (CO2) emissions, although participants are encouraged to report the remaining five Kyoto Protocol GHGs (CH4, N2O, HFCs, PFCs, and SF6). After three years, reporting of all six Kyoto GHG emissions is required. The enabling legislation for the Registry (SB 527) requires total GHG emissions to be registered and requires reporting of "industry-specific metrics" once such metrics have been adopted by the Registry.
The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab) was asked to provide technical assistance to the California Energy Commission (Energy Commission) related to the Registry in three areas: (1) assessing the availability and usefulness of industry-specific metrics, (2) evaluating various methods for establishing baselines for calculating GHG emissions reductions related to specific actions taken by Registry participants, and (3) establishing methods for calculating electricity CO2 emission factors. The third area of research was completed in 2002 and is documented in Estimating Carbon Dioxide Emissions Factors for the California Electric Power Sector (Marnay et al., 2002). This report documents our findings related to the first two areas of research. For the first area, the overall objective was to evaluate the metrics, such as emissions per economic unit or emissions per unit of production, that can be used to report GHG emissions trends for potential Registry participants. This research began with an effort to identify methodologies, benchmarking programs, inventories, protocols, and registries that use industry-specific metrics to track trends in energy use or GHG emissions, in order to determine what types of metrics have already been developed. The next step in developing industry-specific metrics was to assess the availability of data needed to determine metric development priorities. Berkeley Lab also determined the relative importance of different potential Registry participant categories in order to assess the availability of sectoral or industry-specific metrics, and then identified industry-specific metrics in use around the world. While a plethora of metrics was identified, no single metric was found that adequately tracks trends in GHG emissions while maintaining the confidentiality of data.
As a result of this review, Berkeley Lab recommends the development of a GHG intensity index as a new metric for reporting and tracking GHG emissions trends. Such an index could provide an industry-specific metric that accurately reflects year-to-year changes while protecting proprietary data, giving Registry participants a means of demonstrating improvements in their energy use and GHG emissions per unit of production without divulging specific values. For the second research area, Berkeley Lab evaluated various methods used to calculate baselines for documentation of energy consumption or GHG emissions reductions, noting those that use industry-specific metrics. Accounting for actions to reduce GHGs can be done on a project-by-project basis or on an entity basis. Establishing project-related baselines for mitigation efforts has been widely discussed in the context of two of the so-called "flexible mechanisms" of the Kyoto Protocol to the United Nations Framework Convention on Climate Change: Joint Implementation (JI) and the Clean Development Mechanism (CDM).
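A GHG intensity index of the recommended kind could be as simple as emissions per unit of production normalized to a base year (the construction and numbers here are illustrative, not the Registry's adopted metric):

```python
def ghg_intensity_index(emissions_by_year, production_by_year, base_year):
    """GHG intensity (emissions per unit of production) normalized to a base
    year (= 100), so participants can report year-to-year trends without
    divulging absolute emissions or production figures."""
    base = emissions_by_year[base_year] / production_by_year[base_year]
    return {y: (emissions_by_year[y] / production_by_year[y]) / base * 100.0
            for y in emissions_by_year}

# Illustrative data: emissions fall while production rises
idx = ghg_intensity_index({2000: 100.0, 2001: 95.0},
                          {2000: 50.0, 2001: 52.0}, base_year=2000)
```

An index value below 100 in a later year demonstrates an intensity improvement while keeping both numerator and denominator confidential.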
Geometric Factors in Target Positioning and Tracking
2009-07-01
Inlet Trade Study for a Low-Boom Aircraft Demonstrator
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Slater, John W.; Rallabhandi, Sriram K.
2016-01-01
Propulsion integration for low-boom supersonic aircraft requires careful inlet selection, placement, and tailoring to achieve acceptable propulsive and aerodynamic performance, without compromising vehicle sonic boom loudness levels. In this investigation, an inward-turning streamline-traced and axisymmetric spike inlet are designed and independently installed on a conceptual low-boom supersonic demonstrator aircraft. The airframe was pre-shaped to achieve a target ground under-track loudness of 76.4 PLdB at cruise using an adjoint-based design optimization process. Aircraft and inlet performance characteristics were obtained by solution of the steady-state Reynolds-averaged Navier-Stokes equations. Isolated cruise inlet performance including total pressure recovery and distortion were computed and compared against installed inlet performance metrics. Evaluation of vehicle near-field pressure signatures, along with under- and off-track propagated loudness levels is also reported. Results indicate the integrated axisymmetric spike design offers higher inlet pressure recovery, lower fan distortion, and reduced sonic boom. The vehicle with streamline-traced inlet exhibits lower external wave drag, which translates to a higher lift-to-drag ratio and increased range capability.
Latent uncertainties of the precalculated track Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction.
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
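The reported Poisson-like behavior implies the familiar 1/√N scaling of latent uncertainty with bank size; a sketch (the single-track constant is back-solved from the reported ~1% at 60,000 tracks, so it is illustrative only):

```python
import math

def latent_uncertainty(sigma_single_track, n_unique_tracks):
    """If track-to-track dose fluctuations are independent (Poisson-like, as
    characterized for electrons), the latent uncertainty of a track bank
    falls as 1/sqrt(N). The constant here is illustrative, not fitted data."""
    return sigma_single_track / math.sqrt(n_unique_tracks)

# Quadrupling the bank size halves the latent uncertainty
u1 = latent_uncertainty(2.45, 15000)
u2 = latent_uncertainty(2.45, 60000)
```

This is the trade-off the paper quantifies: a larger bank lowers latent uncertainty but costs memory (2.4 GB at 60,000 tracks per energy).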
Latent uncertainties of the precalculated track Monte Carlo method.
Renaud, Marc-André; Roberge, David; Seuntjens, Jan
2015-01-01
While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories, comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose.
In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
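The scaling of latent uncertainty with track bank size can be illustrated with a toy resampling model (a hedged sketch only, not the paper's dose engine: the Poisson behavior reported above implies the latent component falls off roughly as 1/sqrt(M) for M unique tracks; the dose model, bank sizes, and function names here are assumptions):

```python
import random
import statistics

def latent_std(bank_size, n_banks=200, seed=0):
    """Spread of the mean-dose estimate across independently generated
    track banks: running more histories only reuses the same finite bank,
    so this component of the uncertainty is 'latent' in the bank itself."""
    rng = random.Random(seed)
    bank_means = []
    for _ in range(n_banks):
        # Each pregenerated track deposits a random dose (toy model).
        bank = [rng.expovariate(1.0) for _ in range(bank_size)]
        bank_means.append(sum(bank) / bank_size)
    # Standard deviation across banks scales roughly as 1/sqrt(bank_size).
    return statistics.pstdev(bank_means)
```

Quadrupling the bank size roughly halves the latent uncertainty in this sketch, which is why a larger track bank trades memory for accuracy.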
Semi-automated location identification of catheters in digital chest radiographs
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Cham, Matthew D.; Henschke, Claudia I.; Yankelevitz, David F.
2007-03-01
Localization of catheter tips is the most common task in intensive care unit imaging. In this work, catheters appearing in digital chest radiographs acquired by portable x-ray units were tracked using a semi-automatic method. Because catheters are synthetic objects, their profiles do not vary drastically along their length. Therefore, we use forward-looking registration with normalized cross-correlation in order to take advantage of a priori information about the catheter profile. The registration is accomplished with a two-dimensional template, representative of the catheter to be tracked, generated using two seed points given by the user. To validate catheter tracking with this method, we look at two metrics: accuracy and precision. The algorithm's results are compared to a ground truth established by catheter midlines marked by expert radiologists. Using 12 objects of interest comprising naso-gastric tubes, endo-tracheal tubes, chest tubes, and PICC and central venous catheters, we find that our algorithm can fully track 75% of the objects of interest, with an average tracking accuracy and precision of 85.0% and 93.6%, respectively, using the above metrics. Such a technique would be useful for physicians wishing to verify the positioning of catheter tips using chest radiographs.
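The normalized cross-correlation matching step described above can be sketched as follows (a minimal pure-Python illustration, not the authors' implementation; the function names and the toy image are assumptions):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size 2D lists."""
    p = [v for row in patch for v in row]
    t = [v for row in template for v in row]
    mp, mt = sum(p) / len(p), sum(t) / len(t)
    num = sum((a - mp) * (b - mt) for a, b in zip(p, t))
    den = math.sqrt(sum((a - mp) ** 2 for a in p) *
                    sum((b - mt) ** 2 for b in t))
    return num / den if den else 0.0

def best_match(image, template):
    """Slide the template over the image; return the (row, col) of peak NCC."""
    th, tw = len(template), len(template[0])
    best_score, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Because NCC is invariant to brightness offset and gain, a catheter template built from two user seed points can be re-localized frame to frame despite exposure variations.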
LeClerc, Emma; Wiersma, Yolanda F
2017-04-01
This study investigates land cover change near the abandoned Pine Point Mine in Canada's Northwest Territories. Industrial mineral development transforms local environments, and the effects of such disturbances are often long-lasting, particularly in subarctic, boreal environments where vegetation conversion can take decades. Located in the Boreal Plains Ecozone, the Pine Point Mine was an extensive open pit operation that underwent little reclamation when it shut down in 1988. We apply remote sensing and landscape ecology methods to quantify land cover change in the 20 years following the mine's closure. Using a time series of near-anniversary Landsat images, we performed a supervised classification to differentiate seven land cover classes. We used raster algebra and landscape metrics to track changes in land cover composition and configuration in the 20 years since the mine shut down. We compared our results with a site in Wood Buffalo National Park that was never subjected to extensive anthropogenic disturbance. This space-for-time substitution provided an analog for how the ecosystem in the Pine Point region might have developed in the absence of industrial mineral development. We found that the dense conifer class was dominant in the park and exhibited larger and more contiguous patches than at the mine site. Bare land at the mine site showed little conversion through time. While the combination of raster algebra and landscape metrics allowed us to track broad changes in land cover composition and configuration, improved access to affordable, high-resolution imagery is necessary to effectively monitor land cover dynamics at abandoned mines.
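The raster-algebra step for tracking class transitions between two classified Landsat scenes can be sketched as follows (a schematic illustration, not the authors' workflow; the class encoding and function names are assumptions):

```python
def encode_change(before, after, n_classes):
    """Raster algebra: one unique integer code per (from, to) class pair,
    given two equal-size 2D lists of integer class codes."""
    return [[n_classes * b + a for b, a in zip(rb, ra)]
            for rb, ra in zip(before, after)]

def transition_matrix(before, after, n_classes):
    """Cross-tabulation of per-pixel class transitions between two dates."""
    counts = [[0] * n_classes for _ in range(n_classes)]
    for rb, ra in zip(before, after):
        for b, a in zip(rb, ra):
            counts[b][a] += 1
    return counts
```

The diagonal of the transition matrix counts stable pixels (e.g., bare land that showed little conversion), while off-diagonal entries quantify conversions between cover classes.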
Pattern Activity Clustering and Evaluation (PACE)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Banas, Christopher; Paul, Michael; Bussjager, Becky; Seetharaman, Guna
2012-06-01
With the vast amount of network information available on the activities of people (i.e., motions, transportation routes, and site visits), there is a need to explore the salient properties of data that detect and discriminate the behavior of individuals. Recent machine learning approaches include methods of data mining, statistical analysis, clustering, and estimation that support activity-based intelligence. We seek to explore contemporary methods in activity analysis using machine learning techniques that discover and characterize behaviors enabling grouping, anomaly detection, and adversarial intent prediction. To evaluate these methods, we describe the mathematics and potential information theory metrics to characterize behavior. A scenario is presented to demonstrate the concept and metrics that could be useful for layered sensing behavior pattern learning and analysis. We leverage work on group tracking and on learning and clustering approaches, and utilize information-theoretic metrics for classification, behavioral and event pattern recognition, and activity and entity analysis. The performance evaluation of activity analysis supports high-level information fusion of user alerts, data queries, and sensor management for data extraction, relations discovery, and situation analysis of existing data.
Oculometric Assessment of Dynamic Visual Processing
NASA Technical Reports Server (NTRS)
Liston, Dorion Bryce; Stone, Lee
2014-01-01
Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al., 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.
Software risk management through independent verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Zhou, Tong C.; Wood, Ralph
1995-01-01
Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need the ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Question/Metric model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.
Tests of general relativity using Starprobe radio metric tracking data
NASA Technical Reports Server (NTRS)
Mease, K. D.; Anderson, J. D.; Wood, L. J.; White, L. K.
1982-01-01
The potential of a proposed spacecraft mission, called Starprobe, for testing general relativity and providing information on the interior structure and dynamics of the sun is investigated. Parametric, gravitational perturbation terms are derived which represent relativistic effects and effects due to spatial and temporal variations in the solar potential at a given radial distance. A covariance analysis based on Kalman filtering theory predicts the accuracies with which the free parameters in the perturbation terms can be estimated with radio metric tracking data through the process of trajectory reconstruction. It is concluded that Starprobe can contribute significant information on both the nature of gravitation and the structure and dynamics of the solar interior.
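The covariance analysis described above can be illustrated for a single estimated parameter (a minimal sketch of the scalar Kalman covariance update, not the Starprobe filter itself; the measurement model z = h*x + noise and the numerical values are assumptions):

```python
def covariance_history(p0, sensitivities, sigma):
    """Scalar Kalman covariance analysis: how the variance of one estimated
    perturbation parameter shrinks as radio metric measurements accrue.
    `sensitivities` holds the partial h of each measurement w.r.t. x."""
    p, history = p0, [p0]
    for h in sensitivities:
        gain = p * h / (h * h * p + sigma ** 2)   # Kalman gain
        p = (1.0 - gain * h) * p                  # covariance update
        history.append(p)
    return history
```

For identical unit-sensitivity measurements the variance follows p_n = p0/(1 + n*p0/sigma^2), the classic accuracy-prediction result that a covariance analysis of tracking data produces without processing any actual measurements.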
Countermeasure development using a formalised metric-based process
NASA Astrophysics Data System (ADS)
Barker, Laurence
2008-10-01
Guided weapons are a potent threat to both air and surface platforms; to protect a platform, countermeasures are often used to disrupt the operation of the tracking system. Development of effective techniques to defeat the guidance sensors is a complex activity. The countermeasure often responds to the behaviour of a responsive sensor system, creating a "closed loop" interaction. Performance assessment is difficult, and determining that enough knowledge exists to make a case that a platform is adequately protected is challenging. A set of metrics known as Countermeasure Confidence Levels (CCLs) is described. These set out a measure of confidence in the prediction of countermeasure performance. The CCL scale provides, for the first time, a method to determine whether enough evidence exists to support development activity and introduction to operational service. Application of the CCL scale to the development of a hypothetical countermeasure is described. This tracks how the countermeasure is matured from initial concept to in-service application. The purpose of each stage is described, together with a description of what work is likely to be needed. This will involve timely use of analysis, simulation, laboratory work, and field testing. The use of the CCL scale at key decision points is described; these include procurement decision points and entry-to-service decisions. Each stage requires collection of evidence of effectiveness. Completeness of the available evidence can be assessed, and duplication can be avoided. Read-across between concepts, weapon systems, and platforms can be addressed, and the impact of technology insertion can be assessed.
A performance study of unmanned aerial vehicle-based sensor networks under cyber attack
NASA Astrophysics Data System (ADS)
Puchaty, Ethan M.
In UAV-based sensor networks, an emerging area of interest is the performance of these networks under cyber attack. This study seeks to evaluate, from a System-of-Systems (SoS) perspective, the performance trade-offs between various UAV communications architecture options in the context of two missions: tracking ballistic missiles and tracking insurgents. An agent-based discrete event simulation is used to model a sensor communication network consisting of UAVs, military communications satellites, ground relay stations, and a mission control center. Network susceptibility to cyber attack is modeled with probabilistic failures and induced data variability, with performance metrics focusing on information availability, latency, and trustworthiness. Results demonstrated that using UAVs as routers increased network availability with a minimal latency penalty, and communications satellite networks were best for long-distance operations. Redundancy in the number of links between communication nodes helped mitigate cyber-caused link failures and added robustness in cases of induced data variability by an adversary. However, when failures were not independent, redundancy and UAV routing were detrimental to network performance in some cases. Sensitivity studies indicated that long cyber-caused downtimes and increasing failure dependencies resulted in build-ups of failures and caused significant degradations in network performance.
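The interaction between link redundancy and dependent (common-cause) failures can be sketched with a small Monte Carlo model (illustrative only; the probabilities and function names are assumptions, not values from the study):

```python
import random

def availability(p_link_fail, n_links, p_common=0.0, n_trials=20000, seed=1):
    """Monte Carlo availability estimate: a message gets through if no
    common-cause outage occurs and at least one of n_links independent
    redundant links is up."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_trials):
        if rng.random() < p_common:
            continue  # dependent failure takes down every link at once
        if any(rng.random() >= p_link_fail for _ in range(n_links)):
            delivered += 1
    return delivered / n_trials
```

With independent failures, availability approaches 1 - p^k for k redundant links; a common-cause failure mode caps that benefit, mirroring the study's finding that redundancy helps less when failures are not independent.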
Getting a Tenure-Track Faculty Position at a Teaching-Centered Research University
ERIC Educational Resources Information Center
Wilkens, Robert; Comfort, Kristen
2016-01-01
The goal of this article is to provide critical information to chemical engineers seeking a tenure-track faculty position within academia. We outline the application and submission process from start to finish, including a discussion on critical evaluation metrics sought by search committees. In addition, we highlight frequent mistakes made by…
Tang, Junqing; Heinimann, Hans Rudolf
2018-01-01
Traffic congestion brings not only delay and inconvenience but also associated national concerns, such as greenhouse gases, air pollutants, and road safety issues and risks. Identification, measurement, tracking, and control of urban recurrent congestion are vital for building a livable and smart community. A considerable body of work has contributed to tackling the problem. Several methods, such as time-based approaches and level of service, can be effective for characterizing congestion on urban streets. However, studies taking a systemic perspective have been rare in congestion quantification. Resilience, on the other hand, is an emerging concept that focuses on comprehensive systemic performance and characterizes the ability of a system to cope with disturbance and to recover its functionality. In this paper, we treat recurrent congestion as an internal disturbance and propose a modified metric inspired by the well-applied "R4" resilience-triangle framework. We constructed the metric with generic dimensions from both resilience engineering and transport science to quantify recurrent congestion based on spatial-temporal traffic patterns, and compared it with two other approaches in freeway and signal-controlled arterial cases. Results showed that the metric effectively captures congestion patterns in the study area and provides a quantitative benchmark for comparison. They also suggested not only good comparative performance of the proposed metric in measuring congestion strength, but also its capability of accounting for the discharging process of congestion. Sensitivity tests showed that the proposed metric is robust against parameter perturbation within its Robustness Range (RR), but the number of identified congestion patterns can be influenced by the existence of ϵ. In addition, the Elasticity Threshold (ET) and the spatial dimension of the cell-based platform significantly affect the congestion results, in both the number of detected patterns and their intensity.
By tackling this conventional problem with an emerging concept, our metric provides a systemic alternative approach and enriches the toolbox for congestion assessment. Future work will be conducted on a larger scale with multiplex scenarios under various traffic conditions.
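The resilience-triangle idea underlying the modified metric can be sketched as the cumulative shortfall of a performance curve below its baseline (a schematic of the generic triangle area only, not the authors' full metric; dimensions such as RR and ET are omitted):

```python
def resilience_loss(performance, dt=1.0, baseline=1.0):
    """Resilience-triangle area: trapezoidal integral of the shortfall of a
    sampled performance curve (e.g., speed ratio) below its baseline."""
    shortfall = [max(baseline - q, 0.0) for q in performance]
    return sum(0.5 * (a + b) * dt for a, b in zip(shortfall, shortfall[1:]))
```

A recurrent congestion event then appears as a dip in the curve, and deeper or longer dips, including a slow discharging (recovery) phase, accumulate a larger resilience loss.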
Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.
Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei
2017-12-04
Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) quantifies estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node: under a constraint on the moving range, the FIM-based objective function is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
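The use of the FIM as a depth-selection metric can be sketched for range-only measurements in the vertical plane (a simplified 2D illustration, not the paper's formulation; maximizing det(FIM) here stands in for the paper's FIM-based objective, and all geometry values are assumptions):

```python
import math

def range_fim(sensors, target, sigma=1.0):
    """2x2 FIM for range-only measurements of a target in the vertical
    plane (horizontal position, depth): sum of u u^T / sigma^2 terms."""
    jxx = jxz = jzz = 0.0
    for sx, sz in sensors:
        dx, dz = target[0] - sx, target[1] - sz
        r = math.hypot(dx, dz)
        ux, uz = dx / r, dz / r            # unit line-of-sight vector
        jxx += ux * ux / sigma ** 2
        jxz += ux * uz / sigma ** 2
        jzz += uz * uz / sigma ** 2
    return [[jxx, jxz], [jxz, jzz]]

def best_depth(fixed_node, x_new, candidate_depths, target):
    """Pick the new node's depth that maximizes det(FIM), i.e., the most
    informative measurement geometry."""
    def info(d):
        m = range_fim([fixed_node, (x_new, d)], target)
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return max(candidate_depths, key=info)
```

The chosen depth makes the two lines of sight as close to orthogonal as possible, which is the geometric intuition behind adjusting node depth to improve tracking accuracy.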
Multi-phenomenology Observation Network Evaluation Tool (MONET)
NASA Astrophysics Data System (ADS)
Oltrogge, D.; North, P.; Vallado, D.
2014-09-01
Evaluating the overall performance of an SSA "system-of-systems" observational network collecting against thousands of Resident Space Objects (RSOs) is very difficult for typical tasking or scheduling-based analysis tools. This is further complicated by networks that have a wide variety of sensor types and phenomena, including optical, radar, and passive RF types, each having unique resource, ops tempo, competing customer, and detectability constraints. We present details of the Multi-phenomenology Observation Network Evaluation Tool (MONET), which circumvents these difficulties by assessing the ideal performance of such a network via a digitized supply-vs-demand approach. Cells of each sensor's supply time are distributed among RSO targets of interest to determine the average performance of the network against that set of RSO targets. Orbit determination heuristics are invoked to represent the observation quantity and geometry notionally required to obtain the desired orbit estimation quality. To feed this approach, we derive the detectability and collection rate performance of optical, radar, and passive RF sensors from their physical and performance characteristics. We then prioritize the selected RSO targets according to object size, active/inactive status, orbit regime, and/or other considerations. Finally, the OD-derived tracking demands of each RSO of interest are levied against the remaining sensor supply until either (a) all sensor time is exhausted or (b) the list of RSO targets is exhausted. The outputs from MONET include overall network performance metrics delineated by sensor type, objects and orbits tracked, along with the likely orbit accuracies which might result from the conglomerate network tracking.
Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact
Leach, Allison M.; Compton, Jana E.; Galloway, James N.; Andrews, Jennifer
2017-01-01
When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate nitrogen footprints using the Nitrogen Footprint Tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This article compares those seven institutions’ results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, amount of food served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The comparisons also pointed to differences in the system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions’ ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing Nitrogen Footprint Tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. PMID:29350218
Robust Visual Tracking Revisited: From Correlation Filter to Template Matching.
Liu, Fanghui; Gong, Chen; Huang, Xiaolin; Zhou, Tao; Yang, Jie; Tao, Dacheng
2018-06-01
In this paper, we propose a novel matching-based tracker by investigating the relationship between template matching and the recently popular correlation filter based trackers (CFTs). In place of the correlation operation in CFTs, a sophisticated similarity metric termed mutual buddies similarity is proposed, which exploits the relationship of multiple reciprocal nearest neighbors for target matching. By doing so, our tracker obtains powerful discriminative ability in distinguishing target from background, as demonstrated by both empirical and theoretical analyses. Besides, instead of using a single template with the improper updating scheme of CFTs, we design a novel online template updating strategy named memory, which selects a certain number of representative and reliable tracking results from history to construct a stable and expressive current template set. This scheme helps the proposed tracker comprehensively capture target appearance variations and recall stable past results. Both qualitative and quantitative evaluations on two benchmarks suggest that the proposed tracking method performs favorably against recently developed CFTs and other competitive trackers.
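The reciprocal-nearest-neighbor relation that underlies the mutual buddies similarity can be sketched as follows (a minimal illustration of reciprocal k-NN sets, not the tracker's actual similarity metric; function names are assumptions):

```python
def k_nearest(i, dists, k):
    """Indices of the k nearest items to item i under a distance matrix."""
    order = sorted((d, j) for j, d in enumerate(dists[i]) if j != i)
    return [j for _, j in order[:k]]

def reciprocal_neighbors(i, dists, k):
    """Items j such that j is a k-NN of i AND i is a k-NN of j; such
    mutual matches are far more reliable than one-directional ones."""
    return [j for j in k_nearest(i, dists, k) if i in k_nearest(j, dists, k)]
```

Requiring the match to hold in both directions suppresses spurious one-way matches to background clutter, which is the discriminative intuition behind similarity metrics built on reciprocal nearest neighbors.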
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and a store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that are provided to help with signal and image analysis using various numerical measures and plotting functions. Signals from a 6 degree-of-freedom magnetic motion tracker (MMT) provide a basis for video-game sprite control. The MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient.
Gahm, Jin Kyu; Shi, Yonggang
2018-05-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.
A novel method for quantification of beam's-eye-view tumor tracking performance.
Hu, Yue-Houng; Myronakis, Marios; Rottmann, Joerg; Wang, Adam; Morf, Daniel; Shedlock, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross
2017-11-01
In-treatment imaging using an electronic portal imaging device (EPID) can be used to confirm patient and tumor positioning. Real-time tumor tracking performance using current digital megavolt (MV) imagers is hindered by poor image quality. Novel EPID designs may help to improve quantum noise response while preserving the high spatial resolution of the current clinical detector. Recently investigated EPID design improvements include, but are not limited to, multi-layer imager (MLI) architecture, thick crystalline and amorphous scintillators, and phosphor pixelation and focusing. The goal of the present study was to provide a method for quantifying improvement in tracking performance as well as to reveal the physical underpinnings of detector design that impact tracking quality. The study employs a generalizable ideal observer methodology for the quantification of tumor tracking performance. The analysis is applied to study both the effect of increasing scintillator thickness on a standard single-layer imager (SLI) design and the effect of MLI architecture on tracking performance. The present study uses the ideal observer signal-to-noise ratio (d') as a surrogate for tracking performance. We employ functions that model clinically relevant tasks and generalized frequency-domain imaging metrics to connect image quality with tumor tracking. A detection task for relevant Cartesian shapes (i.e., spheres and cylinders) was used to quantify the trackability of cases employing fiducial markers. Automated lung tumor tracking algorithms often leverage the differences between benign and malignant lung tissue textures. These types of algorithms (e.g., soft-tissue localization, STiL) were simulated by designing a discrimination task quantifying the differentiation of tissue textures, measured experimentally and fit as a power law (with exponent β) using a cohort of MV images of patient lungs.
The modeled MTF and NPS were used to investigate the effect of scintillator thickness and MLI architecture on tumor tracking performance. Quantification of MV images of lung tissue as an inverse power law with respect to frequency yields exponent values of β = 3.11 and 3.29 for benign and malignant tissues, respectively. Tracking performance with and without fiducials was found to be generally limited by quantum noise, a factor dominated by quantum detection efficiency (QDE). For a generic SLI construction, increasing the scintillator thickness (gadolinium oxysulfide, GOS) from a standard 290 μm to 1720 μm reduces noise to about 10%. However, 81% of this reduction is appreciated between 290 and 1000 μm. In comparing MLI and SLI detectors of equivalent individual GOS layer thickness, the improvement in noise is equal to the number of layers in the detector (i.e., 4), with almost no difference in MTF. Further, the improvement in tracking performance was slightly less than the square root of the reduction in noise, approximately 84-90%. In comparing an MLI detector with an SLI with a GOS scintillator of equivalent total thickness, the improvement in object detectability is approximately 34-39%. We have presented a novel method for quantification of tumor tracking quality and have applied this model to evaluate the performance of SLI and MLI EPID designs. We showed that improved tracking quality is primarily limited by improvements in the NPS. When compared to a very thick scintillator SLI, employing MLI architecture exhibits the same gains in QDE but, by mitigating the effect of optical Swank noise, results in more dramatic improvements in tracking performance. © 2017 American Association of Physicists in Medicine.
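The frequency-domain ideal-observer figure of merit used in this kind of analysis can be sketched numerically (a schematic one-dimensional version of d'; the spectra below are toy values, not measured MTF/NPS data from the study):

```python
import math

def ideal_observer_dprime(signal, mtf, nps, df=1.0):
    """Frequency-domain ideal-observer SNR for a detection task:
    d'^2 = sum over frequency bins of |S(f)|^2 * MTF(f)^2 / NPS(f) * df."""
    d2 = sum(s * s * m * m / n * df for s, m, n in zip(signal, mtf, nps))
    return math.sqrt(d2)
```

In this toy form, reducing the NPS by a factor of 4 at fixed MTF raises d' by the square root of that factor (a factor of 2), consistent with the abstract's observation that tracking gains track somewhat below the square root of the noise reduction once MTF changes are included.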
Interfacing with USSTRATCOM and UTTR during Stardust Earth Return
NASA Technical Reports Server (NTRS)
Jefferson, David C.; Baird, Darren T.; Cangahuala, Laureano A.; Lewis, George D.
2006-01-01
The Stardust Sample Return Capsule (SRC) separated from the main spacecraft four hours prior to atmospheric entry. Between this time and the time at which the SRC touched down at the Utah Test and Training Range (UTTR), two organizations external to JPL were involved in tracking the capsule. Orbit determination for the Stardust spacecraft during deep space cruise, the encounters of asteroid Annefrank and comet Wild 2, and the final approach to Earth used X-band radio metric Doppler and range data obtained through the Deep Space Network (DSN). The SRC lacked the electronics needed for coherently transponded radio metric tracking, so the DSN was not able to track the SRC after it separated from the main spacecraft. Although the expected delivery accuracy at atmospheric entry was well within the capability needed to target the SRC to the desired ground location, it was still desirable to obtain direct knowledge of the SRC trajectory in case of anomalies. For this reason, U.S. Strategic Command (USSTRATCOM) was engaged to track the SRC between separation and atmospheric entry. Once the SRC entered the atmosphere, ground sensors at UTTR were tasked to acquire the descending SRC and maintain track during the descent in order to determine the landing location, to which the ground recovery team was then directed. This paper discusses organizational interfaces, data products, and delivery schedules, and describes the actual tracking operations.
A Computable Definition of Sepsis Facilitates Screening and Performance Improvement Tracking
Warmus, Holly R.; Schaffner, Erin K.; Kantawala, Sajel; Carcillo, Joseph; Rosen, Johanna; Horvat, Christopher M.
2018-01-01
Background: Sepsis kills almost 5,000 children annually, accounting for 16% of pediatric health care spending in the United States. Objectives: We sought to identify sepsis within the Electronic Health Record (EHR) of a quaternary children’s hospital to characterize disease incidence, improve recognition and response, and track performance metrics. Methods: Methods are organized in a plan-do-study-act cycle. During the “plan” phase, electronic definitions of sepsis (blood culture and antibiotic within 24 hours) and septic shock (sepsis plus vasoactive medication) were created to establish benchmark data and track progress with statistical process control. The performance of a screening tool was evaluated in the emergency department. During the “do” phase, a novel inpatient workflow is being piloted, which involves regular sepsis screening by nurses using the tool and a regimented response to high-risk patients. Results: Screening tool use in the emergency department reduced time to antibiotics (Fig. 1). Of the 6,159 admissions between July and December 2016, the EHR definitions identified 1,433 (23.3%) with sepsis, of which 159 (11.1%) had septic shock. Hospital mortality was 2.2% for all sepsis patients and 15.7% for septic shock (Table 1). These findings approximate epidemiologic studies of sepsis and severe sepsis, which report a prevalence range of 0.45–8.2% and mortality range of 8.2–25% (Table 2).1–5 Conclusions/Implications: Implementation of a sepsis screening tool is associated with improved performance. The prevalence of sepsis conditions identified with electronic definitions approximates the epidemiologic landscape characterized by other point-prevalence and administrative studies, providing face validity to this approach and proving useful for tracking performance improvement. PMID:29732457
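The electronic definitions above (sepsis: blood culture plus antibiotic within 24 hours; septic shock: sepsis plus a vasoactive medication) are simple enough to sketch directly. The event representation below is hypothetical, not the hospital's actual EHR schema.

```python
from datetime import datetime

def classify(events):
    """Classify one admission from a list of (timestamp, kind) events,
    where kind is 'blood_culture', 'antibiotic', or 'vasoactive'.
    Sepsis: any blood culture and antibiotic within 24 hours of each other.
    Septic shock: sepsis plus any vasoactive medication."""
    cultures = [t for t, k in events if k == 'blood_culture']
    abx = [t for t, k in events if k == 'antibiotic']
    sepsis = any(abs((a - c).total_seconds()) <= 24 * 3600
                 for c in cultures for a in abx)
    shock = sepsis and any(k == 'vasoactive' for _, k in events)
    return 'septic_shock' if shock else 'sepsis' if sepsis else 'none'
```

Applied to each admission, a classifier like this yields the cohort counts that feed the statistical process control charts.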
The impact of fatigue on latent print examinations as revealed by behavioral and eye gaze testing.
Busey, Thomas; Swofford, Henry J; Vanderkolk, John; Emerick, Brandi
2015-06-01
Eye tracking and behavioral methods were used to assess the effects of fatigue on performance in latent print examiners. Eye gaze was measured both before and after a fatiguing exercise involving fine-grained examination decisions. The eye tracking tasks used similar images, often laterally reversed versions of previously viewed prints, which holds image detail constant while minimizing prior recognition. These methods, as well as a within-subject design with fine-grained analyses of the eye gaze data, allow fairly strong conclusions despite a relatively small subject population. Consistent with the effects of fatigue on practitioners in other fields such as radiology, behavioral performance declined with fatigue, and the eye gaze statistics suggested a smaller working memory capacity. Participants also terminated the search/examination process sooner when fatigued. However, fatigue did not produce changes in inter-examiner consistency as measured by the Earth Mover Metric. Implications for practice are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
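In one dimension, the Earth Mover's Distance underlying an Earth Mover Metric reduces to the summed absolute difference between cumulative distributions. The sketch below is a toy 1-D version over aligned gaze histograms; the study's actual computation over 2-D gaze maps is more involved.

```python
def earth_mover_1d(p, q):
    """1-D Earth Mover's Distance between two histograms over the same
    bins: sum of absolute differences of the normalized cumulative
    distributions (how far probability mass must be 'moved')."""
    sp, sq = sum(p), sum(q)
    cum_p = cum_q = 0.0
    dist = 0.0
    for a, b in zip(p, q):
        cum_p += a / sp
        cum_q += b / sq
        dist += abs(cum_p - cum_q)
    return dist
```

Two examiners with identical gaze histograms score 0; the score grows as fixation mass concentrates in different regions.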
Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang
2017-08-01
The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, human cadaver calf and turkey leg using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests were used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) for native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5% but was absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) showed heterogeneous behavior, but in the range reported in the literature.
Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact correction in region of interest- and mask-based analyses (P < 0.05 each). Iteratively reweighted linear least squares and iRESTORE showed equivalent results, but WMSI was faster than iRESTORE. Muscle delineation and artifact load significantly improved after correction (P < 0.05 each). Weighted mean of signal intensity correction significantly improved STEAM-based quantitative DTI analyses and fiber tracking of lower-limb muscles, providing a robust tool for musculoskeletal applications.
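One plausible reading of a weighted mean of signal intensities is a self-weighted average across repeated acquisitions, so that repetitions with artifactual signal loss contribute little to the corrected value. The exact WMSI formula is not given in the abstract, so the sketch below is only an illustrative assumption.

```python
def weighted_mean_signal(averages):
    """Combine repeated acquisitions voxel-by-voxel using each
    repetition's own intensity as its weight (self-weighted mean),
    downweighting signal-loss outliers. 'averages' is a list of
    repetitions, each a list of voxel intensities."""
    corrected = []
    for voxel_values in zip(*averages):
        s = sum(voxel_values)
        # weights w_i = v_i, so the estimate is sum(v_i^2) / sum(v_i)
        corrected.append(sum(v * v for v in voxel_values) / s if s else 0.0)
    return corrected
```

For a voxel acquired as (100, 100, 10), the plain mean is 70 while the self-weighted mean stays near 96, illustrating how a dropout-corrupted repetition is suppressed.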
Do the Eyes Have It? Using Eye Tracking to Assess Students' Cognitive Dimensions
ERIC Educational Resources Information Center
Nisiforou, Efi A.; Laghos, Andrew
2013-01-01
Field dependence/independence (FD/FI) is a significant dimension of cognitive styles. The paper presents results of a study that seeks to identify individuals' level of field independence during visual stimulus tasks processing. Specifically, it examined the relationship between the Hidden Figure Test (HFT) scores and the eye tracking metrics.…
ERIC Educational Resources Information Center
Mallon, William T.; Jones, Robert F.
2002-01-01
Identified medical schools or departments that used metric systems to quantify faculty activity and productivity in teaching and analyzed purposes and progress of those systems. Found that identifying a "rational" method for distributing funds was the most common reason articulated, and that schools varied in types of information tracked. Also…
A Case Study: Analyzing City Vitality with Four Pillars of Activity-Live, Work, Shop, and Play.
Griffin, Matt; Nordstrom, Blake W; Scholes, Jon; Joncas, Kate; Gordon, Patrick; Krivenko, Elliott; Haynes, Winston; Higdon, Roger; Stewart, Elizabeth; Kolker, Natali; Montague, Elizabeth; Kolker, Eugene
2016-03-01
This case study evaluates and tracks the vitality of a city (Seattle), based on a data-driven approach, using strategic, robust, and sustainable metrics. This case study was collaboratively conducted by the Downtown Seattle Association (DSA) and CDO Analytics teams. The DSA is a nonprofit organization focused on making the city of Seattle and its Downtown a healthy and vibrant place to Live, Work, Shop, and Play. DSA primarily operates through public policy advocacy, community and business development, and marketing. In 2010, the organization turned to CDO Analytics (cdoanalytics.org) to develop a process that can guide and strategically focus DSA efforts and resources for maximal benefit to the city of Seattle and its Downtown. CDO Analytics was asked to develop clear, easily understood, and robust metrics for a baseline evaluation of the health of the city, as well as for ongoing monitoring and comparisons of the vitality, sustainability, and growth. The DSA and CDO Analytics teams strategized on how to effectively assess and track the vitality of Seattle and its Downtown. The two teams filtered a variety of data sources, and evaluated the veracity of multiple diverse metrics. This iterative process resulted in the development of a small number of strategic, simple, reliable, and sustainable metrics across four pillars of activity: Live, Work, Shop, and Play. Data during the 5 years before 2010 were used for the development of the metrics and model and its training, and data during the 5 years from 2010 and on were used for testing and validation. This work enabled DSA to routinely track these strategic metrics, use them to monitor the vitality of Downtown Seattle, prioritize improvements, and identify new value-added programs. As a result, the four-pillar approach became an integral part of the data-driven decision-making and execution of the Seattle community's improvement activities.
The approach described in this case study is actionable, robust, inexpensive, and easy to adopt and sustain. It can be applied to cities, districts, counties, regions, states, or countries, enabling cross-comparisons and improvements of vitality, sustainability, and growth.
Digital Image Correlation for Performance Monitoring
NASA Technical Reports Server (NTRS)
Palaviccini, Miguel; Turner, Dan; Herzberg, Michael
2016-01-01
Evaluating the health of a mechanism requires more than a binary determination of whether an operation was completed; it requires analyzing more comprehensive, full-field data. Health monitoring is a process of non-destructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed to track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.
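Tracking a laser-marked fiducial pattern from frame to frame can be illustrated with a one-dimensional template match. Production DIC software is far more sophisticated (sub-pixel, full-field correlation), so this is only a toy sketch of the matching idea.

```python
def match_offset(signal, template):
    """Locate a fiducial pattern in one frame: slide the template across
    the signal and return the offset minimizing the sum of squared
    differences (SSD). Tracking = repeating this per frame and
    differencing offsets to get motion."""
    best, best_ssd = 0, float('inf')
    for off in range(len(signal) - len(template) + 1):
        ssd = sum((signal[off + i] - t) ** 2 for i, t in enumerate(template))
        if ssd < best_ssd:
            best, best_ssd = off, ssd
    return best
```

Real DIC replaces SSD with normalized correlation over 2-D subsets and interpolates for sub-pixel displacement, but the search structure is the same.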
Reproducibility of graph metrics of human brain structural networks.
Duda, Jeffrey T; Cook, Philip A; Gee, James C
2014-01-01
Recent interest in human brain connectivity has led to the application of graph theoretical analysis to human brain structural networks, in particular white matter connectivity inferred from diffusion imaging and fiber tractography. While these methods have been used to study a variety of patient populations, there has been less examination of the reproducibility of these methods. A number of tractography algorithms exist and many of these are known to be sensitive to user-selected parameters. The methods used to derive a connectivity matrix from fiber tractography output may also influence the resulting graph metrics. Here we examine how these algorithm and parameter choices influence the reproducibility of proposed graph metrics on a publicly available test-retest dataset consisting of 21 healthy adults. The Dice coefficient is used to examine topological similarity of constant density subgraphs both within and between subjects. Seven graph metrics are examined here: mean clustering coefficient, characteristic path length, largest connected component size, assortativity, global efficiency, local efficiency, and rich club coefficient. The reproducibility of these network summary measures is examined using the intraclass correlation coefficient (ICC). Graph curves are created by treating the graph metrics as functions of a parameter such as graph density. Functional data analysis techniques are used to examine differences in graph measures that result from the choice of fiber tracking algorithm. The graph metrics consistently showed good levels of reproducibility as measured with ICC, with the exception of some instability at low graph density levels. The global and local efficiency measures were the most robust to the choice of fiber tracking algorithm.
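The Dice coefficient used for topological similarity of constant-density subgraphs can be sketched directly over edge sets (a minimal version; the study applies it within and between subjects across densities):

```python
def dice_coefficient(edges_a, edges_b):
    """Topological similarity of two graphs at equal edge density:
    Dice = 2|A ∩ B| / (|A| + |B|) over sets of undirected edges,
    each edge given as a (node, node) pair in either order."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

Identical test-retest subgraphs score 1.0; fully disjoint edge sets score 0.0, so instability at low densities shows up as Dice values drifting toward zero.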
The Creation of a Pediatric Hospital Medicine Dashboard: Performance Assessment for Improvement.
Fox, Lindsay Anne; Walsh, Kathleen E; Schainker, Elisabeth G
2016-07-01
Leaders of pediatric hospital medicine (PHM) recommended a clinical dashboard to monitor clinical practice and make improvements. To date, however, no programs report implementing a dashboard including the proposed broad range of metrics across multiple sites. We sought to (1) develop and populate a clinical dashboard to demonstrate productivity, quality, group sustainability, and value added for an academic division of PHM across 4 inpatient sites; (2) share dashboard data with division members and administrations to improve performance and guide program development; and (3) revise the dashboard to optimize its utility. Division members proposed a dashboard based on PHM recommendations. We assessed feasibility of data collection and defined and modified metrics to enable collection of comparable data across sites. We gathered data and shared the results with division members and administrations. We collected quarterly and annual data from October 2011 to September 2013. We found comparable metrics across all sites for descriptive, productivity, group sustainability, and value-added domains; only 72% of all quality metrics were tracked in a comparable fashion. After sharing the data, we saw increased timeliness of nursery discharges and an increase in hospital committee participation and grant funding. PHM dashboards have the potential to guide program development, mobilize faculty to improve care, and demonstrate program value to stakeholders. Dashboard implementation at other institutions and data sharing across sites may help to better define and strengthen the field of PHM by creating benchmarks and help improve the quality of pediatric hospital care. Copyright © 2016 by the American Academy of Pediatrics.
Real-Time Performance Feedback for the Manual Control of Spacecraft
NASA Astrophysics Data System (ADS)
Karasinski, John Austin
Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and simple instructor-model visual feedback scheme was developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four degree of freedom manual control task with two secondary tasks.
QoS-aware health monitoring system using cloud-based WBANs.
Almashaqbeh, Ghada; Hayajneh, Thaier; Vasilakos, Athanasios V; Mohd, Bassam J
2014-10-01
Wireless Body Area Networks (WBANs) are amongst the best options for remote health monitoring. However, as standalone systems, WBANs have many limitations due to the large amount of processed data, the mobility of monitored users, and the network coverage area. Integrating WBANs with cloud computing provides effective solutions to these problems and improves the performance of WBAN-based systems. Accordingly, in this paper we propose a cloud-based real-time remote health monitoring system for tracking the health status of non-hospitalized patients while they practice their daily activities. Compared with existing cloud-based WBAN frameworks, we divide the cloud into a local cloud, which includes the monitored users and local medical staff, and a global cloud that includes the outer world. The performance of the proposed framework is optimized by reducing congestion, interference, and data delivery delay while supporting user mobility. Several novel techniques and algorithms are proposed to accomplish our objective. First, the concept of data classification and aggregation is utilized to avoid clogging the network with unnecessary data traffic. Second, a dynamic channel assignment policy is developed to distribute the WBANs associated with the users over the available frequency channels to manage interference. Third, a delay-aware routing metric is proposed for use by the local cloud in its multi-hop communication to speed up the reporting of health-related data. Fourth, the delay-aware metric is further utilized by the association protocols used by the WBANs to connect with the local cloud. Finally, the system with all the proposed techniques and algorithms is evaluated using extensive ns-2 simulations. The simulation results show the superior performance of the proposed architecture in optimizing end-to-end delay, handling increased interference levels, maximizing network capacity, and tracking user mobility.
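A delay-aware routing metric of the kind described typically selects the multi-hop path minimizing cumulative per-link delay. The generic Dijkstra sketch below illustrates that selection; the paper's actual metric and ns-2 models are not specified in the abstract.

```python
import heapq

def min_delay_path(graph, src, dst):
    """Dijkstra over per-link delay estimates (a generic sketch of
    delay-aware routing). graph[u] = {v: delay_uv}. Returns the minimum
    cumulative delay from src to dst, or infinity if unreachable."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')
```

With link weights that track queuing plus transmission delay, the same computation serves both routing inside the local cloud and the WBAN association decision.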
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively nominal way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does capture all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability analysis metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. Then, utilizing a heuristic developed for review of the PRACA data, we determine which reports identify a credible failure. These data are then used to determine inter-arrival times in order to estimate a metric for repairable component or LRU reliability. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for performing repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
Guiding principles and checklist for population-based quality metrics.
Krishnan, Mahesh; Brunelli, Steven M; Maddux, Franklin W; Parker, Thomas F; Johnson, Douglas; Nissenson, Allen R; Collins, Allan; Lacson, Eduardo
2014-06-06
The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based quality metrics (clinical performance measures) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers.
Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and, most importantly, reflect what represents the best care to improve patient outcomes. Copyright © 2014 by the American Society of Nephrology.
Bibliometrics: tracking research impact by selecting the appropriate metrics.
Agarwal, Ashok; Durairajanayagam, Damayanthi; Tatagari, Sindhuja; Esteves, Sandro C; Harlev, Avi; Henkel, Ralf; Roychoudhury, Shubhadeep; Homa, Sheryl; Puchalt, Nicolás Garrido; Ramasamy, Ranjith; Majzoub, Ahmad; Ly, Kim Dao; Tvrda, Eva; Assidi, Mourad; Kesari, Kavindra; Sharma, Reecha; Banihani, Saleem; Ko, Edmund; Abu-Elmagd, Muhammad; Gosalvez, Jaime; Bashiri, Asher
2016-01-01
Traditionally, the success of a researcher is assessed by the number of publications he or she publishes in peer-reviewed, indexed, high impact journals. This essential yardstick, often referred to as the impact of a specific researcher, is assessed through the use of various metrics. While researchers may be acquainted with such metrics, many do not know how to use them to enhance their careers. In addition to these metrics, a number of other factors should be taken into consideration to objectively evaluate a scientist's profile as a researcher and academician. Moreover, each metric has its own limitations that need to be considered when selecting an appropriate metric for evaluation. This paper provides a broad overview of the wide array of metrics currently in use in academia and research. Popular metrics are discussed and defined, including traditional metrics and article-level metrics, some of which are applied to researchers for a greater understanding of a particular concept, including varicocele that is the thematic area of this Special Issue of Asian Journal of Andrology. We recommend the combined use of quantitative and qualitative evaluation using judiciously selected metrics for a more objective assessment of scholarly output and research impact.
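As a concrete example of the traditional researcher-level metrics this overview surveys, the widely used h-index can be computed in a few lines (a minimal sketch over a citation-count list):

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has at least
    h publications, each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank       # the rank-th paper still has >= rank citations
        else:
            break
    return h
```

A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4: four papers with at least four citations each, but not five with at least five.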
Adaptive and accelerated tracking-learning-detection
NASA Astrophysics Data System (ADS)
Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu
2013-08-01
An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: online generation of a scale space frees the algorithm from dependence on pre-defined scanning grids. The second is efficiency, achieved through both algorithm-level acceleration and CPU/GPU parallelization: scale prediction using an auto-regressive moving average (ARMA) model learns the object motion to narrow the detector's search range, and a fixed number of positive and negative samples ensures a constant retrieval time. In addition, several TLD details are redesigned to obtain a better effect: results are integrated using a weight that combines the normalized correlation coefficient and the scale size, and distance-metric thresholds are adjusted online. A contrastive experiment on success rate, center location error, and execution time, using partial TLD datasets and Shenzhou IX return-capsule image sequences, shows a performance and efficiency upgrade over state-of-the-art TLD. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.
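The integration weight combining the normalized correlation coefficient with scale size can be sketched as a convex mix. The alpha weighting and the scale-similarity form below are assumptions for illustration, not AA-TLD's published formula.

```python
def combined_score(patch, template, scale_det, scale_trk, alpha=0.5):
    """Score a candidate result by mixing appearance similarity (NCC
    between intensity patches) with scale agreement between detector
    and tracker estimates. alpha is a hypothetical mixing weight."""
    n = len(patch)
    mp, mt = sum(patch) / n, sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = (sum((p - mp) ** 2 for p in patch)
           * sum((t - mt) ** 2 for t in template)) ** 0.5
    ncc = num / den if den else 0.0
    scale_sim = min(scale_det, scale_trk) / max(scale_det, scale_trk)
    return alpha * ncc + (1.0 - alpha) * scale_sim
```

A candidate matching the template exactly at an agreed scale scores 1.0; appearance drift or scale disagreement pulls the score down, which is the behavior a result-integration weight needs.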
Garcia Castro, Leyla Jael; Berlanga, Rafael; Garcia, Alexander
2015-10-01
Although full-text articles are provided by the publishers in electronic formats, it remains a challenge to find related work beyond the title and abstract context. Identifying related articles based on their abstract is indeed a good starting point; this process is straightforward and does not consume as many resources as full-text based similarity would require. However, further analyses may require in-depth understanding of the full content. Two articles with highly related abstracts can be substantially different regarding the full content. How similarity differs when considering title-and-abstract versus full-text, and which semantic similarity metric provides better results when dealing with full-text articles, are the main issues addressed in this manuscript. We have benchmarked three similarity metrics (BM25, PMRA, and Cosine) in order to determine which one performs best when using concept-based annotations on full-text documents. We also evaluated variations in similarity values based on title-and-abstract against those relying on full-text. Our test dataset comprises the Genomics track article collection from the 2005 Text Retrieval Conference. Initially, we used entity recognition software to semantically annotate titles and abstracts as well as full-text with concepts defined in the Unified Medical Language System (UMLS®). For each article, we created a document profile, i.e., a set of identified concepts, term frequency, and inverse document frequency; we then applied various similarity metrics to those document profiles. We considered correlation, precision, recall, and F1 in order to determine which similarity metric performs best with concept-based annotations. For those full-text articles available in PubMed Central Open Access (PMC-OA), we also performed dispersion analyses in order to understand how similarity varies when considering full-text articles.
We have found that the PubMed Related Articles similarity metric is the most suitable for full-text articles annotated with UMLS concepts. For similarity values above 0.8, all metrics exhibited an F1 around 0.2 and a recall around 0.1; BM25 showed the highest precision close to 1; in all cases the concept-based metrics performed better than the word-stem-based one. Our experiments show that similarity values vary when considering only title-and-abstract versus full-text similarity. Therefore, analyses based on full-text become useful when a given research requires going beyond title and abstract, particularly regarding connectivity across articles. Visualization available at ljgarcia.github.io/semsim.benchmark/, data available at http://dx.doi.org/10.5281/zenodo.13323. Copyright © 2015 Elsevier Inc. All rights reserved.
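Of the benchmarked metrics, cosine similarity over concept-based document profiles is the simplest to sketch. Profiles here are dicts mapping a concept identifier (e.g., a UMLS CUI) to its tf-idf weight; the identifiers below are placeholders, not real CUIs.

```python
def cosine_similarity(profile_a, profile_b):
    """Cosine similarity between two document profiles, each a dict
    mapping a concept ID to its tf-idf weight. Returns 0.0 for
    degenerate (empty or zero-weight) profiles."""
    dot = sum(w * profile_b.get(c, 0.0) for c, w in profile_a.items())
    na = sum(w * w for w in profile_a.values()) ** 0.5
    nb = sum(w * w for w in profile_b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Running such a metric over title-and-abstract profiles versus full-text profiles for the same article pair is exactly the dispersion comparison the study performs.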
NASA Astrophysics Data System (ADS)
O'Brien, K.; Hapgood, K.
2012-12-01
While universities are often perceived within the wider population as flexible, family-friendly work environments, continuous full-time employment remains the norm in tenure-track roles. This traditional career path is strongly reinforced by research metrics, which typically measure accumulated historical performance. There is a strong feedback between historical and future research output, and there is a minimum threshold of research output below which it becomes very difficult to attract funding, high-quality students and collaborators. The competing timescales of female fertility and establishment of a research career mean that many women do not exceed this threshold before having children. Using a mathematical model taken from an ecological analogy, we demonstrate how these mechanisms create substantial barriers to pursuing a research career while working part-time or returning from extended parental leave. The model highlights a conundrum for research managers: metrics can promote research productivity and excellence within an organisation, but can classify highly capable scientists as poor performers simply because they have not followed the traditional career path of continuous full-time employment. Based on this analysis, we make concrete recommendations for researchers and managers seeking to retain the skills and training invested in female scientists. We also provide survival tactics for women and men who wish to pursue a career in science while also spending substantial time and energy raising their families.
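The feedback-and-threshold mechanism can be illustrated with a toy recurrence (purely illustrative assumptions, not the authors' ecological model): output compounds once a funding threshold is crossed and decays while below it, so a part-time trajectory that never crosses the threshold falls ever further behind.

```python
def research_output(years, fraction_fte, alpha=0.6, threshold=1.0, decay=0.2, r0=0.5):
    """Toy feedback model (illustrative coefficients only): above `threshold`,
    past output attracts funding and compounds; below it, momentum decays."""
    r = r0
    trajectory = []
    for _ in range(years):
        feedback = alpha * r if r >= threshold else -decay * r
        r = max(0.0, r + fraction_fte + feedback - decay)
        trajectory.append(r)
    return trajectory

full_time = research_output(10, fraction_fte=1.0)
part_time = research_output(10, fraction_fte=0.4)
print(full_time[-1] > part_time[-1])  # the feedback amplifies the initial gap
```

With these parameters the part-time trajectory converges toward the threshold from below and never triggers the positive feedback, which is the qualitative barrier the abstract describes.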
Modular Filter and Source-Management Upgrade of RADAC
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Smith, Donna C.
2007-01-01
In an upgrade of the Range Data Acquisition Computer (RADAC) software, a modular software object library was developed to implement required functionality for filtering of flight-vehicle-tracking data and management of tracking-data sources. (The RADAC software is used to process flight-vehicle metric data for real-time display in the Wallops Flight Facility Range Control Center and Mobile Control Center.)
NASA Technical Reports Server (NTRS)
Tikidjian, Raffi; Mackey, Ryan
2008-01-01
The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software (see figure) is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the number of spacecraft tracked and in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes), and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.
Swan, Melanie
2013-06-01
A key contemporary trend emerging in big data science is the quantified self (QS): individuals engaged in the self-tracking of any kind of biological, physical, behavioral, or environmental information, as n=1 individuals or in groups. There are opportunities for big data scientists to develop new models to support QS data collection, integration, and analysis, and also to lead in defining open-access database resources and privacy standards for how personal data is used. Next-generation QS applications could include tools for rendering QS data meaningful in behavior change, establishing baselines and variability in objective metrics, applying new kinds of pattern recognition techniques, and aggregating multiple self-tracking data streams from wearable electronics, biosensors, mobile phones, genomic data, and cloud-based services. The long-term vision of QS activity is that of a systemic monitoring approach where an individual's continuous personal information climate provides real-time performance optimization suggestions. There are some potential limitations related to QS activity (barriers to widespread adoption and a critique regarding scientific soundness), but these may be overcome. One interesting aspect of QS activity is that it is fundamentally both a quantitative and a qualitative phenomenon, since it includes both the collection of objective metrics data and the subjective experience of the impact of these data. Some of this dynamic is being explored as the quantified self is becoming the qualified self in two new ways: by applying QS methods to the tracking of qualitative phenomena such as mood, and by understanding that QS data collection is just the first step in creating qualitative feedback loops for behavior change. In the long-term future, the quantified self may become additionally transformed into the extended exoself as data quantification and self-tracking enable the development of new sense capabilities that are not possible with ordinary senses.
The individual body becomes a more knowable, calculable, and administrable object through QS activity, and individuals have an increasingly intimate relationship with data as it mediates the experience of reality.
Baker, Richard M; Brasch, Megan E; Manning, M Lisa; Henderson, James H
2014-08-06
Understanding single and collective cell motility in model environments is foundational to many current research efforts in biology and bioengineering. To elucidate subtle differences in cell behaviour despite cell-to-cell variability, we introduce an algorithm for tracking large numbers of cells for long time periods and present a set of physics-based metrics that quantify differences in cell trajectories. Our algorithm, termed automated contour-based tracking for in vitro environments (ACTIVE), was designed for adherent cell populations subject to nuclear staining or transfection. ACTIVE is distinct from existing tracking software because it accommodates both variability in image intensity and multi-cell interactions, such as divisions and occlusions. When applied to low-contrast images from live-cell experiments, ACTIVE reduced error in analysing cell occlusion events by as much as 43% compared with a benchmark-tracking program while simultaneously tracking cell divisions and resulting daughter-daughter cell relationships. The large dataset generated by ACTIVE allowed us to develop metrics that capture subtle differences between cell trajectories on different substrates. We present cell motility data for thousands of cells studied at varying densities on shape-memory-polymer-based nanotopographies and identify several quantitative differences, including an unanticipated difference between two 'control' substrates. We expect that ACTIVE will be immediately useful to researchers who require accurate, long-time-scale motility data for many cells. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Fabius, Raymond; Loeppke, Ronald R; Hohn, Todd; Fabius, Dan; Eisenberg, Barry; Konicki, Doris L; Larson, Paul
2016-01-01
The aim of this study was to assess the hypothesis that stock market performance of companies achieving high scores on either health or safety in the Corporate Health Achievement Award (CHAA) process will be superior to average index performance. The stock market performance of portfolios of CHAA winners was examined under six different scenarios using simulation and past market performance in tests of association framed to inform the investor community. CHAA portfolios outperformed the S&P average on all tests. This study adds to the growing evidence that a healthy and safe workforce correlates with a company's performance and its ability to provide positive returns to shareholders. It advances the idea that a proven set of health and safety metrics based on the CHAA evaluation process merits inclusion with existing measures for market valuation.
FAST TRACK COMMUNICATION: Symmetry breaking, conformal geometry and gauge invariance
NASA Astrophysics Data System (ADS)
Ilderton, Anton; Lavelle, Martin; McMullan, David
2010-08-01
When the electroweak action is rewritten in terms of SU(2) gauge-invariant variables, the Higgs can be interpreted as a conformal metric factor. We show that asymptotic flatness of the metric is required to avoid a Gribov problem: without it, the new variables fail to be nonperturbatively gauge invariant. We also clarify the relations between this approach and unitary gauge fixing, and the existence of similar transformations in other gauge theories.
NASA Technical Reports Server (NTRS)
Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)
1997-01-01
Topics Considered Include: NASA's Shared Experiences Program; Core Issues for the Future of the Agency; National Space Policy Strategic Management; ISO 9000 and NASA; New Acquisition Initiatives; Full Cost Initiative; PM Career Development; PM Project Database; NASA Fast Track Studies; Fast Track Projects; Earned Value Concept; Value-Added Metrics; Saturn Corporation Lessons Learned; Project Manager Credibility.
Toward an optimisation technique for dynamically monitored environment
NASA Astrophysics Data System (ADS)
Shurrab, Orabi M.
2016-10-01
The data fusion community has introduced multiple procedures for situational assessment to facilitate timely responses to emerging situations. In particular, the process refinement level of the Joint Directors of Laboratories (JDL) model is a meta-process for assessing and improving the data fusion task during real-time operation; in other words, it is an optimisation technique to verify overall data fusion performance and enhance it toward the top-level goals of the decision-making resources. This paper discusses the theoretical concept of prioritisation, where the analyst team must keep up to date with a dynamically changing environment spanning domains such as air, sea, land, space and cyberspace. Furthermore, it demonstrates an illustrative example of how various tracking activities are ranked simultaneously into a predetermined order. Specifically, it presents a modelling scheme for a case-study-based scenario in which the real-time system reports different classes of prioritised events, followed by a performance metric for evaluating the prioritisation process in the situational awareness (SWA) domain. The proposed performance metric has been designed and evaluated using an analytical approach. The modelling scheme represents the situational awareness system outputs mathematically, in the form of a list of activities. This allowed the evaluation process to conduct a rigorous analysis of the prioritisation process, despite any constraints related to a domain-specific configuration.
After conducting three levels of assessment over three separate scenarios, the Prioritisation Capability Score (PCS) provided an appropriate scoring scheme for different ranking instances. From a data fusion perspective, the proposed metric adequately assessed real-time system performance, and it is capable of supporting a verification process that directs the operator's attention to any issue concerning the prioritisation capability of the situational awareness domain.
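Since the abstract does not give the PCS formula, a generic pairwise ranking-agreement score can illustrate how a produced priority ordering is scored against a ground-truth order (the activity names and the scoring rule below are hypothetical, not the published metric):

```python
from itertools import combinations

def ranking_agreement(produced, truth):
    """Fraction of activity pairs whose relative order in `produced`
    agrees with the ground-truth priority order `truth`."""
    index = {activity: i for i, activity in enumerate(produced)}
    pairs = list(combinations(truth, 2))  # (higher-priority, lower-priority)
    agree = sum(1 for hi, lo in pairs if index[hi] < index[lo])
    return agree / len(pairs)

truth = ["missile track", "fast aircraft", "ship", "ground vehicle"]
produced = ["missile track", "ship", "fast aircraft", "ground vehicle"]
print(round(ranking_agreement(produced, truth), 3))  # one of six pairs swapped -> 0.833
```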
A first generation dynamic ingress, redistribution and transport model of soil track-in: DIRT.
Johnson, D L
2008-12-01
This work introduces a spatially resolved quantitative model, based on conservation of mass and first order transfer kinetics, for following the transport and redistribution of outdoor soil to, and within, the indoor environment by track-in on footwear. Implementations of the DIRT model examined the influence of room size, rug area and location, shoe size, and mass transfer coefficients for smooth and carpeted floor surfaces using the ratio of mass loading on carpeted to smooth floor surfaces as a performance metric. Results showed that in the limit for large numbers of random steps the dual aspects of deposition to and track-off from the carpets govern this ratio. Using recently obtained experimental measurements, historic transport and distribution parameters, cleaning efficiencies for the different floor surfaces, and indoor dust deposition rates to provide model boundary conditions, DIRT predicts realistic floor surface loadings. The spatio-temporal variability in model predictions agrees with field observations and suggests that floor surface dust loadings are constantly in flux; steady state distributions are hardly, if ever, achieved.
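A minimal sketch of the first-order transfer mechanism, with illustrative (not the published) coefficients and geometry, reproduces the qualitative result that deposition to and track-off from carpets jointly govern the carpet-to-smooth loading ratio used as the performance metric:

```python
import random

def simulate_track_in(n_steps, carpet_cells, n_cells=100,
                      dep_smooth=0.1, dep_carpet=0.4,
                      pickup_smooth=0.05, pickup_carpet=0.02,
                      soil_in=0.01, seed=0):
    """Toy first-order mass-transfer model in the spirit of DIRT: each random
    step deposits a fraction of the shoe load onto the floor cell and tracks
    off a fraction of that cell's load back onto the shoe."""
    rng = random.Random(seed)
    floor = [0.0] * n_cells
    shoe = 0.0
    for _ in range(n_steps):
        shoe += soil_in  # fresh outdoor soil acquired per step
        i = rng.randrange(n_cells)
        dep, pick = (dep_carpet, pickup_carpet) if i in carpet_cells else (dep_smooth, pickup_smooth)
        transfer = dep * shoe - pick * floor[i]  # net mass moved shoe -> floor
        shoe -= transfer
        floor[i] += transfer
    carpet = sum(floor[i] for i in carpet_cells) / len(carpet_cells)
    smooth = sum(floor[i] for i in range(n_cells)
                 if i not in carpet_cells) / (n_cells - len(carpet_cells))
    return carpet / smooth  # the ratio metric from the abstract

ratio = simulate_track_in(20000, carpet_cells=set(range(30)))
print(ratio > 1.0)  # carpets retain more: higher deposition, lower track-off
```

In the large-step limit each cell approaches a balance where deposition equals track-off, so the ratio is set by the two transfer-coefficient pairs, matching the abstract's observation.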
Counter unmanned aerial system testing and evaluation methodology
NASA Astrophysics Data System (ADS)
Kouhestani, C.; Woo, B.; Birch, G.
2017-05-01
Unmanned aerial systems (UAS) are increasing in flight times, ease of use, and payload sizes. Detection, classification, tracking, and neutralization of UAS is a necessary capability for infrastructure and facility protection. We discuss test and evaluation methodology developed at Sandia National Laboratories to establish a consistent, defendable, and unbiased means for evaluating counter unmanned aerial system (CUAS) technologies. The test approach described identifies test strategies, performance metrics, UAS types tested, key variables, and the necessary data analysis to accurately quantify the capabilities of CUAS technologies. The tests conducted, as defined by this approach, will allow for the determination of quantifiable limitations, strengths, and weaknesses in terms of detection, tracking, classification, and neutralization. Communicating the results of this testing in such a manner informs decisions by government sponsors and stakeholders that can be used to guide future investments and inform procurement, deployment, and advancement of such systems into their specific venues.
JPSS-1 VIIRS Pre-Launch Radiometric Performance
NASA Technical Reports Server (NTRS)
Oudrari, Hassan; McIntire, Jeff; Xiong, Xiaoxiong; Butler, James; Efremova, Boryana; Ji, Jack; Lee, Shihyan; Schwarting, Tom
2015-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the first Joint Polar Satellite System (JPSS) completed its sensor-level testing in December 2014. The JPSS-1 (J1) mission is scheduled to launch in December 2016, and will be very similar to the Suomi National Polar-orbiting Partnership (SNPP) mission. The VIIRS instrument was designed to provide measurements of the globe twice daily. It is a wide-swath (3,040 kilometers) cross-track scanning radiometer with spatial resolutions of 370 and 740 meters at nadir for imaging and moderate bands, respectively. It covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands [0.412 microns to 12.01 microns]. VIIRS observations are used to generate 22 environmental data records (EDRs). This paper will briefly describe J1 VIIRS characterization and calibration performance and the methodologies executed during the pre-launch testing phases by the independent government team to generate the at-launch baseline radiometric performance and the metrics needed to populate the sensor data record (SDR) Look-Up Tables (LUTs). This paper will also provide an assessment of the sensor pre-launch radiometric performance, such as the sensor signal-to-noise ratios (SNRs), dynamic range, reflective and emissive band calibration performance, polarization sensitivity, band spectral performance, response-versus-scan (RVS), and near-field and stray light responses. A set of performance metrics generated during the pre-launch testing program will be compared to the SNPP VIIRS pre-launch performance.
Hoyer, Erik H; Padula, William V; Brotman, Daniel J; Reid, Natalie; Leung, Curtis; Lepley, Diane; Deutschendorf, Amy
2018-01-01
Hospital performance on the 30-day hospital-wide readmission (HWR) metric as calculated by the Centers for Medicare and Medicaid Services (CMS) is currently reported as a quality measure. Focusing on patient-level factors alone may provide an incomplete picture of readmission risk at the hospital level, leaving variations in hospital readmission rates unexplained. To evaluate and quantify hospital-level characteristics that track with hospital performance on the current HWR metric. Retrospective cohort study. A total of 4785 US hospitals. We linked publicly available data on individual hospitals published by CMS on patient-level adjusted 30-day HWR rates from July 1, 2011, through June 30, 2014, to the 2014 American Hospital Association annual survey. Primary outcome was performance in the worst CMS-calculated HWR quartile. Primary hospital-level exposure variables were defined as: size (total number of beds), safety net status (top quartile of disproportionate share), academic status [member of the Association of American Medical Colleges (AAMC)], National Cancer Institute Comprehensive Cancer Center (NCI-CCC) status, and hospital services offered (e.g., transplant, hospice, emergency department). Multilevel regression was used to evaluate the association between 30-day HWR and the hospital-level factors. Hospital-level characteristics significantly associated with performing in the worst CMS-calculated HWR quartile included: safety net status [adjusted odds ratio (aOR) 1.99, 95% confidence interval (95% CI) 1.61-2.45, p < 0.001], large size (> 400 beds, aOR 1.42, 95% CI 1.07-1.90, p = 0.016), AAMC alone status (aOR 1.95, 95% CI 1.35-2.83, p < 0.001), and AAMC plus NCI-CCC status (aOR 5.16, 95% CI 2.58-10.31, p < 0.001).
Hospitals with more critical care beds (aOR 1.26, 95% CI 1.02-1.56, p = 0.033), those with transplant services (aOR 2.80, 95% CI 1.48-5.31, p = 0.001), and those with emergency room services (aOR 3.37, 95% CI 1.12-10.15, p = 0.031) demonstrated significantly worse HWR performance. Hospice service (aOR 0.64, 95% CI 0.50-0.82, p < 0.001) and having a higher proportion of total discharges being surgical cases (aOR 0.62, 95% CI 0.50-0.76, p < 0.001) were associated with better performance. The study approach was not intended to be an alternate readmission metric to compete with the existing CMS metric, which would require a re-examination of patient-level data combined with hospital-level data. A number of hospital-level characteristics (such as academic tertiary care center status) were significantly associated with worse performance on the CMS-calculated HWR metric, which may have important health policy implications. Until the reasons for readmission variability can be addressed, reporting the current HWR metric as an indicator of hospital quality should be reevaluated.
Launch vehicle tracking enhancement through Global Positioning System Metric Tracking
NASA Astrophysics Data System (ADS)
Moore, T. C.; Li, Hanchu; Gray, T.; Doran, A.
United Launch Alliance (ULA) initiated operational flights of both the Atlas V and Delta IV launch vehicle families in 2002. The Atlas V and Delta IV launch vehicles were developed jointly with the US Air Force (USAF) as part of the Evolved Expendable Launch Vehicle (EELV) program. Both Launch Vehicle (LV) families have provided 100% mission success since their respective inaugural launches and demonstrated launch capability from both Vandenberg Air Force Base (VAFB) on the Western Test Range and Cape Canaveral Air Force Station (CCAFS) on the Eastern Test Range. However, the current EELV fleet communications, tracking, and control architecture and technology, which date back to the origins of the space launch business, require support by a large and high-cost ground footprint. The USAF has embarked on an initiative known as the Future Flight Safety System (FFSS) that will significantly reduce Test Range Operations and Maintenance (O&M) cost by closing facilities and decommissioning ground assets. In support of the FFSS, a Global Positioning System Metric Tracking (GPS MT) System based on the Global Positioning System (GPS) satellite constellation has been developed for EELV which will allow both Ranges to divest some of their radar assets. The Air Force, ULA and Space Vector have flown the first 2 Atlas Certification vehicles demonstrating the successful operation of the GPS MT System. The first Atlas V certification flight was completed in February 2012 from CCAFS, the second Atlas V certification flight from VAFB was completed in September 2012, and the third certification flight, on a Delta IV, was completed in October 2012 from CCAFS. The GPS MT System will provide precise LV position, velocity and timing information that can replace ground radar tracking resource functionality.
The GPS MT system will provide an independent position/velocity S-Band telemetry downlink to support the current man-in-the-loop ground-based commanded destruct of an anomalous flight. The system utilizes a 50-channel digital receiver capable of navigating in high-dynamic environments and at high altitudes, fed by antennas mounted diametrically opposed on the second stage airframe skin. To enhance cost effectiveness, the GPS MT System design implemented existing commercial parts and common environmental and interface requirements for both EELVs. The EELV GPS MT System design is complete, successfully qualified, and has demonstrated that the system performs as simulated. This paper summarizes the current development status, system cost comparison, and performance capabilities of the EELV GPS MT System.
Digital Image Correlation for Performance Monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palaviccini, Miguel; Turner, Daniel Z.; Herzberg, Michael
2016-02-01
Evaluating the health of a mechanism requires more than just a binary evaluation of whether an operation was completed. It requires analyzing more comprehensive, full-field data. Health monitoring is a process of nondestructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video (HSV) and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed to track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, A; Bednarz, B
Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching, and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS, with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option.
This work is partially funded by NIH grant R01CA190298.
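The NCC-based block matching described above can be sketched as follows (a simplified single-block version on synthetic data; the actual algorithm matches many boundary points per segmented feature):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_block(frame, template, center, search=5):
    """Find the top-left position in `frame` whose block best matches
    `template`, searching a (2*search+1)^2 neighbourhood around `center`."""
    h, w = template.shape
    best, best_pos = -2.0, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = center[0] + dy, center[1] + dx
            if 0 <= y and y + h <= frame.shape[0] and 0 <= x and x + w <= frame.shape[1]:
                score = ncc(frame[y:y + h, x:x + w], template)
                if score > best:
                    best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic test: a bright blob shifted by (2, 3) between frames.
ref = np.zeros((40, 40)); ref[10:15, 10:15] = 1.0
cur = np.zeros((40, 40)); cur[12:17, 13:18] = 1.0
pos, score = match_block(cur, ref[9:16, 9:16], center=(9, 9))
print(pos, round(score, 2))  # -> (11, 12) 1.0
```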
Earth Observation for monitoring phenology for european land use and ecosystems over 1998-2011
NASA Astrophysics Data System (ADS)
Ceccherini, Guido; Gobron, Nadine
2013-04-01
Long-term measurements of plant phenology have been used to track vegetation responses to climate change but are often limited to particular species and locations and may not represent synoptic patterns. Given the limitations of working directly with in-situ data, many researchers have instead used available satellite remote sensing. Remote sensing extends the possible spatial coverage and temporal range of phenological assessments of environmental change due to the greater availability of observations. Variations and trends of vegetation dynamics are important because they alter the surface carbon, water and energy balance. For example, the net ecosystem CO2 exchange of vegetation is strongly linked to the length of the growing season: extensions and reductions in the length of the growing season modify carbon uptake and the amount of CO2 in the atmosphere. Advances and delays in the start of the growing season also affect the surface energy balance and consequently transpiration. The Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) is a key climate variable identified by the Global Terrestrial Observing System (GTOS) that can be monitored from space. This dimensionless variable - varying between 0 and 1 - is directly linked to the photosynthetic activity of vegetation and can therefore be used to monitor changes in phenology. In this study, we identify the spatio-temporal patterns of vegetation dynamics using a long-term remotely sensed FAPAR dataset over Europe. Our aim is to provide a quantitative analysis of vegetation dynamics relevant to climate studies in Europe. As part of this analysis, six vegetation phenological metrics have been defined and are computed routinely over Europe. Over time, such metrics can track simple, yet critical, impacts of climate change on ecosystems. Validation has been performed through a direct comparison against ground-based data over ecological sites.
Subsequently, using the spatio/temporal variability of this suite of metrics, we classify areas with similar vegetation dynamics. This permits assessment of variations and trends of vegetation dynamics over Europe. Statistical tests to assess the significance of temporal changes are used to evaluate trends in the metrics derived from the recorded time series of the FAPAR.
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
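A sketch of the covariance region descriptor and an SPD-manifold distance (using the log-Euclidean approximation as a stand-in for the Riemannian metric; the per-pixel feature set below is one common choice, not necessarily the authors'):

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance of per-pixel features (x, y, intensity, |dI/dx|, |dI/dy|)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats) + 1e-6 * np.eye(5)  # regularize to keep it positive definite

def logm_spd(m):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def log_euclidean_dist(a, b):
    """Log-Euclidean approximation to the Riemannian distance between SPD matrices."""
    return float(np.linalg.norm(logm_spd(a) - logm_spd(b), "fro"))

rng = np.random.default_rng(0)
p = rng.random((16, 16))
d_self = log_euclidean_dist(covariance_descriptor(p), covariance_descriptor(p))
d_other = log_euclidean_dist(covariance_descriptor(p), covariance_descriptor(p.T))
print(d_self < 1e-9, d_other > 0)  # zero to itself, positive to a different patch
```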
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions by requiring additional steps taken to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics have achieved comparable performance to previous work in augmented virtuality techniques, and considerable improvement over standard of care ultrasound guidance.
Gholami, Mohammad; Brennan, Robert W
2016-01-06
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency. PMID:26751447
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aho, Jacob; Pao, Lucy Y.; Fleming, Paul
2014-11-13
As wind energy becomes a larger portion of the world's energy portfolio, there has been increased interest in wind turbines controlling their active power output to provide ancillary services that support grid reliability. One of these ancillary services is the provision of frequency regulation, also referred to as secondary frequency control or automatic generation control (AGC), which is often procured through markets that have recently adopted performance-based compensation. A wind turbine with a control system developed to provide active power ancillary services can be used to provide frequency regulation services. Simulations have been performed to determine the AGC tracking performance at various power schedule set-points, participation levels, and wind conditions. The performance metrics used in this study are based on those used by several system operators in the US. Another metric that is analyzed is the damage equivalent loads (DELs) on turbine structural components, though the impacts on the turbine electrical components are not considered. The results of these single-turbine simulations show that high performance scores can be achieved when sufficient wind resources are available. The capability of a wind turbine to rapidly and accurately follow power commands allows for high performance even when tracking rapidly changing AGC signals. As the turbine de-rates to meet decreased power schedule set-points there is a reduction in the DELs, and participation in frequency regulation has a negligible impact on these loads.
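The operator-style tracking-performance scoring described above can be sketched as one minus a normalized RMS tracking error. This is a minimal illustration only: real operator formulas (e.g., PJM's composite score) also weight correlation and response delay, and the function name and normalization below are assumptions.

```python
import numpy as np

def agc_tracking_score(command_kw, response_kw, rated_kw):
    # Score in [0, 1]: 1 minus RMS tracking error normalized by rated power.
    command = np.asarray(command_kw, dtype=float)
    response = np.asarray(response_kw, dtype=float)
    rms_err = np.sqrt(np.mean((command - response) ** 2))
    return max(0.0, 1.0 - rms_err / rated_kw)

# A turbine that follows the AGC signal exactly scores 1.0.
cmd = [1500.0, 1400.0, 1450.0, 1300.0]
print(agc_tracking_score(cmd, cmd, rated_kw=5000.0))
```

Any lag or offset in the response lowers the score toward 0, which mirrors how performance-based compensation penalizes poor signal following.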
ERIC Educational Resources Information Center
Shapiro, Doug; Dundar, Afet; Huie, Faye; Wakhungu, Phoebe Khasiala; Yuan, Xin; Nathan, Angel; Hwang, Youngsik
2017-01-01
This report is an update of the January 2016 Transfer Tracking report (see ED563499). It is the first in an annual series from the National Student Clearinghouse Research Center that will investigate postsecondary student transfer outcomes. The goal is to provide institutions and states with a set of specific, up-to-date metrics with which to…
2013-01-01
Background: Developing effective methods for measuring the health impact of social franchising programs is vital for demonstrating the value of this innovative service delivery model, particularly given its rapid expansion worldwide. Currently, these programs define success through patient volume and number of outlets, widely acknowledged as poor reflections of true program impact. An existing metric, the disability-adjusted life years averted (DALYs averted), offers promise as a measure of projected impact. Country-specific and service-specific, DALYs averted enables impact comparisons between programs operating in different contexts. This study explores the use of DALYs averted as a social franchise performance metric. Methods: Using data collected by the Social Franchising Compendia in 2010 and 2011, we compared franchise performance, analyzing by region and program area. Coefficients produced by Population Services International converted each franchise's service delivery data into DALYs averted. For the 32 networks with two years of data corresponding to these metrics, a paired t-test compared all metrics. Finally, to test data reporting quality, we compared services provided to patient volume. Results: Social franchising programs grew considerably from 2010 to 2011, measured by services provided (215%), patient volume (31%), and impact (couple-years of protection (CYPs): 86% and DALYs averted: 519%), but not by the total number of outlets. Non-family planning services increased by 857%, with diversification centered in Asia and Africa. However, paired t-test comparisons showed no significant increase within the networks, whether categorized as family planning or non-family planning. The ratio of services provided to patient visits yielded considerable range, with one network reporting a ratio of 16,000:1. Conclusion: In theory, the DALYs averted metric is a more robust and comprehensive metric for social franchising than current program measures.
As social franchising spreads beyond family planning, having a metric that captures the impact of a range of diverse services and allows comparisons will be increasingly important. However, standardizing reporting will be essential to make such comparisons useful. While not widespread, errors in self-reported data appear to have included social marketing distribution data in social franchising reporting, requiring clearer data collection and reporting guidelines. Differences noted above must be interpreted cautiously as a result. PMID:23902679
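The conversion from service-delivery counts to DALYs averted reduces to multiplying each service count by a country- and service-specific coefficient and summing. A minimal sketch follows; the service names and coefficient values are invented for illustration, as the real coefficients come from Population Services International's models.

```python
# Hypothetical per-service DALYs-averted coefficients (illustrative only).
COEFFS = {"iud_insertion": 0.35, "injectable_dose": 0.02, "pill_cycle": 0.004}

def dalys_averted(service_counts, coeffs=COEFFS):
    # Projected impact = sum over services of (count x per-unit coefficient).
    return sum(coeffs[svc] * n for svc, n in service_counts.items())

print(dalys_averted({"iud_insertion": 1000, "injectable_dose": 5000}))
```

Because the coefficients differ by country and service, the same raw patient volume can map to very different impact, which is exactly why the metric supports cross-program comparison better than volume alone.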
Montagu, Dominic; Ngamkitpaiboon, Lek; Duvall, Susan; Ratcliffe, Amy
2013-01-01
NASA Technical Reports Server (NTRS)
2008-01-01
As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.
NASA Astrophysics Data System (ADS)
Worley, Marilyn E.; Ren, Ping; Sandu, Corina; Hong, Dennis
2007-04-01
This study focuses on developing an assessment tool for performance prediction of lightweight autonomous vehicles with varying locomotion platforms on coastal terrain; the work involves three segments. A table based on the House of Quality shows the relationships (high, low, or adverse) between mission profile requirements and the general performance measures and geometries of vehicles under consideration for use. This table, when combined with known values for vehicle metrics, provides information for an index formula used to quantitatively compare the mobility of a user-chosen set of vehicles, regardless of their methods of locomotion. To study novel forms of locomotion, and to compare their mobility and performance with more traditional wheeled and tracked vehicles, several new autonomous vehicles (bipedal, self-excited dynamic tripedal, and active spoke-wheel) are currently under development. While the terramechanics properties of wheeled and tracked vehicles, such as the contact patch pressure distribution, have been understood and models have been developed for heavy vehicles, the feasibility of extrapolating them to the analysis of light vehicles is still under analysis. A wheeled all-terrain vehicle and a lightweight autonomous tracked vehicle have been tested for effects of sand gradation, vehicle speed, and vehicle payload on measures of pressure and sinkage in the contact patch, and preliminary analysis is presented on the sinkage of the wheeled all-terrain vehicle. These three segments (development of the comparison matrix and indexing function, modeling and development of novel forms of locomotion, and physical experimentation with lightweight tracked and wheeled vehicles on varying terrain types for terramechanic model validation) combine to give an overall picture of mobility that spans different forms of locomotion.
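An indexing function of the kind the first segment describes, combining House-of-Quality relationships with known vehicle metrics into a single comparable score, can be sketched as a weighted sum over normalized metrics. The metric names, scores, and weights below are hypothetical; the actual relationships would come from the paper's table.

```python
def mobility_index(metrics, weights):
    # Weighted sum of normalized (0-1) vehicle metrics; higher = more mobile.
    assert metrics.keys() == weights.keys()
    return sum(weights[k] * metrics[k] for k in metrics)

# Hypothetical normalized scores for a tracked and a wheeled platform.
tracked = {"ground_pressure": 0.8, "max_speed": 0.5, "obstacle_climb": 0.9}
wheeled = {"ground_pressure": 0.4, "max_speed": 0.9, "obstacle_climb": 0.6}
weights = {"ground_pressure": 0.5, "max_speed": 0.2, "obstacle_climb": 0.3}
print(mobility_index(tracked, weights), mobility_index(wheeled, weights))
```

Because every platform is scored on the same normalized scale, the index allows a tripedal or spoke-wheel design to be compared directly against wheeled and tracked baselines.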
Hawkins, Keith A; Jennings, Danna; Vincent, Andrea S; Gilliland, Kirby; West, Adrienne; Marek, Kenneth
2012-08-01
The Automated Neuropsychological Assessment Metrics battery-4 for PD (ANAM4-PD) offers the promise of a computerized approach to cognitive assessment. To assess its utility, the ANAM4-PD was administered to 72 PD patients and 24 controls along with a traditional battery. Reliability was assessed by retesting 26 patients. The cognitive efficiency score (CES; a global score) exhibited high reliability (r = 0.86). Constituent variables exhibited lower reliability. The CES correlated strongly with the traditional battery global score, but displayed weaker relationships to UPDRS scores than the traditional score. Multivariate analysis of variance revealed a significant difference between the patient and control groups in ANAM4-PD performance, with three ANAM4-PD tests (math, tower, and pursuit tracking) displaying sizeable differences. In discriminant analyses, these variables were as effective as the total ANAM4-PD in classifying cases designated as impaired based on traditional variables. Principal components analyses uncovered fewer factors in the ANAM4-PD relative to the traditional battery. ANAM4-PD variables correlated at higher levels with traditional motor and processing speed variables than with untimed executive, intellectual, or memory variables. The ANAM4-PD displays high global reliability, but variable subtest reliability. The battery assesses a narrower range of cognitive functions than traditional tests, and discriminates between patients and controls less effectively. Three ANAM4-PD tests (pursuit tracking, math, and tower) performed as well as the total ANAM4-PD in classifying patients as cognitively impaired. These findings could guide the refinement of the ANAM4-PD as an efficient method of screening for mild to moderate cognitive deficits in PD patients. Copyright © 2012 Elsevier Ltd. All rights reserved.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT (Direct Linear Transformation) algorithm. DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e., it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized scaled composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.
Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts in the direction of automated ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information with respect to an integration feature across all the overlapping images, by using a combination of principal component analysis (PCA) and discrete wavelet transform (DWT). For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best results for the segment-ability of cardiac ultrasound images and better performance on all metrics.
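A minimal sketch of the PCA-plus-DWT fusion idea follows: a single-level Haar transform, PCA-derived weights on the approximation band, and a max-abs rule on the detail bands. This is one common way such hybrids are built, not necessarily the paper's exact fusion rules.

```python
import numpy as np

def haar2_fwd(x):
    # Single-level 2-D Haar transform via 2x2 block sums/differences.
    p00, p01 = x[0::2, 0::2], x[0::2, 1::2]
    p10, p11 = x[1::2, 0::2], x[1::2, 1::2]
    ll = (p00 + p01 + p10 + p11) / 4.0
    lh = (p00 + p01 - p10 - p11) / 4.0
    hl = (p00 - p01 + p10 - p11) / 4.0
    hh = (p00 - p01 - p10 + p11) / 4.0
    return ll, lh, hl, hh

def haar2_inv(ll, lh, hl, hh):
    # Exact inverse of haar2_fwd.
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def pca_weights(a, b):
    # Weights from the leading eigenvector of the 2x2 image covariance.
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse_images(a, b):
    # PCA weights on the approximation band, max-abs rule on detail bands.
    ca, cb = haar2_fwd(a), haar2_fwd(b)
    w_a, w_b = pca_weights(a, b)
    ll = w_a * ca[0] + w_b * cb[0]
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return haar2_inv(ll, *details)

rng = np.random.default_rng(0)
img_a, img_b = rng.random((8, 8)), rng.random((8, 8))
fused = fuse_images(img_a, img_b)
print(fused.shape)
```

The PCA weighting favors the image carrying more variance (contrast) in the low-frequency band, while the max-abs rule keeps the stronger edge response from either view, which is the intuition behind improving both contrast and edge segment-ability.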
Nindl, Bradley C; Jaffin, Dianna P; Dretsch, Michael N; Cheuvront, Samuel N; Wesensten, Nancy J; Kent, Michael L; Grunberg, Neil E; Pierce, Joseph R; Barry, Erin S; Scott, Jonathan M; Young, Andrew J; OʼConnor, Francis G; Deuster, Patricia A
2015-11-01
Human performance optimization (HPO) is defined as "the process of applying knowledge, skills and emerging technologies to improve and preserve the capabilities of military members, and organizations to execute essential tasks." The lack of consensus on operationally relevant and standardized metrics that meet joint military requirements has been identified as the single most important gap for research and application of HPO. In 2013, the Consortium for Health and Military Performance hosted a meeting to develop a toolkit of standardized HPO metrics for use in military and civilian research, and potentially for field applications by commanders, units, and organizations. Performance was considered from a holistic perspective as being influenced by various behaviors and barriers. To accomplish the goal of developing a standardized toolkit, key metrics were identified and evaluated across a spectrum of domains that contribute to HPO: physical performance, nutritional status, psychological status, cognitive performance, environmental challenges, sleep, and pain. These domains were chosen based on relevant data with regard to performance enhancers and degraders. The specific objectives at this meeting were to (a) identify and evaluate current metrics for assessing human performance within selected domains; (b) prioritize metrics within each domain to establish a human performance assessment toolkit; and (c) identify scientific gaps and the research needed to more effectively assess human performance across domains. This article provides a summary of 150 total HPO metrics across multiple domains that can be used as a starting point, the beginning of an HPO toolkit: physical fitness (29 metrics), nutrition (24 metrics), psychological status (36 metrics), cognitive performance (35 metrics), environment (12 metrics), sleep (9 metrics), and pain (5 metrics).
These metrics can be particularly valuable as the military emphasizes a renewed interest in Human Dimension efforts, and leverages science, resources, programs, and policies to optimize the performance capacities of all Service members.
The performance measurement manifesto.
Eccles, R G
1991-01-01
The leading indicators of business performance cannot be found in financial data alone. Quality, customer satisfaction, innovation, market share--metrics like these often reflect a company's economic condition and growth prospects better than its reported earnings do. Depending on an accounting department to reveal a company's future will leave it hopelessly mired in the past. More and more managers are changing their company's performance measurement systems to track nonfinancial measures and reinforce new competitive strategies. Five activities are essential: developing an information architecture; putting the technology in place to support this architecture; aligning bonuses and other incentives with the new system; drawing on outside resources; and designing an internal process to ensure the other four activities occur. New technologies and more sophisticated databases have made the change to nonfinancial performance measurement systems possible and economically feasible. Industry and trade associations, consulting firms, and public accounting firms that already have well-developed methods for assessing market share and other performance metrics can add to the revolution's momentum--as well as profit from the business opportunities it presents. Every company will have its own key measures and distinctive process for implementing the change. But making it happen will always require careful preparation, perseverance, and the conviction of the CEO that it must be carried through. When one leading company can demonstrate the long-term advantage of its superior performance on quality or innovation or any other nonfinancial measure, it will change the rules for all its rivals forever.
Key Metrics and Goals for NASA's Advanced Air Transportation Technologies Program
NASA Technical Reports Server (NTRS)
Kaplan, Bruce; Lee, David
1998-01-01
NASA's Advanced Air Transportation Technologies (AATT) program is developing a set of decision support tools to aid air traffic service providers, pilots, and airline operations centers in improving operations of the National Airspace System (NAS). NASA needs a set of unifying metrics to tie these efforts together, which it can use to track the progress of the AATT program and communicate program objectives and status within NASA and to stakeholders in the NAS. This report documents the results of our efforts and the four unifying metrics we recommend for the AATT program: airport peak capacity, en route sector capacity, block time and fuel, and free-flight enabling.
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) was proposed in this paper. In the metric, objective video quality was evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric achieves a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure quality over all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between subjective mean opinion score (MOS) and objective scores.
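The pooling step, scoring distortion only over the extracted saliency maps rather than the whole frame, can be sketched as saliency-weighted averaging of a per-pixel distortion map. The names and toy arrays below are illustrative; the paper's actual blur and blocking measures are not reproduced here.

```python
import numpy as np

def saliency_weighted_score(distortion_map, saliency_map):
    # Pool per-pixel distortion with ROI saliency weights so that
    # salient regions dominate the objective quality score.
    weights = saliency_map / saliency_map.sum()
    return float((distortion_map * weights).sum())

distortion = np.array([[0.1, 0.9], [0.2, 0.2]])
saliency = np.array([[1.0, 4.0], [1.0, 1.0]])  # ROI over the worst block
print(saliency_weighted_score(distortion, saliency))
```

With uniform saliency this reduces to the plain frame mean; concentrating the saliency on heavily distorted regions raises the pooled score, which is how ROI weighting improves correlation with subjective MOS.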
Algebraic Approach for Recovering Topology in Distributed Camera Networks
2009-01-14
not valid for camera networks. Spatial sampling of the plenoptic function [2] from a network of cameras is rarely i.i.d. (independent and identically...coverage can be used to track and compare paths in a wireless camera network without any metric calibration information. In particular, these results can...edition edition, 2000. [14] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In
2007-09-01
include a machine shop, a welding shop, carpenter and wood shop, metal heat treatment shop, bead blast shop, paint shop, non-destructive inspection...annually. In 2005, 227 motors were fired. Sled operation can involve activities such as carrying explosives, testing ejection seats, shooting lasers...Cinetheodolite-type metric cameras and/or laser tracking equipment are used for aircraft flight trajectories exceeding 500 feet above ground level
NASA Technical Reports Server (NTRS)
Burke, Eric R.
2009-01-01
Comparison metrics can be established to reliably and repeatably assess the health of the joggle region of the Orbiter wing leading edge reinforced carbon-carbon (RCC) panels. Using these metrics can greatly reduce the man-hours needed to perform wing leading edge scanning for service-induced damage. These time savings have allowed more thorough inspections to be performed in the necessary areas without affecting the orbiter flow schedule. Using specialized local inspections allows for a larger margin of safety by enabling more complete characterization of panel defects. The presence of the t-seal during thermographic inspection can have adverse masking effects on the ability to properly characterize defects that exist in the joggle region of the RCC panels. This masking effect dictates that the final specialized inspection should be performed with the t-seal removed. Removal of the t-seal and use of the higher-magnification optics has led to the most effective and repeatable inspection method for characterizing and tracking defects in the wing leading edge. Through this study, some inadequacies in the main health monitoring system for the orbiter wing leading edge have been identified and corrected. The use of metrics and local specialized inspection has led to greatly increased reliability and repeatability in inspection of the shuttle wing leading edge.
International Space Station Payload Operations Integration
NASA Technical Reports Server (NTRS)
Fanske, Elizabeth Anne
2011-01-01
The Payload Operations Integrator (POINT) plays an integral part in the Certification of Flight Readiness process for the Mission Operations Laboratory and the Payload Operations Integration Function that supports International Space Station payload operations. The POINTs operate in support of the POIF Payload Operations Manager to bring together and integrate the Certification of Flight Readiness inputs from various MOL teams by maintaining an open work tracking log. The POINTs create monthly metrics for current and future payloads that the Payload Operations Integration Function supports. With these tools, the POINTs assemble the Certification of Flight Readiness package before a given flight, stating that the Mission Operations Laboratory is prepared to support it. I have prepared metrics for Increment 29/30, maintained the Open Work Tracking Logs for Flights ULF6 (STS-134) and ULF7 (STS-135), and submitted the Mission Operations Laboratory Certification of Flight Readiness package for Flight 44P to the Mission Operations Directorate (MOD/OZ).
NASA Astrophysics Data System (ADS)
Hwang, Darryl H.; Ma, Kevin; Yepes, Fernando; Nadamuni, Mridula; Nayyar, Megha; Liu, Brent; Duddalwar, Vinay; Lepore, Natasha
2015-12-01
A conventional radiology report primarily consists of a large amount of unstructured text, and lacks clear, concise, consistent, and content-rich information. Hence, an area of unmet clinical need consists of developing better ways to communicate radiology findings and information specific to each patient. Here, we design a new workflow and reporting system that combines and integrates advances in engineering technology with those from the medical sciences, the Multidimensional Interactive Radiology Report and Analysis (MIRRA). Until recently, clinical standards have primarily relied on 2D images for the purpose of measurement, but with the advent of 3D processing, many of the manually measured metrics can be automated, leading to better reproducibility and less subjective measurement placement. Hence, we make use of this newly available 3D processing in our workflow. Our pipeline is used here to standardize the labeling, tracking, and quantifying of metrics for renal masses.
Ogunmoroti, Oluseye; Younus, Adnan; Rouseff, Maribeth; Spatz, Erica S; Das, Sankalp; Parris, Don; Aneni, Ehimen; Holzwarth, Leah; Guzman, Henry; Tran, Thinh; Roberson, Lara; Ali, Shozab S; Agatston, Arthur; Maziak, Wasim; Feldman, Theodore; Veledar, Emir; Nasir, Khurram
2015-07-01
Healthcare organizations and their employees are critical role models for healthy living in their communities. The American Heart Association (AHA) 2020 impact goal provides a national framework that can be used to track the success of employee wellness programs with a focus on improving cardiovascular (CV) health. This study aimed to assess the CV health of the employees of Baptist Health South Florida (BHSF), a large nonprofit healthcare organization. The AHA's 7 CV health metrics (diet, physical activity, smoking, body mass index, blood pressure, total cholesterol, and blood glucose), categorized as ideal, intermediate, or poor, were estimated among employees of BHSF participating voluntarily in an annual health risk assessment (HRA) and wellness fair. Age and gender differences were analyzed using the χ² test. The sample consisted of 9364 employees who participated in the 2014 annual HRA and wellness fair (mean age [standard deviation], 43 [12] years; 74% women). Sixty (1%) individuals met the AHA's definition of ideal CV health. Women were more likely than men to meet the ideal criteria for more than 5 CV health metrics. The proportion of participants meeting the ideal criteria for more than 5 CV health metrics decreased with age. A combination of HRAs and wellness examinations can provide useful insights into the cardiovascular health status of an employee population. Future tracking of the CV health metrics will provide critical feedback on the impact of system-wide wellness efforts as well as identify proactive programs to assist in making substantial progress toward the AHA 2020 impact goal. © 2015 Wiley Periodicals, Inc.
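Categorizing each metric as ideal, intermediate, or poor follows simple threshold rules. A simplified sketch for three of the biometric metrics is shown below; the thresholds follow the commonly published AHA cut-points for untreated adults, but treatment status and the diet/activity/smoking scoring are omitted, so this is illustrative rather than the study's exact scoring.

```python
def classify_bmi(bmi):
    # AHA cut-points: ideal < 25, intermediate 25-29.9, poor >= 30 kg/m^2.
    return "ideal" if bmi < 25 else ("intermediate" if bmi < 30 else "poor")

def classify_bp(systolic, diastolic):
    # Ideal < 120/80; poor >= 140/90; intermediate in between (untreated).
    if systolic < 120 and diastolic < 80:
        return "ideal"
    if systolic < 140 and diastolic < 90:
        return "intermediate"
    return "poor"

def classify_glucose(fasting_mg_dl):
    # Ideal < 100, intermediate 100-125, poor >= 126 mg/dL (untreated).
    return "ideal" if fasting_mg_dl < 100 else (
        "intermediate" if fasting_mg_dl < 126 else "poor")

def count_ideal(bmi, sbp, dbp, glucose):
    # Tally how many of these three metrics reach the ideal category.
    cats = [classify_bmi(bmi), classify_bp(sbp, dbp), classify_glucose(glucose)]
    return cats.count("ideal")

print(count_ideal(bmi=23.0, sbp=118, dbp=76, glucose=104))  # 2 of 3 ideal
```

Counting ideal categories per employee is what produces summary statements like "more than 5 of 7 metrics ideal" in the study.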
Berardo, Mattia; Lo Presti, Letizia
2016-07-02
In this work, a novel signal processing method is proposed to assist the Receiver Autonomous Integrity Monitoring (RAIM) module used in a receiver of Global Navigation Satellite Systems (GNSS) to improve the integrity of the estimated position. The proposed technique represents an evolution of the Multipath Distance Detector (MPDD), thanks to the introduction of a Signal Quality Index (SQI), which is both a metric able to evaluate the goodness of the signal, and a parameter used to improve the performance of the RAIM modules. Simulation results show the effectiveness of the proposed method.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean
2015-02-01
Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <2 hours to moderate-extensive experience with >20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. 
This work will help identify the learning curve and specific movements that translate to expert robotic navigation. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
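Among the motion metrics above is spectral arc length, a smoothness measure computed from the frequency spectrum of the catheter tip's speed profile (more negative means less smooth). A sketch following the commonly used SPARC-style formulation; the cutoff frequency and zero-padding level are assumed defaults, not values from the study:

```python
import numpy as np

def spectral_arc_length(speed, fs, fc=10.0, pad_level=4):
    """Arc length of the max-normalized magnitude spectrum of a speed
    profile, up to cutoff frequency fc (Hz). Returns a negative number;
    values closer to zero indicate smoother movement."""
    n = 2 ** (int(np.ceil(np.log2(len(speed)))) + pad_level)  # zero-padded FFT size
    freq = np.arange(n) * fs / n
    mag = np.abs(np.fft.fft(speed, n))
    sel = freq <= fc
    f_norm = freq[sel] / fc               # frequency axis scaled to [0, 1]
    m_norm = mag[sel] / mag[sel].max()    # spectrum scaled to [0, 1]
    return -np.sum(np.sqrt(np.diff(f_norm) ** 2 + np.diff(m_norm) ** 2))
```

A jerky movement adds high-frequency content to the spectrum, lengthening the arc and driving the metric more negative.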
Comparing NEO Search Telescopes
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2016-04-01
Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments: Sentinel, NEOCam, and a Cubesat constellation. The performance is measured against two different NEO distributions: the Bottke et al. distribution of general NEOs, and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison yield simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render those estimates inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
A reference standard-based quality assurance program for radiology.
Liu, Patrick T; Johnson, C Daniel; Miranda, Rafael; Patel, Maitray D; Phillips, Carrie J
2010-01-01
The authors have developed a comprehensive radiology quality assurance (QA) program that evaluates radiology interpretations and procedures by comparing them with reference standards. Performance metrics are calculated and then compared with benchmarks or goals on the basis of published multicenter data and meta-analyses. Additional workload for physicians is kept to a minimum by having trained allied health staff members perform the comparisons of radiology reports with the reference standards. The performance metrics tracked by the QA program include the accuracy of CT colonography for detecting polyps, the false-negative rate for mammographic detection of breast cancer, the accuracy of CT angiography detection of coronary artery stenosis, the accuracy of meniscal tear detection on MRI, the accuracy of carotid artery stenosis detection on MR angiography, the accuracy of parathyroid adenoma detection by parathyroid scintigraphy, the success rate for obtaining cortical tissue on ultrasound-guided core biopsies of pelvic renal transplants, and the technical success rate for peripheral arterial angioplasty procedures. In contrast with peer-review programs, this reference standard-based QA program minimizes the possibilities of reviewer bias and erroneous second reviewer interpretations. The more objective assessment of performance afforded by the QA program will provide data that can easily be used for education and management conferences, research projects, and multicenter evaluations. Additionally, such performance data could be used by radiology departments to demonstrate their value over nonradiology competitors to referring clinicians, hospitals, patients, and third-party payers. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
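Several of the tracked metrics (accuracy, false-negative rate, procedural success rates) reduce to simple ratios over counts from the reference-standard comparisons. A minimal sketch of such a computation; the function name and example counts are illustrative:

```python
def qa_metrics(tp, fp, tn, fn):
    """Summary metrics from counts of true/false positives and negatives
    obtained by comparing interpretations against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_negative_rate": fn / (tp + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

These per-modality ratios are what would be compared against published multicenter benchmarks.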
van Leeuwen, Willem J. D.
2008-01-01
This study examines how satellite-based time-series vegetation greenness data and phenological measurements can be used to monitor and quantify vegetation recovery after wildfire disturbances, and how pre-fire fuel-reduction restoration treatments affect fire severity and vegetation recovery trajectories. Pairs of wildfire-affected sites and nearby unburned reference sites were chosen to measure post-disturbance recovery in relation to climate variation. All site pairs were located in forested uplands in Arizona and were restricted to the area of the Rodeo-Chediski fire that occurred in 2002. Fuel reduction treatments were performed in 1999 and 2001. The inter-annual and seasonal vegetation dynamics before, during, and after wildfire events can be monitored using a time series of biweekly composited MODIS NDVI (Moderate Resolution Imaging Spectroradiometer Normalized Difference Vegetation Index) data. Time-series analysis methods included difference metrics, smoothing filters, and fitting functions that were applied to extract seasonal and inter-annual change and phenological metrics from the NDVI time series from 2000 to 2007. Pre- and post-fire Landsat data were used to compute the Normalized Burn Ratio (NBR) and examine burn severity at the selected sites. The phenological metrics (pheno-metrics) included the timing and greenness (i.e., NDVI) for the start, peak, and end of the growing season, as well as proxy measures for the rate of green-up and senescence and the annual vegetation productivity. Pre-fire fuel reduction treatments resulted in lower fire severity, which reduced annual productivity much less than in untreated areas within the Rodeo-Chediski fire perimeter. The seasonal metrics were shown to be useful for estimating the rate of post-fire disturbance recovery and the timing of phenological greenness phases.
The use of satellite time-series NDVI data and derived pheno-metrics shows potential for tracking vegetation cover dynamics and successional changes in response to drought, wildfire disturbances, and forest restoration treatments in fire-suppressed forests. PMID:27879809
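Start-, peak-, and end-of-season pheno-metrics like those above are often extracted with an amplitude-threshold rule on the seasonal NDVI curve. A simplified single-season sketch; the 50% threshold fraction is an assumed parameter for illustration, not the study's method:

```python
import numpy as np

def pheno_metrics(ndvi, frac=0.5):
    """Start/peak/end-of-season indices for one growing season: the season
    is taken as the span where NDVI >= base + frac * amplitude."""
    ndvi = np.asarray(ndvi, float)
    base, peak_val = ndvi.min(), ndvi.max()
    thresh = base + frac * (peak_val - base)
    above = np.where(ndvi >= thresh)[0]   # composite periods above threshold
    return {"start": int(above[0]),
            "peak": int(np.argmax(ndvi)),
            "end": int(above[-1]),
            "amplitude": float(peak_val - base)}
```

On real multi-year series, the curve would first be smoothed or fitted, and the rule applied per season.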
Comparing alternative and traditional dissemination metrics in medical education.
Amath, Aysah; Ambacher, Kristin; Leddy, John J; Wood, Timothy J; Ramnanan, Christopher J
2017-09-01
The impact of academic scholarship has traditionally been measured using citation-based metrics. However, citations may not be the only measure of impact. In recent years, other platforms (e.g. Twitter) have provided new tools for promoting scholarship to both academic and non-academic audiences. Alternative metrics (altmetrics) can capture non-traditional dissemination data such as attention generated on social media platforms. The aims of this exploratory study were to characterise the relationships among altmetrics, access counts and citations in an international and pre-eminent medical education journal, and to clarify the roles of these metrics in assessing the impact of medical education academic scholarship. A database study was performed (September 2015) for all papers published in Medical Education in 2012 (n = 236) and 2013 (n = 246). Citation, altmetric and access (HTML views and PDF downloads) data were obtained from Scopus, the Altmetric Bookmarklet tool and the journal Medical Education, respectively. Pearson coefficients (r-values) between metrics of interest were then determined. Twitter and Mendeley (an academic bibliography tool) were the only altmetric-tracked platforms frequently (> 50%) utilised in the dissemination of articles. Altmetric scores (composite measures of all online attention) were driven by Twitter mentions. For short and full-length articles in 2012 and 2013, both access counts and citation counts were most strongly correlated with one another, as well as with Mendeley downloads. By comparison, Twitter metrics and altmetric scores demonstrated weak to moderate correlations with both access and citation counts. 
Whereas most altmetrics showed limited correlations with readership (access counts) and impact (citations), Mendeley downloads correlated strongly with both readership and impact indices for articles published in the journal Medical Education and may therefore have potential use that is complementary to that of citations in assessment of the impact of medical education scholarship. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
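The reported relationships are Pearson coefficients between metric pairs (e.g., Mendeley downloads vs. citations). For reference, a standard implementation of the coefficient:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate a weak one.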
NCAR Earth Observing Laboratory's Data Tracking System
NASA Astrophysics Data System (ADS)
Cully, L. E.; Williams, S. F.
2014-12-01
The NCAR Earth Observing Laboratory (EOL) maintains an extensive collection of complex, multi-disciplinary datasets from national and international, current and historical projects accessible through field project web pages (https://www.eol.ucar.edu/all-field-projects-and-deployments). Data orders are processed through the EOL Metadata Database and Cyberinfrastructure (EMDAC) system. Behind the scenes is the institutionally created EOL Computing, Data, and Software/Data Management Group (CDS/DMG) Data Tracking System (DTS) tool. The DTS is used to track the complete life cycle (from ingest to long term stewardship) of the data, metadata, and provenance for hundreds of projects and thousands of data sets. The DTS is an EOL internal only tool which consists of three subsystems: Data Loading Notes (DLN), Processing Inventory Tool (IVEN), and Project Metrics (STATS). The DLN is used to track and maintain every dataset that comes to the CDS/DMG. The DLN captures general information such as title, physical locations, responsible parties, high level issues, and correspondence. When the CDS/DMG processes a data set, IVEN is used to track the processing status while collecting sufficient information to ensure reproducibility. This includes detailed "How To" documentation, processing software (with direct links to the EOL Subversion software repository), and descriptions of issues and resolutions. The STATS subsystem generates current project metrics such as archive size, data set order counts, "Top 10" most ordered data sets, and general information on who has ordered these data. The DTS was developed over many years to meet the specific needs of the CDS/DMG, and it has been successfully used to coordinate field project data management efforts for the past 15 years. This paper will describe the EOL CDS/DMG Data Tracking System including its basic functionality, the provenance maintained within the system, lessons learned, potential improvements, and future developments.
NASA Astrophysics Data System (ADS)
Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2015-03-01
Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface-based metrics, and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method validates a deformation correction algorithm for open hepatic image-guided surgery systems using sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer, and the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest-point distances between the feature contours delineated in the iUS images and the corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method reduced target error by approximately 50%.
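The validation statistic above, the mean closest-point distance between iUS-derived contour points and the preoperative model, can be sketched as follows (a brute-force version; a real system would use a spatial index such as a k-d tree for large point sets):

```python
import numpy as np

def mean_closest_point_distance(contour_pts, model_pts):
    """For every contour point, the distance to its nearest model point;
    the mean over the contour quantifies residual registration error."""
    c = np.asarray(contour_pts, float)
    m = np.asarray(model_pts, float)
    d = np.linalg.norm(c[:, None, :] - m[None, :, :], axis=2)  # all pairwise distances
    return float(d.min(axis=1).mean())
```

A ~50% reduction in target error corresponds to this statistic roughly halving after deformation correction.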
Experimental Comparison of High Duty Cycle and Pulsed Active Sonars in a Littoral Environment
2014-09-30
A series of metrics (e.g., number of detections, matched-filter gain, false alarm rates, track purity, track latency, etc.) will be used to quantify ... for QA. These data were used to generate spectrograms, ambient noise and reverberation decay plots, and clutter images, all of which helped ... Perhaps the most useful of these for QA were the clutter images, which provided a rapid visual assessment to estimate SNR, identify at what range the
Clinical decision-making tools for exam selection, reporting and dose tracking.
Brink, James A
2014-10-01
Although many efforts have been made to reduce the radiation dose associated with individual medical imaging examinations to "as low as reasonably achievable," efforts to ensure such examinations are performed only when medically indicated and appropriate are equally if not more important. Variations in the use of ionizing radiation for medical imaging are concerning, regardless of whether they occur on a local, regional, or national basis. Such variations among practices can be reduced with the use of decision support tools at the time of order entry. These tools help reduce radiation exposure among practices through the appropriate use of medical imaging. Similarly, adoption of best practices among imaging facilities can be promoted by tracking the radiation exposure of imaging patients. Practices can benchmark their aggregate radiation exposures for medical imaging through the use of dose index registries. However, several variables must be considered when contemplating individual patient dose tracking. The specific dose measures, and the variation among them introduced by differences in body habitus, must be understood. Moreover, the uncertainties in risk estimation from dose metrics related to age, gender, and life expectancy must also be taken into account.
Adopting ORCID as a unique identifier will benefit all involved in scholarly communication.
Arunachalam, Subbiah; Madhan, Muthu
2016-01-01
ORCID, the Open Researcher and Contributor ID, is a non-profit, community-driven effort to create and maintain a registry of unique researcher identifiers and a transparent method of linking research activities and outputs to these identifiers. Together with other persistent identifiers for scholarly works, such as digital object identifiers (DOIs) and identifiers for organizations, ORCID makes research more discoverable. It helps ensure that one's grants, publications and outputs are correctly attributed. It helps the research community not just in aggregating publications, but in every stage of research, viz. publishing, reviewing, profiling, metrics, accessing and archiving. Funding agencies in Austria, Australia, Denmark, Portugal, Sweden and the UK, and the world's leading scholarly publishers and associations, have integrated their systems with the ORCID registry. Among the BRICS countries, China and South Africa are adopting ORCID avidly. India is yet to make a beginning. If research councils and funding agencies in India require researchers to adopt ORCID and to link their ORCID iDs to funding and performance tracking, it will help these agencies keep track of research workflows. Journal editors can also keep track of contributions made by different authors, and of work assigned to different reviewers, through their ORCID iDs.
Historical Trends in Ground-Based Optical Space Surveillance System Design
NASA Astrophysics Data System (ADS)
Shoemaker, M.; Shroyer, L.
In the spirit of the 50th anniversary of the launch of the first man-made satellite, an historical overview of ground-based optical space surveillance systems is provided. Specific emphasis is given to gathering metrics to analyze design trends. The subject of space surveillance spans the history of spaceflight: from the early tracking cameras at missile ranges and the first observations of Sputnik to the evolution towards highly capable commercial off-the-shelf (COTS) systems, and much in between. Whereas previous reviews in the literature have been limited in scope to specific time periods, operational programs, or countries, a broad overview of a wide range of sources is presented here. This review is focused on systems whose primary design purpose can be classified as Space Object Identification (SOI) or Orbit Determination (OD). SOI systems are those that capture images or data to determine information about the satellite itself, such as attitude, features, and material composition. OD systems are those that produce estimates of the satellite position, usually in the form of orbital elements or a time history of tracking angles. Systems are also categorized based on the orbital regime in which their targets reside, which has been simplified in this study to either Low Earth Orbit (LEO) or Geosynchronous Earth Orbit (GEO). The systems are further classified by industry segment (government/commercial or academic) and by whether the program is foreign or domestic. In addition to gathering metrics on systems designed solely for man-made satellite observations, it is interesting to find examples of other systems being similarly used. Examples include large astronomical telescopes being used for GEO debris surveys and anomaly resolution for deep-space probes. Another interesting development is the increase in the number and capability of COTS systems, some of which are specifically marketed to consumers as satellite trackers.
After describing the results of the literature review and presenting further information on various systems, we gather specific metrics on the optical design. Technical specifications, such as aperture and field of view (FOV), are plotted with time to ascertain trends in ground system design. Aperture is a useful metric because it gives insight into the light-gathering capability, as well as the overall size and complexity of the system. The size of the FOV can indicate user priorities or system performance, such as tracking capability of the mount for SOI systems and star detection ability in OD systems that use celestial references for position measurements. The review is restricted to systems that use natural sunlight to illuminate targets, for the simple reason of having commonality between systems that span half a century, particularly recent COTS systems.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1985-01-01
For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst-case partial-band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over a system with a hard-decision metric without JSI, for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used as the performance measures.
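One simple way to realize a normalized energy metric is to divide each tone detector's output energy by the total energy received in that hop, which removes the dependence on the unknown jammer power in the hopped band. This is an illustrative normalization; the paper's exact definition may differ:

```python
def normalized_energy_metric(energies):
    """Per-tone detector energies divided by the hop's total energy, making
    the decision statistic invariant to overall (jammer-inflated) power."""
    total = sum(energies)
    return [e / total for e in energies]
```

Because the metric is scale-invariant, a jammed hop with inflated absolute energies produces the same relative metric values as a clean hop with the same energy ratios.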
Optimal control of motorsport differentials
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.
2015-12-01
Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.
RNAV STAR Procedural Adherence
NASA Technical Reports Server (NTRS)
Stewart, Michael J.; Matthews, Bryan L.
2017-01-01
In this exploratory archival study, we mined the performance of 24 major US airports' area navigation standard terminal arrival routes (RNAV STARs) over the preceding three years. Overlaying radar track data on RNAV STAR routes provided a comparison between aircraft flight paths and the waypoint positions and altitude restrictions. NASA Ames supercomputing resources were utilized to perform the data mining and processing. We investigated STARs by lateral transition path (full-lateral), vertical restrictions (full-lateral/full-vertical), and skipped waypoints (skips). In addition, we graphed altitudes and their frequencies of occurrence at altitude restrictions. Full-lateral compliance was generally greater than full-lateral/full-vertical compliance, but the delta between the rates was not always consistent. Full-lateral/full-vertical usage medians of the 2016 procedures ranged from 0 in KDEN (Denver) to 21 in KMEM (Memphis). Waypoint skips ranged from 0 to nearly 100 for specific waypoints. Altitude restrictions were sometimes missed by systemic amounts in 1000 ft increments from the restriction, creating multi-modal distributions. Other times, altitude misses appeared to be normally distributed around the restriction. This work is a preliminary investigation into the objective performance of instrument procedures and provides a framework to track how procedural concepts and design interventions function. In addition, this tool may aid in providing acceptability metrics as well as risk assessment information.
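Checking a flown track against a STAR's published altitude restrictions amounts to a per-waypoint window test. A simplified sketch; the restriction encoding and the 200 ft tolerance band are assumptions for illustration, not values from the study:

```python
def meets_restriction(altitude_ft, restriction, tol_ft=200.0):
    """restriction = (kind, value_ft) with kind in
    {'at', 'at_or_above', 'at_or_below'}; tol_ft is an assumed tolerance."""
    kind, value = restriction
    if kind == "at":
        return abs(altitude_ft - value) <= tol_ft
    if kind == "at_or_above":
        return altitude_ft >= value - tol_ft
    return altitude_ft <= value + tol_ft

def vertical_compliance(flown_altitudes, restrictions):
    """Fraction of restricted waypoints whose flown altitude satisfies the
    restriction; waypoints missing from the track are treated as skips."""
    checks = [meets_restriction(flown_altitudes[wp], r)
              for wp, r in restrictions.items() if wp in flown_altitudes]
    return sum(checks) / len(checks) if checks else 0.0
```

Misses of systemic size (e.g., exactly 1000 ft) would show up as secondary modes in the histogram of altitude deviations at each restricted waypoint.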
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
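The control-theory metrics mentioned for characterizing reference tracking (e.g., rise time, overshoot, settling time) can be computed directly from a simulated response. A generic sketch; the 10-90% rise definition and 2% settling band follow common conventions, not values from the paper:

```python
import numpy as np

def step_metrics(t, y, ref=1.0, settle_band=0.02):
    """Rise time (10-90%), overshoot, and settling time (2% band) of a
    reference-tracking response y sampled at times t."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    overshoot = max(0.0, (y.max() - ref) / ref)
    # first crossings of 10% and 90% of the reference
    rise = t[np.argmax(y >= 0.9 * ref)] - t[np.argmax(y >= 0.1 * ref)]
    # last instant the response is outside the settling band
    outside = np.where(np.abs(y - ref) > settle_band * ref)[0]
    settling = t[outside[-1] + 1] if outside.size and outside[-1] + 1 < t.size else t[0]
    return {"rise_time": rise, "overshoot": overshoot, "settling_time": settling}
```

For a first-order response y = 1 - exp(-t), the rise time is ln(9) and the 2% settling time is ln(50), which makes a convenient sanity check.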
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluating patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated on a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
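Two of the reviewed model-less metrics, root-mean-square distance and Kullback-Leibler divergence, can be sketched as follows for aligned 1-D sequences and discrete distributions (a simplification of how they would be applied to full joint-trajectory data):

```python
import math

def rms_distance(seq_a, seq_b):
    """Root-mean-square distance between two aligned, equal-length sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(seq_a, seq_b)) / len(seq_a))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions
    (assumes q is nonzero wherever p is nonzero)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

In practice, a patient's motion would first be time-aligned to a reference (e.g., via dynamic time warping) before computing such distances.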
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2014-06-01
The study in this paper is part of a broader research effort on discovering facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database, where each vector component represents the likelihood of a given face belonging to each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage in this system involves a clustering method that evaluates and compares the clustering results of five different clustering algorithms (average-linkage, complete-linkage, and single-linkage hierarchical clustering, k-means, and DIGNET) and selects the best strategy for each data collection. In this paper we present the comparative performance of the clustering results of DIGNET and the four other clustering algorithms on fabricated 2D and 3D samples, and on actual face images from various databases, using four different standard metrics. These metrics are the silhouette figure, the mean silhouette coefficient, the Hubert test Γ coefficient, and the classification accuracy of each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metrics fall below the acceptance threshold but not far below it (values far below the threshold correspond to ambiguous or false results), the clustering results must be verified by the other algorithms.
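The mean silhouette coefficient used above compares, for each point, its mean intra-cluster distance a with its mean distance b to the nearest other cluster, scoring s = (b - a) / max(a, b). A brute-force sketch (assumes every cluster has at least two points):

```python
import numpy as np

def mean_silhouette(X, labels):
    """Mean silhouette coefficient over all points (brute-force distances)."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                      # exclude the point itself from a
        a = d[i][same].mean()
        b = min(d[i][labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Values near 1 indicate compact, well-separated clusters; values near 0 or below suggest ambiguous cluster assignments, matching the acceptance-threshold logic described in the abstract.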
Cell tracking for cell image analysis
NASA Astrophysics Data System (ADS)
Bise, Ryoma; Sato, Yoichi
2017-04-01
Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. To address the difficulties of dense culture conditions, where cell detection cannot be done reliably because cells often touch and have blurry intercellular boundaries, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research problems.
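Data association in cell tracking links detections across consecutive frames. A much-simplified greedy, gated nearest-neighbor version is sketched below; the paper's global data association instead optimizes all links jointly over the whole sequence, which handles touching cells and detection gaps far better:

```python
import numpy as np

def greedy_association(prev, curr, gate=5.0):
    """Link detections in consecutive frames by ascending distance,
    one-to-one, rejecting links beyond a gating radius."""
    d = np.linalg.norm(np.asarray(prev, float)[:, None, :]
                       - np.asarray(curr, float)[None, :, :], axis=2)
    links, used_prev, used_curr = [], set(), set()
    for i, j in sorted(((i, j) for i in range(d.shape[0])
                        for j in range(d.shape[1])), key=lambda ij: d[ij]):
        if i not in used_prev and j not in used_curr and d[i, j] <= gate:
            links.append((i, j))
            used_prev.add(i)
            used_curr.add(j)
    return links
```

Unlinked current-frame detections would start new tracks (e.g., cell division or new entries into the field of view).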
Comparing Institution Nitrogen Footprints: Metrics for ...
When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate their nitrogen footprints using the nitrogen footprint tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This paper compares the results of those seven institutions to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, number of meals served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The results also found differences in the system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions' ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. This paper is being submitted.
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised due to their less-than-ideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems.
The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.
Competing on talent analytics.
Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy
2010-10-01
Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people, ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.
Quantifying and visualizing site performance in clinical trials.
Yang, Eric; O'Donovan, Christopher; Phillips, JodiLyn; Atkinson, Leone; Ghosh, Krishnendu; Agrafiotis, Dimitris K
2018-03-01
One of the keys to running a successful clinical trial is the selection of high quality clinical sites, i.e., sites that are able to enroll patients quickly, engage them on an ongoing basis to prevent drop-out, and execute the trial in strict accordance to the clinical protocol. Intuitively, the historical track record of a site is one of the strongest predictors of its future performance; however, issues such as data availability and wide differences in protocol complexity can complicate interpretation. Here, we demonstrate how operational data derived from central laboratory services can provide key insights into the performance of clinical sites and help guide operational planning and site selection for new clinical trials. Our methodology uses the metadata associated with laboratory kit shipments to clinical sites (such as trial and anonymized patient identifiers, investigator names and addresses, sample collection and shipment dates, etc.) to reconstruct the complete schedule of patient visits and derive insights about the operational performance of those sites, including screening, enrollment, and drop-out rates and other quality indicators. This information can be displayed in its raw form or normalized to enable direct comparison of site performance across studies of varied design and complexity. Leveraging Covance's market leadership in central laboratory services, we have assembled a database of operational metrics that spans more than 14,000 protocols, 1400 indications, 230,000 unique investigators, and 23 million patient visits and represents a significant fraction of all clinical trials run globally in the last few years. By analyzing this historical data, we are able to assess and compare the performance of clinical investigators across a wide range of therapeutic areas and study designs. This information can be aggregated across trials and geographies to gain further insights into country and regional trends, sometimes with surprising results. 
The use of operational data from Covance Central Laboratories provides a unique perspective into the performance of clinical sites with respect to many important metrics such as patient enrollment and retention. These metrics can, in turn, be used to guide operational planning and site selection for new clinical trials, thereby accelerating recruitment, improving quality, and reducing cost.
Automated and comprehensive link engineering supporting branched, ring, and mesh network topologies
NASA Astrophysics Data System (ADS)
Farina, J.; Khomchenko, D.; Yevseyenko, D.; Meester, J.; Richter, A.
2016-02-01
Link design, while relatively easy in the past, can become quite cumbersome with complex channel plans and equipment configurations. The task of designing optical transport systems and selecting equipment is often performed by an applications or sales engineer using simple tools, such as custom Excel spreadsheets. Eventually, every individual has their own version of the spreadsheet as well as their own methodology for building the network. This approach becomes unmanageable very quickly and leads to mistakes, bending of the engineering rules and installations that do not perform as expected. We demonstrate a comprehensive planning environment, which offers an efficient approach to unify, control and expedite the design process by controlling libraries of equipment and engineering methodologies, automating the process and providing the analysis tools necessary to predict system performance throughout the system and for all channels. In addition to the placement of EDFAs and DCEs, performance analysis metrics are provided at every step of the way. Metrics that can be tracked include power, CD and OSNR, SPM, XPM, FWM and SBS. Automated routine steps assist in design aspects such as equalization, padding and gain setting for EDFAs, the placement of ROADMs and transceivers, and creating regeneration points. DWDM networks consisting of a large number of nodes and repeater huts, interconnected in linear, branched, mesh and ring network topologies, can be designed much faster when compared with conventional design methods. Using flexible templates for all major optical components, our technology-agnostic planning approach supports the constant advances in optical communications.
GEODSS Present Configuration and Potential
2014-06-28
to provide critical metric tracking capacity for deep space catalog maintenance. The follow-up TOS was designed as a deployable gap filler in SSN deep ...
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature between all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT yields the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
Identifying Memory Allocation Patterns in HEP Software
NASA Astrophysics Data System (ADS)
Kama, S.; Rauschmayr, N.
2017-10-01
HEP applications perform an excessive number of allocations/deallocations within short time intervals, which results in memory churn, poor locality, and performance degradation. These issues have been known for a decade, but due to the complexity of software frameworks and the billions of allocations in a single job, until recently no efficient mechanism was available to correlate these issues with source code lines. However, with the advent of the Big Data era, many tools and platforms are now available for large-scale memory profiling. This paper presents a prototype program developed to track and identify each single (de-)allocation. The CERN IT Hadoop cluster is used to compute key memory metrics, like locality, variation, lifetime, and density of allocations. The prototype further provides a web-based visualization back-end that allows the user to explore the results generated on the Hadoop cluster. Plotting these metrics for every single allocation over time gives new insight into an application's memory handling. For instance, it shows which algorithms cause which kinds of memory allocation patterns, which function flows cause how many short-lived objects, what the most commonly allocated sizes are, etc. The paper will give an insight into the prototype and will show profiling examples for the LHC reconstruction, digitization, and simulation jobs.
Veselka, Walter; Rentch, James S; Grafton, William N; Kordek, Walter S; Anderson, James T
2010-11-01
Bioassessment methods for wetlands, and other bodies of water, have been developed worldwide to measure and quantify changes in "biological integrity." These assessments are based on a classification system, meant to ensure appropriate comparisons between wetland types. Using a local site-specific disturbance gradient, we built vegetation indices of biological integrity (Veg-IBIs) based on two commonly used wetland classification systems in the USA: One based on vegetative structure and the other based on a wetland's position in a landscape and sources of water. The resulting class-specific Veg-IBIs were comprised of 1-5 metrics that varied in their sensitivity to the disturbance gradient (R2=0.14-0.65). Moreover, the sensitivity to the disturbance gradient increased as metrics from each of the two classification schemes were combined (added). Using this information to monitor natural and created wetlands will help natural resource managers track changes in biological integrity of wetlands in response to anthropogenic disturbance and allows the use of vegetative communities to set ecological performance standards for mitigation banks.
Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi
2018-06-18
Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.
1975-01-01
Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTS to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds as opposed to the 5 minutes of simulated flight using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J
2016-07-01
Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016 Online supplemental material is available for this article.
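For a single reader, the ROC analysis described above reduces to ranking the reader's confidence ratings for abnormal cases against those for normal cases. A minimal sketch of the area under the ROC curve via the equivalent Mann-Whitney statistic; the 1-5 ratings and case counts below are hypothetical, not the study's data:

```python
def auc_from_ratings(pos_ratings, neg_ratings):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen abnormal case receives a
    higher confidence rating than a normal one (ties count half)."""
    wins = 0.0
    for p in pos_ratings:
        for n in neg_ratings:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_ratings) * len(neg_ratings))

# hypothetical 1-5 confidence ratings for pneumothorax-present (pos)
# and pneumothorax-absent (neg) radiographs
print(auc_from_ratings([5, 4, 4, 3], [2, 3, 1, 2]))  # 0.96875
```

A reader who rates every case identically would score 0.5 (chance), while perfect separation of the two groups scores 1.0.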
NASA Astrophysics Data System (ADS)
Bhattarai, Nishan; Wagle, Pradeep; Gowda, Prasanna H.; Kakani, Vijaya G.
2017-11-01
The ability of remote sensing-based surface energy balance (SEB) models to track water stress in rain-fed switchgrass (Panicum virgatum L.) has not been explored yet. In this paper, the theoretical framework of crop water stress index (CWSI; 0 = extremely wet or no water stress condition and 1 = extremely dry or no transpiration) was utilized to estimate CWSI in rain-fed switchgrass using Landsat-derived evapotranspiration (ET) from five remote sensing based single-source SEB models, namely Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), Surface Energy Balance System (SEBS), Simplified Surface Energy Balance Index (S-SEBI), and Operational Simplified Surface Energy Balance (SSEBop). CWSI estimates from the five SEB models and a simple regression model that used normalized difference vegetation index (NDVI), near-surface temperature difference, and measured soil moisture (SM) as covariates were compared with those derived from eddy covariance measured ET (CWSIEC) for the 32 Landsat image acquisition dates during the 2011 (dry) and 2013 (wet) growing seasons. Results indicate that most SEB models can predict CWSI reasonably well. For example, the root mean square error (RMSE) ranged from 0.14 (SEBAL) to 0.29 (SSEBop) and the coefficient of determination (R2) ranged from 0.25 (SSEBop) to 0.72 (SEBAL), justifying the added complexity in CWSI modeling as compared to results from the simple regression model (R2 = 0.55, RMSE = 0.16). All SEB models underestimated CWSI in the dry year but the estimates from SEBAL and S-SEBI were within 7% of the mean CWSIEC and explained over 60% of variations in CWSIEC. In the wet year, S-SEBI mostly overestimated CWSI (around 28%), while estimates from METRIC, SEBAL, SEBS, and SSEBop were within 8% of the mean CWSIEC. 
Overall, SEBAL was the most robust model under all conditions followed by METRIC, whose performance was slightly worse and better than SEBAL in dry and wet years, respectively. Underestimation of CWSI under extremely dry soil conditions and the substantial role of SM in the regression model suggest that integration of SM in SEB models could improve their performances under dry conditions. These insights will provide useful guidance on the broader applicability of SEB models for mapping water stresses in switchgrass under varying geographical and meteorological conditions.
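The CWSI convention used above (0 = no water stress, 1 = no transpiration) is commonly computed from the ratio of actual to potential evapotranspiration; a minimal sketch under that assumption (the paper's exact formulation, derived from the SEB-model ET estimates, may differ):

```python
def cwsi(et_actual, et_potential):
    """Crop water stress index: 0 when ET runs at its potential rate
    (no stress), 1 when transpiration ceases entirely.
    Assumes the ET-ratio formulation: CWSI = 1 - ET_a / ET_p."""
    if et_potential <= 0:
        raise ValueError("potential ET must be positive")
    # clamp to [0, 1] so measurement noise cannot push the index outside range
    return min(max(1.0 - et_actual / et_potential, 0.0), 1.0)

# e.g. an actual ET of 3.5 mm/day against a potential of 5.0 mm/day
print(cwsi(3.5, 5.0))  # ≈ 0.3
```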
Texture metric that predicts target detection performance
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.
2015-12-01
Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance for the nine military vehicles in complex natural scenes of the Search_2 dataset is presented. Comparison is also made with four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
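The GLCE metrics build on gray-level co-occurrence statistics. As a rough illustration, the classic co-occurrence energy (angular second moment) can be computed as below; this is a generic sketch using a horizontal one-pixel offset, not the paper's exact GLCE formulation:

```python
import numpy as np

def glcm_energy(img, levels=8):
    """Energy (angular second moment) of the gray-level co-occurrence
    matrix for a horizontal (dx = 1) neighbour offset."""
    img = np.asarray(img)
    # quantize intensities into `levels` gray levels
    q = np.minimum((img * levels / (img.max() + 1)).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()          # normalize to a probability distribution
    return float((p ** 2).sum())   # energy = sum of squared probabilities

# a perfectly uniform patch concentrates all co-occurrence mass in one
# cell, giving the maximal energy of 1.0
print(glcm_energy(np.full((8, 8), 100)))  # 1.0
```

Lower energy indicates a more heterogeneous (cluttered) local texture, which is the property such clutter metrics exploit.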
Comparing the Performance of Indoor Localization Systems through the EvAAL Framework.
Potortì, Francesco; Park, Sangjoon; Jiménez Ruiz, Antonio Ramón; Barsocchi, Paolo; Girolami, Michele; Crivello, Antonino; Lee, So Yeon; Lim, Jae Hyun; Torres-Sospedra, Joaquín; Seco, Fernando; Montoliu, Raul; Mendoza-Silva, Germán Martin; Pérez Rubio, Maria Del Carmen; Losada-Gutiérrez, Cristina; Espinosa, Felipe; Macias-Guarasa, Javier
2017-10-13
In recent years, indoor localization systems have been the object of significant research activity and of growing interest for their great expected social impact and their impressive business potential. Application areas include tracking and navigation, activity monitoring, personalized advertising, Active and Assisted Living (AAL), traceability, Internet of Things (IoT) networks, and Home-land Security. In spite of the numerous research advances and the great industrial interest, no canned solutions have yet been defined. The diversity and heterogeneity of applications, scenarios, sensor and user requirements, make it difficult to create uniform solutions. From that diverse reality, a main problem is derived that consists in the lack of a consensus both in terms of the metrics and the procedures used to measure the performance of the different indoor localization and navigation proposals. This paper introduces the general lines of the EvAAL benchmarking framework, which is aimed at a fair comparison of indoor positioning systems through a challenging competition under complex, realistic conditions. To evaluate the framework capabilities, we show how it was used in the 2016 Indoor Positioning and Indoor Navigation (IPIN) Competition. The 2016 IPIN competition considered three different scenario dimensions, with a variety of use cases: (1) pedestrian versus robotic navigation, (2) smartphones versus custom hardware usage and (3) real-time positioning versus off-line post-processing. A total of four competition tracks were evaluated under the same EvAAL benchmark framework in order to validate its potential to become a standard for evaluating indoor localization solutions. The experience gained during the competition and feedback from track organizers and competitors showed that the EvAAL framework is flexible enough to successfully fit the very different tracks and appears adequate to compare indoor positioning systems.
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
Tracking a Very Near Earth Asteroid
NASA Astrophysics Data System (ADS)
Bruck, R.; Rashid, S.; Peppard, T.
2013-09-01
The potential effects of an asteroid passing within close proximity to the Earth were recently realized. During the February 16, 2013 event, Asteroid 2012 DA14 passed within an estimated 27,700 kilometers of the Earth, well within the geosynchronous (GEO) orbital belt. This was the closest known approach of a planetoid of this size in modern history. The GEO belt is a region filled with critical communications satellites which provide relays for essential government, business, and private data. On the day of the event, optical instruments at Detachment 3, 21OG, Maui GEODSS were able to open in marginal atmospheric conditions, locate and collect metric and raw video data on the asteroid as it passed a point of near heliocentric orbital propinquity to the Earth. Prior to the event, the Joint Space Operations Center (JSpOC) used propagated trajectory data from NASA's Near Earth Object Program Office at the Jet Propulsion Laboratory to assess potential collisions with man-made objects in Earth orbit. However, the ability to actively track this asteroid through the populated satellite belt not only allowed surveillance for possible late orbital perturbations of the asteroid, but also afforded the ability to monitor possible strikes on all other orbiting bodies of anthropogenic origin either not in orbital catalogs or not recently updated in those catalogs. Although programmed only for tracking satellites in geocentric orbits, GEODSS was able to compensate and maintain track on DA14, collecting one hundred and fifty-four metric observations during the event.
78 FR 68039 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... leadership regarding travel, training and supplies. Data is used by leadership to effectively and efficiently... of providing operational metrics, tracking budgets, and presenting work products to senior leadership regarding travel, training, and supply. Data is used by leadership to effectively and efficiently make...
DOT National Transportation Integrated Search
2017-09-01
This report documents work done by Volpe staff to support the FAA's development of Unmanned Aerial Systems (UAS) noise certification and noise measurement criteria. The primary elements were the development of a small, lightweight Global Navigation...
The return map: tracking product teams.
House, C H; Price, R L
1991-01-01
With a new product, time is now more valuable than money. The costs of conceiving and designing a product are less important to its ultimate success than timeliness to market. One of the most important ways to speed up product development is through interfunctional teamwork. The "Return Map," developed at Hewlett-Packard, provides a way for people from different functions to triangulate on the product development process as a whole. It graphically represents the contributions of all team members to the moment when a project breaks even. It forces the team to estimate and re-estimate the time it will take to perform critical tasks, so that products can get out fast. It subjects the team to the only discipline that works, namely, self-discipline. The map is, in effect, a graph representing time and money, where the time line is divided into three phases: investigation, development, and manufacturing and sales. Meanwhile, costs are plotted against time--as are revenues when they are realized after manufacturing release. Within these points of reference, four novel metrics emerge: Break-Even-Time, Time-to-Market, Break-Even-After-Release, and the Return Factor. All metrics are estimated at the beginning of a project to determine its feasibility, then they are tracked carefully while the project evolves to determine its success. Missed forecasts are inevitable, but managers who punish employees for missing their marks will only encourage them to estimate conservatively, thus building slack into a system meant to eliminate slack. Estimates are a team responsibility, and deviations provide valuable information that spurs continuous investigation and improvement.
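Break-Even-Time, the map's central metric, is the elapsed time from project start until cumulative revenue covers cumulative cost. A minimal sketch over per-period figures; the cost and revenue numbers below are hypothetical, not drawn from the article:

```python
def break_even_time(costs, revenues):
    """Return the first period (1-based) at which cumulative revenue
    reaches cumulative cost, or None if the project never breaks even
    within the horizon covered by the series."""
    cum_cost = cum_rev = 0.0
    for period, (c, r) in enumerate(zip(costs, revenues), start=1):
        cum_cost += c
        cum_rev += r
        if cum_rev >= cum_cost:
            return period
    return None

# hypothetical monthly profile: investigation and development spend
# comes first, revenue only after manufacturing release
costs    = [50, 50, 80, 80, 20, 20, 20, 20]
revenues = [ 0,  0,  0, 40, 120, 150, 150, 150]
print(break_even_time(costs, revenues))  # 6
```

Re-estimating the series as the project evolves and watching this number move is exactly the self-discipline the Return Map is meant to impose.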
The Barcelona Hospital Clínic therapeutic apheresis database.
Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel
2017-09-22
A therapeutic apheresis (TA) database helps to increase knowledge about indications and type of apheresis procedures that are performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures that were performed at our institution in a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure were entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend for the time-period was progressive increase in total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The tracking trend of each procedure during the 10-year period was different: the number of PE and other type of TA procedures increased 22% and 2818%, respectively, and the number of HPC collections decreased 28%. The TA database helped us to increase our knowledge about various indications and type of TA procedures that were performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
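The abstract names the two metrics but not their formulas; the sketch below is one plausible reading (pointwise relative uncertainty, a cumulative fraction-within-tolerance, and a median) and is not the NUCFRG2/QMSFRG validation code itself. The tolerance value and cross-section numbers are invented.

```python
import statistics

def relative_uncertainties(model, experiment):
    # Pointwise relative difference between model predictions and
    # measured cross sections.
    return [abs(m - e) / e for m, e in zip(model, experiment)]

def cumulative_within(model, experiment, tol=0.25):
    # Hypothetical cumulative metric: fraction of comparisons falling
    # within a relative tolerance, as a single overall-accuracy number.
    u = relative_uncertainties(model, experiment)
    return sum(1 for x in u if x <= tol) / len(u)

def median_uncertainty(model, experiment):
    # Median-based metric: robust to the few large outliers typical of
    # sparse experimental databases.
    return statistics.median(relative_uncertainties(model, experiment))

# Toy numbers, not actual nuclear cross-section data.
model_xs = [10.0, 12.0, 8.0]
exp_xs = [10.0, 10.0, 10.0]
```

A median-style metric like this can be evaluated on subsets of the model parameter space, which is the use the abstract describes for model development.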
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
New Metrics for Evaluating Viral Respiratory Pathogenesis
Menachery, Vineet D.; Gralinski, Lisa E.; Baric, Ralph S.; Ferris, Martin T.
2015-01-01
Viral pathogenesis studies in mice have relied on markers of severe systemic disease, rather than clinically relevant measures, to evaluate respiratory virus infection; thus confounding connections to human disease. Here, whole-body plethysmography was used to directly measure changes in pulmonary function during two respiratory viral infections. This methodology closely tracked with traditional pathogenesis metrics, distinguished both virus- and dose-specific responses, and identified long-term respiratory changes following both SARS-CoV and Influenza A Virus infection. Together, the work highlights the utility of examining respiratory function following infection in order to fully understand viral pathogenesis. PMID:26115403
Metric-driven harm: an exploration of unintended consequences of performance measurement.
Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck
2013-11-01
Performance measurement is an increasingly common element of the US health care system. Typically a proxy for high quality outcomes, there has been little systematic investigation of the potential negative unintended consequences of performance metrics, including metric-driven harm. This case study details an incidence of post-surgical metric-driven harm and offers Smith's 1995 work and a patient centered, context sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.
Technology survey on video face tracking
NASA Astrophysics Data System (ADS)
Zhang, Tong; Gomes, Herman Martins
2014-03-01
With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
Performance assessment in brain-computer interface-based augmentative and alternative communication
2013-01-01
A large number of incommensurable metrics are currently used to report the performance of brain-computer interfaces (BCI) used for augmentative and alternative communication (AAC). The lack of standard metrics precludes the comparison of different BCI-based AAC systems, hindering rapid growth and development of this technology. This paper presents a review of the metrics that have been used to report performance of BCIs used for AAC from January 2005 to January 2012. We distinguish between Level 1 metrics used to report performance at the output of the BCI Control Module, which translates brain signals into logical control output, and Level 2 metrics at the Selection Enhancement Module, which translates logical control to semantic control. We recommend that: (1) the commensurate metrics Mutual Information or Information Transfer Rate (ITR) be used to report Level 1 BCI performance, as these metrics represent information throughput, which is of interest in BCIs for AAC; (2) the BCI-Utility metric be used to report Level 2 BCI performance, as it is capable of handling all current methods of improving BCI performance; (3) these metrics should be supplemented by information specific to each unique BCI configuration; and (4) studies involving Selection Enhancement Modules should report performance at both Level 1 and Level 2 in the BCI system. Following these recommendations will enable efficient comparison between both BCI Control and Selection Enhancement Modules, accelerating research and development of BCI-based AAC systems. PMID:23680020
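The ITR recommended as a Level 1 metric is conventionally computed with Wolpaw's formula, which converts the number of selectable targets N and the selection accuracy P into bits per selection and then scales by the selection rate. A sketch (the guard for degenerate inputs is a simplifying convention, not part of the formula):

```python
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw bits per selection for an N-choice BCI with accuracy P:
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))."""
    n, p = n_targets, accuracy
    if n < 2 or p <= 0.0:
        return 0.0  # simplifying convention for degenerate inputs
    if p >= 1.0:
        return math.log2(n)  # the P*log2(P) term vanishes as P -> 1
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_targets: int, accuracy: float,
                        seconds_per_selection: float) -> float:
    """Scale bits per selection by the number of selections per minute."""
    return bits_per_selection(n_targets, accuracy) * 60.0 / seconds_per_selection

# A 4-target speller at 100% accuracy and 5 s per selection gives 24 bits/min.
rate = itr_bits_per_minute(4, 1.0, 5.0)
```

Note that at chance accuracy (P = 1/N) the formula yields zero bits, which is one reason the review favors information-throughput metrics over raw accuracy.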
Navy and Marine Corps Medical News. Issue 24
2008-12-19
for shipboard surgeries, including hernia repair and eye surgery. One of the mission's most memorable surgeries involved two eight-year... the command in all BUMED required and tracked population health metrics, except for one (cervical cancer screening). Dick concluded, "My
A guide to calculating habitat-quality metrics to inform conservation of highly mobile species
Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.
2018-01-01
Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data. More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.
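Of the three families above, graph-based metrics are the least data-hungry; two of the simplest, degree centrality and reachability, can be computed directly from an adjacency list. The patch names and toy network below are invented for illustration (a real analysis would weight edges by movement rates):

```python
from collections import deque

# Toy habitat network for a migratory species: nodes are habitat patches,
# edges are movement pathways. Patch names are hypothetical.
network = {
    "breeding": ["stopover_a", "stopover_b"],
    "stopover_a": ["breeding", "wintering"],
    "stopover_b": ["breeding", "wintering"],
    "wintering": ["stopover_a", "stopover_b"],
}

def degree_centrality(graph, node):
    # Share of all other patches directly linked to this one: a basic
    # graph-based indicator of a patch's importance to connectivity.
    return len(graph[node]) / (len(graph) - 1)

def reachable_patches(graph, start):
    # Breadth-first search: every patch reachable from `start`, a simple
    # measure of network-wide connectivity.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Occupancy- and demographic-based metrics would layer extinction/colonization probabilities or vital rates onto the same network structure, which is where the additional data requirements come from.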
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Smolinski, Mark S.; Olsen, Jennifer M.
2017-01-01
Rapid detection, reporting, and response to an infectious disease outbreak are critical to prevent localized health events from emerging as pandemic threats. Metrics to evaluate the timeliness of these critical activities, however, are lacking. Easily understood and comparable measures for tracking progress and encouraging investment in rapid detection, reporting, and response are sorely needed. We propose that the timeliness of outbreak detection, reporting, laboratory confirmation, response, and public communication should be considered as measures for improving global health security at the national level, allowing countries to track progress over time and inform investments in disease surveillance. PMID:28384035
Dance and Music in “Gangnam Style”: How Dance Observation Affects Meter Perception
Lee, Kyung Myun; Barrett, Karen Chan; Kim, Yeonhwa; Lim, Yeoeun; Lee, Kyogu
2015-01-01
Dance and music often co-occur as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e., a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. Experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs to stronger metrical positions and the group with experience demonstrated a more refined perception of metrical hierarchy. Results likely stem from the temporally selective division of attention between auditory and visual domains. This study has implications for understanding: (1) the impact of splitting attention among different sensory modalities, and (2) the impact of embodiment on the perception of musical meter.
Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. Results shed light on the processing of multimedia environments. PMID:26308092
One network metric datastore to track them all: the OSG network metric service
NASA Astrophysics Data System (ADS)
Quick, Robert; Babik, Marian; Fajardo, Edgar M.; Gross, Kyle; Hayashi, Soichi; Krenz, Marina; Lee, Thomas; McKee, Shawn; Pipes, Christopher; Teige, Scott
2017-10-01
The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012, OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing, and providing network metrics to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. In September of 2015, this service was deployed into the OSG production environment. We will report on the creation, implementation, testing, and deployment of the OSG Networking Service. Starting from organizing the deployment of perfSONAR toolkits within OSG and its partners, to the challenges of orchestrating regular testing between sites, to reliably gathering the resulting network metrics and making them available for users, virtual organizations, and higher level services, all aspects of implementation will be reviewed. In particular, several higher-level services were developed to bring the OSG network service to its full potential. These include a web-based mesh configuration system, which allows central scheduling and management of all the network tests performed by the instances; a set of probes to continually gather metrics from the remote instances and publish it to different sources; a central network datastore (esmond), which provides interfaces to access the network monitoring information in close to real time and historically (up to a year) giving the state of the tests; and a perfSONAR infrastructure monitor system, ensuring the current perfSONAR instances are correctly configured and operating as intended. We will also describe the challenges we encountered in ongoing operations of the network service and how we have evolved our procedures to address those challenges. Finally we will describe our plans for future extensions and improvements to the service.
Performance metrics for the evaluation of hyperspectral chemical identification systems
NASA Astrophysics Data System (ADS)
Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay
2016-02-01
Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
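The Dice index mentioned above is a standard set-overlap measure; applied to identification, it compares the set of chemicals the system reports against the true plume constituents. A minimal sketch (the chemical names are placeholders):

```python
def dice_index(reported, truth):
    """Dice similarity between the reported chemical set A and the true
    plume constituents B: 2*|A & B| / (|A| + |B|)."""
    a, b = set(reported), set(truth)
    if not a and not b:
        return 1.0  # convention: two empty sets agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Placeholder chemicals: one correct identification plus one false alarm
# yields a score of 2/3.
score = dice_index(["SF6", "NH3"], ["SF6"])
```

Because the index penalizes both missed constituents and spurious ones, it summarizes the multiple-hypothesis identification step in a way that single-chemical detection metrics cannot.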
GPS-based tracking system for TOPEX orbit determination
NASA Technical Reports Server (NTRS)
Melbourne, W. G.
1984-01-01
A tracking system concept is discussed that is based on the utilization of the constellation of Navstar satellites in the Global Positioning System (GPS). The concept involves simultaneous and continuous metric tracking of the signals from all visible Navstar satellites by approximately six globally distributed ground terminals and by the TOPEX spacecraft at 1300-km altitude. Error studies indicate that this system could be capable of obtaining decimeter position accuracies and, most importantly, around 5 cm in the radial component which is key to exploiting the full accuracy potential of the altimetric measurements for ocean topography. Topics covered include: background of the GPS, the precision mode for utilization of the system, past JPL research for using the GPS in precision applications, the present tracking system concept for high accuracy satellite positioning, and results from a proof-of-concept demonstration.
Best Practices Handbook: Traffic Engineering in Range Networks
2016-03-01
units of measurement. Measurement Methodology - A repeatable measurement technique used to derive one or more metrics of interest. Network... Performance measures - Metrics that provide quantitative or qualitative measures of the performance of systems or subsystems of interest. Performance Metric
Prendergast, Geoffrey P; Staff, Michael
2017-01-01
This study examines the use of the number of night-time sleep disturbances as a health-based metric to assess the cost effectiveness of rail noise mitigation strategies for situations wherein high-intensity noises, such as freight train pass-bys and wheel squeal, dominate. Twenty residential properties adjacent to the existing and proposed rail tracks in a noise catchment area of the Epping to Thornleigh Third Track project were used as a case study. Awakening probabilities were calculated for individuals awakening 1, 3 and 5 times a night when subjected to 10 independent freight train pass-by noise events using internal maximum sound pressure levels (LAFmax). Awakenings were predicted using a random intercept multivariate logistic regression model. With source mitigation in place, the majority of the residents were still predicted to be awoken at least once per night (median 88.0%), although substantial reductions in the median probabilities of awakening three and five times per night from 50.9 to 29.4% and 9.2 to 2.7%, respectively, were predicted. This resulted in a cost-effectiveness estimate of 7.6-8.8 fewer people being awoken at least three times per night per A$1 million spent on noise barriers. The study demonstrates that an easily understood metric can be readily used to assist making decisions related to noise mitigation for large-scale transport projects.
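Given a per-event awakening probability (which the study derives from internal LAFmax via its logistic regression model), the chance of being awoken at least k times by n independent pass-by events follows from the binomial distribution. A sketch assuming independence between events, as the study does; the per-event probability below is a made-up value:

```python
from math import comb

def p_at_least(k: int, n: int, p_event: float) -> float:
    """Probability of awakening at least k times during n independent
    noise events, each with per-event awakening probability p_event."""
    return sum(comb(n, i) * p_event**i * (1 - p_event)**(n - i)
               for i in range(k, n + 1))

# Hypothetical per-event probability for one resident and 10 nightly
# freight pass-bys, mirroring the study's event count.
p_once = p_at_least(1, 10, 0.19)   # chance of at least one awakening
p_three = p_at_least(3, 10, 0.19)  # chance of at least three awakenings
```

Summing such probabilities across residents, with and without mitigation, gives the "people awoken at least k times per night" figures that the cost-effectiveness estimate is built on.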
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caillet, V; Colvill, E; Royal North Shore Hospital, Sydney, NSW
Purpose: The objective of this study was to investigate the dosimetric benefits of multi-leaf collimator (MLC) tracking for lung SABR treatments in end-to-end clinically realistic planning and delivery scenarios. Methods: The clinical benefits of MLC tracking were assessed using previously delivered treatment plans and physical experiments. The 10 most recent single lesion lung SABR patients were re-planned following a 4D-GTV-based real-time adaptive protocol (PTV defined as the end-of-exhalation GTV plus 5.0 mm margins). The plans were delivered on a Trilogy Varian linac. Electromagnetic transponders (Calypso, Varian Medical Systems, USA) were embedded into a programmable moving phantom (HexaMotion platform) tracked with the Varian Calypso system. For each physical experiment, the MLC positions were collected and used as input for dose reconstruction. For both planned and physical experiments, the OAR dose metrics from the conventional and real-time adaptive SABR plans (Mean Lung Dose (MLD), V20 for lung, and near-maximum dose (D2%) for spine and heart) were statistically compared. The Wilcoxon test was used to compare plan and physical experiment dose metrics. Results: While maintaining target coverage, percentage reductions in dose metrics to the OARs were observed for both planned and physical experiments. Comparing the two plans showed MLD percentage reduction (MLDr) of 25.4% (absolute differences of 1.41 Gy) and 28.9% (1.29%) for the V20r. D2% percentage reduction for spine and heart were respectively 27.9% (0.3 Gy) and 20.2% (0.3 Gy). For the physical experiments, MLDr was 23.9% (1.3 Gy), and V20r 37.4% (1.6%). D2% reduction for spine and heart were respectively 27.3% (0.3 Gy) and 19.6% (0.3 Gy). For both plans and physical experiments, significant OAR dose differences (p<0.05) were found between the conventional SABR and real-time adaptive plans.
Conclusion: Application of MLC tracking for lung SABR patients has the potential to reduce the dose to OARs during radiation therapy.
2013-01-01
Numerous quantitative PCR assays for microbial fecal source tracking (MST) have been developed and evaluated in recent years. Widespread application has been hindered by a lack of knowledge regarding the geographical stability and hence applicability of such methods beyond the regional level. This study assessed the performance of five previously reported quantitative PCR assays targeting human-, cattle-, or ruminant-associated Bacteroidetes populations on 280 human and animal fecal samples from 16 countries across six continents. The tested cattle-associated markers were shown to be ruminant-associated. The quantitative distributions of marker concentrations in target and nontarget samples proved to be essential for the assessment of assay performance and were used to establish a new metric for quantitative source-specificity. In general, this study demonstrates that stable target populations required for marker-based MST occur around the globe. Ruminant-associated marker concentrations were strongly correlated with total intestinal Bacteroidetes populations and with each other, indicating that the detected ruminant-associated populations seem to be part of the intestinal core microbiome of ruminants worldwide. Consequently, the tested ruminant-targeted assays appear to be suitable quantitative MST tools beyond the regional level, while the targeted human-associated populations seem to be less prevalent and stable, suggesting potential for improvements in human-targeted methods. PMID:23755882
Demonstrating Success: Web Analytics and Continuous Improvement
ERIC Educational Resources Information Center
Loftus, Wayne
2012-01-01
As free and low-cost Web analytics tools become more sophisticated, libraries' approach to user analysis can become more nuanced and precise. Tracking appropriate metrics with a well-formulated analytics program can inform design decisions, demonstrate the degree to which those decisions have succeeded, and thereby inform the next iteration in the…
Recht, Michael; Macari, Michael; Lawson, Kirk; Mulholland, Tom; Chen, David; Kim, Danny; Babb, James
2013-03-01
The aim of this study was to evaluate all aspects of workflow in a large academic MRI department to determine whether process improvement (PI) efforts could improve key performance indicators (KPIs). KPI metrics in the investigators' MR imaging department include daily inpatient backlogs, on-time performance for outpatient examinations, examination volumes, appointment backlogs for pediatric anesthesia cases, and scan duration relative to time allotted for an examination. Over a 3-week period in April 2011, key members of the MR imaging department (including technologists, nurses, schedulers, physicians, and administrators) tracked all aspects of patient flow through the department, from scheduling to examination interpretation. Data were analyzed by the group to determine where PI could improve KPIs. Changes to MRI workflow were subsequently implemented, and KPIs were compared before (January 1, 2011, to April 30, 2011) and after (August 1, 2011, to December 31, 2011) using Mann-Whitney and Fisher's exact tests. The data analysis done during this PI led to multiple changes in the daily workflow of the MR department. In addition, a new sense of teamwork and empowerment was established within the MR staff. All of the measured KPIs showed statistically significant changes after the reengineering project. Intradepartmental PI efforts can significantly affect KPI metrics within an MR imaging department, making the process more patient centered. In addition, the process allowed significant growth without the need for additional equipment or personnel. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Roy, Sourav; Basu, Sankar; Dasgupta, Dipak; Bhattacharyya, Dhananjay; Banerjee, Rahul
2015-01-01
Currently, considerable interest exists with regard to the dissociation of close-packed amino acids within proteins, in the course of unfolding, which could result in either wet or dry molten globules. The progressive disjuncture of residues constituting the hydrophobic core of cyclophilin from L. donovani (LdCyp) has been studied during the thermal unfolding of the molecule, by molecular dynamics simulations. LdCyp has been represented as a surface contact network (SCN) based on the surface complementarity (Sm) of interacting residues within the molecular interior. The application of Sm to side-chain packing within proteins makes it a very sensitive indicator of subtle perturbations in packing, in the thermal unfolding of the protein. Network-based metrics have been defined to track the sequential changes in the disintegration of the SCN spanning the hydrophobic core of LdCyp, and these metrics prove to be highly sensitive compared to traditional metrics in indicating the increased conformational (and dynamical) flexibility in the network. These metrics have been applied to suggest criteria distinguishing DMG, WMG and transition state ensembles and to identify key residues involved in crucial conformational/topological events during the unfolding process. PMID:26545107
Sacchet, Matthew D.; Prasad, Gautam; Foland-Ross, Lara C.; Thompson, Paul M.; Gotlib, Ian H.
2015-01-01
Recently, there has been considerable interest in understanding brain networks in major depressive disorder (MDD). Neural pathways can be tracked in the living brain using diffusion-weighted imaging (DWI); graph theory can then be used to study properties of the resulting fiber networks. To date, global abnormalities have not been reported in tractography-based graph metrics in MDD, so we used a machine learning approach based on “support vector machines” to differentiate depressed from healthy individuals based on multiple brain network properties. We also assessed how important specific graph metrics were for this differentiation. Finally, we conducted a local graph analysis to identify abnormal connectivity at specific nodes of the network. We were able to classify depression using whole-brain graph metrics. Small-worldness was the most useful graph metric for classification. The right pars orbitalis, right inferior parietal cortex, and left rostral anterior cingulate all showed abnormal network connectivity in MDD. This is the first use of structural global graph metrics to classify depressed individuals. These findings highlight the importance of future research to understand network properties in depression across imaging modalities, improve classification results, and relate network alterations to psychiatric symptoms, medication, and comorbidities. PMID:25762941
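Small-worldness, the most useful metric for classification here, compares a network's clustering coefficient and characteristic path length against those of a matched random graph. The two ingredients can be computed with plain breadth-first search; the sketch below is illustrative, not the authors' DWI tractography pipeline, and assumes an unweighted, undirected, connected graph given as an adjacency list with invented node names:

```python
from collections import deque
from itertools import combinations

def clustering(graph, node):
    # Local clustering coefficient: fraction of a node's neighbour
    # pairs that are themselves linked.
    pairs = list(combinations(graph[node], 2))
    if not pairs:
        return 0.0
    return sum(1 for u, v in pairs if v in graph[u]) / len(pairs)

def avg_path_length(graph):
    # Characteristic path length: mean BFS distance over ordered
    # node pairs in a connected graph.
    total = count = 0
    for src in graph:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

# Toy 3-node network; real analyses use parcellated brain regions.
toy = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
```

Small-worldness is then the ratio (C/C_rand)/(L/L_rand), with C_rand and L_rand taken from degree-matched random graphs; these whole-network values are what feed the support vector machine as features.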
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
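A minimal sketch of the kind of variability check described above, assuming replicate runs are summarized as metric-name/value pairs. The metric names and values below are invented, not the paper's 46 metrics, and the 10% threshold mirrors the typical variation reported.

```python
from statistics import mean, stdev

def flag_variable_metrics(runs, threshold=0.10):
    """Given replicate runs as dicts of metric -> value, return the metrics
    whose coefficient of variation (stdev / mean) exceeds threshold."""
    flagged = {}
    for m in runs[0]:
        vals = [r[m] for r in runs]
        cv = stdev(vals) / mean(vals)
        if cv > threshold:
            flagged[m] = round(cv, 3)
    return flagged

runs = [
    {"ms1_tic": 1.00e9, "peak_width_s": 20.0},
    {"ms1_tic": 1.05e9, "peak_width_s": 31.0},
    {"ms1_tic": 0.98e9, "peak_width_s": 19.0},
]
print(flag_variable_metrics(runs))  # only peak_width_s exceeds 10% CV
```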
A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics
Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar
2017-01-01
This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744
Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug
2011-05-01
Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error.
We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
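The TRE computation described above reduces to a 3D Euclidean distance averaged over fiducials. A minimal sketch, with invented coordinates assumed to already share a common space via the EM-CT registration:

```python
import math

def target_registration_error(p_video, p_ct):
    """3-D Euclidean distance between a stereo-reconstructed target
    (endoscopy image space) and the same target identified in CT."""
    return math.dist(p_video, p_ct)

def mean_tre(video_targets, ct_targets):
    """Average TRE over the internal fiducials (coordinates in mm)."""
    pairs = zip(video_targets, ct_targets)
    return sum(target_registration_error(v, c) for v, c in pairs) / len(ct_targets)

video = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
ct = [(3.0, 4.0, 0.0), (1.0, 1.0, 1.0)]
print(mean_tre(video, ct))  # (5.0 + 0.0) / 2 = 2.5
```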
Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit; Ruan, Dan
2016-01-15
Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average, as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al. approach, while performing comparably to Ruan and Keall. Conclusions: This work improves upon the quality of the Sawant et al. approach, but does so without sacrificing run-time performance. In addition, using this framework allows for complex leaf-fitting strategies that can be used to account for PTV/OAR trade-off during real-time MLC tracking.
Eye Tracking Outcomes in Tobacco Control Regulation and Communication: A Systematic Review.
Meernik, Clare; Jarman, Kristen; Wright, Sarah Towner; Klein, Elizabeth G; Goldstein, Adam O; Ranney, Leah
2016-10-01
In this paper we synthesize the evidence from eye tracking research in tobacco control to inform tobacco regulatory strategies and tobacco communication campaigns. We systematically searched 11 databases for studies that reported eye tracking outcomes with regard to tobacco regulation and communication. Two coders independently reviewed studies for inclusion and abstracted study characteristics and findings. Eighteen studies met full criteria for inclusion. Eye tracking studies on health warnings consistently showed that these warnings were often ignored, though eye tracking demonstrated that novel warnings, graphic warnings, and plain packaging can increase attention toward warnings. Eye tracking also revealed that greater visual attention to warnings on advertisements and packages consistently was associated with cognitive processing as measured by warning recall. Eye tracking is a valid indicator of attention, cognitive processing, and memory. The use of this technology in tobacco control research complements existing methods in tobacco regulatory and communication science; it also can be used to examine the effects of health warnings and other tobacco product communications on consumer behavior in experimental settings prior to the implementation of novel health communication policies. However, the utility of eye tracking will be enhanced by the standardization of methodology and reporting metrics.
Performance regression manager for large scale systems
Faraj, Daniel A.
2017-10-17
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
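A hypothetical sketch of the comparison the claims describe. The patent's predefined output-file format is not specified in the abstract, so a simple "name value" line format is assumed here:

```python
def parse_metrics(text):
    """Parse 'name value' lines (a stand-in for the patent's predefined
    output-file format) into a dict of metric name -> float value."""
    return {name: float(value)
            for name, value in (line.split() for line in text.strip().splitlines())}

def compare_runs(first_text, second_text, tolerance=0.05):
    """Relative change of each metric in the first run versus the second;
    values rising more than `tolerance` are flagged (assumes higher = worse,
    as for run time or memory)."""
    first, second = parse_metrics(first_text), parse_metrics(second_text)
    report = {}
    for name, base in second.items():
        change = (first[name] - base) / base
        report[name] = ("REGRESSION" if change > tolerance else "ok", round(change, 3))
    return report

report = compare_runs("runtime_s 12.0\nmem_mb 100", "runtime_s 10.0\nmem_mb 100")
print(report)  # runtime_s rose 20% -> flagged; mem_mb unchanged -> ok
```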
Zone calculation as a tool for assessing performance outcome in laparoscopic suturing.
Buckley, Christina E; Kavanagh, Dara O; Nugent, Emmeline; Ryan, Donncha; Traynor, Oscar J; Neary, Paul C
2015-06-01
Simulator performance is measured by metrics, which are valued as an objective way of assessing trainees. Certain procedures such as laparoscopic suturing, however, may not be suitable for assessment under traditionally formulated metrics. Our aim was to assess whether our new metric is a valid method of assessing laparoscopic suturing. A software program was developed in order to create a new metric that calculates the percentage of time spent operating within pre-defined areas called "zones." Twenty-five candidates (medical students N = 10, surgical residents N = 10, and laparoscopic experts N = 5) performed the laparoscopic suturing task on the ProMIS III® simulator. New metrics of "in-zone" and "out-zone" scores as well as traditional metrics of time, path length, and smoothness were generated. Performance was also assessed by two blinded observers using the OSATS and FLS rating scales. This novel metric was evaluated by comparing it to both traditional metrics and subjective scores. There was a significant difference in the average in-zone and out-zone scores between all three experience groups (p < 0.05). The new zone metric scores correlated significantly with the subjective blinded-observer scores of OSATS and FLS (p = 0.0001). The new zone metric scores also correlated significantly with the traditional metrics of path length, time, and smoothness (p < 0.05). The new metric is a valid tool for assessing laparoscopic suturing objectively. This could be incorporated into a competency-based curriculum to monitor resident progression in the simulated setting.
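The in-zone score can be sketched as the fraction of sampled instrument positions falling inside pre-defined zones. A minimal 2-D version with invented zone geometry (the actual zones live in the simulator's 3-D task space):

```python
def in_zone_score(samples, zones):
    """Percentage of regularly sampled instrument-tip positions that fall
    inside any pre-defined zone. Zones here are 2-D axis-aligned rectangles
    (xmin, xmax, ymin, ymax), a simplification for illustration."""
    hits = sum(
        any(xmin <= x <= xmax and ymin <= y <= ymax
            for (xmin, xmax, ymin, ymax) in zones)
        for (x, y) in samples
    )
    return 100.0 * hits / len(samples)

zones = [(0.0, 1.0, 0.0, 1.0)]  # one target zone
samples = [(0.5, 0.5), (2.0, 2.0), (0.2, 0.9), (5.0, 5.0)]
print(in_zone_score(samples, zones))  # 2 of 4 samples in-zone -> 50.0
```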
Nekton Interaction Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-15
The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).
Distributed Tracking Fidelity-Metric Performance Analysis Using Confusion Matrices
2012-07-01
The innovation covariance is given by $S_k^t = H_k^{o_t} P_{k|k-1}^t (H_k^{o_t})^T + \bar{R}_k^t$, where $\bar{R}_k = \begin{bmatrix} R_k & 0 \\ 0 & D_k \end{bmatrix}$. Since $S_k$ is the innovation covariance update, we can use it to validate measurements: $(z_k^t - \hat{z}_{k|k-1}^{l_t})^T [S_k^t]^{-1} (z_k^t - \hat{z}_{k|k-1}^{l_t}) \leq \gamma$ for $l = 1, \ldots, m_k^o$ (15), where $\gamma$ is a validation threshold obtained from a $\chi^2$ table. The covariance is then updated as $P_{k|k}^t = P_{k|k}^* + W_k^t \left[ \sum_{l=1}^{m_k^o} \beta_k^{tl} \nu_k^{tl} (\nu_k^{tl})^T - \nu_k^t (\nu_k^t)^T \right] (W_k^t)^T$ (18), where $P_{k|k}^* = \left[ I - W_k^t H_k^{o_t} \right] P_{k|k-1}^t$ (19) and $W_k^t = P_{k|k-1}^t [H_k^{o_t}]^T (S_k^t)^{-1}$.
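The $\chi^2$ validation gate of Eq. (15) can be sketched directly. A minimal 2-D version in pure Python, with the threshold taken from a chi-square table:

```python
def gate(z, z_pred, S, gamma):
    """Chi-square validation gate: accept measurement z for a track when
    (z - z_pred)^T S^{-1} (z - z_pred) <= gamma, with S the 2x2 innovation
    covariance and gamma drawn from a chi-square table."""
    (a, b), (c, d) = S
    det = a * d - b * c
    s_inv = [[d / det, -b / det], [-c / det, a / det]]
    nu = [z[0] - z_pred[0], z[1] - z_pred[1]]  # innovation
    md2 = sum(nu[i] * s_inv[i][j] * nu[j] for i in range(2) for j in range(2))
    return md2 <= gamma

GAMMA_99 = 9.21  # ~99th percentile of chi-square with 2 degrees of freedom
S = [[1.0, 0.0], [0.0, 1.0]]
print(gate([1.0, 0.0], [0.0, 0.0], S, GAMMA_99))  # distance 1.0 -> accepted
print(gate([4.0, 0.0], [0.0, 0.0], S, GAMMA_99))  # distance 16.0 -> rejected
```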
CONTACT: An Air Force technical report on military satellite control technology
NASA Astrophysics Data System (ADS)
Weakley, Christopher K.
1993-07-01
This technical report focuses on Military Satellite Control Technologies and their application to the Air Force Satellite Control Network (AFSCN). This report is a compilation of articles that provide an overview of the AFSCN and the Advanced Technology Program, and discusses relevant technical issues and developments applicable to the AFSCN. Among the topics covered are articles on Future Technology Projections; Future AFSCN Topologies; Modeling of the AFSCN; Wide Area Communications Technology Evolution; Automating AFSCN Resource Scheduling; Health & Status Monitoring at Remote Tracking Stations; Software Metrics and Tools for Measuring AFSCN Software Performance; Human-Computer Interface Working Group; Trusted Systems Workshop; and the University Technical Interaction Program. In addition, Key Technology Area points of contact are listed in the report.
Evaluating Descriptive Metrics of the Human Cone Mosaic
Cooper, Robert F.; Wilk, Melissa A.; Tarima, Sergey; Carroll, Joseph
2016-01-01
Purpose To evaluate how metrics used to describe the cone mosaic change in response to simulated photoreceptor undersampling (i.e., cell loss or misidentification). Methods Using an adaptive optics ophthalmoscope, we acquired images of the cone mosaic from the center of fixation to 10° along the temporal, superior, inferior, and nasal meridians in 20 healthy subjects. Regions of interest (n = 1780) were extracted at regular intervals along each meridian. Cone mosaic geometry was assessed using a variety of metrics − density, density recovery profile distance (DRPD), nearest neighbor distance (NND), intercell distance (ICD), farthest neighbor distance (FND), percentage of six-sided Voronoi cells, nearest neighbor regularity (NNR), number of neighbors regularity (NoNR), and Voronoi cell area regularity (VCAR). The “performance” of each metric was evaluated by determining the level of simulated loss necessary to obtain 80% statistical power. Results Of the metrics assessed, NND and DRPD were the least sensitive to undersampling, classifying mosaics that lost 50% of their coordinates as indistinguishable from normal. The NoNR was the most sensitive, detecting a significant deviation from normal with only a 10% cell loss. Conclusions The robustness of cone spacing metrics makes them unsuitable for reliably detecting small deviations from normal or for tracking small changes in the mosaic over time. In contrast, regularity metrics are more sensitive to diffuse loss and, therefore, better suited for detecting such changes, provided the fraction of misidentified cells is minimal. Combining metrics with a variety of sensitivities may provide a more complete picture of the integrity of the photoreceptor mosaic. PMID:27273598
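A minimal sketch of one of the spacing metrics above, mean nearest-neighbor distance (NND), which the study found insensitive to undersampling: removing a cone from a regular patch can leave the mean NND unchanged. Coordinates below are invented.

```python
import math

def mean_nnd(points):
    """Mean nearest-neighbor distance over a set of cone coordinates
    (O(n^2); fine for a region of interest, not a whole mosaic)."""
    nnds = [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]
    return sum(nnds) / len(nnds)

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(mean_nnd(square))      # each corner's nearest neighbor is 1.0 away
print(mean_nnd(square[:3]))  # still 1.0 after "losing" one cell
```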
Ogawa, Mifuyu; Yamaura, Yuichi; Abe, Shin; Hoshino, Daisuke; Hoshizaki, Kazuhiko; Iida, Shigeo; Katsuki, Toshio; Masaki, Takashi; Niiyama, Kaoru; Saito, Satoshi; Sakai, Takeshi; Sugita, Hisashi; Tanouchi, Hiroyuki; Amano, Tatsuya; Taki, Hisatomo; Okabe, Kimiko
2011-07-01
Many indicators/indices provide information on whether the 2010 biodiversity target of reducing declines in biodiversity has been achieved. The strengths and limitations of the measures used to assess progress toward this target are now being discussed. Biodiversity dynamics are often evaluated by a single biological population metric, such as the abundance of each species. Here we examined tree population dynamics of 52 families (192 species) at 11 research sites (three vegetation zones) of Japanese old-growth forests using two population metrics: number of stems and basal area. We calculated indices that track the rate of change in all tree species by taking the geometric mean of changes in population metrics between the 1990s and the 2000s at the national level and at the levels of the vegetation zone and family. We specifically focused on whether indices based on these two metrics behaved similarly. The indices showed that (1) the number of stems declined, whereas basal area did not change at the national level and (2) the degree of change in the indices varied by vegetation zone and family. These results suggest that Japanese old-growth forests have not degraded and may even be developing in some vegetation zones, and indicate that the use of a single population metric (or indicator/index) may be insufficient to precisely understand the state of biodiversity. It is therefore important to incorporate more metrics into monitoring schemes to overcome the risk of misunderstanding or misrepresenting biodiversity dynamics.
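The index described above, the geometric mean of per-species change ratios, can be sketched as follows (species names and counts are invented):

```python
import math

def gm_index(before, after):
    """Geometric mean of per-species change ratios in a population metric
    (stem count or basal area) between two census periods."""
    logs = [math.log(after[sp] / before[sp]) for sp in before]
    return math.exp(sum(logs) / len(logs))

stems_1990s = {"Fagus": 10.0, "Quercus": 20.0}
stems_2000s = {"Fagus": 20.0, "Quercus": 10.0}
print(gm_index(stems_1990s, stems_2000s))  # doubling and halving cancel: 1.0
```

Working in log space makes a doubling and a halving offset exactly, which is why the geometric (rather than arithmetic) mean is the standard choice for such indices.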
Common world model for unmanned systems: Phase 2
NASA Astrophysics Data System (ADS)
Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry
2014-06-01
The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design API and data structures, and we review the use of the Common World Model as part of a semantic navigation use case.
Mort, Elizabeth A; Demehin, Akinluwa A; Marple, Keith B; McCullough, Kathryn Y; Meyer, Gregg S
2013-08-01
Hospitals are continually challenged to provide safer and higher-quality patient care despite resource constraints. With an ever-increasing range of quality and safety targets at the national, state, and local levels, prioritization is crucial in effective institutional quality goal setting and resource allocation. Organizational goal-setting theory is a performance improvement methodology with strong results across many industries. The authors describe a structured goal-setting process they have established at Massachusetts General Hospital for setting annual institutional quality and safety goals. Begun in 2008, this process has been conducted on an annual basis. Quality and safety data are gathered from many sources, both internal and external to the hospital. These data are collated and classified, and multiple approaches are used to identify the most pressing quality issues facing the institution. The conclusions are subject to stringent internal review, and then the top quality goals of the institution are chosen. Specific tactical initiatives and executive owners are assigned to each goal, and metrics are selected to track performance. A reporting tool based on these tactics and metrics is used to deliver progress updates to senior hospital leadership. The hospital has experienced excellent results and strong organizational buy-in using this effective, low-cost, and replicable goal-setting process. It has led to improvements in structural, process, and outcomes aspects of quality.
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
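One of the metrics mentioned above, alpha-lambda performance, can be sketched in its simplest point-estimate form (the paper's enhanced form scores the full RUL probability distribution rather than a point prediction). Times and predictions below are invented.

```python
def alpha_lambda_pass(times, rul_preds, eol, lam=0.5, alpha=0.2):
    """Point-estimate alpha-lambda test: at t_lam = t0 + lam * (EOL - t0),
    the predicted remaining useful life (RUL) must lie within
    +/- alpha * (true RUL) of the true RUL."""
    t0 = times[0]
    t_lam = t0 + lam * (eol - t0)
    i = min(range(len(times)), key=lambda k: abs(times[k] - t_lam))
    true_rul = eol - times[i]
    return abs(rul_preds[i] - true_rul) <= alpha * true_rul

times = [0, 10, 20, 30]
print(alpha_lambda_pass(times, [42, 31, 18, 9], eol=40))  # |18-20| <= 4 -> True
print(alpha_lambda_pass(times, [42, 31, 10, 9], eol=40))  # |10-20| > 4 -> False
```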
75 FR 7581 - RTO/ISO Performance Metrics; Notice Requesting Comments on RTO/ISO Performance Metrics
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-22
... performance communicate about the benefits of RTOs and, where appropriate, (2) changes that need to be made to... of staff from all the jurisdictional ISOs/RTOs to develop a set of performance metrics that the ISOs/RTOs will use to report annually to the Commission. Commission staff and representatives from the ISOs...
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
Real time tracking by LOPF algorithm with mixture model
NASA Astrophysics Data System (ADS)
Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo
2007-11-01
A new particle filter, the Local Optimum Particle Filter (LOPF), is presented for tracking objects accurately and stably in visual sequences in real time, a challenging task in computer vision. To use the particles efficiently, we first apply the Sobel operator to extract the object's contour. We then employ a new Local Optimum algorithm to auto-initialize a certain number of particles centered on these edge points. The main advantage of this approach over the random particle selection of the conventional particle filter is that we can concentrate on the most promising candidates and avoid unnecessary computation on negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease computational cost. Because the threshold strongly affects the results, we adopt an adaptive threshold-selection method to obtain the optimal Sobel output. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. We use the contour cue to select particles and the color cue to describe targets in a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show improved performance over the standard particle filter; the improvement is especially evident when the target is occluded in real video, where the standard particle filter usually fails.
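The Bhattacharyya-coefficient metric mentioned above can be sketched directly for normalized histograms (e.g., color histograms of the target model and a candidate); the histogram values below are invented:

```python
import math

def bhattacharyya_distance(p, q):
    """d = sqrt(1 - BC), with BC = sum_i sqrt(p_i * q_i) the Bhattacharyya
    coefficient of two normalized histograms; small d means similar."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.sqrt(1.0 - min(bc, 1.0))  # clamp guards against rounding

model = [0.5, 0.3, 0.2]
print(bhattacharyya_distance(model, model))            # identical: near 0
print(bhattacharyya_distance([1.0, 0.0], [0.0, 1.0]))  # disjoint: 1.0
```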
Position Extrema in Keplerian Relative Motion: A Gröbner Basis Approach
NASA Astrophysics Data System (ADS)
Allgeier, Shawn E.; Fitz-Coy, Norman G.; Erwin, R. Scott
2012-12-01
This paper analyzes the relative motion between two spacecraft in orbit. Specifically, the paper provides bounds for relative spacecraft position-based measures which impact spacecraft formation-flight mission design and analysis. Previous efforts have provided bounds for the separation distance between two spacecraft. This paper presents a methodology for bounding the local vertical, horizontal, and cross track components of the relative position vector in a spacecraft centered, rotating reference frame. Three metrics are derived and a methodology for bounding them is presented. The solution of the extremal equations for the metrics is formulated as an affine variety and obtained using a Gröbner basis reduction. No approximations are utilized and the only assumption is that the two spacecraft are in bound Keplerian orbits. Numerical examples are included to demonstrate the efficacy of the method. The metrics have utility to the mission designer of formation flight architectures, with relevance to Earth observation constellations.
Tracking linkage to HIV care for former prisoners
Montague, Brian T.; Rosen, David L.; Solomon, Liza; Nunn, Amy; Green, Traci; Costa, Michael; Baillargeon, Jacques; Wohl, David A.; Paar, David P.; Rich, Josiah D.; Study Group, on behalf of the LINCS
2012-01-01
Improving testing and uptake of care among highly impacted populations is a critical element of Seek, Test, Treat and Retain strategies for reducing HIV incidence in the community. HIV disproportionately impacts prisoners. Although incarceration provides an opportunity to diagnose and initiate therapy, treatment is frequently disrupted after release. Although model programs exist to support linkage to care on release, there is a lack of scalable metrics with which to assess the adequacy of linkage to care after release. Linking data from Ryan White program Client Level Data (CLD) files reported to HRSA with corrections release data offers an attractive means of generating these metrics. Identified only by a confidential encrypted Unique Client Identifier (eUCI), these CLD files allow collection of key clinical indicators across the system of Ryan White-funded providers. Using eUCIs generated from corrections release data sets as a linkage tool, the time to the first service at community providers, along with key clinical indicators of patient status at entry into care, can be determined as a measure of linkage adequacy. Using this strategy, high- and low-performing sites can be identified, and best practices can be reproduced in other settings. PMID:22561157
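The linkage-adequacy measure described, time from release to first service, can be sketched given records already matched on the eUCI. The dates below are invented:

```python
from datetime import date

def days_to_linkage(release_date, service_dates):
    """Days from prison release to the first Ryan White service record for
    the same eUCI; None if no service followed release."""
    after = [d for d in service_dates if d >= release_date]
    return (min(after) - release_date).days if after else None

release = date(2020, 1, 1)
services = [date(2019, 12, 1), date(2020, 2, 1), date(2020, 6, 1)]
print(days_to_linkage(release, services))  # first post-release visit: 31 days
```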
Temporal turnover and the maintenance of diversity in ecological assemblages
Magurran, Anne E.; Henderson, Peter A.
2010-01-01
Temporal variation in species abundances occurs in all ecological communities. Here, we explore the role that this temporal turnover plays in maintaining assemblage diversity. We investigate a three-decade time series of estuarine fishes and show that the abundances of the individual species fluctuate asynchronously around their mean levels. We then use a time-series modelling approach to examine the consequences of different patterns of turnover, by asking how the correlation between the abundance of a species in a given year and its abundance in the previous year influences the structure of the overall assemblage. Classical diversity measures that ignore species identities reveal that the observed assemblage structure will persist under all but the most extreme conditions. However, metrics that track species identities indicate a narrower set of turnover scenarios under which the predicted assemblage resembles the natural one. Our study suggests that species diversity metrics are insensitive to change and that measures that track species ranks may provide better early warning that an assemblage is being perturbed. It also highlights the need to incorporate temporal turnover in investigations of assemblage structure and function. PMID:20980310
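The year-over-year correlation structure examined above can be sketched as a simple AR(1) process for one species' abundance, a minimal stand-in for the paper's time-series modelling approach:

```python
import random

def simulate_abundance(mean_n, rho, sigma, years, seed=1):
    """AR(1)-style fluctuation of one species around its long-term mean:
    n_{t+1} = mean + rho * (n_t - mean) + noise, where rho is the
    year-to-year correlation varied in the turnover scenarios."""
    rng = random.Random(seed)
    n = [float(mean_n)]
    for _ in range(years - 1):
        n.append(mean_n + rho * (n[-1] - mean_n) + rng.gauss(0.0, sigma))
    return n

print(simulate_abundance(100, 0.5, 0.0, 5))  # no noise: stays at the mean
```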
Cosmological histories in bimetric gravity: a graphical approach
NASA Astrophysics Data System (ADS)
Mörtsell, E.
2017-02-01
The bimetric generalization of general relativity has been shown to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as on the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.
Practical nonlinear method for detection of respiratory and cardiac dysfunction in human subjects
NASA Astrophysics Data System (ADS)
Katz, Richard A.; Lawee, Michael S.; Newman, Anthony K.; Weiss, J. Woodrow; Chandra, Shalabh; Grimm, Richard A.; Thomas, James D.
1995-12-01
This research applies novel nonlinear signal detection techniques in studies of human subjects with respiratory and cardiac diseases. One of the studies concerns a breathing disorder during sleep, a disease called Obstructive Sleep Apnea (OSA). In a second study we investigate a disease of the heart, Atrial Fibrillation (AF). The former study involves nonlinear processing of the time sequences of sleep apnea recordings (cardio-respirograms) collected from patients with known obstructive sleep apnea, and from a normal control. In the latter study, we apply similar nonlinear metrics to Doppler flow measurements obtained by transesophageal echocardiography (TEE). One of our metrics, the 'chaotic radius', is used for tracking the position of points in phase space relative to some reference position. A second metric, the 'differential radius', provides a measure of the separation rate of contiguous (evolving) points in phase space. A third metric, the 'chaotic frequency', gives the angular position of the phase-space orbit as a function of time. All are useful for identifying changes in physiologic condition that are not always apparent using conventional methods.
Effective monitoring of agriculture: a response.
Sachs, Jeffrey D; Remans, Roseline; Smukler, Sean M; Winowiecki, Leigh; Andelman, Sandy J; Cassman, Kenneth G; Castle, David; DeFries, Ruth; Denning, Glenn; Fanzo, Jessica; Jackson, Louise E; Leemans, Rik; Lehmann, Johannes; Milder, Jeffrey C; Naeem, Shahid; Nziguheba, Generose; Palm, Cheryl A; Pingali, Prabhu L; Reganold, John P; Richter, Daniel D; Scherr, Sara J; Sircely, Jason; Sullivan, Clare; Tomich, Thomas P; Sanchez, Pedro A
2012-03-01
The development of effective agricultural monitoring networks is essential to track, anticipate and manage changes in the social, economic and environmental aspects of agriculture. We welcome the perspective of Lindenmayer and Likens (J. Environ. Monit., 2011, 13, 1559) as published in the Journal of Environmental Monitoring on our earlier paper, "Monitoring the World's Agriculture" (Sachs et al., Nature, 2010, 466, 558-560). In this response, we address their three main critiques labeled as 'the passive approach', 'the problem with uniform metrics' and 'the problem with composite metrics'. We expand on specific research questions at the core of the network design, on the distinction between key universal and site-specific metrics to detect change over time and across scales, and on the need for composite metrics in decision-making. We believe that simultaneously measuring indicators of the three pillars of sustainability (environmentally sound, socially responsible and economically viable) in an effectively integrated monitoring system will ultimately allow scientists and land managers alike to find solutions to the most pressing problems facing global food security. This journal is © The Royal Society of Chemistry 2012
Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans
2017-01-01
A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and the ratio of the number of active compounds to the total number of compounds, while at the same time being sensitive to variations in model quality. It possesses the correct characteristics for application in early-recognition virtual screening problems.
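As defined above, the power metric is directly computable from confusion-matrix counts at a chosen cutoff. A minimal sketch (the function name and example counts are illustrative, not taken from the paper):

```python
def power_metric(tp, fp, fn, tn):
    """Power metric: TPR / (TPR + FPR) at a given cutoff threshold."""
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)  # false positive rate
    return tpr / (tpr + fpr)

# Example screen: 40 of 50 actives and 100 of 950 inactives pass the cutoff.
print(round(power_metric(tp=40, fp=100, fn=10, tn=850), 3))  # → 0.884
```

Because each rate is normalized within its own class, the value changes little with the active/inactive ratio, which is consistent with the robustness the authors report.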
Uncooperative target-in-the-loop performance with backscattered speckle-field effects
NASA Astrophysics Data System (ADS)
Kansky, Jan E.; Murphy, Daniel V.
2007-09-01
Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range that the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.
Impact of Different Economic Performance Metrics on the Perceived Value of Solar Photovoltaics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, E.; Denholm, P.; Margolis, R.
2011-10-01
Photovoltaic (PV) systems are installed by several types of market participants, ranging from residential customers to large-scale project developers and utilities. Each type of market participant frequently uses a different economic performance metric to characterize PV value because they are looking for different types of returns from a PV investment. This report finds that different economic performance metrics frequently show different price thresholds for when a PV investment becomes profitable or attractive. Several project parameters, such as financing terms, can have a significant impact on some metrics [e.g., internal rate of return (IRR), net present value (NPV), and benefit-to-cost (B/C) ratio] while having a minimal impact on other metrics (e.g., simple payback time). As such, the choice of economic performance metric by different customer types can significantly shape each customer's perception of PV investment value and ultimately their adoption decision.
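The asymmetry between discounting-based and payback-based metrics can be shown with a small sketch (system cost, savings, lifetime, and discount rate below are hypothetical, not values from the report):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) upfront cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simple_payback(cost, annual_saving):
    """Years to recoup the upfront cost, ignoring discounting entirely."""
    return cost / annual_saving

# Hypothetical PV system: $10,000 installed, $800/year bill savings,
# 25-year life, 5% discount rate (numbers invented for illustration).
flows = [-10_000] + [800] * 25
print(round(npv(0.05, flows), 2))   # sensitive to the discount rate
print(simple_payback(10_000, 800))  # → 12.5, unchanged by financing terms
```

Varying the rate argument moves the NPV result while the simple payback stays fixed, which is exactly the kind of metric-dependent sensitivity the report describes.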
Bernard, Aaron W.; Ceccolini, Gabbriel; Feinn, Richard; Rockfeld, Jennifer; Rosenberg, Ilene; Thomas, Listy; Cassese, Todd
2017-01-01
ABSTRACT Background: Performance feedback is considered essential to clinical skills development. Formative objective structured clinical exams (F-OSCEs) often include immediate feedback by standardized patients. Students can also be provided access to performance metrics including scores, checklists, and video recordings after the F-OSCE to supplement this feedback. How often students choose to review this data and how review impacts future performance has not been documented. Objective: We suspect student review of F-OSCE performance data is variable. We hypothesize that students who review this data have better performance on subsequent F-OSCEs compared to those who do not. We also suspect that frequency of data review can be improved with faculty involvement in the form of student-faculty debriefing meetings. Design: Simulation recording software tracks and time stamps student review of performance data. We investigated a cohort of first- and second-year medical students from the 2015-16 academic year. Basic descriptive statistics were used to characterize frequency of data review and a linear mixed-model analysis was used to determine relationships between data review and future F-OSCE performance. Results: Students reviewed scores (64%), checklists (42%), and videos (28%) in decreasing frequency. Frequency of review of all metrics and modalities improved when student-faculty debriefing meetings were conducted (p<.001). Among 92 first-year students, checklist review was associated with an improved performance on subsequent F-OSCEs (p = 0.038) by 1.07 percentage points on a scale of 0-100. Among 86 second-year students, no review modality was associated with improved performance on subsequent F-OSCEs. Conclusion: Medical students review F-OSCE checklists and video recordings less than 50% of the time when not prompted. Student-faculty debriefing meetings increased student data reviews.
First-year students' review of checklists on F-OSCEs was associated with increases in performance on subsequent F-OSCEs; however, this outcome was not observed among second-year students. PMID:28521646
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method based on the OMB Generic Method, which should be more likely to produce high-quality metrics that result in continuous process improvement.
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
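Of the metrics named, SSIM is the easiest to illustrate. A simplified single-window version over flattened pixel lists (standard SSIM averages this over local windows; the constants follow the common K1 = 0.01, K2 = 0.03 choice):

```python
def global_ssim(x, y, dynamic_range=255):
    """Single-window SSIM between two equal-length pixel lists.

    Simplified sketch: production SSIM computes this per local window
    and averages the resulting map."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * dynamic_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * dynamic_range) ** 2  # stabilizes the contrast term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [10, 50, 90, 130, 170, 210]
print(round(global_ssim(ref, ref), 6))  # identical inputs → 1.0
```

Unlike pixelwise error measures, the covariance term rewards preserved structure, which is why SSIM correlates better with perceived quality than mean squared error.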
Tang, Tao; Stevenson, R Jan; Infante, Dana M
2016-10-15
Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. 
Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaganti, Shikha; Nelson, Katrina; Mundy, Kevin; Luo, Yifu; Harrigan, Robert L.; Damon, Steve; Fabbri, Daniel; Mawn, Louise; Landman, Bennett
2016-03-01
Pathologies of the optic nerve and orbit impact millions of Americans, and quantitative assessment of the orbital structures on 3-D imaging would provide objective markers to enhance diagnostic accuracy, improve timely intervention, and eventually preserve visual function. Recent studies have shown that the multi-atlas methodology is suitable for identifying orbital structures, but challenges arise in the identification of the individual extraocular rectus muscles that control eye movement. This is increasingly problematic in diseased eyes, where these muscles often appear to fuse at the back of the orbit (at the resolution of clinical computed tomography imaging) due to inflammation or crowding. We propose the use of Kalman filters to track the muscles in three dimensions to refine multi-atlas segmentation and resolve ambiguity due to imaging resolution, noise, and artifacts. The purpose of our study is to investigate a method of automatically generating orbital metrics from CT imaging and demonstrate the utility of the approach by correlating structural metrics of the eye orbit with clinical data and visual function measures in subjects with thyroid eye disease. The pilot study demonstrates that automatically calculated orbital metrics are strongly correlated with several clinical characteristics. Moreover, it is shown that the superior, inferior, medial and lateral rectus muscles obtained using Kalman filters are each correlated with different categories of functional deficit. These findings serve as a foundation for further investigation into the use of CT imaging in the study, analysis and diagnosis of ocular diseases, specifically thyroid eye disease.
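The paper's 3-D tracker is not spelled out here, but the underlying idea, a Kalman filter smoothing a muscle-centroid coordinate from slice to slice, can be sketched in one dimension with a constant-velocity model (state model, noise values, and measurements are all illustrative):

```python
def kalman_track_1d(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for one coordinate of a muscle
    centroid tracked across consecutive slices (illustrative sketch)."""
    x, v = measurements[0], 0.0           # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in measurements:
        # predict (dt = 1 slice): x' = x + v, covariance P' = F P Fᵀ + Q
        x = x + v
        p = [[p[0][0] + p[1][0] + p[0][1] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with the measured centroid z (observation H = [1, 0])
        s = p[0][0] + r                   # innovation variance
        k0, k1 = p[0][0] / s, p[1][0] / s # Kalman gain
        y = z - x                         # innovation
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Noisy centroid positions along consecutive CT slices (invented values)
smoothed = kalman_track_1d([10.0, 10.6, 10.9, 11.7, 12.1, 12.8])
print(len(smoothed))  # → 6
```

Where two muscles appear fused, the predicted position from the motion model is what lets the tracker carry each muscle's identity through the ambiguous slices.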
Prendergast, Geoffrey P.; Staff, Michael
2017-01-01
Introduction: This study examines the use of the number of night-time sleep disturbances as a health-based metric to assess the cost effectiveness of rail noise mitigation strategies for situations wherein high-intensity noises dominate, such as freight train pass-bys and wheel squeal. Materials and Methods: Twenty residential properties adjacent to the existing and proposed rail tracks in a noise catchment area of the Epping to Thornleigh Third Track project were used as a case study. Awakening probabilities were calculated for individuals awakening 1, 3 and 5 times a night when subjected to 10 independent freight train pass-by noise events, using internal maximum sound pressure levels (LAFmax). Results: Awakenings were predicted using a random intercept multivariate logistic regression model. With source mitigation in place, the majority of the residents were still predicted to be awoken at least once per night (median 88.0%), although substantial reductions in the median probabilities of awakening three and five times per night, from 50.9 to 29.4% and 9.2 to 2.7%, respectively, were predicted. This resulted in a cost-effectiveness estimate of 7.6–8.8 fewer people being awoken at least three times per night per A$1 million spent on noise barriers. Conclusion: The study demonstrates that an easily understood metric can be readily used to assist decisions related to noise mitigation for large-scale transport projects. PMID:29192613
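With independent pass-by events, the per-night awakening probabilities reduce to a binomial tail sum. A sketch (the per-event probability below is invented for illustration; the study estimated it from internal LAFmax via a logistic regression model):

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of awakening at least k times over n independent
    noise events, each with per-event awakening probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical per-pass-by awakening probability of 0.10 over 10 freight
# train pass-bys per night (illustrative, not a value from the study).
p1 = p_at_least(1, 10, 0.10)  # awoken at least once
p3 = p_at_least(3, 10, 0.10)  # awoken at least three times
print(round(p1, 3), round(p3, 3))
```

Note how a modest per-event probability still yields a high chance of at least one awakening per night, mirroring the study's finding that mitigation mainly reduces the multiple-awakening probabilities.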
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
NASA Astrophysics Data System (ADS)
Uyeda, K. A.; Stow, D. A.; Roberts, D. A.; Riggan, P. J.
2015-12-01
Multi-temporal satellite imagery can provide valuable information on patterns of vegetation growth over large spatial extents and long time periods, but corresponding ground-referenced biomass information is often difficult to acquire, especially at an annual scale. In this study, I test the relationship between annual biomass estimated using shrub growth rings and metrics of seasonal growth derived from Moderate Resolution Imaging Spectroradiometer (MODIS) spectral vegetation indices (SVIs) for a small area of southern California chaparral to evaluate the potential for mapping biomass at larger spatial extents. The site had most recently burned in 2002, and annual biomass accumulation measurements were available from years 5–11 post-fire. I tested metrics of seasonal growth using six SVIs (Normalized Difference Vegetation Index, Enhanced Vegetation Index, Soil Adjusted Vegetation Index, Normalized Difference Water Index, Normalized Difference Infrared Index 6, and Vegetation Atmospherically Resistant Index). While additional research would be required to determine which of these metrics and SVIs are most promising over larger spatial extents, several of the seasonal growth metric/SVI combinations have a very strong relationship with annual biomass, and all SVIs have a strong relationship with annual biomass for at least one of the seasonal growth metrics.
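As an example of the indices involved, NDVI is a simple band ratio, and a seasonal-growth metric can be as simple as accumulating it over a season (reflectance values and the seasonal-sum metric below are illustrative, not the study's exact formulation):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def seasonal_sum(svi_series):
    """A simple seasonal-growth metric: the sum of an SVI over one season."""
    return sum(svi_series)

# Illustrative reflectances for regrowing chaparral, then a short
# seasonal NDVI trajectory accumulated into one growth metric.
print(round(ndvi(nir=0.45, red=0.12), 3))  # → 0.579
season = [ndvi(n, r) for n, r in [(0.30, 0.15), (0.40, 0.13), (0.45, 0.12)]]
print(round(seasonal_sum(season), 3))
```

In practice such metrics are computed per pixel from MODIS composites, so the same accumulation extends directly to mapping over larger extents.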
Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.
Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery
2017-01-01
The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported ad effectiveness, as well as the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures could actually be used for predicting views for advertising on YouTube.
MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.
Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram
2015-11-01
We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
Gevins, Alan; Chan, Cynthia S.; Jiang, An; Sam-Vargas, Lita
2012-01-01
Objective Extend a method to track neurophysiological pharmacodynamics during repetitive cognitive testing to a more complex "lifelike" task. Methods Alcohol was used as an exemplar psychoactive substance. An equation, derived in an exploratory analysis to detect alcohol's EEG effects during repetitive cognitive testing, was validated in a confirmatory study on a new group whose EEGs after alcohol and placebo were recorded during working memory testing and while operating an automobile driving simulator. Results The equation recognized alcohol by combining five times beta plus theta power. It worked well (p<.0001) when applied to both tasks in the confirmatory group. The maximum EEG effect occurred 2–2.5 hours after drinking (>1 hr after peak BAC) and remained at 90% at 3.5–4 hours (BAC <50% of peak). Individuals varied in the magnitude and timing of the EEG effect. Conclusion The equation tracked the EEG response to alcohol in the confirmatory study during both repetitive cognitive testing and a more complex "lifelike" task. The EEG metric was more sensitive to alcohol than several autonomic physiological measures, task performance measures or self-reports. Significance Using EEG as a biomarker to track neurophysiological pharmacodynamics during complex "lifelike" activities may prove useful for assessing how drugs affect integrated brain functioning. PMID:23194853
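The reported discriminant has a very simple form, five times beta-band power plus theta-band power. A sketch (the band-power inputs are invented values; spectral estimation from raw EEG is not shown):

```python
def alcohol_eeg_index(beta_power, theta_power):
    """EEG alcohol index in the reported form: 5 * beta + theta band power."""
    return 5.0 * beta_power + theta_power

# Illustrative band powers before and after drinking (invented values)
baseline = alcohol_eeg_index(beta_power=1.2, theta_power=4.0)    # → 10.0
post_drink = alcohol_eeg_index(beta_power=1.8, theta_power=5.5)  # → 14.5
print(post_drink > baseline)  # → True
```

Tracking this scalar over the session is what produces the pharmacodynamic time course described, peaking 2–2.5 hours after drinking.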
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2017-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Data required includes metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
Grading the Metrics: Performance-Based Funding in the Florida State University System
ERIC Educational Resources Information Center
Cornelius, Luke M.; Cavanaugh, Terence W.
2016-01-01
A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-05-01
The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=-2.487 (-2.040 to -0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=-2.272 (-0.028 to -0.002). ANOVA reported significant differences across years of experience (0-1, 1-2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.
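The consultant-versus-trainee comparisons above can be illustrated with a pooled-variance two-sample t statistic. The scores below are invented, and the simple equal-variance form is assumed; the study's actual analysis would also report p-values and confidence intervals.

```python
# Pooled-variance independent-samples t statistic, illustrating the
# expert-vs-trainee metric comparisons. All scores are invented.
import statistics as st

def independent_t(a, b):
    """Equal-variance (pooled) two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

consultants = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]  # e.g. targeting error (mm)
trainees = [2.9, 3.1, 2.7, 3.4, 2.8, 3.0]
print(round(independent_t(consultants, trainees), 2))
```

A negative t here means consultants scored lower (better) on the error metric, mirroring the direction of the reported targeting result.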
Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance
NASA Technical Reports Server (NTRS)
Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.
2010-01-01
PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.
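Fractional polynomial regression of the kind used above can be sketched as a search over a standard set of powers for a first-degree model, task time ≈ b0 + b1·x^p. The power set, the data, and all variable names below are assumptions for illustration, not the study's fitted values.

```python
# Toy first-degree fractional polynomial fit: try each power p,
# fit time ~ b0 + b1 * x**p by least squares, keep the lowest-SSE fit.
import math

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # p = 0 means log(x) by convention

def transform(x, p):
    return math.log(x) if p == 0 else x ** p

def fit_fp1(xs, ys):
    best = None
    for p in POWERS:
        ts = [transform(x, p) for x in xs]
        n = len(xs)
        tbar, ybar = sum(ts) / n, sum(ys) / n
        b1 = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
             sum((t - tbar) ** 2 for t in ts)
        b0 = ybar - b1 * tbar
        sse = sum((y - (b0 + b1 * t)) ** 2 for t, y in zip(ts, ys))
        if best is None or sse < best[0]:
            best = (sse, p, b0, b1)
    return best  # (sse, power, intercept, slope)

# Invented data: strength-to-body-weight ratio vs task completion time (s)
x = [0.4, 0.6, 0.8, 1.0, 1.2, 1.5]
y = [52.0, 34.0, 26.0, 21.0, 17.5, 14.0]
print(fit_fp1(x, y))
```

With these made-up points, time falls roughly as 1/x, so the search settles on p = -1, the intuitive "time inversely proportional to relative strength" shape.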
Ramot, Daniel; Johnson, Brandon E.; Berry, Tommie L.; Carnell, Lucinda; Goodman, Miriam B.
2008-01-01
Background Caenorhabditis elegans locomotion is a simple behavior that has been widely used to dissect genetic components of behavior, synaptic transmission, and muscle function. Many of the paradigms that have been created to study C. elegans locomotion rely on qualitative experimenter observation. Here we report the implementation of an automated tracking system developed to quantify the locomotion of multiple individual worms in parallel. Methodology/Principal Findings Our tracking system generates a consistent measurement of locomotion that allows direct comparison of results across experiments and experimenters and provides a standard method to share data between laboratories. The tracker utilizes a video camera attached to a zoom lens and a software package implemented in MATLAB®. We demonstrate several proof-of-principle applications for the tracker including measuring speed in the absence and presence of food and in the presence of serotonin. We further use the tracker to automatically quantify the time course of paralysis of worms exposed to aldicarb and levamisole and show that tracker performance compares favorably to data generated using a hand-scored metric. Conclusions/Significance Although this is not the first automated tracking system developed to measure C. elegans locomotion, our tracking software package is freely available and provides a simple interface that includes tools for rapid data collection and analysis. By contrast with other tools, it is not dependent on a specific set of hardware. We propose that the tracker may be used for a broad range of additional worm locomotion applications including genetic and chemical screening. PMID:18493300
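The tracker's core speed measurement can be sketched from per-frame centroid positions. The frame rate, pixel-to-millimeter calibration, and sample track below are invented, not the package's actual defaults.

```python
# Mean locomotion speed (mm/s) from per-frame (x, y) pixel centroids,
# a minimal stand-in for the tracker's speed measurement. The fps and
# mm_per_px calibration values are illustrative assumptions.
def mean_speed(centroids, fps=2.0, mm_per_px=0.01):
    """Average speed over a track of (x, y) pixel centroids."""
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])]
    return sum(dists) * mm_per_px * fps / len(dists)

track = [(0, 0), (3, 4), (6, 8), (9, 12)]  # straight path, 5 px per frame
print(round(mean_speed(track), 3))  # 5 px/frame * 0.01 mm/px * 2 fps ≈ 0.1 mm/s
```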
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
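The trade-off problem the abstract describes can be illustrated with a toy Pareto selection: with two conflicting error metrics per model, keep the non-dominated models and weight them, here by inverse total error. The metric names, model errors, and weighting rule are invented for this sketch and are not the study's actual optimization.

```python
# Toy multi-metric ensemble weighting: find the Pareto (non-dominated)
# models under two error metrics, then weight them by inverse total error.
def pareto_front(errors):
    """Indices of models that no other model beats (<=) on every metric."""
    front = []
    for i, e in enumerate(errors):
        dominated = any(all(o[k] <= e[k] for k in range(len(e))) and o != e
                        for j, o in enumerate(errors) if j != i)
        if not dominated:
            front.append(i)
    return front

errors = [(0.2, 0.9), (0.4, 0.3), (0.5, 0.5), (0.9, 0.2)]  # (RMSE_temp, RMSE_precip)
front = pareto_front(errors)
weights = {i: 1.0 / sum(errors[i]) for i in front}
total = sum(weights.values())
weights = {i: w / total for i, w in weights.items()}
print(front, weights)  # model 2 is dominated by model 1 and gets no weight
```

Simple score averaging would blur exactly this structure: model 2 looks middling on both metrics, yet another model is at least as good on each.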
Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa
2014-01-01
Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants’ gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have the potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102
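A cumulative-preference measure of the kind described reduces to the fraction of looking time allocated to the central stimulus versus the lateral distractor. The per-sample gaze labels and encoding below are invented for illustration, not the study's actual analysis pipeline.

```python
# Toy cumulative attention-allocation measure: fraction of gaze samples
# on the central (face) stimulus among samples on either stimulus.
def cumulative_preference(gaze_labels):
    """gaze_labels: per-sample 'C' (central), 'L' (lateral), or '-' (neither)."""
    c = gaze_labels.count("C")
    l = gaze_labels.count("L")
    return c / (c + l) if c + l else float("nan")

trial = "CCCCLLCCCLCC-CC"  # invented eye-tracker samples over one trial
print(round(cumulative_preference(trial), 3))
```

Values above 0.5 indicate the re-engagement pattern reported: attention returns to and accumulates on the central face after the initial saccade away.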
Reilly-Harrington, Noreen A; Sylvia, Louisa G; Leon, Andrew C; Shesler, Leah W; Ketter, Terence A; Bowden, Charles L; Calabrese, Joseph R; Friedman, Edward S; Ostacher, Michael J; Iosifescu, Dan V; Rabideau, Dustin J; Thase, Michael E; Nierenberg, Andrew A
2013-11-01
This paper describes the development and use of the Medication Recommendation Tracking Form (MRTF), a novel method for capturing physician prescribing behavior and clinical decision making. The Bipolar Trials Network developed and implemented the MRTF in a comparative effectiveness study for bipolar disorder (LiTMUS). The MRTF was used to assess the frequency, types, and reasons for medication adjustments. Changes in treatment were operationalized by the metric Necessary Clinical Adjustments (NCA), defined as medication adjustments to reduce symptoms, optimize treatment response and functioning, or address intolerable side effects. Randomized treatment groups did not differ in rates of NCAs; however, responders had significantly fewer NCAs than non-responders. Patients who had more NCAs during their previous visit had significantly lower odds of responding at the current visit. For each one-unit increase in previous CGI-BP depression score and CGI-BP overall severity score, patients had an increased NCA rate of 13% and 15%, respectively, at the present visit. Ten-unit increases in previous Montgomery-Asberg Depression Rating Scale (MADRS) and Young Mania Rating Scale (YMRS) scores resulted in 18% and 14% increases in rates of NCAs, respectively. Patients with fewer NCAs had increased quality of life and decreased functional impairment. The MRTF standardizes the reporting and rationale for medication adjustments and provides an innovative metric for clinical effectiveness. As the first tool in psychiatry to track the types and reasons for medication changes, it has important implications for training new clinicians and examining clinical decision making. (ClinicalTrials.gov number NCT00667745). Copyright © 2013. Published by Elsevier Ltd.
Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Stottler, D.
There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and by ground- and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent-revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and to make those observations more complementary, in terms of the precision with which each orbit metric is known, producing a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the result would be significantly increased accuracy of the space catalog, with fewer lost objects, using the same set of sensor resources. This approach can also inherently trade off fewer high-priority tasks against more lower-priority tasks when there is benefit in doing so. To date, the project has completed a prototyping and feasibility study, using open-source data on the SSN's sensors, that showed significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.
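The covariance-driven scheduling idea can be caricatured with a greedy loop: at each sensor slot, observe the object whose orbit-state covariance (summarized here by its trace) is largest, then shrink that covariance to reflect the new measurement. This is a drastically simplified stand-in for the globally optimized algorithm described above; the object names, traces, and measurement gain are invented.

```python
# Greedy covariance-reduction scheduling sketch: each slot observes the
# most uncertain object, and each observation halves its covariance trace.
def greedy_schedule(cov_trace, n_slots, gain=0.5):
    """Return (per-slot object picks, final covariance traces)."""
    cov = dict(cov_trace)
    schedule = []
    for _ in range(n_slots):
        obj = max(cov, key=cov.get)  # most uncertain object first
        schedule.append(obj)
        cov[obj] *= gain             # measurement shrinks the covariance
    return schedule, cov

picks, final = greedy_schedule({"sat_a": 8.0, "sat_b": 3.0, "deb_c": 5.0}, 4)
print(picks, final)
```

Even this myopic rule shows the key behavior: observations flow to whichever object's uncertainty is currently largest, rather than revisiting a fixed priority list.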
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
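As a rough illustration of the suggested benefit/cost ratio SVM/[ESM + function(TRL)], the sketch below assumes a linear TRL penalty; the penalty function and all numbers are invented, since the paper leaves the weighting and scaling open.

```python
# Hedged sketch of the overall ALS metric SVM / (ESM + f(TRL)), where a
# lower technology readiness level inflates the cost denominator. The
# linear penalty and all values are illustrative assumptions.
def als_metric(svm, esm_kg, trl, max_trl=9, penalty_per_level=100.0):
    """Benefit/cost ratio: higher SVM helps, higher ESM and lower TRL hurt."""
    trl_cost = penalty_per_level * (max_trl - trl)
    return svm / (esm_kg + trl_cost)

# Two candidate systems: heavier but flight-ready vs lighter but immature
print(als_metric(svm=0.8, esm_kg=500.0, trl=9))  # mature system
print(als_metric(svm=0.9, esm_kg=300.0, trl=4))  # immature system
```

With these numbers the mature system wins despite its higher ESM, showing how folding TRL into the cost term can reverse a pure mass-based ranking.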
Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu
2014-02-01
This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aims to learn an appropriate distance/similarity function for the case studies considered. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering and retrieval tasks. The paper describes application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to in silico assessment of chemical liabilities: the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to show how the change of metric affected nearest neighbor relations and the descriptor space.
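The effect of a learned metric on nearest-neighbor relations can be shown with a toy stand-in for LMNN: learn a diagonal Mahalanobis metric d(x, y)² = Σₖ wₖ(xₖ − yₖ)² by grid search to maximize leave-one-out 1-NN accuracy. This is not the paper's procedure; the data, descriptor roles, and weight grid are all invented.

```python
# Toy diagonal metric learning: pick feature weights that maximize
# leave-one-out 1-NN accuracy, illustrating how a learned metric can
# suppress a noisy descriptor. A crude stand-in for LMNN, not LMNN itself.
import itertools

def loo_1nn_accuracy(X, y, w):
    correct = 0
    for i, xi in enumerate(X):
        dists = [(sum(wk * (a - b) ** 2 for wk, a, b in zip(w, xi, xj)), y[j])
                 for j, xj in enumerate(X) if j != i]
        correct += min(dists)[1] == y[i]
    return correct / len(X)

# Descriptor 0 separates the classes; descriptor 1 is pure noise.
X = [(0.0, 3.1), (0.2, 0.4), (0.1, 2.2), (1.0, 2.9), (1.2, 0.1), (0.9, 1.8)]
y = [0, 0, 0, 1, 1, 1]

best_w = max(itertools.product([0.0, 0.5, 1.0], repeat=2),
             key=lambda w: loo_1nn_accuracy(X, y, w))
print(best_w, loo_1nn_accuracy(X, y, best_w))
```

The search zeroes out the noise descriptor, which is exactly the kind of change in nearest-neighbor relations the visualizations in the paper are meant to expose.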
Improving Climate Projections Using "Intelligent" Ensembles
NASA Technical Reports Server (NTRS)
Baker, Noel C.; Taylor, Patrick C.
2015-01-01
Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, or the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
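The contrast between an equal-weight ensemble mean and an "intelligent" ensemble comes down to the weighting rule. The sketch below weights each model's projection by inverse historical error; the model errors, projections, and the specific inverse-error rule are invented for illustration.

```python
# Equal-weight vs performance-weighted ("intelligent") ensemble means.
# Weighting by inverse historical error is one simple choice; the
# numbers are invented.
def weighted_mean(values, weights):
    s = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / s

hist_error = [0.5, 1.0, 2.0]   # each model's RMSE vs observations
projection = [2.8, 3.0, 4.0]   # each model's projected warming (deg C)

equal = weighted_mean(projection, [1, 1, 1])
smart = weighted_mean(projection, [1 / e for e in hist_error])
print(round(equal, 3), round(smart, 3))
```

The performance-weighted mean leans toward the historically skillful models, pulling the consensus away from the poorly validated outlier.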
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum, and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SMEs' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.
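Agreement between metric-derived and SME clutter rankings is naturally summarized by a rank correlation such as Spearman's rho. The scores below are invented, and the simple no-ties formula is assumed; the study's actual pair-wise comparison analysis is richer.

```python
# Spearman rank correlation between MTTV clutter scores and SME ranks,
# assuming no tied values. All scene scores are invented.
def spearman_rho(a, b):
    n = len(a)
    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0] * n
        for pos, i in enumerate(order):
            r[i] = pos
        return r
    ra, rb = ranks(a), ranks(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

mttv_scores = [0.12, 0.45, 0.33, 0.80, 0.25]  # metric value per scene
sme_ranks = [1, 4, 3, 5, 2]                   # SME consensus rank per scene
print(spearman_rho(mttv_scores, sme_ranks))   # 1.0 here: identical orderings
```

A rho near 1 corresponds to the reported outcome, the metric ordering scenes much as the experts did.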
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
Florida Atlantic University Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of North Florida Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida State University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida International University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Central Florida Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Gulf Coast University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
USF Sarasota-Manatee Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Polytechnic University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of North Florida Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida International University Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of West Florida Work Plan, 2013-2014
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new Strategic Plan 2012-2025 is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's Annual Accountability Report provides yearly tracking for how the System is…
University of North Florida Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Gulf Coast University Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Polytechnic University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of West Florida Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida A&M University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Gulf Coast University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Atlantic University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida A&M University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida Atlantic University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida State University Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Florida Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida International University Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Central Florida Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
New College of Florida Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Florida Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
New College of Florida Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Florida State University Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
New College of Florida Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Florida Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of Central Florida Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of West Florida Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
System Summary of University Annual Work Plans, 2014-15
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how the System is…
77 FR 54648 - Seventh Meeting: RTCA NextGen Advisory Committee (NAC)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
...' license/State-issued ID Card Number and State of Issuance Company Phone number contact Non-U.S. Citizens... can be used for NextGen Metrics Data Sources for Measuring NextGen Fuel Impact A discussion of a preliminary report on a critical data source to track and analyze the impact of NextGen Non-Technical Barriers...
James F. Fowler; Carolyn Hull Sieg; Shaula Hedwall
2015-01-01
Population size and density estimates have traditionally been acceptable ways to track species' response to changing environments; however, species' population centroid elevation has recently become an equally important metric. Packera franciscana (Greene) W.A. Weber and A. Löve (Asteraceae; San Francisco Peaks ragwort) is a single-mountain endemic plant found only...
Water Network Tool for Resilience v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
WNTR is a Python package designed to simulate and analyze resilience of water distribution networks. The software includes:
- Pressure-driven and demand-driven hydraulic simulation
- Water quality simulation to track concentration, trace, and water age
- Conditional controls to simulate power outages
- Models to simulate pipe breaks
- A wide range of resilience metrics
- Analysis and visualization tools
Science Communication Through Art: Objectives, Challenges, and Outcomes.
Lesen, Amy E; Rogan, Ama; Blum, Michael J
2016-09-01
The arts are becoming a favored medium for conveying science to the public. Tracking trending approaches, such as community-engaged learning, alongside challenges and goals can help establish metrics to achieve more impactful outcomes, and to determine the effectiveness of arts-based science communication for raising awareness or shaping public policy. Copyright © 2016 Elsevier Ltd. All rights reserved.
2016 System Summary of University Work Plans. Revised
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2016
2016-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' 2025 System Strategic Plan is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's Annual Accountability Report provides yearly tracking for how the System is progressing…
ERIC Educational Resources Information Center
Jenkins, Davis; Fink, John
2016-01-01
Increasing the effectiveness of two- to four-year college transfer is critical for meeting national goals for college attainment and promoting upward social mobility. Efforts to improve institutional effectiveness in serving transfer students and state transfer policy have been hampered by a lack of comparable metrics for measuring transfer…
Improving healthcare recruitment: the Jupiter Medical Center experience.
Uomo, Paul Dell; Schwieters, Jill
2009-04-01
Hospitals that want to improve their recruitment efforts should: Make recruitment a priority within the organization. Take steps to reduce high vacancy rates and turnover among first-year employees. Develop a recruitment marketing plan for key positions. Establish human resources metrics to track costs and effectiveness of recruiting efforts. Enhance the recruitment process for hiring managers and job candidates.
Kamavuako, Ernest N; Scheme, Erik J; Englehart, Kevin B
2013-06-01
In this paper, the predictive capability of surface and untargeted intramuscular electromyography (EMG) was compared with respect to wrist-joint torque to quantify which type of measurement better represents joint torque during multiple degrees-of-freedom (DoF) movements for possible application in prosthetic control. Ten able-bodied subjects participated in the study. Surface and intramuscular EMG was recorded concurrently from the right forearm. The subjects were instructed to track continuous contraction profiles using single and combined DoF in two trials. The association between torque and EMG was assessed using an artificial neural network. Results showed a significant difference between the two types of EMG (P < 0.007) for all performance metrics: coefficient of determination (R(2)), Pearson correlation coefficient (PCC), and root mean square error (RMSE). The performance of surface EMG (R(2) = 0.93 ± 0.03; PCC = 0.98 ± 0.01; RMSE = 8.7 ± 2.1%) was found to be superior compared with intramuscular EMG (R(2) = 0.80 ± 0.07; PCC = 0.93 ± 0.03; RMSE = 14.5 ± 2.9%). The higher values of PCC compared with R(2) indicate that both methods are able to track the torque profile well but have some trouble (particularly intramuscular EMG) in estimating the exact amplitude. The possible cause for the difference, thus the low performance of intramuscular EMG, may be attributed to the very high selectivity of the recordings used in this study.
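The three performance metrics reported above (R², PCC, and RMSE) can be sketched in plain Python. The torque profile and EMG-based estimate below are made-up illustrations, not data from the study; note how a shape-matching but amplitude-shifted estimate keeps PCC high while R² drops, which is the pattern the authors describe.

```python
# Illustrative implementations of the three metrics from the abstract:
# RMSE, Pearson correlation coefficient (PCC), and coefficient of
# determination (R^2). Sample data are hypothetical.
import math

def rmse(measured, predicted):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured))

def pearson(measured, predicted):
    """Pearson correlation coefficient (PCC)."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    sx = math.sqrt(sum((x - mx) ** 2 for x in measured))
    sy = math.sqrt(sum((y - my) ** 2 for y in predicted))
    return cov / (sx * sy)

def r_squared(measured, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - my) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

torque = [0.0, 0.5, 1.0, 1.5, 1.0, 0.5, 0.0]        # measured profile (hypothetical)
estimate = [0.1, 0.45, 0.9, 1.4, 1.05, 0.55, 0.05]  # EMG-based estimate (hypothetical)
print(rmse(torque, estimate), pearson(torque, estimate), r_squared(torque, estimate))
```

A perfectly scaled-but-wrong-amplitude estimate gives PCC = 1 while R² can even go negative, which is why the authors interpret PCC > R² as "tracks the shape, misses the amplitude".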
Improving HCAHPS Scores with Advances in Digital Radiography.
Matthews, Marianne; Cretella, Gregg; Nicholas, William
2016-01-01
The imaging department can be instrumental in contributing to a healthcare facility's ability to succeed in this new era of competition. Advances in DR technology can improve patient perceptions in the imaging department by improving efficiencies and outcomes which, in turn, can ultimately bolster overall HCAHPS scores. Specific areas for improved scores by utilization of DR include nurse communication, doctor communication, pain management, and communication about medication. Value based purchasing brought with it a mandate for hospitals to track key metrics, which requires an investment in time, tools, and human resources. However, this mandate also presents hospitals and imaging departments, with an opportunity to leverage those very metrics to better market their facilities.
Spatial frequency dependence of target signature for infrared performance modeling
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Olson, Jeffrey
2011-05-01
The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally the function of task difficulty and its relationship to a target set is discussed.
The Fishery Performance Indicators: A Management Tool for Triple Bottom Line Outcomes
Anderson, James L.; Anderson, Christopher M.; Chu, Jingjie; Meredith, Jennifer; Asche, Frank; Sylvia, Gil; Smith, Martin D.; Anggraeni, Dessy; Arthur, Robert; Guttormsen, Atle; McCluney, Jessica K.; Ward, Tim; Akpalu, Wisdom; Eggert, Håkan; Flores, Jimely; Freeman, Matthew A.; Holland, Daniel S.; Knapp, Gunnar; Kobayashi, Mimako; Larkin, Sherry; MacLauchlin, Kari; Schnier, Kurt; Soboil, Mark; Tveteras, Sigbjorn; Uchida, Hirotsugu; Valderrama, Diego
2015-01-01
Pursuit of the triple bottom line of economic, community and ecological sustainability has increased the complexity of fishery management; fisheries assessments require new types of data and analysis to guide science-based policy in addition to traditional biological information and modeling. We introduce the Fishery Performance Indicators (FPIs), a broadly applicable and flexible tool for assessing performance in individual fisheries, and for establishing cross-sectional links between enabling conditions, management strategies and triple bottom line outcomes. Conceptually separating measures of performance, the FPIs use 68 individual outcome metrics—coded on a 1 to 5 scale based on expert assessment to facilitate application to data poor fisheries and sectors—that can be partitioned into sector-based or triple-bottom-line sustainability-based interpretative indicators. Variation among outcomes is explained with 54 similarly structured metrics of inputs, management approaches and enabling conditions. Using 61 initial fishery case studies drawn from industrial and developing countries around the world, we demonstrate the inferential importance of tracking economic and community outcomes, in addition to resource status. PMID:25946194
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Arnold, James O. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
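The suggested benefit/cost ratio [SVM + TRL]/ESM can be shown with a hypothetical worked example; every numeric value and weight below is an illustrative assumption, not a figure from the ALS program.

```python
# Hypothetical worked example of the suggested overall ALS metric,
#   score = (w_svm * SVM + w_trl * TRL) / ESM,
# with optional weighting. All numbers below are illustrative assumptions.
def als_score(svm, trl, esm, w_svm=1.0, w_trl=1.0):
    # Total value (weighted System Value Metric plus Technology Readiness
    # Level) per unit of Equivalent System Mass.
    return (w_svm * svm + w_trl * trl) / esm

design_a = als_score(svm=7.0, trl=6, esm=5200.0)   # heavier, more mature design
design_b = als_score(svm=8.5, trl=4, esm=4600.0)   # lighter, less mature design
print(design_a, design_b)  # the higher ratio offers more value per unit equivalent mass
```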
Climate Classification is an Important Factor in Assessing Hospital Performance Metrics
NASA Astrophysics Data System (ADS)
Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.
2017-12-01
Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow governmental agencies to rank hospitals according to their 'performance' on these metrics. Various socioeconomic factors are taken into consideration when determining individual hospitals' performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate into future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
MALBEC: a new CUDA-C ray-tracer in general relativity
NASA Astrophysics Data System (ADS)
Quiroga, G. D.
2018-06-01
A new CUDA-C code for tracing orbits around uncharged black holes is presented. This code, named MALBEC, takes advantage of graphics processing units and the CUDA platform to track null and timelike test particles in the Schwarzschild and Kerr metrics. Also, a new general set of equations describing the closed circular orbits of any timelike test particle in the equatorial plane is derived. These equations are essential for comparing the analytical behavior of the orbits with the numerical results and for verifying the correct implementation of the Runge-Kutta algorithm in MALBEC. Finally, other numerical tests are performed, demonstrating that MALBEC reproduces well-known results in these metrics in a faster and more efficient way than a conventional CPU implementation.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain.
We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
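Conductance, the stand-alone quality metric the study found most indicative of information recovery, has a standard definition that can be sketched in a few lines; the toy graph and cluster below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the conductance cluster-quality metric:
#   conductance(S) = cut(S, complement) / min(vol(S), vol(complement)),
# where vol() is the total degree of the node set. The toy graph below is
# two triangles joined by one bridge edge, a well-defined 2-cluster graph.
def conductance(edges, cluster):
    cluster = set(cluster)
    # Edges with exactly one endpoint inside the cluster are cut edges.
    cut = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
    # Volume = sum of degrees of the cluster's nodes (each incident edge counts).
    vol_s = sum((u in cluster) + (v in cluster) for u, v in edges)
    vol_rest = 2 * len(edges) - vol_s
    denom = min(vol_s, vol_rest)
    return cut / denom if denom else 0.0

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}))  # 1/7: one cut edge over volume 7; low = good cluster
```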
Performance metrics for the assessment of satellite data products: an ocean color case study
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coeffic...
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
Performance Metrics for Soil Moisture Retrievals and Applications Requirements
USDA-ARS?s Scientific Manuscript database
Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the agreement between geophysical retrievals and true fields. These metrics are generally related; nevertheless each has advantages and disadvantages. In this study we explore the relat...
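A small illustration (with made-up numbers) of how these related metrics can nonetheless disagree: a retrieval with a constant bias correlates perfectly with the truth, yet its RMSE equals the bias magnitude.

```python
# Made-up illustration that RMSE and time-series correlation capture
# different aspects of retrieval error: a constant offset (bias) leaves
# correlation perfect while RMSE equals the bias magnitude.
import math

def rmse(truth, est):
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(truth, est)) / len(truth))

def corr(truth, est):
    n = len(truth)
    mt, me = sum(truth) / n, sum(est) / n
    num = sum((t - mt) * (e - me) for t, e in zip(truth, est))
    den = math.sqrt(sum((t - mt) ** 2 for t in truth) * sum((e - me) ** 2 for e in est))
    return num / den

truth = [0.10, 0.15, 0.20, 0.25, 0.30]   # e.g. a soil moisture series (m3/m3), hypothetical
biased = [t + 0.05 for t in truth]       # constant wet bias
print(rmse(truth, biased), corr(truth, biased))  # RMSE ~ 0.05, correlation ~ 1.0
```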
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims to make a contribution toward advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way.
This study promotes the use of bias-insensitive metrics, which allow comparison of variables that are related but may differ in unit, in order to optimally exploit the spatial observations made available by remote-sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
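A minimal sketch of the metric follows, assuming the combination formula commonly given in the SPAEF literature (the abstract names the three components but not the exact formula): SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2), with alpha the Pearson correlation, beta the ratio of coefficients of variation (simulated over observed), and gamma the histogram-overlap fraction of the z-scored fields.

```python
# Sketch of the SPAEF spatial performance metric. The combination formula
#   SPAEF = 1 - sqrt((alpha - 1)^2 + (beta - 1)^2 + (gamma - 1)^2)
# is an assumption taken from the SPAEF literature, not from this abstract.
# Inputs are flattened spatial fields with positive means and nonzero spread.
import math

def _mean(x):
    return sum(x) / len(x)

def _std(x):
    m = _mean(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

def spaef(obs, sim, bins=10):
    mo, ms = _mean(obs), _mean(sim)
    so, ss = _std(obs), _std(sim)
    # alpha: Pearson correlation between the two spatial patterns
    alpha = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    # beta: ratio of coefficients of variation (spread match, bias-insensitive)
    beta = (ss / ms) / (so / mo)
    # gamma: histogram-overlap fraction of the z-scored fields
    zo = [(o - mo) / so for o in obs]
    zs = [(s - ms) / ss for s in sim]
    lo, hi = min(zo + zs), max(zo + zs)
    width = (hi - lo) / bins
    def hist(z):
        h = [0] * bins
        for v in z:
            h[min(int((v - lo) / width), bins - 1)] += 1
        return h
    gamma = sum(min(a, b) for a, b in zip(hist(zo), hist(zs))) / len(obs)
    return 1 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

obs = [float(i) for i in range(1, 21)]
print(spaef(obs, obs))                     # identical patterns: SPAEF approx 1
print(spaef(obs, [v + 2.0 for v in obs]))  # pure bias: alpha and gamma stay 1, beta drops
```

The second call shows the bias sensitivity entering only through beta: a constant offset leaves the correlation and z-scored histograms untouched but shifts the coefficient of variation.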
Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F
2012-01-01
Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
Schoppy, David W; Rhoads, Kim F; Ma, Yifei; Chen, Michelle M; Nussenbaum, Brian; Orosco, Ryan K; Rosenthal, Eben L; Divi, Vasu
2017-11-01
Negative margins and lymph node yields (LNY) of 18 or more from neck dissections in patients with head and neck squamous cell carcinomas (HNSCC) have been associated with improved patient survival. It is unclear whether these metrics can be used to identify hospitals with improved outcomes. To determine whether 2 patient-level metrics would predict outcomes at the hospital level. A retrospective review of records from the National Cancer Database (NCDB) was used to identify patients who underwent primary surgery and concurrent neck dissection for HNSCC between 2004 and 2013. The percentage of patients at each hospital with negative margins on primary resection and an LNY of 18 or more from a neck dissection was quantified. Cox proportional hazard models were used to define the association between hospital performance on these metrics and overall survival. Margin status and lymph node yield at hospital level. Overall survival (OS). We identified 1008 hospitals in the NCDB where 64 738 patients met inclusion criteria. Of the 64 738 participants, 45 170 (69.8%) were men and 19 568 (30.2%) were women. The mean (SD) age of included patients was 60.5 (12.0) years. Patients treated at hospitals attaining the combined metric of a 90% or higher negative margin rate and 80% or more of cases with LNYs of 18 or more experienced a significant reduction in mortality (hazard ratio [HR] 0.93; 95% CI, 0.89-0.98). This benefit in survival was independent of the patient-level improvement associated with negative margins (HR, 0.73; 95% CI, 0.71-0.76) and LNY of 18 or more (HR, 0.85; 95% CI, 0.83-0.88). Including these metrics in the model neutralized the association of traditional measures of hospital quality (volume and teaching status). Treatment at hospitals that attain a high rate of negative margins and LNY of 18 or more is associated with improved survival in patients undergoing surgery for HNSCC.
These surgical outcome measures predicted outcomes independent of traditional, but generally nonmodifiable characteristics. Tracking of these metrics may help identify high-quality centers and provide guidance for institution-level quality improvement.
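The hospital-level metric construction described above (percentage of cases with negative margins and with LNY of 18 or more, thresholded at 90% and 80%) can be sketched as follows; the hospital names, case counts, and values are hypothetical, not data from the study:

```python
import pandas as pd

# Hypothetical patient-level records: one row per surgical case
cases = pd.DataFrame({
    "hospital":   ["A", "A", "A", "B", "B", "B", "B"],
    "neg_margin": [1, 1, 1, 1, 0, 1, 1],   # negative margin achieved?
    "lny_ge_18":  [1, 1, 1, 1, 1, 1, 1],   # lymph node yield >= 18?
})

# Per-hospital rates on each patient-level metric
rates = cases.groupby("hospital").mean()

# Combined hospital-level metric using the study's thresholds:
# >=90% negative margins AND >=80% of cases with LNY >= 18
rates["meets_metric"] = (rates["neg_margin"] >= 0.90) & (rates["lny_ge_18"] >= 0.80)
print(rates)
```

Hospital A (3/3 negative margins) meets the combined metric; hospital B (3/4, i.e. 75%) falls below the 90% margin threshold and does not.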
Managing for value. It's not just about the numbers.
Haspeslagh, P; Noda, T; Boulos, F
2001-01-01
In theory, value-based management programs sound seductively simple. Just adopt an economic profit metric, tie compensation to agreed-upon improvement targets in that metric, and voilà! Managers and employees will start making all kinds of value-creating decisions. If only it were that easy. The reality is, almost half of the companies that have adopted a VBM metric have met with mediocre success. That's because, the authors contend, the successful VBM program is really about introducing fundamental changes to a big company's culture. Results from their major research project into the practice of VBM reveal that putting VBM into practice is far more complicated than many of its proponents make it out to be, requiring a great deal of patience, effort, and money. According to the authors' study, companies that successfully use VBM programs share five main characteristics. First, nearly all made explicit and public their commitment to shareholder value. Second, through training, they created an environment receptive to the changes the program would engender. Third, they reinforced that training with broad-based incentive systems closely tied to the VBM performance measures, which gave employees a sense of ownership in both the company and the program. Fourth, they were willing to craft major organizational changes to allow all their workers to make those value-creating decisions. Finally, the changes they introduced to the company's systems and processes were broad and inclusive rather than focused narrowly on financial reports and compensation. A VBM program is difficult and expensive. Still, the authors argue, properly applied, it will put your company's profitability firmly on track.
Korst, Lisa M; Aydin, Carolyn E; Signer, Jordana M K; Fink, Arlene
2011-08-01
The development of readiness metrics for organizational participation in health information exchange is critical for monitoring progress toward, and achievement of, successful inter-organizational collaboration. In preparation for the development of a tool to measure readiness for data-sharing, we tested whether organizational capacities known to be related to readiness were associated with successful participation in an American data-sharing collaborative for quality improvement. Cross-sectional design, using an on-line survey of hospitals in a large, mature data-sharing collaborative organized for benchmarking and improvement in nursing care quality. Factor analysis was used to identify salient constructs, and identified factors were analyzed with respect to "successful" participation. "Success" was defined as the incorporation of comparative performance data into the hospital dashboard. The most important factor in predicting success included survey items measuring the strength of organizational leadership in fostering a culture of quality improvement (QI Leadership): (1) presence of a supportive hospital executive; (2) the extent to which a hospital values data; (3) the presence of leaders' vision for how the collaborative advances the hospital's strategic goals; (4) hospital use of the collaborative data to track quality outcomes; and (5) staff recognition of a strong mandate for collaborative participation (α=0.84, correlation with Success 0.68 [P<0.0001]). The data emphasize the importance of hospital QI Leadership in collaboratives that aim to share data for QI or safety purposes. Such metrics should prove useful in the planning and development of this complex form of inter-organizational collaboration. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D
2015-01-01
A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. 
The greater than 50% probability was consistent with the case occupant's AIS 2 hemomediastinum. Stress-based metrics were used to predict injury to the lower leg of the Camry case occupant. The regional-level injury metrics evaluated for the Cobalt case occupant indicated a low risk of injury; however, strain-based injury metrics better predicted pulmonary contusion. Approximately 49% of the Cobalt occupant's left lung was contused, though the baseline simulation predicted 40.5% of the lung to be injured. A method to compute injury metrics and risks as functions of precrash occupant position was developed and applied to 2 CIREN MVC FE reconstructions. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position to further understand important factors that lead to more severe MVC injuries.
Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.
Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K
2018-06-15
The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
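The group comparison described above (Mann-Whitney U tests with a Bonferroni correction across the five metrics) can be sketched as follows. The completion-time distributions and group sizes are hypothetical; the U statistic uses a normal approximation without tie handling, which is adequate for continuous measurements:

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(a, b):
    """Mann-Whitney U with a normal-approximation two-sided p-value (sketch)."""
    n1, n2 = len(a), len(b)
    combined = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(combined)) + 1.0   # ranks; no tie correction
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return u1, p

rng = np.random.default_rng(0)
experts = rng.normal(45, 5, 40)    # hypothetical completion times (s), 4 x 10 trials
novices = rng.normal(80, 12, 70)   # 7 residents x 10 trials

alpha = 0.05 / 5                   # Bonferroni correction over 5 primary metrics
u, p = mann_whitney_u(experts, novices)
print(u, p < alpha)
```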
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
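One of the sensitivity metrics mentioned above, entropy reduction, is the mutual information between a query node and a finding node. A minimal sketch for two binary nodes, using a hypothetical joint distribution rather than any network from the paper:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution P(Q, F) over query node Q and finding F
joint = np.array([[0.30, 0.10],
                  [0.05, 0.55]])

pQ = joint.sum(axis=1)                       # marginal of Q
pF = joint.sum(axis=0)                       # marginal of F
H_Q = entropy(pQ)
H_QgF = entropy(joint.ravel()) - entropy(pF) # H(Q|F) = H(Q,F) - H(F)
reduction = H_Q - H_QgF                      # entropy reduction (mutual information)
print(round(reduction, 3))
```

A reduction of zero means observing F tells us nothing about Q; larger values flag the findings most worth eliciting.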
Single slice US-MRI registration for neurosurgical MRI-guided US
NASA Astrophysics Data System (ADS)
Pardasani, Utsav; Baxter, John S. H.; Peters, Terry M.; Khan, Ali R.
2016-03-01
Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration. Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight/magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded with the challenge of instrumenting the probe without permanent modification and calibrating the probe face to the tracking tool. To address these challenges, this paper explores the feasibility of a real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset as a PyCUDA enabled Python module in Slicer. By retaining the original orientation information, we are able to improve on the poses using this approach. To further assist the challenge of US-MRI registration, we also present the BOXLC2 metric which demonstrates a speed improvement to LC2, while retaining a similar accuracy in this context.
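The LC2 similarity used above models ultrasound intensity as a linear combination of MRI intensity and MRI gradient magnitude. A minimal global (non-patch) sketch on synthetic arrays, assuming only that fit quality is measured as the fraction of US variance explained:

```python
import numpy as np

def lc2(us, mri):
    """Linear Correlation of Linear Combination (global form, sketch).
    Fits US intensities as a linear combination of MRI intensity, MRI
    gradient magnitude, and a constant; returns explained-variance fraction."""
    gy, gx = np.gradient(mri.astype(float))
    grad = np.hypot(gx, gy)
    A = np.column_stack([mri.ravel(), grad.ravel(), np.ones(mri.size)])
    b = us.ravel().astype(float)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ coef
    var_b = b.var()
    return 1.0 - residual.var() / var_b if var_b > 0 else 0.0

rng = np.random.default_rng(1)
mri = rng.random((32, 32))
us = 2.0 * mri + 0.1 * rng.random((32, 32))  # US well explained by MRI
print(round(lc2(us, mri), 3))
```

The patch-based form of the paper computes this quantity over local windows and takes a weighted average, which is what makes it robust to the spatially varying appearance of ultrasound.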
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
Direct energy balance based active disturbance rejection control for coal-fired power plant.
Sun, Li; Hua, Qingsong; Li, Donghai; Pan, Lei; Xue, Yali; Lee, Kwang Y
2017-09-01
The conventional direct energy balance (DEB) based PI control can fulfill the fundamental tracking requirements of the coal-fired power plant. However, it is challenging to deal with the cases when the coal quality variation is present. To this end, this paper introduces the active disturbance rejection control (ADRC) to the DEB structure, where the coal quality variation is deemed as a kind of unknown disturbance that can be estimated and mitigated promptly. Firstly, the nonlinearity of a recent power plant model is analyzed based on the gap metric, which provides guidance on how to set the pressure set-point in line with the power demand. Secondly, the approximate decoupling effect of the DEB structure is analyzed based on the relative gain analysis in frequency domain. Finally, the synthesis of the DEB based ADRC control system is carried out based on multi-objective optimization. The optimized ADRC results show that the integrated absolute error (IAE) indices of the tracking performances in both loops can be simultaneously improved, in comparison with the DEB based PI control and H∞ control system. The regulation performance in the presence of the coal quality variation is significantly improved under the ADRC control scheme. Moreover, the robustness of the proposed strategy is shown comparable with the H∞ control. Copyright © 2017. Published by Elsevier Ltd.
Zhou, Junhong; Habtemariam, Daniel; Iloputaife, Ikechukwu; Lipsitz, Lewis A; Manor, Brad
2017-06-07
Standing postural control is complex, meaning that it is dependent upon numerous inputs interacting across multiple temporal-spatial scales. Diminished physiologic complexity of postural sway has been linked to reduced ability to adapt to stressors. We hypothesized that older adults with lower postural sway complexity would experience more falls in the future. 738 adults aged ≥70 years completed the Short Physical Performance Battery test (SPPB) test and assessments of single and dual-task standing postural control. Postural sway complexity was quantified using multiscale entropy. Falls were subsequently tracked for 48 months. Negative binomial regression demonstrated that older adults with lower postural sway complexity in both single and dual-task conditions had higher future fall rate (incident rate ratio (IRR) = 0.98, p = 0.02, 95% Confidence Limits (CL) = 0.96-0.99). Notably, participants in the lowest quintile of complexity during dual-task standing suffered 48% more falls during the four-year follow-up as compared to those in the highest quintile (IRR = 1.48, p = 0.01, 95% CL = 1.09-1.99). Conversely, traditional postural sway metrics or SPPB performance did not associate with future falls. As compared to traditional metrics, the degree of multi-scale complexity contained within standing postural sway, particularly during dual-task conditions, appears to be a better predictor of future falls in older adults.
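Multiscale entropy as used above coarse-grains the sway series at increasing time scales and computes the sample entropy of each coarse-grained series. A minimal sketch on a hypothetical sway series (the O(n²) pairwise implementation is for clarity, not speed; parameters m = 2 and r = 0.15·SD are common defaults, not necessarily the study's):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """Sample entropy SampEn(m, r) of a 1-D series (plain O(n^2) sketch)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - len(templ)   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain at each scale tau, then take SampEn of each series."""
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau], float).reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out   # a complexity index is often the sum or mean of these

rng = np.random.default_rng(2)
sway = rng.standard_normal(600)   # hypothetical COP displacement series
print([round(v, 2) for v in multiscale_entropy(sway)])
```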
A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2011-01-01
Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
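The underlying two-dimensional construction (sweeping a detection threshold to trace TPR vs. FPR, then integrating numerically) can be sketched as follows; the detector scores are synthetic, and the 3D extension would repeat this integration along the latency axis:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """TPR/FPR pairs as the detection threshold is swept (binary case)."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, bool)
    tpr = np.array([(scores[labels] >= t).mean() for t in thresholds])
    fpr = np.array([(scores[~labels] >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(3)
labels = rng.random(2000) < 0.5                  # half the cases are faults
scores = rng.normal(0, 1, 2000) + 1.5 * labels   # faults score higher on average

thr = np.linspace(scores.min() - 1, scores.max() + 1, 400)
fpr, tpr = roc_points(scores, labels, thr)

order = np.argsort(fpr)                          # integrate along increasing FPR
x, y = fpr[order], tpr[order]
auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))   # trapezoid rule
print(round(auc, 3))
```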
Reconceiving the hippocampal map as a topological template
Dabaghian, Yuri; Brandt, Vicky L; Frank, Loren M
2014-01-01
The role of the hippocampus in spatial cognition is incontrovertible yet controversial. Place cells, initially thought to be location-specifiers, turn out to respond promiscuously to a wide range of stimuli. Here we test the idea, which we have recently demonstrated in a computational model, that the hippocampal place cells may ultimately be interested in a space's topological qualities (its connectivity) more than its geometry (distances and angles); such higher-order functioning would be more consistent with other known hippocampal functions. We recorded place cell activity in rats exploring morphing linear tracks that allowed us to dissociate the geometry of the track from its topology. The resulting place fields preserved the relative sequence of places visited along the track but did not vary with the metrical features of the track or the direction of the rat's movement. These results suggest a reinterpretation of previous studies and new directions for future experiments. DOI: http://dx.doi.org/10.7554/eLife.03476.001 PMID:25141375
Ocean Heat Content Reveals Secrets of Fish Migrations
Luo, Jiangang; Ault, Jerald S.; Shay, Lynn K.; Hoolihan, John P.; Prince, Eric D.; Brown, Craig A.; Rooker, Jay R.
2015-01-01
For centuries, the mechanisms surrounding spatially complex animal migrations have intrigued scientists and the public. We present a new methodology using ocean heat content (OHC), a habitat metric that is normally a fundamental part of hurricane intensity forecasting, to estimate movements and migration of satellite-tagged marine fishes. Previous satellite-tagging research of fishes using archival depth, temperature and light data for geolocations have been too coarse to resolve detailed ocean habitat utilization. We combined tag data with OHC estimated from ocean circulation and transport models in an optimization framework that substantially improved geolocation accuracy over SST-based tracks. The OHC-based movement track provided the first quantitative evidence that many of the tagged highly migratory fishes displayed affinities for ocean fronts and eddies. The OHC method provides a new quantitative tool for studying dynamic use of ocean habitats, migration processes and responses to environmental changes by fishes, and further, improves ocean animal tracking and extends satellite-based animal tracking data for other potential physical, ecological, and fisheries applications. PMID:26484541
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Pilot modeling and closed-loop analysis of flexible aircraft in the pitch tracking task
NASA Technical Reports Server (NTRS)
Schmidt, D. K.
1983-01-01
The issue addressed is the appropriate modeling technique for pilot/vehicle analysis of large flexible aircraft when the frequency separation between the rigid-body mode and the dynamic aeroelastic modes is reduced. This situation was shown to have significant effects on pitch-tracking performance and subjective rating of the task, obtained via fixed base simulation. Further, the dynamics in these cases are not well modeled with a rigid-body-like model obtained by including only 'static elastic' effects, for example. It is shown that pilot/vehicle analysis of this data supports the hypothesis that an appropriate pilot-model structure is an optimal-control pilot model of full order. This is in contrast to the contention that a representative model is of reduced order when the subject is controlling high-order dynamics as in a flexible vehicle. The key appears to be in the correct assessment of the pilot's objective of attempting to control 'rigid-body' vehicle response, a response that must be estimated by the pilot from observations contaminated by aeroelastic dynamics. Finally, a model-based metric is shown to correlate well with the pilot's subjective ratings.
Azar, A D; Finley, E; Harris, K D
2015-01-01
A complete analysis of strain tolerance in a stretchable transparent conductor (TC) should include tracking of both electrical conductivity and transparency during strain; however, transparency is generally neglected in contemporary analyses. In this paper, we describe an apparatus that tracks both parameters while TCs of arbitrary composition are deformed under stretching-mode strain. We demonstrate the tool by recording the electrical resistance and light transmission spectra for indium tin oxide-coated plastic substrates under both linearly increasing strain and complex cyclic strain processes. The optics are sensitive across the visible spectrum and into the near-infrared region (∼400-900 nm), and without specifically optimizing for sampling speed, we achieve a time resolution of ∼200 ms. In our automated analysis routine, we include a calculation of a common TC figure of merit (FOM), and because solar cell electrodes represent a key TC application, we also weigh both our transparency and FOM results against the solar power spectrum to determine "solar transparency" and "solar FOM." Finally, we demonstrate how the apparatus may be adapted to measure the basic performance metrics for complete solar cells under uniaxial strain.
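The abstract does not name which figure of merit it computes; as an illustration only, one common choice for transparent conductors is the Haacke FOM, and the solar weighting amounts to a spectrum-weighted integral over the sensing band. A hedged sketch with a hypothetical irradiance shape standing in for AM1.5:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid-rule integral (self-contained, avoids version-specific APIs)."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))

def haacke_fom(T, Rs):
    """Haacke figure of merit phi_TC = T^10 / R_sheet (one common TC FOM)."""
    return T ** 10 / Rs

def solar_weighted_T(wl_nm, T, irradiance):
    """Transmission weighted by the solar power spectrum over the band."""
    return _trapz(T * irradiance, wl_nm) / _trapz(irradiance, wl_nm)

wl = np.linspace(400, 900, 251)           # the paper's sensing band (nm)
T = np.full_like(wl, 0.85)                # hypothetical flat 85% transmission
irr = np.exp(-((wl - 550) / 180) ** 2)    # crude stand-in for the solar spectrum
print(round(solar_weighted_T(wl, T, irr), 3), round(haacke_fom(0.85, 50.0), 5))
```

For a spectrally flat film the solar-weighted transparency equals the plain transparency; the weighting only matters when transmission varies across the band.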
NASA Astrophysics Data System (ADS)
Jeffries, G. R.; Cohn, A.
2016-12-01
Soy-corn double cropping (DC) has been widely adopted in Central Brazil alongside single cropped (SC) soybean production. DC involves different cropping calendars, soy varieties, and may be associated with different crop yield patterns and volatility than SC. Study of the performance of the region's agriculture in a changing climate depends on tracking differences in the productivity of SC vs. DC, but has been limited by crop yield data that conflate the two systems. We predicted SC and DC yields across Central Brazil, drawing on field observations and remotely sensed data. We first modeled field yield estimates as a function of remotely sensed DC status and vegetation index (VI) metrics, and other management and biophysical factors. We then used the statistical model estimated to predict SC and DC soybean yields at each 500 m2 grid cell of Central Brazil for harvest years 2001 - 2015. The yield estimation model was constructed using 1) a repeated cross-sectional survey of soybean yields and management factors for years 2007-2015, 2) a custom agricultural land cover classification dataset which assimilates earlier datasets for the region, and 3) 500 m 8-day MODIS image composites used to calculate the wide dynamic range vegetation index (WDRVI) and derivative metrics such as area under the curve for WDRVI values in critical crop development periods. A statistical yield estimation model which primarily entails WDRVI metrics, DC status, and spatial fixed effects was developed on a subset of the yield dataset. Model validation was conducted by predicting previously withheld yield records, and then assessing error and goodness-of-fit for predicted values with metrics including root mean squared error (RMSE), mean squared error (MSE), and R2. We found a statistical yield estimation model which incorporates WDRVI and DC status to be an effective way to estimate crop yields over the region.
Statistical properties of the resulting gridded yield dataset may be valuable for understanding linkages between crop yields, farm management factors, and climate.
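The WDRVI and its area-under-the-curve derivative described above can be sketched as follows. The index formula (Gitelson's WDRVI with weighting coefficient alpha) is standard; the seasonal reflectance curves below are hypothetical stand-ins for a MODIS composite time series:

```python
import numpy as np

def wdrvi(nir, red, alpha=0.2):
    """Wide dynamic range vegetation index; alpha is typically 0.1-0.2."""
    return (alpha * nir - red) / (alpha * nir + red)

def auc_metric(doy, index):
    """Area under the index curve over a development window (trapezoid rule)."""
    return float(np.sum(np.diff(doy) * (index[1:] + index[:-1]) / 2))

# Hypothetical 8-day composite season for one 500 m grid cell
doy = np.arange(1, 145, 8)                       # days of year
nir = 0.2 + 0.3 * np.sin(np.pi * doy / 144)      # green-up and dry-down
red = 0.25 - 0.15 * np.sin(np.pi * doy / 144)

w = wdrvi(nir, red)
print(round(auc_metric(doy, w), 2))
```

Summaries like this AUC, computed per cell and per critical development window, are the kind of predictor the statistical yield model takes alongside DC status and spatial fixed effects.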
Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan
2014-12-01
Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
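The probability metric described above falls directly out of MCMC output: for each posterior draw, rank the facilities and flag the top quantile, then average the flags across draws. A minimal sketch with simulated draws in place of the WinBUGS posterior (facility count matches the study; everything else is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n_fac, n_draws = 112, 4000
# Hypothetical posterior draws of each facility's composite quality score
true_quality = rng.normal(0, 1, n_fac)
draws = true_quality + rng.normal(0, 0.5, (n_draws, n_fac))

# For each draw, flag the facilities in the top quintile of that draw
k = n_fac // 5
ranks = np.argsort(-draws, axis=1)
in_top = np.zeros((n_draws, n_fac), bool)
np.put_along_axis(in_top, ranks[:, :k], True, axis=1)

# The probability metric: share of posterior draws placing each facility on top
p_top = in_top.mean(axis=0)
print(round(p_top.max(), 2), round(p_top.mean(), 2))
```

Unlike a yes/no designation, `p_top` is continuous, so a pay-for-performance scheme can allocate resources in proportion to how certainly a facility is a top performer.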
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of South Florida--System Work Plan Presentation for 2012-13 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2012
2012-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of South Florida Tampa Work Plan Presentation for 2013-14 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2013
2013-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
University of South Florida System Work Plan Presentation for 2014-15 Board of Governors Review
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2014
2014-01-01
The State University System of Florida has developed three tools that aid in guiding the System's future: (1) The Board of Governors' new "Strategic Plan 2012-2025" is driven by goals and associated metrics that stake out where the System is headed; (2) The Board's "Annual Accountability Report" provides yearly tracking for how…
Deeply-Integrated Feature Tracking for Embedded Navigation
2009-03-01
metric would result in increased feature strength, but a decrease in repeatability. The feature spacing also helped with repeatability of strong...locations in the second frame. This relationship is a constraint of projective geometry and states that the cross product of a point with itself (when...integrated refers to the incorporation of inertial information into the image processing, rather than just
Scaling Student Success with Predictive Analytics: Reflections after Four Years in the Data Trenches
ERIC Educational Resources Information Center
Wagner, Ellen; Longanecker, David
2016-01-01
The metrics used in the US to track students do not include adults and part-time students. This has led to the development of a massive data initiative--the Predictive Analytics Reporting (PAR) framework--that uses predictive analytics to trace the progress of all types of students in the system. This development has allowed actionable,…
ERIC Educational Resources Information Center
Hanushek, Eric A.; Woessmann, Ludger
2009-01-01
We provide evidence that the robust association between cognitive skills and economic growth reflects a causal effect of cognitive skills and supports the economic benefits of effective school policy. We develop a new common metric that allows tracking student achievement across countries, over time, and along the within-country distribution.…
Analyzing critical material demand: A revised approach.
Nguyen, Ruby Thuy; Fishman, Tomer; Zhao, Fu; Imholte, D D; Graedel, T E
2018-07-15
Apparent consumption has been widely used as a metric to estimate material demand. However, with technology advancement and complexity of material use, this metric has become less useful in tracking material flows, estimating recycling feedstocks, and conducting life cycle assessment of critical materials. We call for future research efforts to focus on building a multi-tiered consumption database for the global trade network of critical materials. This approach will help track how raw materials are processed into major components (e.g., motor assemblies) and eventually incorporated into complete pieces of equipment (e.g., wind turbines). Foreseeable challenges would involve: 1) difficulty in obtaining a comprehensive picture of trade partners due to business sensitive information, 2) complexity of materials going into components of a machine, and 3) difficulty maintaining such a database. We propose ways to address these challenges such as making use of digital design, learning from the experience of building similar databases, and developing a strategy for financial sustainability. We recommend that, with the advancement of information technology, small steps toward building such a database will contribute significantly to our understanding of material flows in society and the associated human impacts on the environment. Copyright © 2018 Elsevier B.V. All rights reserved.
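As one illustration of the metric the authors critique, the classic apparent-consumption estimate of material demand can be sketched in a few lines (the function name and the figures in the usage example are hypothetical):

```python
def apparent_consumption(production, imports, exports, stock_increase=0.0):
    """Classic apparent-consumption estimate of material demand (tonnes).

    A positive stock_increase means material went into stockpiles rather
    than into consumption, so it is subtracted.
    """
    return production + imports - exports - stock_increase

# Hypothetical figures for a critical material, in tonnes:
demand = apparent_consumption(production=12_000, imports=4_500,
                              exports=1_200, stock_increase=300)
# demand == 15_000
```

The authors' point is precisely that this single top-level number hides the multi-tiered flows (components, assemblies, equipment) that a trade-network database would expose.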
Non-destructive evaluation of polyolefin thermal aging using infrared spectroscopy
NASA Astrophysics Data System (ADS)
Fifield, Leonard S.; Shin, Yongsoon; Simmons, Kevin L.
2017-04-01
Fourier transform infrared (FTIR) spectroscopy is an information-rich method that reveals chemical bonding near the surface of polymer composites. FTIR can be used to verify composite composition, identify chemical contaminants and expose composite moisture content. Polymer matrix changes due to thermal exposure including loss of additives, chain scission, oxidation and changes in crystallinity may also be determined using FTIR spectra. Portable handheld instruments using non-contact reflectance or surface contact attenuated total reflectance (ATR) may be used for nondestructive evaluation (NDE) of thermal aging in polymer and composite materials of in-service components. We report the use of ATR FTIR to track oxidative thermal aging in ethylene-propylene rubber (EPR) and chlorinated polyethylene (CPE) materials used in medium voltage nuclear power plant electrical cable insulation and jacketing. Mechanical property changes of the EPR and CPE materials with thermal degradation for correlation with FTIR data are tracked using indenter modulus (IM) testing. IM is often used as a local NDE metric of cable jacket health. The FTIR-determined carbonyl index was found to increase with IM and may be a valuable NDE metric with advantages over IM for assessing cable remaining useful life.
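A carbonyl index of the kind tracked above is commonly computed as the ratio of the C=O band area to a reference band area in the FTIR spectrum. The sketch below assumes raw spectra arrive as wavenumber/absorbance arrays; the band windows are illustrative defaults, not the authors' exact choices:

```python
def band_area(wavenumbers, absorbance, lo, hi):
    """Trapezoidal area of an FTIR band between wavenumbers lo and hi (cm^-1)."""
    pts = sorted((w, a) for w, a in zip(wavenumbers, absorbance) if lo <= w <= hi)
    area = 0.0
    for (w0, a0), (w1, a1) in zip(pts, pts[1:]):
        area += 0.5 * (a0 + a1) * (w1 - w0)
    return area

def carbonyl_index(wavenumbers, absorbance,
                   carbonyl=(1650.0, 1800.0), reference=(2750.0, 3000.0)):
    """One common definition: carbonyl (C=O) band area over a reference
    (here C-H stretch) band area; window choices vary between studies."""
    return (band_area(wavenumbers, absorbance, *carbonyl)
            / band_area(wavenumbers, absorbance, *reference))
```

In practice a baseline correction would be applied before integration; it is omitted here for brevity.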
Martinho, Filipe; Nyitrai, Daniel; Crespo, Daniel; Pardal, Miguel A
2015-12-15
Facing a generalized increase in water degradation, several programmes have been implemented for protecting and enhancing the water quality and associated wildlife, which rely on ecological indicators to assess the degree of deviation from a pristine state. Here, single-metric (species number, Shannon-Wiener H', Pielou J') and multi-metric (Estuarine Fish Assessment Index, EFAI) community-based ecological quality measures were evaluated in a temperate estuary over an 8-year period (2005-2012), and their relationships with an anthropogenic pressure index (API) were established. Single-metric indices were highly variable and neither concordant amongst themselves nor with the EFAI. The EFAI was the only index significantly correlated with the API, indicating that higher ecological quality was associated with lower anthropogenic pressure. Pressure scenarios were related with specific fish community composition, as a result of distinct food web complexity and nursery functioning of the estuary. Results were discussed in the context of the implementation of water protection programmes. Copyright © 2015 Elsevier Ltd. All rights reserved.
First International Diagnosis Competition - DXC'09
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; de Kleer, Johan; van Gemund, Arjan; Feldman, Alexander
2009-01-01
A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC'09. The goal of this competition was to evaluate and compare DAs in a common platform and to determine a winner based on diagnosis results. 12 DAs (model-based and otherwise) competed in this first year of the competition in 3 tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from the sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC'09, a description of the faults and data sets, a listing of the participating DAs, the metrics and results computed from running the DAs, and a brief analysis of the results.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
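The snippet does not list the tool's exact statistics; root-mean-square error and Nash-Sutcliffe efficiency are typical goodness-of-fit measures for this kind of model performance evaluation, sketched here under that assumption:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance-about-mean; 1 is a perfect fit, and values
    below 0 mean the model is worse than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_mean = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_mean

obs = [1.0, 2.0, 3.0, 4.0]
print(rmse(obs, obs))            # 0.0
print(nash_sutcliffe(obs, obs))  # 1.0
```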
An Evaluation of the IntelliMetric[SM] Essay Scoring System
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine
2006-01-01
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
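A minimal pure-Python sketch of one stand-alone quality metric (Newman modularity) and one information recovery metric (adjusted Rand index) follows; the study itself runs full clustering implementations (Louvain, Infomap, etc.), which are not reproduced here:

```python
from collections import Counter
from math import comb

def modularity(edges, labels):
    """Newman modularity of a hard partition (labels maps node -> community)."""
    m = len(edges)
    intra = Counter()       # edges with both endpoints in the same community
    degree_sum = Counter()  # total degree accumulated per community
    for u, v in edges:
        degree_sum[labels[u]] += 1
        degree_sum[labels[v]] += 1
        if labels[u] == labels[v]:
            intra[labels[u]] += 1
    return sum(intra[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

def adjusted_rand(a, b):
    """Adjusted Rand index between two labelings of the same node sequence."""
    n = len(a)
    idx = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    row = sum(comb(c, 2) for c in Counter(a).values())
    col = sum(comb(c, 2) for c in Counter(b).values())
    expected = row * col / comb(n, 2)
    max_idx = (row + col) / 2
    return (idx - expected) / (max_idx - expected)
```

Note that modularity needs only the graph and one partition, while the adjusted Rand index needs a ground-truth labeling, which is exactly the stand-alone versus information-recovery distinction the paper draws.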
Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Marquez, Andres; Rawson, Andrew
2013-06-30
As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
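DCeP is defined as useful work produced divided by energy consumed. Since "useful work" is workload-specific, the sketch below instantiates it as a weighted count of completed tasks; that instantiation, the function name, and the numbers are assumptions for illustration, not the paper's exact definition:

```python
def dcep(tasks_completed, energy_kwh, task_weights=None):
    """Data Center Energy Productivity: useful work per unit of energy.

    Here 'useful work' is an (optionally weighted) count of completed
    tasks, one simple instantiation of the workload-defined numerator.
    """
    if task_weights is None:
        task_weights = [1.0] * len(tasks_completed)
    useful_work = sum(n * w for n, w in zip(tasks_completed, task_weights))
    return useful_work / energy_kwh

# Two task types, the second weighted 3x (hypothetical numbers):
print(dcep([120, 40], energy_kwh=60.0, task_weights=[1.0, 3.0]))  # 4.0
```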
NASA Astrophysics Data System (ADS)
Portnoy, David; Fisher, Brian; Phifer, Daniel
2015-06-01
The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. 
At present, datasets for two tracks, or application domains, have been developed: one that includes temporal spectral data at 1 s time intervals, which represents data collected by a mobile system operating in a dynamic radiation background environment; and one that represents static measurements with a foreground spectrum (background plus source) and a background spectrum. These data include controlled variations in both Source Related Factors (nuclide, nuclide combinations, activities, distances, collection times, shielding configurations, and background spectra) and Detector Related Factors (currently only gain shifts, but resolution changes and non-linear energy calibration errors will be added soon). The software tools will allow the developer to evaluate the performance impact of each of these factors. Although this first implementation is somewhat limited in scope, considering only NaI-based detection systems and two application domains, it is hoped that (with community feedback) a wider range of detector types and applications will be included in the future. This article describes the methods used for dataset creation, the software validation/performance measurement tools, the performance metrics used, and examples of baseline performance.
Quantification of three-dimensional cell-mediated collagen remodeling using graph theory.
Bilgin, Cemal Cagatay; Lund, Amanda W; Can, Ali; Plopper, George E; Yener, Bülent
2010-09-30
Cell cooperation is a critical event during tissue development. We present the first precise metrics to quantify the interaction between mesenchymal stem cells (MSCs) and extra cellular matrix (ECM). In particular, we describe the cooperative collagen alignment process with respect to the spatio-temporal organization and function of mesenchymal stem cells in three dimensions. We defined two precise metrics: Collagen Alignment Index and Cell Dissatisfaction Level, for quantitatively tracking type I collagen and fibrillogenesis remodeling by mesenchymal stem cells over time. Computation of these metrics was based on graph theory and vector calculus. The cells and their three dimensional type I collagen microenvironment were modeled by three dimensional cell-graphs and collagen fiber organization was calculated from gradient vectors. With the enhancement of mesenchymal stem cell differentiation, acceleration through different phases was quantitatively demonstrated. The phases were clustered in a statistically significant manner based on collagen organization, with late phases of remodeling by untreated cells clustering strongly with early phases of remodeling by differentiating cells. The experiments were repeated three times to conclude that the metrics could successfully identify critical phases of collagen remodeling that were dependent upon cooperativity within the cell population. Definition of early metrics that are able to predict long-term functionality by linking engineered tissue structure to function is an important step toward optimizing biomaterials for the purposes of regenerative medicine.
Introducing Co-Activation Pattern Metrics to Quantify Spontaneous Brain Network Dynamics
Chen, Jingyuan E.; Chang, Catie; Greicius, Michael D.; Glover, Gary H.
2015-01-01
Recently, fMRI researchers have begun to realize that the brain's intrinsic network patterns may undergo substantial changes during a single resting state (RS) scan. However, despite the growing interest in brain dynamics, metrics that can quantify the variability of network patterns are still quite limited. Here, we first introduce various quantification metrics based on the extension of co-activation pattern (CAP) analysis, a recently proposed point-process analysis that tracks state alternations at each individual time frame and relies on very few assumptions; then apply these proposed metrics to quantify changes of brain dynamics during a sustained 2-back working memory (WM) task compared to rest. We focus on the functional connectivity of two prominent RS networks, the default-mode network (DMN) and executive control network (ECN). We first demonstrate less variability of global Pearson correlations with respect to the two chosen networks using a sliding-window approach during WM task compared to rest; then we show that the macroscopic decrease in variations in correlations during a WM task is also well characterized by the combined effect of a reduced number of dominant CAPs, increased spatial consistency across CAPs, and increased fractional contributions of a few dominant CAPs. These CAP metrics may provide alternative and more straightforward quantitative means of characterizing brain network dynamics than time-windowed correlation analyses. PMID:25662866
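Assuming the time frames have already been assigned CAP labels (e.g., by clustering suprathreshold frames), two of the quantities discussed above, fractional contributions and a dominant-CAP count, can be sketched as follows; the 80% coverage cutoff is an illustrative choice, not the paper's:

```python
from collections import Counter

def cap_fractions(frame_labels):
    """Fractional contribution of each co-activation pattern (CAP),
    i.e. the share of time frames assigned to it, sorted descending."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return sorted((n / total for n in counts.values()), reverse=True)

def n_dominant_caps(frame_labels, coverage=0.8):
    """Number of CAPs needed to account for `coverage` of all frames --
    one simple way to summarize how concentrated the dynamics are."""
    cum, k = 0.0, 0
    for frac in cap_fractions(frame_labels):
        cum += frac
        k += 1
        if cum >= coverage:
            break
    return k
```

Under this kind of summary, the paper's finding reads as: during the working-memory task, fewer CAPs carry larger fractional contributions than at rest.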
Use of Business Intelligence Tools in the DSN
NASA Technical Reports Server (NTRS)
Statman, Joseph I.; Zendejas, Silvino C.
2010-01-01
JPL has operated the Deep Space Network (DSN) on behalf of NASA since the 1960s. Over the last two decades, the DSN budget has generally declined in real-year dollars while the aging assets required more attention, and the missions became more complex. As a result, the DSN budget has been increasingly consumed by Operations and Maintenance (O&M), significantly reducing the funding wedge available for technology investment and for enhancing the DSN capability and capacity. Responding to this budget squeeze, the DSN launched an effort to improve the cost-efficiency of the O&M. In this paper we: (1) elaborate on the methodology adopted to understand "where the time and money are used" (surprisingly, most of the data required for metrics development was readily available in existing databases, which we mined with commercial Business Intelligence (BI) tools to automatically extract the metrics, including trends, and distribute them weekly to interested parties); (2) describe the DSN-specific effort to convert the intuitive understanding of "where the time is spent" into meaningful and actionable metrics that quantify use of resources, highlight candidate areas of improvement, and establish trends; and (3) discuss the use of the BI-derived metrics (one of the most striking outcomes was the dramatic improvement in some areas of operations once the metrics were shared with the operators; the visibility of the metrics, and a self-induced competition, caused almost immediate improvement). While the near-term use of the metrics is to quantify the processes and track the improvement, these techniques will be just as useful in monitoring the process, e.g. as an input to a lean-six-sigma process.
NASA Astrophysics Data System (ADS)
Chivukula, V. Keshav; McGah, Patrick; Prisco, Anthony; Beckman, Jennifer; Mokadam, Nahush; Mahr, Claudius; Aliseda, Alberto
2016-11-01
Flow in the aortic vasculature may impact stroke risk in patients with left ventricular assist devices (LVAD) due to severely altered hemodynamics. Patient-specific 3D models of the aortic arch and great vessels were created with an LVAD outflow graft at 45, 60 and 90° from the centerline of the ascending aorta, in order to understand the effect of surgical placement on hemodynamics and thrombotic risk. Intermittent aortic valve opening (once every five cardiac cycles) was simulated, and the impact of this residual native output was investigated for its potential to wash out stagnant flow in the aortic root region. Unsteady CFD simulations with patient-specific boundary conditions were performed. Particle tracking for 10 cardiac cycles was used to determine platelet residence times and shear stress histories. Thrombosis risk was assessed by a combination of Eulerian and Lagrangian metrics and a newly developed thrombogenic potential metric. Results show a strong influence of LVAD outflow graft angle on hemodynamics in the ascending aorta and consequently on stroke risk, with a highly positive impact of aortic valve opening, even at low frequencies. Optimization of LVAD implantation and management strategies based on patient-specific simulations to minimize stroke risk will be presented.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
Common world model for unmanned systems
NASA Astrophysics Data System (ADS)
Dean, Robert Michael S.
2013-05-01
The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces, tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.
McLean, Kathleen E.; Yao, Jiayun; Henderson, Sarah B.
2015-01-01
The British Columbia Asthma Monitoring System (BCAMS) tracks forest fire smoke exposure and asthma-related health outcomes, identifying excursions beyond expected daily counts. Weekly reports during the wildfire season support public health and emergency management decision-making. We evaluated BCAMS by identifying excursions for asthma-related physician visits and dispensations of the reliever medication salbutamol sulfate and examining their corresponding smoke exposures. A disease outbreak detection algorithm identified excursions from 1 July to 31 August 2014. Measured, modeled, and forecasted concentrations of fine particulate matter (PM2.5) were used to assess exposure. We assigned PM2.5 levels to excursions by choosing the highest value within a seven day window centred on the excursion day. Smoky days were defined as those with PM2.5 levels ≥ 25 µg/m3. Most excursions (57%–71%) were assigned measured or modeled PM2.5 concentrations of 10 µg/m3 or higher. Of the smoky days, 55.8% and 69.8% were associated with at least one excursion for physician visits and salbutamol dispensations, respectively. BCAMS alerted most often when measures of smoke exposure were relatively high. Better performance might be realized by combining asthma-related outcome metrics in a bivariate model. PMID:26075727
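BCAMS uses a dedicated disease-outbreak detection algorithm that this abstract does not specify; a simplified stand-in, which flags days whose count exceeds a trailing-baseline mean by two standard deviations, might look like this (the 28-day baseline and z = 2 cutoff are illustrative assumptions):

```python
from statistics import mean, stdev

def find_excursions(daily_counts, baseline_days=28, z=2.0):
    """Flag days whose count exceeds the trailing-baseline mean by more
    than z standard deviations -- a simplified stand-in for the
    outbreak-detection algorithm BCAMS actually uses."""
    excursions = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        threshold = mean(baseline) + z * stdev(baseline)
        if daily_counts[day] > threshold:
            excursions.append(day)
    return excursions
```

Each flagged day would then be paired with the highest PM2.5 value in a seven-day window centred on it, as the evaluation describes.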
Najjar, Peter; Kachalia, Allen; Sutherland, Tori; Beloff, Jennifer; David-Kasdan, Jo Ann; Bates, David W; Urman, Richard D
2015-01-01
The AHRQ Patient Safety Indicators (PSIs) are used for calculation of risk-adjusted postoperative rates for adverse events. Payers and quality consortia are increasingly requiring public reporting of hospital performance on these metrics. We discuss processes designed to improve the accuracy and clinical utility of PSI reporting in practice. The study was conducted at a 793-bed tertiary care academic medical center where PSI processes have been aggressively implemented to track patient safety events at discharge. A three-phased approach to improving administrative data quality was implemented. The initiative consisted of clinical review of all PSIs, documentation improvement, and provider outreach including active querying for patient safety events. This multidisciplinary effort to develop a streamlined process for PSI calculation reduced the reporting of miscoded PSIs and increased the clinical utility of PSI monitoring. Over 4 quarters, 4 of 41 (10%) PSI-11 and 9 of 138 (7%) PSI-15 errors were identified on review of clinical documentation and appropriate adjustments were made. A multidisciplinary, phased approach leveraging existing billing infrastructure for robust metric coding, ongoing clinical review, and frontline provider outreach is a novel and effective way to reduce the reporting of false-positive outcomes and improve the clinical utility of PSIs.
Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)
NASA Astrophysics Data System (ADS)
Blasch, Erik
2015-06-01
Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective and needs to be aligned with objective machine metrics.
Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.
2003-01-01
Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover, which was observed in a previous experiment.
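The abstract does not name the specific process-control metrics used; the classic example of such an interaction metric is the relative gain array (RGA), sketched here for a 2x2 plant purely as an illustration of the idea:

```python
def rga_2x2(g11, g12, g21, g22):
    """Relative Gain Array for a 2x2 plant gain matrix G.

    lambda_11 = 1 / (1 - (g12*g21)/(g11*g22)); values far from 1 signal
    strong cross-channel interaction, warning that decentralized
    (one-loop-per-channel) control may destabilize or perform poorly.
    """
    lam = 1.0 / (1.0 - (g12 * g21) / (g11 * g22))
    return [[lam, 1.0 - lam],
            [1.0 - lam, lam]]

# Weak coupling: off-diagonal gains are small, so lambda_11 stays near 1.
print(rga_2x2(2.0, 0.1, 0.1, 2.0)[0][0])  # about 1.0025
```

In the fuselage application, the analogous question is whether cross-coupling between bays is strong enough to make the per-bay (decentralized) controllers interfere with one another.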
An Analysis of NASA Technology Transfer. Degree awarded by Pennsylvania State Univ.
NASA Technical Reports Server (NTRS)
Bush, Lance B.
1996-01-01
A review of previous technology transfer metrics, recommendations, and measurements is presented within the paper. A quantitative and qualitative analysis of NASA's technology transfer efforts is performed. As a relative indicator, NASA's intellectual property performance is benchmarked against a database of over 100 universities. Successful technology transfer (commercial sales, production savings, etc.) cases were tracked backwards through their history to identify the key critical elements that lead to success. Results of this research indicate that although NASA's performance is not measured well by quantitative values (intellectual property stream data), it has a net positive impact on the private sector economy. Policy recommendations are made regarding technology transfer within the context of the documented technology transfer policies since the framing of the Constitution. In the second thrust of this study, researchers at NASA Langley Research Center were surveyed to determine their awareness of, attitude toward, and perception about technology transfer. Results indicate that although researchers believe technology transfer to be a mission of the Agency, they should not be held accountable or responsible for its performance. In addition, the researchers are not well educated about the mechanisms to perform, or policies regarding, technology transfer.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e., the exponential growth of computational complexity with state dimension. In practice, however, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
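The augmented-state idea described above can be sketched with a minimal sampling-importance-resampling particle filter. The exponential capacity-fade model, noise levels, and variable names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fade model: c[k+1] = c[k] * exp(-lam), with decay rate `lam`
# unknown.  Augmenting the state with `lam` lets the filter adapt the model
# while it tracks the state.
true_lam, c = 0.05, 1.0
obs = []
for _ in range(60):
    c *= np.exp(-true_lam)
    obs.append(c + rng.normal(0, 0.01))     # noisy capacity measurements

n = 2000
parts_c = np.full(n, 1.0)                   # state particles
parts_lam = rng.uniform(0.0, 0.2, n)        # prior over the model parameter

for z in obs:
    # propagate: state follows the model, parameter does a small random walk
    parts_lam = np.abs(parts_lam + rng.normal(0, 0.002, n))
    parts_c = parts_c * np.exp(-parts_lam)
    # weight by Gaussian measurement likelihood, then normalize
    w = np.exp(-0.5 * ((z - parts_c) / 0.01) ** 2)
    w /= w.sum()
    # resample so surviving particles carry both state and tuned parameter
    idx = rng.choice(n, size=n, p=w)
    parts_c, parts_lam = parts_c[idx], parts_lam[idx]

lam_est = parts_lam.mean()
print(lam_est)
```

After the run, `parts_lam` concentrates near the true decay rate, i.e., the filter has produced the "tuned model" that long-term predictions would then propagate forward.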
Hydrothermal Gasification for Waste to Energy
NASA Astrophysics Data System (ADS)
Epps, Brenden; Laser, Mark; Choo, Yeunun
2014-11-01
Hydrothermal gasification is a promising technology for harvesting energy from waste streams. Applications range from straightforward waste-to-energy conversion (e.g. municipal waste processing, industrial waste processing), to water purification (e.g. oil spill cleanup, wastewater treatment), to biofuel energy systems (e.g. using algae as feedstock). Products of the gasification process are electricity, bottled syngas (H2 + CO), sequestered CO2, clean water, and inorganic solids; further chemical reactions can be used to create biofuels such as ethanol and biodiesel. We present a comparison of gasification system architectures, focusing on efficiency and economic performance metrics. Various system architectures are modeled computationally, using a model developed by the coauthors. The physical model tracks the mass of each chemical species, as well as energy conversions and transfers throughout the gasification process. The generic system model includes the feedstock, gasification reactor, heat recovery system, pressure reducing mechanical expanders, and electricity generation system. Sensitivity analysis of system performance to various process parameters is presented. A discussion of the key technological barriers and necessary innovations is also presented.
Economic Metrics for Commercial Reusable Space Transportation Systems
NASA Technical Reports Server (NTRS)
Shaw, Eric J.; Hamaker, Joseph (Technical Monitor)
2000-01-01
The success of any effort depends upon the effective initial definition of its purpose, in terms of the needs to be satisfied and the goals to be fulfilled. If the desired product is "A System" that is well-characterized, these high-level need and goal statements can be transformed into system requirements by traditional systems engineering techniques. The satisfaction of well-designed requirements can be tracked by fairly straightforward cost, schedule, and technical performance metrics. Unfortunately, some types of efforts, including those that NASA terms "Programs," tend to resist application of traditional systems engineering practices. In the NASA hierarchy of efforts, a "Program" is often an ongoing effort with broad, high-level goals and objectives. A NASA "project" is a finite effort, in terms of budget and schedule, that usually produces or involves one System. Programs usually contain more than one project and thus more than one System. Special care must be taken in the formulation of NASA Programs and their projects, to ensure that lower-level project requirements are traceable to top-level Program goals, feasible with the given cost and schedule constraints, and measurable against top-level goals. NASA Programs and projects are tasked to identify the advancement of technology as an explicit goal, which introduces more complicating factors. The justification for funding of technology development may be based on the technology's applicability to more than one System, Systems outside that Program or even external to NASA. Application of systems engineering to broad-based technology development, leading to effective measurement of the benefits, can be valid, but it requires that potential beneficiary Systems be organized into a hierarchical structure, creating a "system of Systems." 
In addition, these Systems evolve with the successful application of the technology, which creates the necessity for evolution of the benefit metrics to reflect the changing baseline. Still, economic metrics for technology development in these Programs and projects remain fairly straightforward, being based on reductions in acquisition and operating costs of the Systems. One of the most challenging requirements that NASA levies on its Programs is to plan for the commercialization of the developed technology. Some NASA Programs are created for the express purpose of developing technology for a particular industrial sector, such as aviation or space transportation, in financial partnership with that sector. With industrial investment, another set of goals, constraints, and expectations is levied on the technology program. Economic benefit metrics then expand beyond cost and cost savings to include the marketability, profit, and investment return requirements of the private sector. Commercial investment criteria include low risk, potential for high return, and strategic alignment with existing product lines. These corporate criteria derive from top-level strategic plans and investment goals, which rank high among the most proprietary types of information in any business. As a result, top-level economic goals and objectives that industry partners bring to cooperative programs cannot usually be brought into technical processes, such as systems engineering, that are worked collaboratively between Industry and Government. In spite of these handicaps, the top-level economic goals and objectives of a joint technology program can be crafted in such a way that they accurately reflect the fiscal benefits from both Industry and Government perspectives. Valid economic metrics can then be designed that can track progress toward these goals and objectives, while maintaining the confidentiality necessary for the competitive process.
Structural texture similarity metrics for image analysis and retrieval.
Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L
2013-07-01
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
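The central claim, that bias, MSE, and correlation are all derivable from the parameters of a linear error model y = a + b·x + ε, can be checked numerically. The parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference ("truth") x and a measurement y generated by the linear error
# model y = a + b*x + eps, eps ~ N(0, sigma^2).  a, b, sigma are the
# error-model parameters; the values here are illustrative.
a, b, sigma = 0.5, 0.9, 0.3
x = rng.normal(10.0, 2.0, 100_000)
y = a + b * x + rng.normal(0.0, sigma, x.size)

# Conventional metrics computed directly from the data
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
rho = np.corrcoef(x, y)[0, 1]

# The same metrics derived from the error-model parameters (a, b, sigma)
mu_x, var_x = x.mean(), x.var()
bias_model = a + (b - 1.0) * mu_x
mse_model = bias_model**2 + (b - 1.0) ** 2 * var_x + sigma**2
rho_model = b * np.sqrt(var_x) / np.sqrt(b**2 * var_x + sigma**2)

print(np.allclose([bias, mse, rho], [bias_model, mse_model, rho_model], rtol=0.05))
```

The three conventional metrics agree with their model-derived counterparts up to sampling noise, which is the interdependence the abstract points out: the triple (a, b, σ) carries all the information the three metrics do, and more.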
Implementation of SAP Waste Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frost, M.L.; LaBorde, C.M.; Nichols, C.D.
2008-07-01
The Y-12 National Security Complex (Y-12) assumed responsibility for newly generated waste on October 1, 2005. To ensure effective management and accountability of newly generated waste, Y-12 has opted to utilize SAP, Y-12's Enterprise Resource Planning (ERP) tool, to track low-level radioactive waste (LLW), mixed waste (MW), hazardous waste, and non-regulated waste from generation through acceptance and disposal. SAP Waste will include the functionality of the current waste tracking system and integrate with the applicable modules of SAP already in use. The functionality of two legacy systems, the Generator Entry System (GES) and the Waste Information Tracking System (WITS), and peripheral spreadsheets, databases, and e-mail/fax communications will be replaced by SAP Waste. Fundamentally, SAP Waste will promote waste acceptance for certification and disposal, not storage. SAP Waste will provide a one-time data entry location where waste generators can enter waste container information, track the status of their waste, and maintain documentation. A benefit of the new system is that it will provide a single data repository where Y-12's Waste Management organization can establish waste profiles, verify and validate data, maintain inventory control utilizing hand-held data transfer devices, schedule and ship waste, manage project accounting, and report on waste handling activities. This single data repository will facilitate the production of detailed waste generation reports for use in forecasting and budgeting, provide the data for required regulatory reports, and generate metrics to evaluate the performance of the Waste Management organization and its subcontractors. SAP Waste will replace the outdated and expensive legacy system, establish tools the site needs to manage newly generated waste, and optimize the use of the site's ERP tool for integration with related business processes while promoting disposition of waste.
Information technology model for evaluating emergency medicine teaching
NASA Astrophysics Data System (ADS)
Vorbach, James; Ryan, James
1996-02-01
This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting student physicians, i.e. residents, and faculty function daily in their dual roles as teachers and students respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display status of medical orders, and to collect data for analyses.
Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.
Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad
2016-12-01
Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
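A minimal sketch of the prediction idea, assuming a constant-velocity Kalman filter over a single decision variable; the paper's actual filter design and its hybridization with random reinitialization are not reproduced here:

```python
import numpy as np

# Constant-velocity Kalman filter tracking one decision variable: the filter
# learns the optimum's drift across environment changes, and its prediction
# would be used to reseed part of the population near the expected optimum.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # only the position is observed
Q = np.eye(2) * 1e-3                     # process noise
R = np.array([[1e-2]])                   # measurement noise

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial covariance

rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 0.3            # simulated moving optimum
preds = []
for t in range(30):
    # predict step: where will the optimum be after the next change?
    x = F @ x
    P = F @ P @ F.T + Q
    preds.append(float(x[0, 0]))
    # the environment changes and the EA relocates the optimum (simulated
    # here as the true position plus search noise)
    true_pos += true_vel
    z = np.array([[true_pos + rng.normal(0, 0.1)]])
    # update step with the newly found optimum
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(abs(preds[-1] - true_pos))
```

After a few environment changes the filter's one-step-ahead prediction lands near the relocated optimum, which is what lets the seeded population converge faster than restarting the search blind.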
The Steinberg-Bernstein Centre for Minimally Invasive Surgery at McGill University.
Fried, Gerald M
2005-12-01
Surgical skills and simulation centers have been developed in recent years to meet the educational needs of practicing surgeons, residents, and students. The rapid pace of innovation in surgical procedures and technology, as well as the overarching desire to enhance patient safety, have driven the development of simulation technology and new paradigms for surgical education. McGill University has implemented an innovative approach to surgical education in the field of minimally invasive surgery. The goal is to measure surgical performance in the operating room using practical, reliable, and valid metrics, which allow the educational needs of the learner to be established and enable feedback and performance to be tracked over time. The GOALS system and the MISTELS program have been developed to measure operative performance in the operating room and minimally invasive surgical technical skills in the inanimate skills lab, respectively. The MISTELS laparoscopic simulation-training program has been incorporated as the manual skills education and evaluation component of the Fundamentals of Laparoscopic Surgery program distributed by the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) and the American College of Surgeons.
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2015-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. The data required include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments, methods for collecting such data exist, yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multiple-camera filmography. Due to the constraints of space operations, however, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. Multiple technologies have not yet been applied to space operations for these explicit purposes. Two of these are 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging (LIDAR) systems).
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of using several robustness metrics for decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures.
Regret based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.
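The contrast between a signal-to-noise satisficing metric and a minimax-regret metric can be made concrete on a toy ensemble; the two plans and their performance values below are invented for illustration:

```python
import numpy as np

# Toy ensemble: rows = candidate plans, columns = plausible futures;
# entries are performance (higher is better).  Values are illustrative.
perf = np.array([
    [5.0, 5.1, 4.9, 5.0],   # plan A: modest mean, very low dispersion
    [7.0, 9.0, 3.0, 8.0],   # plan B: higher mean, high dispersion
])

# Signal-to-noise-style satisficing metric (mean / std): it rewards plans
# whose performance is stable across futures, hence its risk aversion.
snr = perf.mean(axis=1) / perf.std(axis=1)

# Regret-based metric: for each future, regret is the shortfall relative to
# the best plan in that future; robustness is the worst-case regret.
regret = perf.max(axis=0) - perf
max_regret = regret.max(axis=1)

print(int(snr.argmax()))        # -> 0: SNR favors the stable plan A
print(int(max_regret.argmin())) # -> 1: minimax regret favors plan B
```

The two metrics select different plans from the same ensemble, which is exactly why the abstract recommends consulting multiple robustness metrics in a robust decision analysis.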
Metric for evaluation of filter efficiency in spectral cameras.
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2016-11-10
Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect set of filters, and zero, for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
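For orientation, a sketch of a Vora-style Measure of Goodness, assuming the common projector form nu(V, M) = tr(P_V P_M) / dim(V); the MMOG generalization itself is not reproduced here:

```python
import numpy as np

# Vora-style MOG sketch: how well the subspace spanned by a filter set M
# approximates a target subspace V (e.g., sampled sensitivity curves).
# Assumes the nu = tr(P_V P_M)/dim(V) form; values range from 0 (worst)
# to 1 (perfect).
def projector(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def mog(V, M):
    return np.trace(projector(V) @ projector(M)) / V.shape[1]

n = 31                                            # wavelength samples
V = np.random.default_rng(3).normal(size=(n, 3))  # stand-in target subspace
print(round(mog(V, V), 6))           # identical subspaces -> 1.0
print(round(mog(V, np.eye(n)), 6))   # the full space contains V -> 1.0
```

Any filter set whose span only partially covers V scores strictly below one, which is the sense in which such a metric grades a filter set as a whole rather than filter by filter.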
A Comparison of 3D3C Velocity Measurement Techniques
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2013-11-01
The velocity measurement fidelity of several 3D3C PIV measurement techniques, including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV, is compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one another and to the true velocity field using several metrics.
Optimal SSN Tasking to Enhance Real-time Space Situational Awareness
NASA Astrophysics Data System (ADS)
Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.
2016-09-01
Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator-configured, state information-gain metrics to determine observation priorities. This improves the efficiency with which sensors target objects as awareness changes and new information is needed, rather than solely at predefined frequencies. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.
DOT National Transportation Integrated Search
2013-04-01
"This report provides a Quick Guide to the concept of asset sustainability metrics. Such metrics address the long-term performance of highway assets based upon expected expenditure levels. : It examines how such metrics are used in Australia, Britain...
NASA Technical Reports Server (NTRS)
McFarland, Shane M.; Norcross, Jason
2016-01-01
Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.
Woods, Carl T; Veale, James P; Collier, Neil; Robertson, Sam
2017-02-01
This study investigated the extent to which position in the Australian Football League (AFL) national draft is associated with individual game performance metrics. Physical/technical skill performance metrics were collated for all participants in the 2014 national under-18 (U18) championships (18 games) who were drafted into the AFL (n = 65; 17.8 ± 0.5 y), yielding 232 observations. Players were subdivided by draft position (ranked 1-65) and then by draft round (1-4); an earlier draft selection (i.e., closer to 1) reflects a more desirable player. Microtechnology and a commercial provider facilitated the quantification of individual game performance metrics (n = 16). Linear mixed models were fitted to the data, modelling the extent to which draft position was associated with these metrics. Draft position in the first/second round was negatively associated with "contested possessions" and "contested marks", respectively. Physical performance metrics were positively associated with draft position in these rounds. Correlations weakened for the third/fourth rounds. Contested possessions/marks were associated with an earlier draft selection, whereas physical performance metrics were associated with a later draft selection. Recruiters change the type of U18 player they draft as the selection pool reduces; juniors with contested skill appear to be prioritised.
Precise interferometric tracking of the DSCS II geosynchronous orbiter
NASA Astrophysics Data System (ADS)
Border, J. S.; Donivan, F. F., Jr.; Shiomi, T.; Kawano, N.
1986-01-01
A demonstration of the precise tracking of a geosynchronous orbiter by radio metric techniques based on very-long-baseline interferometry (VLBI) has been jointly conducted by the Jet Propulsion Laboratory and Japan's Radio Research Laboratory. Simultaneous observations of a U.S. Air Force communications satellite from tracking stations in California, Australia, and Japan have determined the satellite's position with an accuracy of a few meters. Accuracy claims are based on formal statistics, which include the effects of errors in non-estimated parameters and which are supported by a chi-squared of less than one, and on the consistency of orbit solutions from disjoint data sets. A study made to assess the impact of shorter baselines and reduced data noise concludes that with a properly designed system, similar accuracy could be obtained for either a satellite viewed from stations located within the continental U.S. or for a satellite viewed from stations within Japanese territory.
Single quantum dot tracking reveals the impact of nanoparticle surface on intracellular state.
Zahid, Mohammad U; Ma, Liang; Lim, Sung Jun; Smith, Andrew M
2018-05-08
Inefficient delivery of macromolecules and nanoparticles to intracellular targets is a major bottleneck in drug delivery, genetic engineering, and molecular imaging. Here we apply live-cell single-quantum-dot imaging and tracking to analyze and classify nanoparticle states after intracellular delivery. By merging trajectory diffusion parameters with brightness measurements, multidimensional analysis reveals distinct and heterogeneous populations that are indistinguishable using single parameters alone. We derive new quantitative metrics of particle loading, cluster distribution, and vesicular release in single cells, and evaluate intracellular nanoparticles with diverse surfaces following osmotic delivery. Surface properties have a major impact on cell uptake, but little impact on the absolute cytoplasmic numbers. A key outcome is that stable zwitterionic surfaces yield uniform cytosolic behavior, ideal for imaging agents. We anticipate that this combination of quantum dots and single-particle tracking can be widely applied to design and optimize next-generation imaging probes, nanoparticle therapeutics, and biologics.
ERIC Educational Resources Information Center
Fairchild, Susan; Gunton, Brad; Donohue, Beverly; Berry, Carolyn; Genn, Ruth; Knevals, Jessica
2011-01-01
Students who achieve critical academic benchmarks such as high attendance rates, continuous levels of credit accumulation, and high grades have a greater likelihood of success throughout high school and beyond. However, keeping students on track toward meeting graduation requirements and quickly identifying students who are at risk of falling off…
Screen Fingerprints as a Novel Modality for Active Authentication
2014-03-01
and mouse dynamics [9]. Some other examples of the computational behavior metrics of the cognitive fingerprint include eye tracking. University of Maryland, March 2014, final technical report, approved for public release; period covered May 2012 - Oct 2013.
Program to compute the positions of the aircraft and of the aircraft sensor footprints
NASA Technical Reports Server (NTRS)
Paris, J. F. (Principal Investigator)
1982-01-01
The positions of the ground track of the aircraft and of the aircraft sensor footprints, in particular the metric camera and the radar scatterometer on the C-130 aircraft, are estimated by a program called ACTRK. The program uses the altitude, speed, and attitude information contained in the radar scatterometer data files to calculate the positions. The ACTRK program is documented.
2016-10-01
A total of 52 subjects have enrolled in the study: Veteran subjects n=34 and University of Utah subjects n=18. Preliminary analysis indicates that daily...relevant change
SU-G-JeP1-15: Sliding Window Prior Data Assisted Compressed Sensing for MRI Lung Tumor Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, E; Wachowicz, K; Rathee, S
Purpose: Prior Data Assisted Compressed Sensing (PDACS) is a partial k-space acquisition and reconstruction method for mobile tumour (i.e., lung) tracking using on-line MRI in radiotherapy. PDACS partially relies on prior data acquired at the beginning of dynamic scans, and is therefore susceptible to artifacts in longer-duration scans due to slow drifts in MR signal. A novel sliding window strategy is presented to mitigate this effect. Methods: MRI acceleration is simulated by retrospective removal of data from the fully sampled sets. Six lung cancer patients were scanned (clinical 3T MRI) using a balanced steady-state free precession (bSSFP) sequence for 3 minutes at approximately 4 frames per second, for a total of 650 dynamics. PDACS acceleration is achieved by undersampling of k-space in a single pseudo-random pattern. Reconstruction iteratively minimizes the total variation while constraining the images to satisfy both the currently acquired data and the prior data in missing k-space. Our novel sliding window technique (SW-PDACS) uses a series of distinct pseudo-random under-sampling patterns of partial k-space, with the prior data drawn from a sliding window of the most recent data available. Under-sampled data simulating 2-5x acceleration are reconstructed using PDACS and SW-PDACS. Three quantitative metrics are computed for comparison: artifact power, centroid error, and Dice's coefficient. Results: Metric values from all 6 patients are averaged in 3 bins, each containing approximately one minute of dynamic data. For the first-minute bin, PDACS and SW-PDACS give comparable results. A progressive decline in image quality metrics in bins 2 and 3 is observed for PDACS; no decline in image quality is observed for SW-PDACS. Conclusion: The novel approach presented (SW-PDACS) is more robust for accelerating longer-duration (>1 minute) dynamic MRI scans for tracking lung tumour motion using on-line MRI in radiotherapy. B.G. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
Measuring Distribution Performance? Benchmarking Warrants Your Attention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ericson, Sean J; Alvarez, Paul
Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.
National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?
Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N
2017-12-01
To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. 
Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.
Metrics for Evaluation of Student Models
ERIC Educational Resources Information Center
Pelanek, Radek
2015-01-01
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.
Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer led to an improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking performance. A future study will investigate spatial uniformity of motion and its effect on the motion estimation errors.
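The template-matching core described above, normalized cross-correlation with a similarity threshold, can be sketched in a few lines. This is a minimal 1-D illustration, not the authors' implementation: the α–β state observer is omitted, and all signals, window sizes, and thresholds below are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(frame, template, prev, search=10, threshold=0.7):
    """Exhaustive NCC search near the previous estimate; keep the previous
    estimate when the best similarity falls below `threshold` (a crude
    stand-in for similarity gating -- the state-observer step is omitted)."""
    n = len(template)
    best_score, best_off = -1.0, prev
    for off in range(max(0, prev - search), min(len(frame) - n, prev + search) + 1):
        s = ncc(frame[off:off + n], template)
        if s > best_score:
            best_score, best_off = s, off
    return best_off if best_score >= threshold else prev

# Synthetic 1-D image lines: a vessel-like Gaussian bump translating frame to frame.
rng = np.random.default_rng(0)
x = np.arange(200)
def frame_at(shift):
    return np.exp(-0.5 * ((x - 50 - shift) / 2.0) ** 2) + 0.01 * rng.standard_normal(200)

template = np.exp(-0.5 * ((x[40:60] - 50) / 2.0) ** 2)  # noise-free reference patch
offsets = [match_template(frame_at(s), template, prev=40) for s in (0, 2, 4, 6)]
print(offsets)  # → [40, 42, 44, 46]
```

Restricting the search to a window around the previous estimate is what keeps per-frame cost low enough for real-time gating.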
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Radomski, M. S.; Doll, C. E.
1995-01-01
The Differenced Range (DR) Versus Integrated Doppler (ID) (DRVID) method exploits the opposing effects of plasma media on signal group delay and phase to obtain information about the plasma's corruption of simultaneous range and Doppler spacecraft tracking measurements. Thus, DR plus ID (DRPID) is an observable independent of plasma refraction, while actual DRVID (DR minus ID) measures the time variation of the path electron content independently of spacecraft motion. The DRVID principle has been known since 1961. It has been used to observe interplanetary plasmas, is implemented in Deep Space Network tracking hardware, and has recently been applied to single-frequency Global Positioning System user navigation. This paper discusses exploration at the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) of DRVID synthesized from simultaneous two-way range and Doppler tracking for low Earth-orbiting missions supported by the Tracking and Data Relay Satellite System (TDRSS). The paper presents comparisons of actual DR and ID residuals and relates those comparisons to predictions of the Bent model. The complications due to the pilot-tone influence on relayed Doppler measurements are considered. Further use of DRVID to evaluate ionospheric models is discussed, as is use of DRPID in reducing dependence on ionospheric modeling in orbit determination.
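The DR/ID bookkeeping above can be made concrete with a toy calculation. This is a hedged numerical sketch, not flight software: the plasma term adds to the group-delay (range) change and subtracts from the integrated-Doppler (phase) change over the same interval, so the difference isolates electron-content variation and the sum is plasma-free. All numbers are hypothetical.

```python
# Toy DRVID arithmetic (hypothetical values, in meters).
def dr(d_rho, d_i):
    """Differenced range: geometric change plus the plasma term."""
    return d_rho + d_i

def integrated_doppler(d_rho, d_i):
    """Integrated Doppler: geometric change minus the plasma term."""
    return d_rho - d_i

d_rho, d_i = 1500.0, 3.2  # hypothetical geometric and plasma changes
drvid = dr(d_rho, d_i) - integrated_doppler(d_rho, d_i)  # = 2*d_i, geometry-free
drpid = dr(d_rho, d_i) + integrated_doppler(d_rho, d_i)  # = 2*d_rho, plasma-free
print(round(drvid, 1), round(drpid, 1))  # → 6.4 3000.0
```

Note that DRVID recovers twice the plasma change with no dependence on d_rho, which is why it tracks path electron content independently of spacecraft motion.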
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Michael; Battista, Jerry; Chen, Jeff
Purpose: To develop a radiotherapy dose tracking and plan evaluation technique using cone-beam computed tomography (CBCT) images. Methods: We developed a patient-specific method of calibrating CBCT image sets for dose calculation. The planning CT was first registered with the CBCT using deformable image registration (DIR). A scatter plot was generated between the CT numbers of the planning CT and CBCT for each slice. The CBCT calibration curve was obtained by least-squares fitting of the data, and applied to each CBCT slice. The calibrated CBCT was then merged with the original planning CT to extend the small field of view of the CBCT. Finally, the treatment plan was copied to the merged CT for dose tracking and plan evaluation. The proposed patient-specific calibration method was also compared to two methods proposed in the literature. To evaluate the accuracy of each technique, 15 head-and-neck patients requiring plan adaptation were arbitrarily selected from our institution. The original plan was calculated on each method's data set, including a second planning CT acquired within 48 hours of the CBCT (serving as gold standard). Clinically relevant dose metrics and 3D gamma analysis of dose distributions were compared between the different techniques. Results: Compared to the gold standard of using planning CTs, the patient-specific CBCT calibration method was shown to provide promising results, with gamma pass rates above 95% and average dose metric agreement within 2.5%. Conclusions: The patient-specific CBCT calibration method could potentially be used for on-line dose tracking and plan evaluation, without requiring a re-planning CT session.
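The slice-wise least-squares calibration step can be sketched as follows. This is a minimal illustration under an assumed linear HU mapping with synthetic paired data, not the authors' code; the slope, intercept, and noise level are invented purely for illustration.

```python
import numpy as np

# Paired CT numbers from one registered slice: planning-CT values (reference)
# vs CBCT values (to be calibrated). Mapping and noise are hypothetical.
rng = np.random.default_rng(1)
hu_cbct = rng.uniform(-1000.0, 1500.0, size=500)
hu_ct = 1.08 * hu_cbct - 25.0 + rng.normal(0.0, 15.0, size=500)

# Least-squares fit of this slice's calibration curve, then apply it.
slope, intercept = np.polyfit(hu_cbct, hu_ct, deg=1)
hu_cbct_calibrated = slope * hu_cbct + intercept
print(round(slope, 2), round(intercept, 1))
```

Fitting one curve per slice, as the abstract describes, lets the calibration absorb slice-dependent scatter and artifact differences that a single global curve would miss.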
Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?
Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J
2014-02-01
Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics.
However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
Real-time seam tracking control system based on line laser visions
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi
2018-07-01
A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams. Moreover, the feature point tracking method and the adaptive fuzzy control algorithm in the welding process were studied and analyzed. A laser vision sensor and its measuring principle were designed and studied, respectively. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. During welding, the target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of the deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time. The input and output domains, fuzzy rules, and membership functions were constantly updated to generate a smooth series of robot bias voltages. Three groups of experiments were conducted on different types of curved welds in a strong-arc and splash-noise environment using 120 A short-circuit metal active gas (MAG) arc welding. The tracking error was less than 0.32 mm and the sensor's measurement frequency can be up to 20 Hz. The torch end ran smoothly during welding. The weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.
Measuring β-diversity with species abundance data.
Barwell, Louise J; Isaac, Nick J B; Kunin, William E
2015-07-01
In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A ), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim , performed well overall. 
Only βBaselga-Rturn, βBaselga-B-Cturn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
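As a concrete instance of the abundance-based family discussed above, here is a sketch of the Bray-Curtis dissimilarity (a common abundance-based metric, not necessarily one of the paper's recommended choices), with quick checks of three of the listed properties: a minimum of zero, a fixed upper limit reached under complete species turnover, and symmetry (βA,B = βB,A). The abundance vectors are hypothetical.

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity: 0 for identical assemblages,
    1 for complete turnover (no shared species)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.abs(a - b).sum() / (a + b).sum())

A = np.array([10, 5, 0, 0])   # abundances of four species at site A
B = np.array([0, 0, 8, 3])    # complete species turnover relative to A
C = np.array([10, 5, 0, 0])   # identical to A

print(bray_curtis(A, C), bray_curtis(A, B))    # → 0.0 1.0
print(bray_curtis(A, B) == bray_curtis(B, A))  # → True (symmetry)
```

Property checks of this kind are exactly how the paper's 16 conceptual criteria can be applied mechanically to any candidate metric.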
Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles
NASA Astrophysics Data System (ADS)
Wilcox, Zachary Donald
The focus of this dissertation is to design a controller for linear parameter-varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems, and associated control tools such as gain scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and μ-synthesis have been applied. However, as flight environments and trajectories become more demanding, traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter-varying state-space representation with added disturbances. The model includes an uncertain parameter-varying state matrix, an uncertain parameter-varying non-square (column-deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference-model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examine the interplay between structural/thermal protection design and control development, and has application for future HSV design and control implementation.
An Exploratory Study of OEE Implementation in Indian Manufacturing Companies
NASA Astrophysics Data System (ADS)
Kumar, J.; Soni, V. K.
2015-04-01
Globally, the implementation of overall equipment effectiveness (OEE) has proven to be highly effective in improving availability, performance rate, and quality rate while reducing unscheduled breakdowns and the wastage that stems from equipment. This paper investigates the present status and future scope of OEE metrics in Indian manufacturing companies through an extensive survey. In this survey, the opinions of production and maintenance managers have been analyzed statistically to explore the relationship between factors, perspectives on OEE, and the potential use of OEE metrics. Although the sample was diverse in terms of product, process type, size, and geographic location of the companies, all are under pressure to implement improvement techniques such as OEE metrics to improve performance. The findings reveal that OEE metrics have huge potential and scope to improve performance. Responses indicate that Indian companies are aware of OEE but are not utilizing the full potential of OEE metrics.
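OEE is conventionally computed as the product of the three rates the abstract names (availability, performance rate, and quality rate); a minimal sketch with hypothetical shift figures:

```python
def oee(availability, performance_rate, quality_rate):
    """Overall equipment effectiveness as the product of the three rates."""
    return availability * performance_rate * quality_rate

# Hypothetical shift: 90% uptime, 95% of ideal speed, 99% good parts.
print(round(oee(0.90, 0.95, 0.99), 3))  # → 0.846
```

Because the rates multiply, even modestly sub-par values on each factor compound into a noticeably lower overall score, which is why OEE draws attention to all three loss sources at once.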
Bradshaw, Corey J. A.; Brook, Barry W.
2016-01-01
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows. PMID:26930052
NASA Astrophysics Data System (ADS)
Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.
2017-12-01
Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought or, in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting, but is also necessary to understand how these events may change under global warming. Detection algorithms are used on both regional and global scales, most accurately with high-resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified at a single point in time for a single location. Counting routines can be further subdivided, ranging from algorithms that use absolute thresholds with specific geometry, to algorithms that use relative thresholds, to algorithms based on statistics, to pattern recognition and machine learning techniques. With such a large diversity in detection code, differences in AR tracking and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where the difference between relative and absolute thresholding produces vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset.
MERRA2 data was chosen for both temporal and spatial resolution. After completion of this first phase, Tier 1, ARTMIP participants may choose to contribute to Tier 2, which will range from reanalysis uncertainty, to analysis of future climate scenarios from high resolution model output. ARTMIP's experimental design, techniques, and preliminary metrics will be presented.
A neural net-based approach to software metrics
NASA Technical Reports Server (NTRS)
Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.
1992-01-01
Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
Metrication report to the Congress
NASA Technical Reports Server (NTRS)
1991-01-01
NASA's principal metrication accomplishments for FY 1990 were establishment of metrication policy for major programs, development of an implementing instruction for overall metric policy and initiation of metrication planning for the major program offices. In FY 1991, development of an overall NASA plan and individual program office plans will be completed, requirement assessments will be performed for all support areas, and detailed assessment and transition planning will be undertaken at the institutional level. Metric feasibility decisions on a number of major programs are expected over the next 18 months.
Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.
Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony
2017-12-01
Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical performance specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos desirable goals when CLIA did not regulate the method, and then other sources if the Ricos desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and ARCHITECT c8000 and i2000SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. The Sigma-metric was then estimated for each assay and plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and normalized method decision charts demonstrate that a majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma. None performed below three Sigma. Sigma-metrics plotted on normalized method decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had Sigma values >5, and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent-quality healthcare for patients. 
Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
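The Sigma-metric equation quoted in the abstract above lends itself to a one-line calculation. A minimal sketch in Python; the numeric %TEa, %bias, and %CV figures in the example are purely hypothetical, not taken from the study:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Analytical Sigma-metric: (%TEa - |%bias|) / %CV.

    tea_pct  -- total allowable error, in percent
    bias_pct -- estimated bias at the medical decision point, in percent
    cv_pct   -- imprecision (coefficient of variation), in percent
    """
    if cv_pct <= 0:
        raise ValueError("%CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa = 10%, bias = 1.5%, CV = 1.4%
m = sigma_metric(10.0, 1.5, 1.4)  # (10 - 1.5) / 1.4, roughly 6.1 Sigma
```

On a normalized method decision chart, the same quantity appears as the position of the point (%CV/%TEa, |%bias|/%TEa) relative to the diagonal Sigma zone lines.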
Prakash, S R; Herrmann, Barbara S; Milojcic, Rupprecht; Rauch, Steven D; Guinan, John J
2015-01-01
Vestibular evoked myogenic potentials (VEMPs) are due to vestibular responses producing brief inhibitions of muscle contractions that are detectable in electromyographic (EMG) responses. VEMP amplitudes are traditionally measured by the peak-to-peak amplitude of the averaged EMG response (VEMPpp) or by a normalized VEMPpp (nVEMPpp). However, a brief EMG inhibition does not satisfy the statistical assumptions for the average to be the optimal processing strategy. Here, it is postulated that the inhibition depth of motoneuron firing is the desired metric for showing the influence of the vestibular system on the muscle system. The authors present a metric called "VEMPid" that estimates this inhibition depth from the EMG data obtained in a usual VEMP data acquisition. The goal of this article was to compare how well VEMPid, VEMPpp, and nVEMPpp track inhibition depth. To find a robust method for this comparison, realistic physiological models of the inhibition of VEMP EMG signals were made using VEMP data from four measurement sessions on each of five normal subjects. Each of the resulting 20 EMG-production models was adjusted to match the EMG autocorrelation of an individual subject and session. Simulated VEMP traces produced by these models were used to compare how well VEMPid, VEMPpp, and nVEMPpp tracked model inhibition depth. Applied to simulated and real VEMP data, VEMPid showed good test-retest consistency and greater sensitivity at low stimulus levels than VEMPpp or nVEMPpp. For large-amplitude responses, nVEMPpp and VEMPid were equivalent in their consistency across subjects and sessions, but for low-amplitude responses, VEMPid was superior. Unnormalized VEMPpp was always worse than nVEMPpp or VEMPid. VEMPid provides a more reliable measurement of vestibular function at low sound levels than the traditional nVEMPpp, without requiring a change in how VEMP tests are performed.
The calculation method for VEMPid should be applicable whenever an ongoing muscle contraction is briefly inhibited by an external stimulus.
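The traditional amplitude metrics contrasted above can be sketched in a few lines. This is an illustrative sketch only: the function names are invented, and normalizing VEMPpp by the mean rectified background EMG is one common convention assumed here, not necessarily the exact normalization used in the study:

```python
import numpy as np

def vemp_pp(avg_trace: np.ndarray) -> float:
    """Peak-to-peak amplitude of the averaged EMG response (VEMPpp)."""
    return float(avg_trace.max() - avg_trace.min())

def n_vemp_pp(avg_trace: np.ndarray, background_emg: np.ndarray) -> float:
    """Normalized VEMPpp (nVEMPpp): VEMPpp divided by the mean
    rectified background EMG, an assumed normalization convention."""
    return vemp_pp(avg_trace) / float(np.abs(background_emg).mean())

# Toy traces in arbitrary units (illustrative only)
trace = np.array([0.0, 2.0, -1.0])
background = np.array([1.0, -1.0, 2.0])
pp = vemp_pp(trace)                 # 2.0 - (-1.0) = 3.0
npp = n_vemp_pp(trace, background)  # 3.0 / mean(|1, -1, 2|)
```

VEMPid, by contrast, is a model-based estimate of inhibition depth rather than a direct amplitude measure, which is why it cannot be reduced to a comparable closed-form expression here.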
Tracking Data Certification for the Lunar Reconnaissance Orbiter
NASA Technical Reports Server (NTRS)
Morinelli, Patrick J.; Socoby, Joseph; Hendry, Steve; Campion, Richard
2010-01-01
This paper details the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) tracking data certification effort for the Lunar Reconnaissance Orbiter (LRO) Space Communications Network (SCN) complement of tracking stations, consisting of the NASA White Sands 1 antenna (WS1) and the commercial provider Universal Space Network (USN) antennas at South Point, Hawaii; Dongara, Australia; Weilheim, Germany; and Kiruna, Sweden. Certification assessment required the cooperation and coordination of parties not under the control of either the LRO project or the ground stations, as uplinks on cooperating spacecraft were necessary. The LRO range-tracking requirement of 10 m (1 sigma) could be satisfactorily demonstrated using any typical spacecraft capable of range tracking. Though typical Low Earth Orbiting (LEO) or Geosynchronous Earth Orbiting (GEO) spacecraft may be adequate for range certification, their measurement dynamics and noise would be unacceptable for proper Doppler certification of 1-3 mm/sec (1 sigma). As LRO will orbit the Moon, it was imperative to use a target spacecraft that closely mimics the expected lunar orbital Doppler dynamics of +/-1.6 km/sec and +/-1.5 m/sec(exp 2) to +/-0.15 m/sec(exp 2), is in view of the ground stations, supports coherent S-Band Doppler tracking measurements, and can be modeled by the FDF. In order to meet the LRO metric tracking data specifications, the SCN ground stations employed previously uncertified numerically controlled tracking receivers. Initial certification testing revealed certain characteristics of the units that required resolution before certification could be granted.
Restaurant Energy Use Benchmarking Guideline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, R.; Smith, V.; Field, K.
2011-07-01
A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
Tide or Tsunami? The Impact of Metrics on Scholarly Research
ERIC Educational Resources Information Center
Bonnell, Andrew G.
2016-01-01
Australian universities are increasingly resorting to the use of journal metrics such as impact factors and ranking lists in appraisal and promotion processes, and are starting to set quantitative "performance expectations" which make use of such journal-based metrics. The widespread use and misuse of research metrics is leading to…
Test and Evaluation Metrics of Crew Decision-Making And Aircraft Attitude and Energy State Awareness
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Ellis, Kyle K. E.; Stephens, Chad L.
2013-01-01
NASA has established a technical challenge, under the Aviation Safety Program, Vehicle Systems Safety Technologies project, to improve crew decision-making and response in complex situations. The specific objective of this challenge is to develop data and technologies which may increase a pilot's (crew's) ability to avoid, detect, and recover from adverse events that could otherwise result in accidents/incidents. Within this technical challenge, a cooperative industry-government research program has been established to develop innovative flight deck-based countermeasures that can improve the crew's ability to avoid, detect, mitigate, and recover from unsafe loss of aircraft state awareness - specifically, the loss of attitude awareness (i.e., spatial disorientation, SD) or the loss of energy state awareness (LESA). A critical component of this research is to develop specific and quantifiable metrics which identify decision-making and the decision-making influences during simulation and flight testing. This paper reviews existing metrics and methods for SD testing and criteria for establishing visual dominance. The development of Crew State Monitoring technologies - eye tracking and other psychophysiological measures - is also discussed, as well as emerging new metrics for identifying channelized attention and excessive pilot workload, both of which have been shown to contribute to SD/LESA accidents or incidents.